subfolder (stringclasses, 367 values) | filename (stringlengths, 13-25) | abstract (stringlengths, 1-39.9k) | introduction (stringlengths, 0-316k) | conclusions (stringlengths, 0-229k) | year (int64, 0-99) | month (int64, 1-12) | arxiv_id (stringlengths, 8-25) |
---|---|---|---|---|---|---|---|
1403 | 1403.6308_arXiv.txt | {The required high sensitivities and large fields of view of } \authorrunning{C. Tasse} \titlerunning{Non-linear Kalman filters and regularisation techniques in radio interferometry} | | | 14 | 3 | 1403.6308 |
1403 | 1403.4627_arXiv.txt | We present a 2D kinematic analysis out to $\sim 2 - 5$ effective radii ($R_e$) of 33 massive elliptical galaxies with stellar velocity dispersions $\sigma > 150$~\kms. Our observations were taken using the Mitchell Spectrograph (formerly VIRUS-P), a spectrograph with a large $107 \times 107$ arcsec$^2$ field-of-view that allows us to construct robust, spatially resolved kinematic maps of $V$ and $\sigma$ for each galaxy extending to at least 2 $R_e$. Using these maps we study the radial dependence of the stellar angular momentum and other kinematic properties. We see the familiar division between slow and fast rotators persisting out to large radius in our sample. Centrally slow rotating galaxies, which are almost universally characterised by some form of kinematic decoupling or misalignment, remain slowly rotating in their halos. The majority of fast rotating galaxies show either increases in specific angular momentum outwards or no change beyond $R_e$. The generally triaxial nature of the slow rotators suggests that they formed through mergers, consistent with a ``two-phase'' picture of elliptical galaxy formation. However, we do not observe the sharp transitions in kinematics proposed in the literature as a signpost of moving from central dissipationally-formed components to outer accretion-dominated haloes. | \label{Sec:Introduction} \begin{figure*}[!htb] \begin{center} \includegraphics[width=0.45\textwidth,angle=0,clip]{f1a.pdf} \includegraphics[width=0.45\textwidth,angle=0,clip]{f1b.pdf} \includegraphics[width=0.45\textwidth,angle=0,clip]{f1c.pdf} \includegraphics[width=0.45\textwidth,angle=0,clip]{f1d.pdf} \caption{ Characteristics of our galaxy sample (black) as compared to the volume-limited ATLAS$^{\rm 3D}$ survey of ETG's \citep[grey][]{Cappellari2011} and the 22 massive galaxies in the SLUGGS survey \citep[blue][]{Arnold2013}. We show the K-band magnitude and half-light radii (top left panel), the distribution of central dispersions ($\sigma_c$) as a function of luminosity (top right panel), the distribution of maximum observed radii (bottom left) and the distribution of Hubble Types (bottom right). We note that the radii are measured by the SDSS for our galaxies using a deVaucouleurs fit to the light profile, while they are based on RC3 for the ATLAS$^{\rm 3D}$ and SLUGGS galaxies. For clarity, we have truncated the histogram at bottom left. In the truncated bins, there are 120, 95 and 30 ATLAS$^{\rm 3D}$ galaxies respectively.} \label{Fig:Sample} \end{center} \end{figure*} Much attention has been paid recently to the formation and evolution of Early-Type Galaxies [ETGs, including both elliptical (E) and lenticular (S0) galaxies], driven in large part by the discovery that ETG's at $z \sim 2$ are $\sim 2 - 4$ times smaller at fixed mass than their present day counterparts \citep{vanderWel2005, diSeregoAlighieri2005, Daddi2005, Trujillo2006, Longhetti2007, Toft2007, vanDokkum2008, Cimatti2008, Buitrago2008, vanderWel2008, Franx2008, vanDokkum2008, Damjanov2008, CenarroTrujillo2009, Bezanson2009, vanDokkum2010, vandeSande2011, Whitaker2012}. To explain the rapid size evolution from $z \sim 2$ until today, a two-phase picture of ETG growth has emerged. 
At early times, ETG's form in a highly dissipative environment, with rapid star formation creating massive, compact cores, where most of the stars formed in situ \citep{Keres2005, KhochfarSilk2006, DeLucia2006, Krick2006, Naab2007, Naab2009, Joung2009, Dekel2009, Keres2009, Oser2010, Feldmann2010, DominguezSanchez2011, Feldmann2011, Oser2012}. The second phase, dry accretion, is dominated by collisionless dynamics during which star formation is suppressed and most of the stellar mass increase occurs in the galactic outskirts \citep{Hopkins2009, vanDokkum2010, Szomoru2012, Saracco2012}. While such a two-phase picture is generally compelling, it is uncertain precisely how and when mass is added (e.g., the balance of major to minor mergers). Simple virial arguments \citep{Cole2000, Naab2009, Bezanson2009} as well as recent cosmological simulations \citep{Hilz2012, OogiHabe2013, Hilz2013} suggest that major and minor mergers have very different effects. Violent relaxation in major mergers generally results in moderate, factor of $\sim 2-3$, increases in the half-mass radius for every merger event. Meanwhile, mass build-up via minor mergers deposits more mass in the outskirts, resulting in $\sim 5-$fold increases in the radius for similar growth in mass \citep{Hilz2012}. Simulations therefore currently favor a $1:5$ mass ratio in mergers \citep{Oser2012, Lackner2012, GaborDave2012}. However, incomplete modelling of feedback processes (e.g., AGN and supernovae winds) makes these results uncertain. Kinematic observations of local ellipticals also contain important information. It has long been known that ETG's are well separated into those that rotate and those that do not \citep[e.g.,][]{bertolacapaccioli1975,Illingworth1977,Davies1983}. The former tend to have lower stellar mass, disky isophotes and cuspy light profiles, while the latter are triaxial, cored, and massive \citep[e.g.,][]{Bender1989,KormendyBender1996, deZeeuw1985, Franx1991, deZeeuwFranx1991, vandenBosch2008}. Modern integral-field studies have provided strong confirmation of this general bimodal picture with excellent statistics \citep[e.g.,][]{Emsellem2004, Cappellari2007, Emsellem2007, Krajnovic2011, Cappellari2011, Emsellem2011} and have made interesting comparisons with cosmological simulations \citep{Khochfar2011, Davis2011, Serra2014}. In the context of two-phase assembly, it is thought that the global properties of each family can be linked to their formation history. Slow Rotators (SRs) are thought to accrete most of their mass in minor dry mergers with up to $\sim 3$ major mergers \citep{Khochfar2011}. This explains both their low net rotation and their preponderance of kinematically decoupled cores that are likely long-lived remnants of mergers \citep[KDC, e.g.,][]{Kormendy1984, Forbes1994, Carollo1997, Emsellem2004, Emsellem2007, Krajnovic2008}. In contrast Fast Rotators (FRs) likely grew predominantly through cold gas accretion with at most one major merger \citep{Bois2011,Khochfar2011, Davis2011, Serra2012}, and thus have high rotation velocities. However, this picture remains uncertain since most observations are limited to within the half-light radius of the galaxy. In contrast, if late-stage growth occurs through dry accretion, then most of the dynamical changes occur beyond the half-light radius, where stars have longer relaxation times and so carry a record of the merger history \citep{vanDokkum2005, Duc2011, RomanowskyFall2012}. 
It is also only in the outer regions that observations become sensitive to dark matter, for which there are concrete predictions from cosmological simulations. Therefore, wide-field kinematic data are required to provide more direct signatures of two-phase growth. A number of kinematic measurements of ETG's out to large radius have been made using spatially sparse measurements of planetary nebulae (PNe) and globular clusters (GCs) \citep{Mendez2001, Coccato2009, Strader2011, McNeil-Moylan2012, Arnold2011, Pota2013}. Most recently, \citet{Arnold2013} presented spatially well-sampled measurements of 22 massive ETG's out to $\sim 4 R_e$ as part of the SLUGGS survey. They showed that a significant fraction of their galaxies (particularly Es) show a transition from rotation to dispersion-dominated beyond $\sim R_e$. They interpreted this as a transition between a central dissipational component, formed at early times, and an outer halo-dominated region formed through later dry merging. However, without full 2D kinematic coverage from integral-field spectroscopic (IFS) studies of stellar continua, these results alone can be difficult to interpret. Thus far, at large radius, most studies of stellar kinematics either utilize one or two long-slit positions \citep[][]{CarolloDanziger1994, thomas2011}, or focus on individual objects with IFS \citep[e.g.,][]{Weijmans2009, Proctor2009, Coccato2010,Murphy2011}. By contrast, \cite{Greene2012, Greene2013} assembled a sample of 33 massive, local ETG's with observations extending over $\sim 2 - 4 R_e$. They studied stellar population gradients, finding that most stars in the outskirts were comparatively old and metal-poor, consistent with accretion from much smaller galaxies. While they were able to constrain when the stars at large radius formed, dynamical studies are much better suited to revealing where they were formed and how they were assembled. In this study, we therefore extend the Greene et al.\ survey by studying the stellar kinematics in conjunction with the stellar populations. We begin in \S\ref{Sec:Sample} by briefly discussing the galaxy sample, before describing in \S\ref{Sec:Observations} our observations, reduction methods and dynamical modelling. In \S\ref{Sec:Observed Kinematics} we discuss the basic kinematic characteristics of our galaxies at large radius, with particular reference to the Slow and Fast Rotator paradigm. We then go on to explore, in \S\ref{Sec:Analysis}, the possible theoretical implications of our results before concluding in\S\ref{Sec:Conclusions}. | \label{Sec:Analysis} \subsection{Expectations for Large Radius Kinematics} \label{subsec:Expectations} Before we examine the kinematics of our galaxy sample at large radii, we begin by reviewing the possible formation paths for ETG's and the results we may expect from any given formation scenario. The so-called two-phase picture of elliptical galaxy formation \citep{vanderWel2008, Naab2009, Oser2010, Khochfar2011, vandeSande2013} posits that the central $\sim 1 - 5$ kpc of galaxies are initially formed by a fast, dissipational phase, which leaves behind a compact stellar disk with relatively high rotational support $\lambda \sim 0.5$ \citep{Elmegreen2008, Dekel2009, Ceverino2010, Khochfar2011}. At later times dry merging expands the galaxy's outskirts in a manner that reduces $\lambda$ and leaves behind rounder and kinematically hotter remnants {\citep[e.g.,][]{Naab2009, Hilz2013, Taranu2013}}. 
The two-phase picture predicts that ETG's are inherently multi-component systems, with rotationally supported disks comprised primarily of in situ stars at their center and much rounder halos made up of accreted material. However, observations at large radius remain limited. While KDCs on small scales are interpreted as evidence of prior dissipational merging, most observed ETG's are FRs for which no evidence of such transition has been found, e.g., by ATLAS$^{\rm 3D}$. We thus focus on the MC galaxies discussed in Section~\ref{Sec:Observed Kinematics} and whether or not the transitions we observe beyond $R_e$ fit into the two-phase formation picture. We consider kinemetric transitions between rotation-supported and dispersion-supported regions, how similar they are to the KDCs of \cite{Krajnovic2011} and whether they are accompanied by any similar transitions in $\lambda_R$. Finally, we consider the stellar populations associated with each subgroup, and whether they are characteristic of a move from in situ to accreted stars. We are also interested in comparing to the picture presented by \cite{Arnold2013}, who were able to use the SLUGGS survey to measure kinematics out to $\sim 5~R_e$. They reported falling profiles {in local angular momentum}, perhaps reflecting transitions in some FRs from an inner disk to an outer halo at $\sim 5$ kpc, most dramatically in NGC~3377. They also found that S0s with more extended disks are most likely to show rising $\lambda$ profiles at large radius while elliptical galaxies are most likely to have falling $\lambda$ profiles. Finally they reported signs of PA alignment between inner disk and outer halo. Together these were used to argue for the two-phase picture and against the formation of disks by late-time major mergers \citep{Hoffman2009}, since {1:1 mergers} result in significant kinematic decoupling between the inner disk and outer halo \citep{Hoffman2010}. {However, we note that \cite{Naab2013} present a more nuanced view of the origin of SRs and FRs, in which either class can emerge from either a recent major merger, or a series of minor mergers, depending on the fraction of in-situ star formation and gas-richness of the last major merger.} We also compare with the simulations of \cite{Wu2014}. This work derives galaxy kinematics at large radii from cosmological simulations of galaxy formation. They focus on a lower-mass sample (stellar masses of $\sim 3 - 5 \times 10^{10} M_{\odot}$ compared to our $\sim 2 - 20 \times 10^{10} M_{\odot}$) with kinematics that extend out to $\sim 6~R_e$. However, they present simulated rotation and angular momentum profiles that correspond quite well with our observations. \subsection{Galaxies with Changing Kinematics} \label{subsec:Changing Kinematics} In order to emphasize radial changes, \cite{Arnold2013} consider a spatially varying specific angular momentum $\Lambda$, defined in elliptical annuli rather than full elliptical apertures. Since a local determination largely removes the effect of radial weighting, $\Lambda$ is very similar to the flux-weighted ratio of velocity to dispersion, $\langle V^2 \rangle / \langle \sigma^2 \rangle$ used by \cite{Binney2005} and \cite{Wu2014}. Our elliptical annuli in the central regions are calculated using 5\arcsec windows, and outside of this region are aligned with our previously described spatial bins. Additionally, instead of flux-weighting, which does not vary much in each elliptical bin, we weight by the measurement errors. 
Since S/N is correlated with flux, the two methods do not differ much, but our approach is more robust to outlying measurements. Figure~\ref{Fig:LambdaAllC} shows rotation curves ($k_1$), normalized velocity dispersions, and $\Lambda$ profiles for galaxies split into SRs and FRs. To highlight the different kinematic transitions observed, we further subdivide our sample into SC systems, MC galaxies with KD's, and other MC galaxies. In all cases, the local measure $\Lambda$ naturally shows much greater variation than $\lambda$ out to large radius. Partly this is due to the lower quality spectra in these regions, which means that errors increase outwards, rather than decreasing as in the cumulative case. However, we truncate the $\Lambda$ profiles where the errors exceed $\pm 0.025$, while the changes we observe in $\Lambda$ are larger than this, and thus likely real. \subsubsection{FRs at Large Radius} \label{subsubsec:FRR} \begin{figure*} \begin{center} \includegraphics[width=0.3\textwidth,angle=0,clip]{f10a.pdf} \includegraphics[width=0.3\textwidth,angle=0,clip]{f10b.pdf} \includegraphics[width=0.3\textwidth,angle=0,clip]{f10c.pdf} \caption{ Velocity, normalized velocity dispersion and angular momentum profiles for all galaxies in our sample. We show the first order harmonic velocity term $k_1$ (the rotation curve, top), the velocity dispersion in elliptical annuli (middle), and the local measure of angular momentum, $\Lambda_R$ as a function of radius (bottom) for both SRs (Black) and FRs (Red). Results are additionally divided into SC galaxies (left), MC galaxies with KD's (middle) and other MC galaxies (right). For simplicity, we classify the 3 galaxies NGC~677, IC~301 and NGC~3837 as SRs with KDs, since their kinematic behaviour is most similar to this class.} \label{Fig:LambdaAllC} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.8\columnwidth,angle=0,clip]{f11a.pdf} \includegraphics[width=0.8\columnwidth,angle=0,clip]{f11b.pdf} \includegraphics[width=0.8\columnwidth,angle=0,clip]{f11c.pdf} \includegraphics[width=0.8\columnwidth,angle=0,clip]{f11d.pdf} \caption{ Radial gradient in the angular momentum between $R_e$ and 0.5$R_e$ (top) and the outermost measured radius $R_{\rm max}$ and $R_e$ (bottom). We show the gradient vs. both $\lambda(R_e)$ and the morphological T-type number, where $T > -3.5$ indicates a lenticular galaxy. In all cases, we show only FRs from our sample (black diamonds), SLUGGS (red triangles) and where appropriate ATLAS$^{\rm 3D}$ (grey squares). The ATLAS$^{\rm 3D}$ values were calculated from their published stellar kinematics, while for SLUGGS, the relevant values were drawn from \cite{Arnold2013}. For the ATLAS$^{\rm 3D}$ sample, the large subset of galaxies clustered around zero gradient arises from those galaxies observed with relatively small apertures.} \label{Fig:LambdaGrad} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth,angle=0,clip]{f12a.pdf} \includegraphics[width=0.95\columnwidth,angle=0,clip]{f12b.pdf} \caption{2D velocity (Top) and dispersion (Bottom) maps for UGC~4051.} \label{Fig:KinematicsUGC4051} \end{center} \end{figure} For FRs, the distribution of $\Lambda$ looks qualitatively similar to the 22 galaxies observed by the SLUGGS survey and the numerical results of \cite{Wu2014}. In higher mass FRs, $\Lambda$ tends to decline slightly or remain flat, while the majority of the FRs with lower mass tend to have rising $\Lambda$ profiles. 
Since we can only reach $\sim 2 R_e$ in these lower-mass galaxies, it is possible that we simply have not reached a large enough radius to see the $\Lambda$ profile flatten/fall. The galaxies with declining $\Lambda$ profiles are predominantly SC disk-like FRs, as can be seen from the leftmost panel of Figure~\ref{Fig:LambdaAllC}. This subset includes the galaxies with the sharpest declines in $\Lambda$: NGC~774 at low mass and UGC~4051 at high mass, both of which have $\delta \Lambda \sim 0.1 - 0.2$. These SC FRs also almost all show a decline in the rotation curve beyond $\sim R_e$, which is typically accompanied by increases in $k_5 / k_1$ to $\sim 0.3$. The declining S/N and large spatial bins also contribute to the large $k_5 / k_1$ values. However, given the rough correspondence between drops in $\Lambda$ and $k_1$, both of which are calculated independently, it seems unlikely that S/N alone is behind the radial changes. We now ask whether these galaxies are showing signs of the transition from inner disk to outer halo detected by \cite{Arnold2013}. As mentioned earlier, none show nearly as rapid a decline in $\Lambda$ as that seen in NGC~3377, so it is not clear on a galaxy-by-galaxy basis that we are seeing this transition. However, statistically, we may ask whether we see the correlation between angular momentum gradients and Hubble Type seen by \cite{Arnold2013}. In Figure~\ref{Fig:LambdaGrad} we show the radial variations in $\Lambda$ between $R_{\rm max}$ and $R_e$ (top) and also between $R_e$ and 0.5$R_e$ (bottom) for all FRs as a function of the Hubble Type and of $\lambda(R_e)$. In this case, we omit the two galaxies with $R_{\rm max} \lesssim 2R_e$ as these lack sufficient data for a robust measurement of $\Lambda(>R_e)$. We also show the corresponding measurements for the SLUGGS, and where possible, ATLAS$^{\rm 3D}$, surveys. The former are taken directly from \cite{Arnold2013}, while for the ATLAS$^{\rm 3D}$ survey, values of $\Lambda$ within $R_e$ are calculated from the full 2D stellar kinematics. In both cases, Hubble types are taken from the HyperLeda\footnote{http://leda.univ-lyon1.fr.} database \citep[][]{Paturel2003}. As described in \cite{Arnold2013}, the fastest declining SLUGGS galaxies tend to be elliptical, while most of those that rotate more outwards are S0's. However, we do not notice any such trend for our sample. If anything the reverse holds true, with our S0s having the fastest declining $\Lambda$ profiles while the ellipticals show the largest $\Lambda$ increases. Hubble type is not a continuous quantity, but if we naively fit lines to the radial gradient $\Lambda(R_{\rm max}) - \Lambda(R_e)$ as a function of $T$, then we obtain a positive Pearson correlation coefficient of $r = 0.45$ {($p = 0.036$)} for the SLUGGS sample as opposed to $r = -0.18$ {($p = 0.32$)} for ours and $r = -0.01$ {($p = 0.94$)} for the joint sample. This seems to point to a lack of correlation between declining $\Lambda$ and disky galaxies. We lack the statistical significance to make any strong statement about correlations between morphology and large scale kinematics. However, as an interesting exercise, we may ask the same question of the entire ATLAS$^{\rm 3D}$ sample, as shown in the top right panel of Figure~\ref{Fig:LambdaGrad}. Naturally, in this case we are restricted to $<R_e$, but even within this smaller aperture, we already see gradients comparable to, or exceeding, the changes out to $\sim 4R_e$. 
Equally, within this much larger sample, we see no evidence of any difference in $\Lambda$ gradients between the E and S0 galaxies, and a simple fit gives a correlation coefficient of $r = 0.01$, entirely consistent with zero. As a final comparison between the two samples, we may consider the kinematics and morphology of our fastest declining FR, UGC~4051. Figure~\ref{Fig:KinematicsUGC4051} shows the velocity and dispersion maps for this galaxy. If there were an embedded disk we may expect that along the major axis, where the disk is located, there would be lower velocity dispersion with respect to the minor axis, which contains mostly halo stars. We see no evidence of such a feature. Kinematic maps of other rapidly declining galaxies (particularly NGC~774) also show no such behaviour, {although this effect may only be pronounced if the galaxy were edge-on. Given also our low kinematic resolution this does not necessarily preclude the presence of stellar disks in these systems.} There are a number of key differences between our sample and SLUGGS that may explain the differences in our results. Firstly, from a methodological perspective, our galaxies are binned at much lower spatial resolution, particularly at large radius. However, it seems unlikely that this could explain our dearth of galaxies with pronounced declines in $\Lambda$ as compared to SLUGGS. If anything, averaging over large spatial bins would tend to artificially lower the measured velocity and thus also $\Lambda$. More physically, our sample covers more massive galaxies, which may tend to have smaller $\Lambda$ gradients. For instance, the simulated galaxies in \cite{Wu2014} show a trend with stellar mass, in the sense that the low-mass FRs are more likely to show declining $\Lambda$ profiles. Perhaps we need to probe even larger radii to see the transition to a halo component in these more massive galaxies. Thus, our differences with the SLUGGS sample may be simply explained by the bias towards higher mass in our sample. \subsubsection{SRs at Large Radius} \label{subsubsec:SRR} For at least half the SRs, the picture is comparatively simple. Aside from the completely non-rotating SC SR IC~1152, five SRs show central kinematically decoupled components characteristic of a transition from an inner disky structure to an outer halo. The decoupled components seem to be similar to the KDCs described in \cite{Krajnovic2011}, which were interpreted as remnants of old, wet, major mergers. If these kinematic transitions actually signal a component with a different formation history, then we could be seeing the remnant of an early dissipational component transitioning to an outer halo \citep{Arnold2013}. On the other hand, these components are large (1-7 kpc) and have low amplitude rotation ($\Lambda \lesssim 0.2$ as compared to $\Lambda \sim 0.6$). Furthermore, the kinematic and photometric position angles are generally misaligned. For all of these reasons, we believe we are instead seeing signs of triaxiality \citep[e.g.,][]{Statler1991}. This triaxiality also likely results from merging, as pointed out for NGC~5982 by \citet{Oosterloo1994}. In fact, simulations suggest that triaxiality is strongly correlated with the box orbits that result from specifically dry major mergers \citep{Jesseit2005, Jesseit2007, Hoffman2009}. In the same way, we have argued that the SRs with rising $\Lambda$ profiles also show clear signs of triaxiality (as typified by NGC~5982 and NGC~6482). 
They generally show some evidence of a central LV component that transitions to slow disk-like rotation. In addition, the PA tends to be misaligned with the photometric axis in the central regions. NGC~6482 particularly shows strong kinematic misalignment of between $20^{\circ}$ and $50^{\circ}$ out to at least $\sim 2 R_e$. Based on their complicated kinematics, both galaxies have been put forward as recent merger remnants \citep{Statler1991, Oosterloo1994, DelBurgo2008}. While more detailed comparisons are needed, it seems likely from simulations that a series of minor mergers is needed to reproduce both the low $\lambda$ and generic triaxial properties of the MC SRs \citep[e.g.,][]{Bois2011}. \subsection{Correlations with Stellar Populations} \label{subsec:Changing SSPs} \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth,angle=270,clip]{f13.pdf} \caption{Radial gradients in age, [Fe/H], [Mg/Fe], [C/Fe], [N/Fe], and [Ca/Fe] as calculated by {\it EZ\_Ages} from the Lick indices measured in the composite spectra. We show both the measurements for SR (circles) and FR (squares) galaxies as a function of R in kpc (left) or ${\rm R} / {\rm R_e}$ (right).} \label{Fig:SSP} \end{center} \end{figure*} We now ask whether there are any differences in the stellar populations of our sample as a function of $\lambda$. For instance, if high $\lambda$ is a signpost of dissipational formation, we might expect younger, more metal-rich stellar populations in the outer parts of FRs. Following \cite{Greene2013}, we construct composite spectra as a function of radius, dividing the sample into FRs and SRs. {To try and mitigate the strong impact of $\sigma$, we restrict our attention to galaxies with central stellar velocity dispersion $\sigma_{c}$, as measured by the SDSS, greater than 200~\kms}. There are 10 SRs and 12 FRs included in our stacked spectra. We construct composite spectra as described in \cite{Greene2013}. In brief, we first subtract emission lines iteratively using continuum fits \citep[e.g.,][]{Graves2007}. Then, we divide each spectrum by a heavily smoothed version of itself to remove the continuum, and combine them using the biweight estimator \citep{Beers1990}. We then measure the Lick indices, and invert them to infer the ages, metallicities, and abundance ratios at each radial bin for the SRs and FRs, using {\it EZ\_Ages} \citep{GravesSchiavon2008}. In addition to stellar age, [Fe/H], and [$\alpha$/Fe] abundance ratios, the code also iteratively solves for the [C/Fe] and [N/Fe] abundance ratios, the former based mostly on the C$_2~\lambda 4668$ Swan band, and the latter on the blue CN bands. We note that the absolute values of [C/Fe] and [N/Fe] are uncertain because they depend directly on the oxygen abundance. Oxygen, as the most abundant heavy element, has a large indirect impact on the spectra but as there are no broad-band O indices, we must assume a value for [O/Fe]. Here we assume that it tracks the other $\alpha$ elements. Because the C gets bound up in CO molecules, the assumed oxygen abundance has a significant effect on the modeled [C/Fe] and therefore [N/Fe] \citep{Graves2007, Greene2013}. Specifically, if we lowered the assumed [O/Fe] to a solar value, the [C/Fe] and the [N/Fe] would fall, while their relative trend is robust \citep[see discussion in][]{Greene2013}. The radial profiles of our measured stellar population properties are shown in Figure \ref{Fig:SSP}. There are no significant differences between SRs and FRs. 
However, there are some intriguing hints. First of all, the FRs appear to have a slight tendency to get older in the outermost bins. In fact, we see a weak trend for positive age gradients as well when we consider individual galaxies, but it is not statistically significant. If true, we may be seeing the transition from stellar disk to stellar halo in the FRs. Over the past year, we have gathered data for twice as many galaxies, which will allow us to bin in both $\sigma_{c}$ and $\lambda$. We are left with a slightly ambiguous picture of how our galaxy sample ties into two-phase galaxy formation. Our observed FRs may show signs of a transition from inner disk to outer halo through small drops in the net rotation. However, these are typically not accompanied by the significant drops in angular momentum reported in \citet{Arnold2013} or any significant change in stellar populations. Nor are the observed drops in angular momentum correlated with E galaxies, as we might expect if S0's were characterised by more extended disks. Perhaps this is entirely a function of mass, since simulations of two-phase galaxy assembly by \cite{Wu2014}, with which our observations seem to agree quite well, show fewer angular momentum transitions as we move to higher mass. | 14 | 3 | 1403.4627 |
1403 | 1403.3141_arXiv.txt | The distance to the Large Magellanic Cloud (LMC) represents a key local rung of the extragalactic distance ladder. Yet, the galaxy's distance modulus has long been an issue of contention, in particular in view of claims that most newly determined distance moduli cluster tightly---and with a small spread---around the ``canonical'' distance modulus, $(m-M)_0 = 18.50$ mag. We compiled 233 separate LMC distance determinations published between 1990 and 2013. Our analysis of the individual distance moduli, as well as of their two-year means and standard deviations resulting from this largest data set of LMC distance moduli available to date, focuses specifically on Cepheid and RR Lyrae variable-star tracer populations, as well as on distance estimates based on features in the observational Hertzsprung--Russell diagram. We conclude that strong publication bias is unlikely to have been the main driver of the majority of published LMC distance moduli. However, for a given distance tracer, the body of publications leading to the tightly clustered distances is based on highly non-independent tracer samples and analysis methods, hence leading to significant correlations among the LMC distances reported in subsequent articles. Based on a careful, weighted combination, in a statistical sense, of the main stellar population tracers, we recommend that a slightly adjusted canonical distance modulus of $(m-M)_0 = 18.49 \pm 0.09$ mag be used for all practical purposes that require a general distance scale without the need for accuracies of better than a few percent. | \label{intro.sec} The distance to the Large Magellanic Cloud (LMC) is a key stepping stone in establishing an accurate extragalactic distance ladder. The LMC is the nearest extragalactic environment that hosts statistically significant samples of the tracer populations that are commonly used for distance determinations, including Cepheid and RR Lyrae variable stars, eclipsing binaries (EBs), and red-giant-branch (RGB) stars, as well as supernova (SN) 1987A, among others. These could thus potentially link the fairly well-understood local (solar-neighborhood and Galactic) tracers to their counterparts in more distant and more poorly resolved galaxies. In fact, at a distance of approximately 50 kpc, the LMC represents the only well-studied environment linking Galactic distance tracers to those in other large spiral and elliptical galaxies at greater distances, including M31 at a distance of $\sim 750$--780 kpc or a distance modulus of $(m-M)_0 = 24.38$--24.47 mag (e.g., Freedman et al. 2001; McConnachie et al. 2005). Yet, despite the plethora of studies claiming to have determined independent distance measurements to the LMC, lingering systematic uncertainties remain. This has prompted significant concern in the context of using the LMC distance as a calibrator to reduce the uncertainties in the Hubble constant (cf. Freedman et al. 2001; Schaefer 2008; Pietrzy\'nski et al. 2013). It has also led to persistent claims of ``publication bias'' affecting published distances to the galaxy (cf. Schaefer 2008, 2013; Rubele et al. 2012; Walker 2012). In general, publication bias is the tendency of researchers to publish results that conform to some extent to the norm, while ignoring outputs that may be of low(er) significance or that deviate significantly from what is considered common knowledge in the relevant field. 
In other words, the strongest or most significant results are included for publication, while the rest of a presumably much larger set of results remain unseen. This also means that this effect is notoriously difficult to correct for, because the underlying null results are usually not published. The phenomenon of publication bias is well-known to occur in statistics and among medical trials (e.g., Sterling 1959; Rosenthal 1979; Begg \& Berlin 1988; Naylor 1997; Stern \& Simes 1997; Sterne et al. 2000), where it could have potentially devastating effects on people's lives, or lead to ineffectual or even counterproductive treatments. Liddle (2004) explains that ``[p]ublication bias comes in several forms, for example if a single paper analyses several parameters, but then focusses attention on the most discrepant, that in itself is a form of bias. The more subtle form is where many different researchers examine different parameters for a possible effect, but only those who, by chance, found a significant effect for their parameter, decided to publicize it strongly.'' Publication bias has also been claimed to occur in various fields related to astrophysics and cosmology, where in some cases efforts have also been made to correct for these effects (see, e.g., Slosar \& Seljak 2004; Slosar et al. 2004; Schaefer 2008, 2013; Vaughan \& Uttley 2008; Bailer-Jones 2009; Sternberg et al. 2011; Foley et al. 2012). In the context of the present paper, Schaefer (2008) focused his analysis on published LMC distance determinations. He specifically addressed the question as to whether or not the publication of the final results of the {\sl Hubble Space Telescope} ({\sl HST}) Key Project on the Extragalactic Distance Scale (Freedman et al. 2001) had resulted in an unwarranted tightening up of the LMC's distance scale. He considered as possible causes of such a tightening correlations among published results, widespread overestimation of uncertainty ranges, bandwagon effects, or a combination of these scenarios. He concluded with a warning that the community would do well to be vigilant and redress the effects of publication bias, which he considered the most likely cause of the clustering of LMC distance measurements he reported to have occurred during the period from 2002 until June 2007. Upon careful examination, however, we realized that Schaefer's (2008) analysis---as well as his subsequent persistence in support of his 2008 conclusion that publication bias may have severely affected the body of LMC distance measurements (e.g., Schaefer 2013)---was based on a number of simplifying assumptions: \begin{enumerate} \item He concludes that the uncertainties in the post-2002 distance moduli are not distributed according to a Gaussian distribution, which he flags as a problem. However, in such a scenario, the underlying assumptions are that (i) the pre-2001 values were, in fact, distributed in a Gaussian fashion (they are not, however, as we will show in Section \ref{lmcdist.sec}) and (ii) conditions were comparable before and after the benchmark date. This latter assumption is likely also too simplistic, as we will argue in the context of Cepheid-based distance determinations in Section \ref{cepheids.sec}. We recommend---and pursue in this paper---a more detailed analysis of the individual distance moduli contributing to the overall trends observed to assess whether or not publication bias truly is to blame. 
\item Schaefer (2008) based his results on published values and their uncertainties; the latter are, however, predominantly statistical uncertainties and the majority do {\it not} include systematic errors. Only a few authors include the systematic errors affecting their LMC distance estimates, however. In Section \ref{statistics.sec} we apply statistical tools to assess whether the distance moduli based on different tracer populations are statistically consistent with the ``canonical'' distance modulus and the recently published geometric distance based on late-type EB systems (Pietrzy\'nski et al. 2013). We also compare the consistency among a number of different tracers and the entire body of distance measurements (see Sections \ref{bias.sec} and \ref{conclusions.sec}). \item The conclusions reached by Schaefer (2008) are, in essence, based on application of a statistical Kolmogorov--Smirnov (KS) test, assuming a Gaussian distribution of LMC distance measurements, to a data set that should not {\it a priori} be expected to be distributed in a Gaussian fashion. Astrostatisticians have become more vocal in recent years in their opposition to the use of KS tests in astronomy if not done with due caution (e.g., Feigelson \& Babu 2013). KS tests are only applicable to samples that consist of independent and identically distributed values. In the context of LMC distance measurements, both conditions are violated. In this paper we will show that the close match between subsequent LMC distance determinations is most likely owing to the use of highly non-independent tracer samples, analysis methods, and calibration relations. \item As Schaefer (2008) points out himself, his database of LMC distance measurements is incomplete. He declares that this does not affect his inferences, but we found that gaps in the data set may, in fact, hide the presence of correlations among subsequent publications (cf. Section \ref{bias.sec}). For the analysis presented in this paper, we have collected the most complete and comprehensive database of published LMC distance moduli to date,\footnote{Schaefer (2008) lists 44 articles containing as many new LMC distance moduli published between July 2001 and April 2007. In that same period, our database includes 49 articles with a total of 67 new LMC distance determinations. Note that for this comparison we did not consider the final entry in Schaefer's (2008) list, which at the time of his publication had just appeared on the arXiv preprint server (http://www.arXiv.org/archive/astro-ph), but which did not appear in the printed literature until June 2008 (Ngeow \& Kanbur 2008).} so that our results will not be unduly affected by ``gaps'' in the coverage of our metadata. \end{enumerate} These concerns, combined with the significantly longer period (compared with that accessible by Schaefer 2008) that has elapsed since Freedman et al.'s (2001) seminal paper, prompted us to embark on a detailed (re-)analysis of the full set of LMC distance determinations, claimed by many of their authors to be based on independent approaches (but see Section \ref{bias.sec}). The primary goal of the analysis presented in this paper is to explore the reasons behind the apparent tightening of the biennially (two-year) averaged distance moduli and the associated decrease in their standard deviations during specific periods of time. We aim at exploring whether ``publication bias'' is likely to play a significant role in driving this behavior or whether other effects may be at work. 
The longer time span we have access to compared with previous work also allows us to verify whether any clustering of data points persisted beyond the period range of Schaefer's (2008) analysis and whether new clusters of data points may have materialized. Our detailed analysis of the complete body of published LMC distance moduli from 1990 until the end of 2013, both in full and as a function of distance tracer, is ideally suited to derive statistically robust constraints on the most appropriate mean distance modulus (projected to the LMC's center) and its uncertainties for future use ({\it modulo} the quality of the individual determinations). This paper is organized as follows. In Section \ref{data.sec}, we present the full compilation of LMC distance moduli published between 1990 and the present time. Section \ref{lmcdist.sec} addresses general trends in the LMC distance moduli with time, while in Section \ref{bias.sec} we consider such trends for individual distance tracers and discuss the independence (or lack thereof) of the results. We discuss the statistical basis of our analysis in Section \ref{statistics.sec}. In Section \ref{conclusions.sec} we place these results in a more general context, and we conclude with our recommendations of the most suitable distance modulus for common use, which naturally results from the analysis presented here. In Paper II (de Grijs et al. 2014) we apply a similar analysis to our compilation of the equivalent sets of distance measurements for M31, M33, and a number of dwarf galaxies associated with the M31 system (and slightly beyond). | | 14 | 3 | 1403.3141 |
1403 | 1403.6552_arXiv.txt | We examine the consequences of a model for the circulation of solids in a protoplanetary nebula in which aerodynamic drag is counterbalanced by the recycling of material to the outer disc by a protostellar outflow or a disc wind. This population of circulating dust eventually becomes unstable to the formation of planetesimals by gravitational instability, and results in the ultimate deposition of $\sim$ 30--50 $M_{\oplus}$ in planetesimals on scales $R< 1 AU$. Such a model may provide an appropriate justification for the approximately power law initial conditions needed to reproduce observed planetary systems by in situ assembly. | The detection of planets orbiting other stars has revealed a great diversity of both planetary mass and location, including several populations which have no analogue in our own solar system. Of particular interest is the discovery of substantial numbers of sub-Jovian planets with orbital periods shorter than that of Mercury, using both the radial velocity and transit techniques (Howard et al. 2010; Mayor et al. 2011; Borucki et al. 2011; Batalha et al. 2013). This population proved to be a surprise for models of planetary systems whose short period populations are generated by migration inwards from larger radii (e.g. Ida \& Lin 2008). However, the properties of the observed systems do match the expectations of simple in situ assembly models (Hansen \& Murray 2012, 2013; Chiang \& Laughlin 2013), although the amount of mass in rocky material required for such models is sometimes larger than what one might anticipate from a simple minimum-mass solar nebula model. Hansen \& Murray (2012) suggested that such conditions could be realised if solid material is concentrated radially in the inner parts of the gas disc prior to the late-stage assembly into solid planets. This is not an outrageous expectation as it is well known that small bodies in gas discs are potentially subject to dynamically important aerodynamic drag forces (Whipple 1972; Weidenschilling 1977a; Takeuchi \& Lin 2002; Bai \& Stone 2010). However, the particular details of how such a model might set the stage for planet formation are still unclear. In this paper we attempt to outline a simple model that provides such a framework. | We have presented a simple model for the evolution of a protoplanetary disc in which dust particles undergo radial drift inwards, but are then recycled to the outer parts of the nebula through the action of a stellar or disc wind. Although this model is quite simplistic, it provides a natural framework for the deposition of tens of earth masses of material into planetesimals on scales of 0.1--1~AU. This matches the required mass inventory to assemble the observed planets in situ. There are also a variety of subsidiary issues that suggest further study is warranted. The retention of solids while gas is lost produces a natural evolution of the solid/gas ratio towards the limit where gravitational instability and planetesimal formation is likely to set in, obviating the need to invoke other physical mechanisms that require the existence of large particles or anomalously low viscosities. Much of the planetesimal reservoir is deposited within 1~Myr, which allows for the capture of residual gas from the nebula to explain the observed low mass Hydrogen envelopes, and matches the timescales inferred from the cosmochemical age dating of solar system meteoritic components.
Furthermore, the gas mass on these scales is less than the mass in planetesimals, so that the resulting planets are likely to be as observed -- with substantial Hydrogen envelopes that are nevertheless a minority constituent by overall mass. Furthermore, we find that increasing the metallicity of the disk has a larger effect on the mass of planetesimals formed on scales of several AU, and thus provides a rationale for why the giant planet frequency correlates with metallicity (Gonzalez 1997; Santos et al. 2004; Fischer \& Valenti 2005; Johnson et al. 2010) more strongly than the frequency of lower mass planets (Sousa et al. 2008; Bouchy et al. 2009; Mayor et al. 2011; Buchave et al. 2012). If the solid retention is not perfect, and the loss rate is size dependent, it provides an aerodynamic sorting mechanism that may explain the characteristic sizes of chondrules in the solar system. Large particles (dimensions of cm or larger) make more passages through the disc and are thus likely to be more depleted via loss at the inner edge. Similarly, entrainment in the outflows is more likely to remove small particles (Hu 2010), which suggests that particles in the size range 0.01-1mm may have the greatest chances of retention and survival. The circulation of solid material also helps to explain the apparent chemical homogeneity of the solar system solid inventory (e.g. Villeneuve et al. 2009) and the ubiquity of material processed at high temperatures (e.g. Brownlee et al. 2012). There are several ways in which this calculation could be improved. The size evolution of the dust component has been ignored, although this is likely to provide an important feedback loop that may help to regulate the radial profile of the eventually formed planetesimals. We have also not extended the model forward to consider the formation of larger protoplanets and planets from our initial conditions. Nevertheless, we consider the above results encouraging in the sense that they manage to generate conditions that may plausibly be used to match to observed systems, at the reasonable price of invoking an assumption that has already proven useful in other contexts and may be required anyway to explain the well-mixed compositions of solar system bodies. The author thanks Phil Armitage, Andrew Youdin and the referee for helpful comments. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. | 14 | 3 | 1403.6552 |
1403 | 1403.6078_arXiv.txt | Higgs inflation can occur if the Standard Model (SM) is a self-consistent effective field theory up to the inflationary scale. This leads to a lower bound on the Higgs boson mass, $M_h \geq M_{\text{crit}}$. If $M_h$ is more than a few hundreds of MeV above the critical value, the Higgs inflation predicts the universal values of inflationary indexes, $r\simeq 0.003$ and $n_s\simeq 0.97$, independently of the Standard Model parameters. We show that in the vicinity of the critical point $M_{\text{crit}}$ the inflationary indexes acquire an essential dependence on the mass of the top quark $m_t$ and $M_h$. In particular, the amplitude of the gravitational waves can considerably exceed the universal value. | The most economic inflationary scenario is based on the identification of the inflaton with the SM Higgs boson~\cite{Bezrukov:2007ep} and the use of the idea of chaotic initial conditions \cite{Linde:1983gd}. The theory is nothing but the SM with the non-minimal coupling of the Higgs field to gravity with the gravitational part of the action \be S_G =\int d^4x \sqrt{-g} \Bigg\{-\frac{M_P^2}{2}R - \frac{\xi h^2}{2}R \Bigg\}. \label{action} \ee Here $R$ is the scalar curvature, the first term is the standard Hilbert-Einstein action, $h$ is the Higgs field, and $\xi$ is a new coupling constant, fixing the strength of the ``non-minimal'' interaction. The presence of non-minimal coupling is required for consistency of the SM in curved space-time (see, e.g.~\cite{Feynman:1996kb}). The value of $\xi$ cannot be fixed theoretically within the SM. The presence of the non-minimal coupling ensures the flatness of the scalar potential in the Einstein frame at large values of the Higgs field. If radiative corrections are ignored, successful inflation occurs for any values of the SM parameters provided $\xi \simeq 47000\sqrt{\lambda}$, where $\lambda$ is the Higgs boson self-coupling. This condition comes from the requirement to have the amplitude of the scalar perturbations measured by the COBE satellite. After fixing the unknown constant $\xi$ the theory is completely determined. It predicts the tilt of the scalar perturbations given by $n_s\simeq 0.97$ and the tensor-to-scalar ratio $r\simeq 0.003$. After the inflationary period, the Higgs field oscillates, creates particles of the SM, and produces the Hot Big-Bang with initial temperature in the region of $10^{13\text{--}14}$\GeV{} \cite{Bezrukov:2008ut,GarciaBellido:2008ab}. The quantum radiative corrections can change the form of the effective potential and thus modify the predictions of the Higgs inflation. The most significant conclusion coming from the analysis of the quantum effects is that the Higgs inflation can only take place if the mass of the Higgs boson is greater than some critical number $M_{\text{crit}}$ \cite{Bezrukov:2008ej,DeSimone:2008ei,Barvinsky:2008ia, Bezrukov:2009db,Barvinsky:2009fy}, \begin{align} M_h>M_{\text{crit}}. \label{inflcond} \end{align} Roughly speaking, the Higgs self-coupling constant must be positive at the energies up to the inflationary scale, leading to this constraint. In numbers \cite{Bezrukov:2012sa,Degrassi:2012ry,Buttazzo:2013uya}, \begin{align} \nonumber M_{\text{crit}}= \Big[129.6 + \frac{y_t^\text{phys} - 0.9361}{0.0058}\times 2.0 -\\ \frac{\alpha_s-0.1184}{0.0007}\times 0.5\Big]\GeV. \label{mcrit} \end{align}
Here $y_t^\text{phys}$ is the top Yukawa coupling in the \MSb{} renormalisation scheme taken at $\mu_t=173.2\GeV$\footnote{For the precise relation between $y_t^\text{phys}$ and the pole top mass $m_t$ see \cite{Bezrukov:2012sa,Buttazzo:2013uya} and references therein.}, $y_t^\text{phys}\equiv y_t^\text{phys}(\mu_t)$ and $\alpha_s$ is the QCD coupling at the $Z$-boson mass. Thanks to the complete two-loop computations of \cite{Buttazzo:2013uya} and three-loop beta functions for the SM couplings found in \cite{Mihaila:2012fm,Mihaila:2012pz,Chetyrkin:2012rz,Chetyrkin:2013wya, Bednyakov:2012en,Bednyakov:2013eba} this formula may have a very small theoretical error, $0.07$\GeV, with the latter number coming from ``educated guess'' estimates of even higher order terms (see the discussion in \cite{Bezrukov:2012sa} and more recently in \cite{Shaposhnikov:2013ira}). The main uncertainty in the determination of $M_{\text{crit}}$ is associated with experimental and theoretical errors in the determination of $y_t^\text{phys}$ from available data. Accounting for those, the value of $M_{\text{crit}}$ is about 2 standard deviations from the mass of the Higgs boson observed experimentally at CERN \cite{Aad:2012tfa,Chatrchyan:2012ufa}. The determination of the inflationary indexes accounting for radiative corrections is somewhat more subtle and depends on the way the quantum computations are done (the SM with gravity is non-renormalizable, which introduces the uncertainty). In \cite{Bezrukov:2008ej,Bezrukov:2009db} we formulated the natural subtraction procedure (called ``prescription I'') which uses the field independent subtraction point in the Einstein frame (leading to scale-invariant quantum theory in the Jordan frame for large Higgs backgrounds) and computed $n_s$ and $r$ for the Higgs masses that exceeded $M_{\text{crit}}$ by just a small amount of a few hundreds of MeV\footnote{We also performed the computation with the use of another subtraction procedure (called ``prescription II''), which has a field-independent subtraction point in the Jordan frame \cite{Barvinsky:2008ia,DeSimone:2008ei,Barvinsky:2009fy}.}. We have shown that the values of $n_s$ and $r$ are remarkably stable in this domain and coincide with the tree estimates. However, we did not analyse what happens in the close vicinity of the critical point. Partially, this has been studied in \cite{Allison:2013uaa}, but the peculiar inflationary behaviour found in the present work was not discussed in \cite{Allison:2013uaa}. The aim of the present paper is to study the behaviour of the inflationary indexes close to the critical point. In what follows we will use prescription I. We expect to have qualitatively the same results in prescription II, though the numerical values will be somewhat different. We will see that $n_s$ and $r$ acquire a strong dependence on the mass of the Higgs boson and the mass of the top quark. Thus, if the cosmological observations show that one or both indexes do not coincide with those given by the tree analysis, they will indicate that instead of inequality (\ref{inflcond}) we must have an equality between the Higgs mass and its critical value, $M_h=M_{\text{crit}}$. | The Higgs inflation for $M_h>M_{\text{crit}}$ is a predictive theory for \emph{cosmology}, as the values of the inflationary indexes are practically independent of the SM parameters. 
Near the critical point the situation completely changes, and we get a strong dependence of $n_s$ and $r$ on the precise values of the \emph{inflationary masses} of the top quark and the Higgs boson $m_t^*$ and $M_h^*$. In this regime the Higgs inflation becomes a predictive theory for the \emph{high energy domain of particle physics}, as any deviation of inflationary indexes from the tree values tells us that we are at the critical point, thus fixing the inflationary values of the masses of the top quark and the Higgs boson $m_t^*$ and $M_h^*$. It is amazing that a possible detection of a large tensor-to-scalar ratio $r$ in \cite{Ade:2014xna} gives the inflationary top quark and Higgs boson masses close to their experimental values $m_t$ and $M_h$. This tells us that the uncertainties related to the transition from low to high energies corresponding to the Higgs field $h^* \propto M_P/\xi$ are quite small. We conclude with a word of caution. All results here are based on the assumption of the validity of the SM up to the Planck scale. If this hypothesis is removed, the Higgs inflation remains a valid cosmological theory, but its predictability is lost even far from the critical point. For example, the modification of the kinetic term of the Higgs field at large values of $H$ leads to a considerable modification of $r$ \cite{Germani:2010gm,Germani:2010ux,Nakayama:2014koa} (see also \cite{Kamada:2010qe,Kamada:2012se,Kamada:2013bia} for generalized Higgs inflation with Horndeski type terms). The change of the structure of the Higgs-gravity interaction to, for instance, \be M_P^2 R \sqrt{1+\xi |H|^2/M_P^2}, \ee will make the potential in the Einstein frame quadratic with respect to the field $\chi$ and thus would modify $r$ and $n_s$, making them the same as in chaotic inflation with a free massive scalar field. Another assumption is about the absence of operators suppressed by the Planck scale (or various tree level unitarity violation scales \cite{Bezrukov:2010jz}), which may be justified by a special scale (or shift in the Einstein frame) symmetry of the gravitational physics. Adding them would change the inflationary physics, cf.\ \cite{Branchina:2013jra} for the importance of such terms for the stability of the electroweak vacuum. While this paper was in preparation, the article \cite{Hamada:2014iga} appeared, where the possibility to have a large value of $r$ for the Higgs inflation close to the critical point was also pointed out. \bigskip The authors would like to thank CERN Theory Division, where this paper was written, for hospitality. We thank Dmitry Gorbunov for helpful comments. The work of M.S. is supported in part by the European Commission under the ERC Advanced Grant BSMOXFORD 228169 and by the Swiss National Science Foundation. | 14 | 3 | 1403.6078 |
1403 | 1403.4885_arXiv.txt | Interaction between a central outflow and a surrounding wind is common in astrophysical sources powered by accretion. Understanding how the interaction might help to collimate the inner central outflow is of interest for assessing astrophysical jet formation paradigms. In this context, we studied the interaction between two nested supersonic plasma flows generated by focusing a long pulse high-energy laser beam onto a solid target. A nested geometry was created by shaping the energy distribution at the focal spot with a dedicated phase plate. Optical and X-ray diagnostics were used to study the interacting flows. Experimental results and numerical hydrodynamic simulations indeed show the formation of strongly collimated jets. Our work experimentally confirms the ``shock-focused inertial confinement" mechanism proposed in previous theoretical astrophysics investigations. | 14 | 3 | 1403.4885 |
||
1403 | 1403.1372_arXiv.txt | {} {Kepler-9 was the first case where transit timing variations have been used to confirm the planets in this system. Following predictions of dramatic TTVs -- larger than a week -- we re-analyse the system based on the full Kepler data set.} {We re-processed all available data for Kepler-9 removing short- and long-term trends, measured the times of mid-transit and used those for dynamical analysis of the system.} {The newly determined masses and radii of Kepler-9b and -9c change the nature of these planets relative to the one described in Holman et al. 2010 (hereafter H10) with very low, but relatively well characterised (to better than 7\%), bulk densities of 0.18 and 0.14 g cm$^{-3}$ (about 1/3 of the H10 value). We constrain the masses (45.1 and 31.0 M$_\oplus$, for Kepler-9b and -9c respectively) from photometry alone, allowing us to see possible indications for an outer non-transiting planet in the radial velocity data. At $2R_\oplus$ Kepler-9d is determined to be larger than previously suggested -- indicating that it is a low-mass low-density planet.} {The comparison between the H10 analysis and our new analysis suggests that small formal errors in the TTV inversion may be misleading if the data do not cover a significant fraction of the interaction time scale.} | Transit timing variations (TTVs) are deviations from strict periodicity in extrasolar planetary transits, caused by non-Keplerian forces -- usually the interaction with other planets in the system. These TTVs are particularly important in multi-transiting systems since they can allow learning about the dynamics of the system, which in turn can confirm the exoplanetary origin of the transit signals with no further observations (e.g. \citealt{2010Sci...330...51H}, H10 hereafter, or \citealt{2013ApJS..208...22X}), and sometimes even allow deriving the planets' mass from photometry alone \citep[Kepler-87,][]{2014A&A...561A.103O}. For these reasons TTVs have attracted a lot of attention since they were first predicted by \citet{2005Sci...307.1288H} and \citet{2005MNRAS.359..567A}, and especially since they were first observed by H10 in the prototypical Kepler-9 system. Kepler-9 is prototypical not just because it was the first object detected with TTVs, but also since it is a textbook-like example of TTVs: exhibiting very large TTVs on very deep transits, making the effect abundantly clear. The first study of the Kepler-9 system also included a prediction for the expected TTVs during the following few years (their Figure S4) which included dramatic TTV spanning up to about $^{+4}_{-8}d$ relative to the nominal ephemeris, accumulated over long interaction time scales (e.g. $\sim 1000d$ from first maximum to first minimum TTV excursion of Kepler-9c). These very large TTVs are easy to compare to the observed ones in later Kepler data. Indeed by the time we re-analysed this object much more data were available, revealing that the actual TTVs, while still large, were much less extreme than initially predicted. We observed TTV spans of about $^{+0.6}_{-0.9}d$ for the same features as above, and a TTV time scale about half as long as predicted. This prompted us to revisit the analysis of Kepler-9. This paper is therefore organised as follows: in sections \S \ref{photometry} and \S\ref{modeling} we describe the input data and TTV analysis procedures we used.
In \S \ref{recovery} we made sure we are able to recover the H10 results when using only the data that was available at the time, showing consistent analysis, which then allowed us to perform a full analysis using the full data set in \S \ref{FullTTV}, before discussing the updated analysis in \S \ref{Discussion}. | \label{Discussion} \subsection{Partial vs. full dataset} We re-analysed the Kepler-9 system using both the partial Kepler data set that was available to H10 and the full data set available today. The comparison between the previous and new results shows that a very good fit to a planetary system in a first-order mean-motion resonance can be misleading if only a fraction of the interaction time scale is covered. Even the much longer currently available Kepler data set might not be sufficiently long for that. We therefore follow H10 and extrapolate our best-fit model into the future (Fig.\,\ref{FigTTV} and Table\,\ref{TTVFuture}). Given the large TTVs, ground-based observations even with a marginal detection of the transit should be able to check the solution proposed in this work. H10 could confirm Kepler-9b and Kepler-9c as planets from photometry alone, but could only place weak constraints on their masses without using RV data. They therefore included a few RV measurements in their fit, and it comes as no surprise that the RV fit is good since the partial photometry of the time did not have the constraining power to match the RV data. They also predicted that future Kepler data would be more constraining of the planetary masses, and indeed our results have smaller formal error bars on both planets' masses from photometry alone. We note, however, that the systematic residuals shown in Fig.\,\ref{FigTTVres}, and especially in Fig.\,\ref{FigRVNewres}, cause us to warn of unmodeled phenomena, such as other planets in the system or longer time-scale interaction between the planets or stellar activity. \subsection{The revised planets} The scaled radii $r_{b,c}/R_*$ we determined are slightly larger than the ones obtained by H10 by $\sim 3 \sigma$ and $\sim 4.5 \sigma$ for Kepler-9b and Kepler-9c, respectively. The new values are much more constrained with formal errors 5 to 8 times smaller. Actually, Kepler's data allow us in principle to determine the planets' masses to $2.8\%$ and their radii to better than 0.2\% -- but these determinations are limited by our knowledge of the host star properties. Furthermore, Kepler-9 was measured in short cadence mode (1 minute sampling instead of the regular 30-minute sampling) starting from Quarter 7, which allows for an even better timing precision (and thus mass determination). While we did not use short cadence data, using these data would have had little effect on the global uncertainty, which is dominated by stellar parameter errors. The newly determined masses and radii of Kepler-9b and -9c change the nature of these planets relative to the one described in H10. Both planets are now determined to have sizes similar to Jupiter's but they are 7 to 10 times less massive than Jupiter, i.e. have densities about 1/3 of the density given in H10. Consequently, both planets have very low derived densities of $\rho_b\simeq0.18\,g\,cm^{-3}$ and $\rho_c\simeq0.14\,g\,cm^{-3}$ -- among the lowest known. H10 specifically excluded coreless models for the planets, but the more abundant data we have today forces us to consider that Kepler-9b and -9c may have no cores at all.
This result is of special interest in the context of the core accretion theory \citep{1996Icar..124...62P}: with masses of 30.6 and 44.5 $M_\oplus$ these planets had apparently just started their runaway growth when it stopped at this relatively rare intermediate mass. Figure\,\ref{FigRM.ps} shows the masses and radii of lower-mass ($M<100 M_\oplus$) planets that have both mass and radius known to better than $3\,\sigma$ \footnote{Extracted from the NASA Exoplanet Archive (\texttt{http://exoplanetarchive.ipac.caltech.edu/}) on January 21, 2014}. It is evident that the new locations of Kepler-9b and -9c put them at the edge of the mass-radius distribution, with very low density and in a mass range that is very poorly sampled, and yet -- both planets are now among the best-characterized exoplanets, with bulk densities known to $7\%$ or better. The recent successful launch of the GAIA mission further highlights that last point: the knowledge about both Kepler-9b and -9c in both radius and mass is limited by the knowledge about their host star. GAIA's observations will fix Kepler-9's properties to high precision, allowing the use of other data (such as the available short cadence data) to further reduce the uncertainty on the physical parameters of Kepler-9b and -9c, and significantly so. Finally, we note that Kepler-9d is now determined to have a radius of $2.00 \pm 0.05 R_\oplus$, an increase relative to H10. The increased size, together with the low metal content of its neighboring planets, suggests that Kepler-9d may not be rocky, or at least that it may have a significant volatiles fraction, again unlike the initial suggestion by H10. If this is true, then Kepler-9d is perhaps similar to the new and exciting subgroup of low-mass low-density planets \citep[e.g. Kepler-87c or GJ\,1214,][]{2014A&A...561A.103O,2009Natur.462..891C,2013ApJ...775...80F}. \begin{figure} \includegraphics[width=0.5\textwidth]{Mass-radius.ps} \caption{ The mass-radius distribution of all well-determined planets (both mass and radius determined to better than 3$\sigma$). For each planet the mass- and radius-semi-major axes represent the 1-$\sigma$ error bars, and the transparency is such that better-determined planets are more opaque. Contours of constant bulk density are shown in dashed gray lines. The names of some of the better-determined planets are indicated. All planets are shown in shades of blue, except Kepler-9, which is shown in shades of red: larger (and more transparent) symbols for the H10 values, and smaller (and more opaque) symbols for the current study's values. Solar system planets are shown as letters.} \label{FigRM.ps} \end{figure} | 14 | 3 | 1403.1372
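As a quick arithmetic cross-check of the quoted bulk densities, one can invert $\rho = 3M/(4\pi R^3)$. This is a sketch only; the one-Jupiter-radius input is an assumption based on the statement above that both planets are now Jupiter-sized.

```python
import math

M_EARTH_G = 5.972e27    # g
R_JUP_CM = 7.149e9      # cm

def bulk_density(mass_earth, radius_jup):
    """Bulk density in g cm^-3 from a mass in Earth masses and a radius in Jupiter radii."""
    m = mass_earth * M_EARTH_G
    r = radius_jup * R_JUP_CM
    return 3.0 * m / (4.0 * math.pi * r**3)

# Kepler-9b with the 45.1 M_Earth mass quoted in the abstract and R ~ 1 R_Jup (assumed):
print(round(bulk_density(45.1, 1.0), 2), "g cm^-3")   # ~0.18, matching the quoted value
```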
1403 | 1403.1199_arXiv.txt | {} {Most gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Space Telescope exhibit a delay of up to about 10 seconds between the trigger time of the hard X-ray signal as measured by the Fermi Gamma-ray Burst Monitor (GBM) and the onset of the MeV-GeV counterpart detected by the Fermi Large Area Telescope (LAT). This delay may hint at important physics, whether it is due to the intrinsic variability of the inner engine or related to quantum dispersion effects in the velocity of light propagation from the sources to the observer. Therefore, it is critical to have a proper assessment of how these time delays affect the overall properties of the light curves.} {We cross-correlated the 5 brightest GRBs of the 1st Fermi LAT Catalog by means of the continuous correlation function (CCF) and of the discrete correlation function (DCF). The former is suppressed because of the low number counts in the LAT light curves. A maximum in the DCF suggests there is a time lag between the curves, whose value and uncertainty are estimated through a Gaussian fitting of the DCF profile and light curve simulation via a Monte Carlo approach.} {The cross-correlation of the observed LAT and GBM light curves yields time lags that are mostly similar to those reported in the literature, but they are formally consistent with zero. The cross-correlation of the simulated light curves yields smaller errors on the time lags and more than one time lag for GRBs 090902B and 090926A. For all 5 GRBs, the time lags are significantly different from zero and consistent with those reported in the literature, when only the secondary maxima are considered for those two GRBs.} {The DCF method proves the presence of (possibly multiple) time lags between the LAT and GBM light curves in a given GRB and underlines the complexity of their time behavior. While this suggests that the delays should be ascribed to intrinsic physical mechanisms, more sensitivity and more statistics are needed to assess whether time lags are universally present in the early GRB emission and which dynamical time scales they trace.} | Gamma-ray bursts (GRBs) are the most powerful explosions in the Universe. They have observed peak luminosities at $\sim$100 keV of $\sim10^{50}-10^{53}$ erg/s and integrated isotropic energy outputs in 10-1000 keV of $\sim10^{51}-10^{54}$ erg, and they are detected up to the very early Universe: about a dozen have measured redshifts higher than 4 \citep{coward2013}. A small fraction of GRBs exhibit emission at MeV-GeV energies, which was first detected by CGRO-EGRET, more recently by the AGILE-GRID \citep[in the 30 MeV-50 GeV energy range,][]{marisaldi2009}, and, with more detail and accuracy, by the Large Area Telescope \citep[LAT,][]{atwood2009} instrument (20 MeV-300 GeV) onboard the Fermi Gamma-ray Space Telescope. Possible interpretations have been given to explain the paucity of GRBs detected by the LAT \citep{ghisellini10,guetta11,longo2012}. The Gamma-ray Burst Monitor \citep[GBM,][]{meegan2009} onboard the Fermi Gamma-ray Space Telescope, operating at energies between 8 keV and 40 MeV, complements the LAT.
The comparison between the Fermi GBM and LAT light curves of GRBs shows that the onset of the emission of long GRBs above 100 MeV is systematically delayed by a few seconds with respect to the start of the GBM signal at hundreds of keV energies and by a fraction of a second in the case of short and hard GRBs \citep[][left panel of their Fig.~2]{abdo2009a,abdo2009b,abdo2009c,giuliani10,delmonte2011,ackermann2011,ackermann2013,piron12}. That a delay between the GBM signal onset and the first photon detected by LAT is also observed in the brightest LAT GRBs and below 100 MeV, i.e. when photon statistics are relatively rich, suggests that this delay is physical and not related to purely statistical and instrumental effects. Moreover, based on the GBM light curve, it is impossible to reproduce the delays using purely statistical methods. It must be said that the statistical contribution is taken into account in the estimate of the uncertainty of the various temporal parameters, but no correction is made to the measurement, because no plausible high energy emission model would justify such a correction (R. Bellazzini, private communication). Two possible physical explanations for this delay have been proposed. One invokes different emitting regions and mechanisms for the radiation detected by the GBM and LAT. It is plausible to expect a measurable delay if the $\sim$100 keV emission represents the prompt event produced via internal shocks \citep{mr99}, and the LAT-detected signal is an aftermath \citep{ghirlanda10}. The other explanation envisages energy-dependent variation in the speed of light according to quantum gravity (QG) theory \citep{grbgac,nemiroff}. It is assumed that the photon momentum is an analytic function of the energy alone. This can be expanded in a Taylor series, whose linear term is non-zero, to recover the classical (non-QG) dispersion relation as the low energy limit. Under these assumptions, if we consider a source that produces both high energy ($E_{high}$) and low energy ($E_{low}$) photons, the difference $\Delta t$ in the arrival times between low and high energy photons is proportional to the ratio between the photon energy difference ($\Delta E=E_{high}-E_{low}$) and the characteristic QG mass ${\rm M_{QG}}$: $\Delta t= \Delta E/(M_{QG} c^2) \times D/c$, where $D$ is the distance of the source and $c$ the speed of light. This idea can be promisingly tested by accurate arrival-time measurement coupled with the build-up of a small effect over the huge travel times for the photons from GRBs. These time delays have been used to set lower limits on the QG mass scale, of the order of the Planck mass. Possible tests of QG in GRBs have been recently proposed \citep{pavlopoulos2005,gac1,gac2,gac3,vasileiou2013,couturier2013}. The insufficient accuracy of our understanding of the physical models, together with the Fermi LAT number statistics, makes it impossible to distinguish between these two scenarios without overinterpreting the data. However, if either interpretation for the time lags between LAT and GBM emission is correct (i.e., afterglow vs prompt emission or QG), we would expect that the delay affects the entire light curve, not only the first detected photons. Before speculating on competing models that can explain the delays, it is therefore necessary to ascertain their presence and significance over the whole GRB evolution in the GBM and LAT energy ranges \citep[e.g.,][]{delmonte2011}.
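For orientation, the linear dispersion relation above can be evaluated numerically. The sketch below is illustrative only: it ignores the cosmological redshift integral that a rigorous treatment requires, sets $M_{QG}$ equal to the Planck mass, and uses a representative light-travel distance.

```python
# Order-of-magnitude estimate of Delta_t = (Delta_E / M_QG c^2) * (D / c),
# ignoring cosmological corrections to the light-travel distance (assumption).
C = 2.998e10                            # cm/s
GEV_ERG = 1.602e-3                      # 1 GeV in erg
M_PLANCK_ERG = 1.221e19 * GEV_ERG       # Planck mass, ~1.22e19 GeV, in erg

def qg_delay(delta_e_gev, distance_gpc, m_qg_erg=M_PLANCK_ERG):
    d_cm = distance_gpc * 3.086e27      # 1 Gpc in cm
    return (delta_e_gev * GEV_ERG / m_qg_erg) * (d_cm / C)

print(qg_delay(10.0, 2.0), "s")         # ~0.2 s for a 10 GeV photon from ~2 Gpc
```

Even for GeV photons the expected delay is a fraction of a second, so distinguishing it from intrinsic source delays of seconds requires the full light-curve analysis pursued here.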
In this work we search for delays of the LAT signal with respect to the GBM signal in the five brightest LAT GRBs by cross-correlating all of the LAT and GBM light curves. Our methodology is analogous to that of \citet{ackermann2013c}, who recently applied the DCF to cross-correlate the keV and MeV-GeV light curves of the bright Fermi LAT-detected GRB~130427A. A similar approach was followed by \citet[][]{delmonte2011}. \citet{scargle2008} used a different method assuming a model for the time delay. We adopt both the continuous and the discrete correlation function (DCF) methods. The DCF was introduced by \citet{ek88} to correlate discrete time series such as the light curves of active galactic nuclei (AGNs), and it was also applied to GRB light curves \citep[e.g.,][]{pian2000}. Standard correlation function techniques usually require continuous signals, and therefore data interpolation, gap filling, and smoothing \citep[e.g.,][]{scargle2010}. This in turn implies the suppression of possible rapid variability events, which are frequent in sources like AGNs and GRBs. The primary motive for using the DCF method here is that the observed light curves of GRBs are discrete and can have different and independent sampling rates. For regular and dense time samplings, the DCF yields identical results to a continuous correlation function. A maximum of the correlation function should indicate a direct correlation of the data trains with a delay at the corresponding time. However, this could also be spuriously produced or influenced by statistical fluctuations. To quantify the significance of each time lag found using the DCF and to determine whether our results are affected by the low signal-to-noise (S/N) values, we randomly generate $N\gg1$ GBM and LAT light curves according to \cite{peterson1998}. We correlate them pairwise using the DCF method, and for each pair of simulated light curves we estimate the lag at which the DCF peak occurs. The distribution of time lags resulting from the $N$ DCFs yields an independent estimate of the time lag itself and of its uncertainty. The paper is organized as follows. In Sect.~\ref{par:sample} we introduce the GRB sample and describe the LAT and GBM data analysis. In Sects.~\ref{par:CCFmethod} and \ref{par:DCFmethres} we report our results from the application of the CCF and DCF methods, respectively. In Sect.~\ref{par:DCFsimulations} we report the results of the Monte Carlo simulations and in Sect.~\ref{par:disc_concl} discuss our findings. | \label{par:disc_concl} Motivated by the detection of time delays between the onset of LAT and GBM signals in Fermi-detected GRBs, we have adopted the DCF method to cross-correlate the LAT and GBM light curves of the five brightest LAT GRBs in the first Fermi LAT GRB catalog and thus to estimate the delay between the arrival times of MeV-GeV and $\sim$100-keV energy photons over the whole time evolution of the GRBs. We searched for delays both in the observed light curves and in light curves that were randomly generated via Monte Carlo simulations. From the DCF of the observed light curves, we derived the time lags using a constant plus an asymmetric Gaussian approximation of the DCF maximum and determined the formal errors associated with a Gaussian fit (Table~\ref{table:fit}, Cols.~2-6). The reliability of these uncertainties depends on the correctness of the Gaussian approximation of the DCF profile around its maximum.
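For reference, the core of the binned DCF of \citet{ek88} can be sketched in a few lines of code. This is a bare-bones illustration: the measurement-error correction in the normalisation and the exact binning used in the analysis are omitted.

```python
import numpy as np

def dcf(t1, f1, t2, f2, lag_bins):
    """Minimal Edelson & Krolik (1988)-style discrete correlation function.
    Sketch only: no measurement-error correction in the normalisation."""
    a = (f1 - f1.mean()) / f1.std()
    b = (f2 - f2.mean()) / f2.std()
    udcf = np.outer(a, b)                 # unbinned pairwise correlations
    dt = t2[None, :] - t1[:, None]        # all pairwise lags of curve 2 vs curve 1
    out = []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        sel = (dt >= lo) & (dt < hi)
        out.append(udcf[sel].mean() if sel.any() else np.nan)
    return np.array(out)
```

Because no interpolation is involved, unevenly and independently sampled LAT and GBM light curves can be fed in directly, which is the main advantage stressed above.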
For each GRB, we also performed the individual DCFs of simulated light curves and estimated the associated time lags, analogously to what was done for the DCF obtained by using the observed light curves. This allowed us to independently estimate both the time lag and its uncertainty as the average and the square root of the variance of the Gaussian fit function, respectively (Table~\ref{table:fit}, Cols.~7 and 8). The estimates obtained by adopting the time lag distributions are more robust than those obtained by using the individual DCF that results from the observed light curves. This is because the estimates derived from the individual DCFs are based on a statistical approximation, rather than on a functional form description of the individual DCFs. The time lags derived from the DCFs of the observed light curves are all formally consistent with zero, although they are mostly similar to those reported in the literature. This result is analogous to what is reported by \cite{delmonte2011}, who, using the cross-correlation approach, do not recover statistical significance for the time delay of $\sim$10 s observed between the start time of the AGILE MCAL and GRID light curves of GRB100724A. When the simulated light curves are cross-correlated and the resulting time lag distributions are fitted with Gaussian functions, in three cases (i.e. GRBs 080916C, 090510A, 110731A) the best-fit time lags are significantly different from zero and compatible with those reported in the literature. For GRB090902B, the main time lag ($\sim$~-2~s) is formally very different from what has been previously reported \citep[9.6 s,][]{abdo2009b}, but not significantly different from zero. For GRB090926A, the main time lag (2.3 s) is consistent with the one reported by \cite{ackermann2011}, but again not significantly different from zero. However, GRBs 090902B and 090926A also have secondary maxima in their time lag distributions. For both GRBs they correspond to time lags that are significantly different from zero and similar to those previously reported in the literature. We note that the secondary maxima become less significant when only the satisfactory fits ($\chi^2{\rm /dof}\lesssim3$) are retained. The reason may be that, since the fits corresponding to the secondary maxima generally have a more limited significance than those associated with the primary maxima, the DCF curves where they are more prominent have a complex morphology and are not well fit. The presence of these secondary maxima suggests a complexity in the time behavior of the gamma-ray signals. While in general our results suggest that the cross-correlations are influenced by the observed initial delays between the LAT and GBM light curves, they also show other time scales and suggest that these delayed LAT signal onsets are probably due to intrinsic physics. A systematic cross-correlation analysis on a bigger sample than used here may set better constraints on the physical origin of the time delays. | 14 | 3 | 1403.1199 |
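The Monte Carlo step summarised above can likewise be sketched schematically. The snippet reuses the dcf() function from the previous sketch and implements only a simple flux-randomisation loop, which is a simplification of the light-curve simulation actually performed following \cite{peterson1998}.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag_distribution(t1, f1, e1, t2, f2, e2, lag_bins, n_sim=1000):
    """Spread of DCF peak lags from flux-randomised light curves (sketch only)."""
    lags = 0.5 * (lag_bins[:-1] + lag_bins[1:])
    peaks = []
    for _ in range(n_sim):
        # perturb each flux point within its measurement error
        g1 = f1 + rng.normal(0.0, e1)
        g2 = f2 + rng.normal(0.0, e2)
        d = dcf(t1, g1, t2, g2, lag_bins)   # dcf() as sketched earlier
        peaks.append(lags[np.nanargmax(d)])
    peaks = np.asarray(peaks)
    return peaks.mean(), peaks.std()        # time lag and 1-sigma uncertainty
```

The mean and standard deviation of the peak-lag distribution play the role of Cols.~7 and 8 of Table~\ref{table:fit}; a multimodal distribution would signal the secondary maxima discussed above.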
1403 | 1403.3225_arXiv.txt | We study the occurrence of cuspy events on a light string stretched between two Y-junctions with fixed heavy strings. We first present an analytic study and give a solid criterion to discriminate between cuspy and noncuspy string configurations. We then describe a numerical code, built to test this analysis. Our numerical investigation allows us to look at the correlations between the string network's parameters and the occurrence of cuspy phenomena. We show that the presence of large-amplitude waves on the light string leads to cuspy events. We then relate the occurrence of cuspy events to features like the number of vibration modes on the string or the string's root-mean-square velocity. | Cosmic strings~\cite{Kibble, Vilenkin_shellard, Hind_Kibble, ms-cs07} can arise as a result of phase transitions followed by spontaneous symmetry breaking in the early Universe. Such one-dimensional false vacuum remnants were shown~\cite{Jeannerot:2003qv,ms-cs08} to be generically formed at the end of hybrid inflation within the context of grand unified theories. The evolution of a cosmic string network has been at the core of many analytical and numerical studies. It has been long known and well-accepted that long strings enter the {\sl scaling regime}, rendering a cosmic string network cosmologically acceptable. Much later it was also shown~\cite{Ringeval:2005kr}, by means of numerical simulations, that cosmic string loops in an expanding universe also achieve a scaling solution, and an analytical model has been proposed~\cite{Lorenz:2010sm} to derive the expected number density distribution of cosmic string loops at any redshift, from soon after the time of string formation to today. Cosmic superstrings~\cite{PolchRevis,Sakellariadou:2008ie}, the string theory analogues of the solitonic strings, are generically formed \cite{Sarangi:2002yt} at the end of brane inflation. In contrast to the abelian field theory strings which can only interact through intercommutation and exchange of partners with probability of order unity, collisions of cosmic superstrings typically occur with probabilities smaller than unity and can lead to the formation of Y-junctions at which three strings meet~\cite{PolchProb,cmp,JJP}. This characteristic property of cosmic superstrings is of particular interest since it can strongly affect the dynamics of the network evolution~\cite{Sakellariadou:2004wq,TWW,NAVOS,Davis:2008kg,PACPS} leading to potentially observable phenomenological signatures~\cite{PolchRevis,Davis:2008kg,CPRev,ACMPPS}. The effect of junctions on the evolution of cosmic superstring networks was the central subject of several numerical~\cite{Rajantie:2007hp,Urrestilla:2007yw,Sakellariadou:2008ay,Bevis:2009az} and analytical~\cite{Sakellariadou:2004wq,TWW,NAVOS,Copeland:2006eh,Davis:2008kg,Copeland:2006if,Copeland:2007nv,ACMPPS,PACPS} studies. One of the most important channels of radiation emission from cosmic (super)strings is gravity waves~\cite{Vachaspati:1984gt,Sakellariadou:1990ne,Damour:2000wa,Damour:2001bk,Brandenberger:2008ni,Abbott:2009rr,Olmez:2010bi,Binetruy:2010cc,Regimbau:2011bm}. They can be emitted either as bursts, namely by cusps and kinks, or as a stochastic background. To estimate the emission of gravity waves from cosmic (super)strings it is therefore crucial to evaluate the influence of some parameters, such as the interstring distance, the coherence distance and the wiggliness, on the number of cusps.
It is usually assumed that cusps appear on the string and their number is just considered as a free and unknown parameter, to be estimated, for example, from numerical simulations. The aim of this analysis is to roughly evaluate the occurrence of cusps on a string network and in particular to relate the probability of cusp formation to the relevant string parameters. In what follows, we present first an analytical and then a numerical study of a string stretched between two junctions, and its periodic non-interacting evolution. We consider the specific configuration of two equal-tension heavy strings linked by a light string. As explained in the following, the conclusions drawn in such a case can be generalised to realistic string configurations under certain circumstances which we discuss in Section~\ref{sec:setup}. We estimate the influence of the string parameters on the average number of cuspy events appearing on the string during its evolution. In particular, we first look at the periodicity requirements and symmetries on the string, in order to allow for a Fourier decomposition. An analytical study then draws a link between waves and cuspy phenomena on the string, where by cuspy phenomena we mean both cusps and \emph{pseudocusps}. Recall that the former are points on the string reaching temporarily the speed of light $c = 1$. The latter are highly relativistic configurations close to cusps but reaching a velocity between $10^{-3}$ and $10^{-6}$ below $c$. We then present our numerical simulation which allows us to draw a specific string configuration and to subsequently compute the number of cusps and pseudocusps within a period of a non-interacting evolution. Finally, we discuss our results with respect to two parameters, one that sets the interstring distance and another one that measures the waviness of the string --- that is, how many large-amplitude waves are on the string and how large they are. | Gravitational waves, even though they have yet to be observed, are at the center of attention. They are the next tool for cosmology and high energy astrophysics and should soon give us a stream of new data to analyse. Similarly, cosmic strings are thought to be unavoidable in most of the cosmic scenarios and should provide insight into the symmetry breaking they are remnants of or the theory to which they belong. In this study, we have concentrated on a particular configuration made of a light string stretched between two junctions with heavy strings. It is important to note that even if we considered simplifying assumptions, the overall behaviour and the results should carry over to more realistic configurations as long as the end points of the light string can be seen as fixed during a period of oscillation. We then looked at highly relativistic points since they are sources of high-frequency bursts of gravity waves. Such cuspy events appear on a string when the left- and right-movers' velocities are temporarily equal (or approximately equal), making them reasonably easy to identify. We split them into two classes: the actual cusps, resulting from crossings of the two movers' velocity curves and hence reaching momentarily the speed of light $c=1$, and the so-called pseudocusps, resulting from a close approach between the two curves and hence reaching highly relativistic velocities, typically below $c=1$ by $10^{-3}$ to $10^{-6}$.
Since cuspy events emit large amounts of energy in the form of gravitational wave bursts, to estimate the signal that could be detected in the neighbourhood of the Earth by ground- and space-based detectors, one needs to know how frequently they occur. We have here aimed to quantify this and analyse it in terms of the parameters characterising the string configuration, as well as the string network through the usual network parameters $\xi$ and $\bar \xi$ (but not~$\zeta$). Our analytical approach allowed us to identify the symmetries of the problem. Indeed, because of the boundary conditions, the string moves (almost) always periodically. In addition, on the unit sphere, the left- and right-movers' velocities are symmetric with respect to the axis parallel to the heavy strings. This simplifies the problem enough to evaluate the frequency of cusps and pseudocusps on the string with respect to a few parameters. We found that cusps should be frequent for strings satisfying (see Eq.~(\ref{eq:criterion})): \be \langle a_x' a_x' \rangle_\sigma \gtrsim \frac{1+\alpha}{\alpha} \left( \frac{|\Delta|}{\sigma_{m}} \right)^2~, \nonumber \ee where ${\bf a}$ is the left-mover on the string, $|\Delta|$ the end-to-end vector's norm and $x$ its direction (the subscript $x$ thus referring to the projection on the $x$-axis), $\sigma_m$ the parameter length of the string and $\alpha$ a parameter we subsequently estimated around $\alpha \simeq 4.1$. It is important to notice that such cuspy strings should present many large-amplitude waves. We then used a simulation to get a statistically significant number of strings within a range of parameters, in order to check this behaviour. The set of $237$ strings we obtained presents $8719$ cusps and $4659$ pseudocusps, i.e. slightly more than half as many pseudocusps as cusps --- as roughly expected. We analysed the occurrence of cuspy events with respect to several other features, confirming our analytical work and the general behaviour of such strings. In particular, we first checked that our characterisation of pseudocusps from the minimal angle between the two curves on the unit sphere is relevant. For instance, the velocity we obtained from this description is very close to the one obtained directly from the simulation (within grid and computational inaccuracies). In addition, the presence of cusps and pseudocusps increases according to the inequality Eq.~(\ref{eq:criterion}), giving us an accurate tool to discriminate between cuspy and non-cuspy strings. More importantly, it also depends on the number and amplitude of the vibration modes in the $x$-direction; this confirms more directly the fact that the wavier a string is, the more cuspy events it presents. We also analysed the influence of the RMS velocity on the string: as one could expect, the more energy there is on the string, the more cusps appear. This is consistent with the fact that more vibrating modes imply more cusps, since both indicate more energy. Finally, we found that the radius of curvature along the string is also correlated with the number of cusps and pseudocusps, favouring again the mentioned behaviour (a smaller radius of curvature is equivalent to more waves, which are in turn linked to more cusps). Expressing the usual network parameters in terms of our simulation's parameters, we refined the link between the numerical description and the way cosmic string networks are traditionally pictured.
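For illustration, the criterion above translates into a one-line numerical test once $a_x'(\sigma)$ is tabulated; the mode amplitude and end-to-end distance in the example below are made up for the sketch.

```python
import numpy as np

ALPHA = 4.1   # empirical parameter quoted in the text

def is_cuspy(ax_prime, sigma, delta):
    """Test <a_x' a_x'>_sigma >= (1+alpha)/alpha * (|Delta|/sigma_m)^2.
    ax_prime: tabulated a_x'(sigma); delta: end-to-end distance |Delta| (along x)."""
    sigma_m = sigma[-1] - sigma[0]
    lhs = np.trapz(ax_prime**2, sigma) / sigma_m    # average of a_x'^2 over sigma
    rhs = (1.0 + ALPHA) / ALPHA * (delta / sigma_m)**2
    return lhs >= rhs

# Toy example: a single sine mode on a string of parameter length 1 (illustrative).
s = np.linspace(0.0, 1.0, 512)
print(is_cuspy(0.9 * np.cos(2 * np.pi * s), s, delta=0.5))   # True: cuspy regime
```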
This should allow future work, whether on gravitational waves or on the interacting evolution of the network, to assess, use, and build on these results. | 14 | 3 | 1403.3225
1403 | 1403.3539_arXiv.txt | In the last decade direct detection Dark Matter (DM) experiments have enormously increased their sensitivity and ton-scale setups have been proposed, especially using germanium and xenon targets with double readout and background discrimination capabilities. In light of this situation, we study the prospects for determining the parameters of Weakly Interacting Massive Particle (WIMP) DM (mass, spin-dependent (SD) and spin-independent (SI) cross section off nucleons) by combining the results of such experiments in the case of a hypothetical detection. In general, the degeneracy between the SD and SI components of the scattering cross section can only be removed using targets with different sensitivities to these components. Scintillating bolometers, with particle discrimination capability, very good energy resolution and threshold, and a wide choice of target materials, are an excellent tool for a multitarget complementary DM search. We investigate how the simultaneous use of scintillating targets with different SD-SI sensitivities and/or light isotopes (as in the case of CaF$_2$ and NaI) significantly improves the determination of the WIMP parameters. In order to make the analysis more realistic we include the effect of uncertainties in the halo model and in the spin-dependent nuclear structure functions, as well as the effect of a thermal quenching different from 1. | \label{sec:intro} Weakly Interacting Massive Particles (WIMPs) can be directly detected through their scattering off the target nuclei of a detector\cite{goodman85}. In the last decades, numerous experiments, using different targets and detection techniques, have been searching for WIMPs or are currently taking data. Some of them have searched for distinctive signals, such as an annual modulation in the detection rate: DAMA\cite{bernabei2003} and DAMA/LIBRA\cite{bernabei2008, bernabei2013}, using NaI scintillators, have reported a highly significant signal (9.3$\sigma$) and CoGeNT\cite{Aalseth:2014eft, Aalseth:2014jpa} claimed less significant evidence (2.2$\sigma$) in the first three years of its data, gathered with a Ge semiconductor. Moreover, CoGeNT\cite{Aalseth2011}, CRESST\cite{Angloher2012} (using CaWO$_4$ scintillating bolometers) and CDMS II (with data from its Si detectors)\cite{PhysRevLett.111.251301} have reported excesses of events at low energies that could be compatible with a signal produced by light WIMPs with a mass of the order of 10~GeV. On the other hand, XENON10\cite{Angle2011}, XENON100\cite{Aprile2012}, LUX\cite{2013arXiv1310.8214L} (also based on Xe), the abovementioned CDMS II \cite{Ahmed2010}, EDELWEISS\cite{Armengaud2011, Ahmed2011} (with Ge), KIMS\cite{Kim2012} (with CsI), PICASSO\cite{Archambault2012} (with C$_4$F$_{10}$), SIMPLE\cite{Felizardo2012} (with C$_2$ClF$_5$) and COUPP\cite{Behnke2011} (with CF$_3$I) have obtained negative results setting more stringent upper bounds on the WIMP-nucleon cross sections. Currently the strongest limits are obtained by the LUX collaboration, excluding spin-independent WIMP-nucleon elastic scattering cross sections larger than 7.6$\times$10$^{-46}$~cm$^2$ for a WIMP mass of 33 GeV, and by the SuperCDMS collaboration for low-mass WIMPs \cite{Agnese:2013jaa, Agnese:2014aze}. In the coming years new experiments and upgraded versions of the existing ones are going to explore even smaller cross sections, closing in on DM searches.
The final goal of all these experiments is to determine the nature of DM, measuring some of its properties (namely its mass and interaction cross section with ordinary matter). Signals from different targets are needed, since they can provide complementary information which can lead to a better determination of the DM parameters.\cite{Bertone2007,Pato2011} In a previous paper\cite{cerdeno2013a} we analysed the complementarity of a Ge and a Xe experiment with energy thresholds and resolutions already achieved by the CDMS and XENON100 experiments, respectively, and with background levels expected for their corresponding extensions (SuperCDMS\cite{Sander:2012nia} and XENON1T\cite{Aprile:2012zx}). For different WIMP scenarios, we assumed hypothetical detections with an exposure of 300~kg$\times$yr in both experiments and we concluded that the combination of data from Xe and Ge-based detectors might not lead to a good reconstruction of all the WIMP parameters, since there is a degeneracy in the SI and SD parts of the scattering WIMP-nucleus cross section, and both targets have very similar SI over SD sensitivity (see also Ref.\,\citen{Newstead:2013pea} for a recent study on the non-complementarity of Xe and Ar). We showed that incorporating targets with different sensitivities to SI and SD interactions could significantly improve the reconstruction. We considered the case of some of the most promising scintillating bolometric targets: CaWO$_4$ (currently used by CRESST), Al$_2$O$_3$ and LiF (studied by ROSEBUD\cite{2010idm..confE..54C}, which could be considered in the future as additional targets in EURECA\cite{Kraus:2011zz}, a European collaboration that plans to search for WIMPs with a 1-ton cryogenic hybrid detector). We observed that the inclusion of CaWO$_4$ (being mainly sensitive to SI couplings) only leads to a fully complementary result for a WIMP of 50~GeV in a small region of the plane ($\sigsi,\sigsd$) in which the expected events in Ge and Xe are mainly due to SD interactions. On the other hand, Al$_2$O$_3$ and LiF (being more sensitive to SD interactions) achieve complementarity with germanium and xenon in regions of the parameter space where the rate in the latter is dominated by SI couplings. We also determined the exposures and background levels required by the bolometers to be complementary to Ge- and Xe-based experiments. In this paper we follow the same strategy and reanalyze the role of Ge- and Xe-based experiments in light of the improved (or potential) energy thresholds in CDMS and LUX \footnote{Notice that a threshold as low as 2~keV has been reported in a previous CDMS II analysis \cite{Ahmed:2010wy} although not for a background-free search. In order to simplify the comparison with LUX, we will here assume the same threshold of 3~keV, considering that the new iZIP detectors in SuperCDMS might allow a much better background subtraction.}. We also study the complementarity with two additional targets: CaF$_2$ and NaI. The first one has already been used as a scintillating bolometer\cite{Alessandrello1992,Bobin1997}, whereas the construction of a bolometer based on NaI (which is a hygroscopic and fragile material) is an ongoing R\&D project of the Zaragoza group.\cite{Coron2013} We include in our analysis not only the effect of the previously considered uncertainties in the halo parameters and SD structure functions, but also the possible influence of the thermal quenching between nuclear and electron recoils in the complementarity of these targets.
The structure of this article is as follows: Sec.~\ref{sec:wimpParam} is a short summary of the methodology we follow in reconstructing the WIMP parameters from the (simulated) data in direct detection experiments. In Sec.~\ref{sec:uncertainties} we address the most relevant uncertainties in the analysis, in particular the astrophysical ones (due to our imperfect knowledge of the DM halo of the Milky Way), those related to the SD Structure Functions (SDSF) parametrizing the spin content of the nucleons in the target and, finally, the effect of changing the thermal quenching $q$. In Sec.~\ref{sec:geXe} we present the results for some selected benchmarks when considering only Ge and Xe experiments, finding that the combination of data from these two targets contributes to a better measurement of the WIMP parameters, but a degeneracy between the SD and SI cross sections usually remains. In Sec.~\ref{sec:scintBolo} we describe the characteristics of the scintillating targets under study (i.e. CaF$_2$ and NaI). In Sec.~\ref{sec:results} we show how their inclusion can lead to a better determination of the DM mass and scattering cross section, breaking in some cases the SI-SD degeneracy. Finally, conclusions are presented in Sec.~\ref{sec:conclusions}. | \label{sec:conclusions} Following the work done in Ref.\,\citen{cerdeno2013a}, where we investigated the determination of WIMP parameters ($\mwimp$, $\sigsi$, $\sigsd$) from a hypothetical direct DM detection with multiple targets, in this paper we have extended the analysis to consider the effect of lower thresholds in Ge and Xe targets, as well as the complementarity potential of two new bolometric targets: CaF$_2$ and NaI. We first considered the combination of data from Ge and Xe targets, for both of which we assumed a low energy threshold of $3$~keV to account for recent or projected experimental improvements. We studied two benchmark scenarios, featuring a very light WIMP ($\mwimp$=20~GeV, $\sigsi$=10$^{-9}$~pb, $\sigsd$=10$^{-5}$~pb) in which the SI contribution dominates the detection rate in both Ge and Xe, and a light WIMP ($\mwimp$=50~GeV, $\sigsi$=10$^{-10}$~pb, $\sigsd$=1.5$\times$10$^{-4}$~pb) in which the SD contribution dominates. Although the combination of data from both targets allows a significant improvement in the reconstruction of DM parameters, a degeneracy in the ($\sigsi$, $\sigsd$) plane usually remains at the points in the parameter space where both targets have similar SI/SD ratios. Scintillating bolometers, with very good energy threshold and resolution and particle discrimination capability, provide a wide choice of absorber materials that allows one to select interesting targets from the point of view of their complementarity with other experiments. In Ref.\,\citen{cerdeno2013a} we studied how certain bolometric targets (CaWO$_4$, Al$_2$O$_3$ and LiF) could provide complementary information to data from Ge- or Xe-based experiments. In this work we have extended the analysis to two other scintillating targets (CaF$_2$ and NaI), and also considered the effect of an uncertainty in the thermal quenching factor of $\pm$15\%. Both targets are sensitive to the SD component of the WIMP-nucleus interaction (particularly CaF$_2$ thanks to the presence of $^{19}$F). We have shown how the inclusion of one of these targets together with Ge and Xe can help break the degeneracy in the ($\sigsi$, $\sigsd$) plane.
In particular, at the points of the parameter space for which the rate in Ge and Xe is dominated by the SI contribution and the rate in CaF$_2$ is mostly SD, the three DM parameters can be reconstructed. In other examples, although the degeneracy cannot be completely removed, at least one of the components of the WIMP-nucleus scattering cross section can be determined. We have also shown how a small uncertainty in the thermal quenching factor can noticeably modify the parameter reconstruction. | 14 | 3 | 1403.3539
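The SI-SD degeneracy discussed above can be caricatured with a toy counting argument: for a target $T$ the expected number of events is, schematically, $N_T \simeq a_T\,\sigsi + b_T\,\sigsd$, so two targets with nearly proportional coefficient pairs $(a_T, b_T)$ constrain only one linear combination of the two cross sections. The coefficients below are invented for illustration and are not the response factors of the actual analysis.

```python
# Toy event-number coefficients, N = a*sigma_SI + b*sigma_SD (MADE-UP numbers):
# Ge and Xe are chosen nearly proportional; CaF2 has its SD response boosted by 19F.
targets = {"Ge": (4.0e9, 2.0e4), "Xe": (8.0e9, 4.1e4), "CaF2": (5.0e8, 9.0e4)}

def degenerate(t1, t2, tol=0.05):
    """True if the two targets probe (nearly) the same SI/SD combination."""
    (a1, b1), (a2, b2) = targets[t1], targets[t2]
    return abs(a1 * b2 - a2 * b1) / (a1 * b2 + a2 * b1) < tol

print(degenerate("Ge", "Xe"))     # True: similar SI/SD response ratios
print(degenerate("Ge", "CaF2"))   # False: the SD-enhanced target lifts the degeneracy
```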
1403 | 1403.2015_arXiv.txt | Nucleosynthesis beyond Fe poses additional challenges not encountered when studying astrophysical processes involving light nuclei. Astrophysical sites and conditions are not well known for some of the processes involved. On the nuclear physics side, different approaches are required, both in theory and experiment. The main differences and most important considerations are presented for a selection of nucleosynthesis processes and reactions, specifically the $s$-, $r$-, $\gamma$-, and $\nu p$-processes. Among the discussed issues are uncertainties in sites and production conditions, the difference between laboratory and stellar rates, reaction mechanisms, important transitions, thermal population of excited states, and uncertainty estimates for stellar rates. The utility and limitations of indirect experimental approaches are also addressed. The presentation should not be viewed as confining the discussed problems to the specific processes. The intention is to generally introduce the concepts and possible pitfalls along with some examples. Similar problems may apply to further astrophysical processes involving nuclei from the Fe region upward and/or at high plasma temperatures. The framework and strategies presented here are intended to aid the conception of future experimental and theoretical approaches. | \label{sec:intro} Elements up to the Iron peak in the solar abundance distribution can be made in hydrostatic stellar burning processes, whereas heavier elements require more extreme conditions, such as explosions resulting from thermonuclear burning or rapid ejection and decompression of gravitationally condensed and heated material. The former are connected to He-shell flashes in stars with less than 8 solar masses and to type Ia supernovae; the latter are realized in core-collapse supernovae and neutron star mergers. Although nuclear processes under explosive conditions -- depending on the specific phenomenon -- may also produce nuclei around and below the Fe-Ni region, heavier nuclides cannot be produced in conditions encountered in hydrostatic burning. Due to the different temperature and density ranges of explosive burning with respect to hydrostatic burning and due to the fact that heavier nuclei are involved, different aspects of the nuclear processes also have to be studied, and the behavior of reaction sequences is often governed by different considerations than those for lighter nuclei. This is especially important for experimental studies which not only may encounter additional problems when studying heavy nuclei but also have to be adapted in order to extract the data actually important for constraining nucleosynthesis. Nuclear theory also faces different challenges in heavy nuclei than in light ones and has to focus on the prediction of the actually relevant nuclear properties when applied to nucleosynthesis. Finally, a 3-D hydrodynamical simulation of exploding dense matter not only probes the limits of our nuclear models but also is a considerable computational challenge. The combined astrophysical and nuclear uncertainties lead to generally less well constrained conditions for nucleosynthesis in such phenomena and thus to considerable leeway in the interpretation and consolidation of astrophysical models.
The main considerations were presented for a selection of nucleosynthesis processes and reactions. The presentation should not be viewed as confining the discussed problems to the specific processes. The intention was to generally introduce the concepts and possible pitfalls along with some examples. Similar problems may apply to further astrophysical processes involving nuclei from the Fe region upward and/or at high plasma temperatures. The framework and strategies presented here shall aid the conception of experimental and theoretical approaches to further improve our understanding of the origin of trans-iron nuclei. | 14 | 3 | 1403.2015 |
1403 | 1403.1293_arXiv.txt | Explaining the existence of supermassive black holes (SMBHs) larger than $\sim 10^9 M_\odot$ at redshifts $z \ga 6$ remains an open theoretical question. One possibility is that gas collapsing rapidly in pristine atomic cooling halos ($T_{\rm vir} \ga 10^4 \rm{K}$) produces $10^4-10^6 M_\odot$ black holes. Previous studies have shown that the formation of such a black hole requires a strong UV background to prevent molecular hydrogen cooling and gas fragmentation. Recently it has been proposed that a high UV background may not be required for halos that accrete material extremely rapidly or for halos where gas cooling is delayed due to a high baryon-dark matter streaming velocity. In this work, we point out that building up a halo with $T_{\rm vir} \ga 10^4 \rm{K}$ before molecular cooling becomes efficient is not sufficient for forming a direct collapse black hole (DCBH). Though the formation of molecular hydrogen may be delayed, it eventually forms at high densities, leading to efficient cooling and fragmentation. The only obvious way that molecular cooling could be avoided in the absence of strong UV radiation is for gas to reach high enough density to cause collisional dissociation of molecular hydrogen ($\sim 10^4 ~ {\rm cm}^{-3}$) before cooling occurs. However, we argue that the minimum core entropy, set by the entropy of the intergalactic medium (IGM) when it decouples from the CMB, prevents this from occurring for realistic halo masses. This is confirmed by hydrodynamical cosmological simulations without radiative cooling. We explain the maximum density versus halo mass in these simulations with simple entropy arguments. The low densities found suggest that DCBH formation indeed requires a strong UV background. | Observations of high-redshift quasars imply that supermassive black holes (SMBHs) with masses larger than $\sim 10^9 M_\odot$ formed by $z=6$ \citep{2003ApJ...587L..15W,2006NewAR..50..665F, 2011Natur.474..616M}. That such massive black holes can form within the first Gyr after the big bang presents an interesting theoretical problem \citep[for reviews see][]{2013ASSL..396..293H,2010A&ARv..18..279V}. A seemingly natural path towards the formation of these SMBHs would be through the growth of black hole remnants from the first metal-poor (Pop III) stars. However, a $\sim 100 {\rm M_\odot}$ black hole accreting at the Eddington limit with 10 percent radiative efficiency would take roughly the age of the Universe at $z=6$ to reach $3 \times 10^9 {\rm M_\odot}$. Radiative feedback could prevent sustained Eddington-limited accretion over the required time period \citep{2007MNRAS.374.1557J,2009ApJ...701L.133A,2009ApJ...698..766M}. Thus, stellar seeds may not have enough time to grow into the largest SMBHs observed at $z=6$. An attractive alternative for producing the first SMBHs is direct collapse of gas in atomic cooling halos ($T_{\rm vir} \ga 10^4 \rm{K}$) into $10^4-10^6 M_\odot$ supermassive stars or quasi-stellar envelopes which quickly collapse into black holes (e.g. \citealt{2003ApJ...596...34B}; see recent reviews by \citealt{2013ASSL..396..293H,2010A&ARv..18..279V}). This reduces the tension between the required accretion time and the age of the universe by giving black holes a head start in their mass. The main challenge in direct collapse models is to avoid the fragmentation and star formation which can occur through molecular hydrogen or metal cooling.
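The timing argument above can be made explicit with the usual $e$-folding form of Eddington-limited growth; the sketch below assumes constant radiative efficiency and uninterrupted accretion.

```python
import math

T_EDD_MYR = 450.0   # Eddington time, M c^2 / L_Edd, in Myr

def growth_time_myr(m_seed, m_final, eps=0.1):
    """Eddington-limited growth time: t = [eps/(1-eps)] t_Edd ln(M_f/M_i)."""
    return (eps / (1.0 - eps)) * T_EDD_MYR * math.log(m_final / m_seed)

# A 100 Msun seed growing to 3e9 Msun at 10 per cent efficiency:
print(round(growth_time_myr(100.0, 3.0e9)), "Myr")   # ~860 Myr, comparable to t(z=6)
```

The ~17 e-folds required leave essentially no margin for feedback-induced interruptions, which is why a massive head-start seed is attractive.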
We note that even if fragmentation does occur it may still be possible to form a SMBH from collisions in a dense stellar cluster \citep{2008ApJ...686..801O,2012ApJ...755...81M}; however, we do not address that possibility in this paper. A strong ultraviolet (UV) background can prevent molecular hydrogen formation. The simulations of \cite{2010MNRAS.402.1249S} show that, depending on the shape of the spectrum, a background above $J_{\rm crit} \sim 1000$ (where $J_{\rm crit}$ is in units of $10^{-21} {\rm ergs ~ s^{-1} cm^{-2} Hz^{-1} sr^{-1}}$) is required. This critical intensity is much higher than the predicted cosmological mean \citep[see e.g.][]{2013MNRAS.432.2909F}. Thus, DCBHs require a bright galaxy (or galaxies) a very short distance away ($\sim 10 ~ {\rm kpc}$). Although this greatly reduces the number of dark matter halos that could host DCBHs, analytic and semi-analytic calculations still suggest that there may be enough DCBH halos to explain the abundance of SMBHs at $z=6$ \citep{2008MNRAS.391.1961D,2012MNRAS.425.2854A}. However, black hole seeds in these models may still need to accrete at nearly the Eddington limit for a significant fraction of the age of the Universe. Recently DCBH models have been proposed that eliminate the need for a strong UV background. \cite{2012MNRAS.422.2539I} propose that shocked cold flows in atomic cooling halos can reach temperatures and densities high enough to excite the rovibrational levels in molecular hydrogen, enhancing collisional dissociation (the so-called `zone of no return'). The required density and temperature, assuming an initial ionization of $x_{\rm e}=10^{-2}$, are given by \begin{align} T \ga & ~ 6000 ~ \rm{K} (n_{\rm H}/10^4 {\rm cm^{-3}})^{-1} ~ \rm{for} ~ n_{\rm H} \la 10^4 {\rm cm^{-3}}, \nonumber \\ T \ga & ~ 5000 - 6000 ~ \rm{K} ~ \rm{for} ~ n_{\rm H} \ga 10^4 {\rm cm^{-3}}. \end{align} If these temperatures and densities are achieved, the halo contracts and, because atomic hydrogen cooling dominates, the gas temperature stays at $T \sim 10^4 ~ \rm{K}$, preventing fragmentation. While this is an interesting idea, recent numerical simulations \citep{2014arXiv1401.5803F} find that cold filaments shock near a halo's virial radius at relatively low density. \cite{2012MNRAS.426.1159S} also find that molecular cooling occurs in their simulations unless the UV background is very high. Although \cite{2014arXiv1401.5803F} find that cold flow shocks will not reach the zone of no return, they propose that it may be possible to form a DCBH if a halo grows sufficiently quickly such that it reaches the atomic cooling threshold before molecular cooling becomes efficient. Similarly, \cite{2013ApJ...773...83X} suggest that an atomic cooling halo without stars found in their cosmological simulations may correspond to a DCBH. Another related idea is that high baryon-dark matter streaming velocities \citep{2010PhRvD..82h3520T} could delay star formation in halos until they have grown beyond the atomic cooling threshold, leading to DCBH formation \citep{2013arXiv1310.0859T}. In this paper, we point out that without a strong UV background, simply reaching $T_{\rm vir} \ga 10^4~\rm{K}$ before efficient cooling occurs is not sufficient for the formation of a DCBH, since it does not prevent subsequent molecular cooling and fragmentation. However, a DCBH could form in the absence of a UV background if gas achieves a density and temperature high enough to enter the zone of no return.
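For reference, the piecewise threshold above (for the $x_{\rm e}=10^{-2}$ case) can be encoded directly; the 5500~K value used on the high-density plateau is a representative choice within the quoted 5000--6000~K range.

```python
def in_zone_of_no_return(n_h, temp):
    """Piecewise zone-of-no-return threshold quoted above (x_e = 1e-2).
    n_h in cm^-3, temp in K; the 5500 K plateau value is an assumption
    within the quoted 5000-6000 K range."""
    if n_h < 1.0e4:
        return temp >= 6000.0 * (n_h / 1.0e4) ** -1
    return temp >= 5500.0

print(in_zone_of_no_return(1.0e3, 8000.0))   # False: ~6e4 K needed at this density
print(in_zone_of_no_return(1.0e5, 6000.0))   # True
```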
To test this possibility, we run cosmological hydrodynamical simulations without radiative cooling. Both one-zone models \citep{2001ApJ...546..635O, 2002ApJ...569..558O, 2012MNRAS.422.2539I, 2011MNRAS.418..838W} and numerical simulations \citep{2010MNRAS.402.1249S, 2014arXiv1401.5803F} have demonstrated that once efficient atomic cooling is activated outside of the zone of no return and without a strong UV background, molecular cooling will inevitably occur because the $\rm{H}_2$ formation timescale is shorter than the dynamical time. This cooling, in turn, should lead to fragmentation. For this reason we seek to determine if gas can reach the zone of no return before any radiative cooling (H or $\rm{H}_2$) becomes efficient. We find that the maximum densities are several orders of magnitude smaller than the threshold required to suppress molecular cooling. We also find that the maximum density (without radiative cooling) as a function of halo mass can be understood in terms of the core entropy. In fact, from entropy considerations alone, we show that the zone of no return cannot be reached before efficient cooling begins. These results support the idea that a strong UV background, or some other mechanism that continues suppressing molecular hydrogen cooling down to high density \citep[such as enhanced heating, e.g.][]{2010ApJ...721..615S}, is needed for DCBH formation. Throughout we assume a $\Lambda$CDM cosmology consistent with the latest constraints from Planck \citep{2013arXiv1303.5076P}: $\Omega_\Lambda=0.68$, $\Omega_{\rm m}=0.32$, $\Omega_{\rm b}=0.049$, $h=0.67$, $\sigma_8=0.83$, and $n_{\rm s} = 0.96$. | Explaining the existence of $\sim {\rm a~few} \times 10^9 M_\odot$ SMBHs at $z=6$ presents an interesting theoretical challenge. Models based on the growth of remnants from the first stars require nearly continuous Eddington-limited accretion over the entire history of the universe, which seems unlikely given the expected radiative feedback. DCBHs alleviate the tension associated with this timing by forming $10^4-10^6 M_\odot$ black holes in atomic cooling halos, possibly with a short intermediate phase as a supermassive star or quasi-star. We point out that simply delaying molecular cooling until a halo is larger than the atomic cooling threshold is not sufficient for the formation of a DCBH, because it does not prevent fragmentation. In the absence of a high UV background, molecular cooling will still occur as the gas increases in density, leading to fragmentation, as determined by one-zone models and numerical simulations. Thus, we conclude that models which produce DCBHs without a strong UV background by rapid accretion or by delayed cooling from baryon-dark matter streaming velocities are not viable. The only way we can envision DCBH formation without a strong UV background is if gas could reach high enough densities and temperatures to cause collisional dissociation of molecular hydrogen before the run-away process of molecular cooling can occur. We argue that the minimum entropy of the gas will not permit halos near the atomic cooling threshold to reach these high densities. This is confirmed by our cosmological simulations. We find that the maximum density permitted by the entropy floor of the IGM when it decoupled from the CMB falls nearly two orders of magnitude below the zone of no return. We note that throughout we have used the zone of no return for an initial ionization of $x_{\rm e}=10^{-2}$. A lower initial ionization could weaken our conclusions.
However, \cite{2012MNRAS.422.2539I} tested a wide range of initial conditions and even their lowest ionization, $x_{\rm e}=10^{-5}$, still has the zone of no return above the density implied by Eqn.~\ref{density_eqn} (see their fig. 2). Overall, our results motivate additional work on DCBH formation in the presence of a strong UV background, and on the ultimate fate of a dense star cluster produced by the fragmentation process. | 14 | 3 | 1403.1293
1403 | 1403.1400_arXiv.txt | {We present the discovery of a large scale radio structure associated with IGR J17488--2338, a source recently discovered by \emph{INTEGRAL} and optically identified as a broad line AGN at redshift 0.24. At low frequencies, the source properties are those of an intermediate-power FR II radio galaxy with a linear size of 1.4\,Mpc. This new active galaxy is therefore a member of a class of objects called Giant Radio Galaxies (GRGs), a rare type of radio galaxy with physical sizes larger than 0.7\,Mpc; they represent the largest and most energetic single entities in the Universe and are useful laboratories for many astrophysical studies. Their large scale structures could be due either to special external conditions or to uncommon internal properties of the source central engine. The AGN at the centre of IGR J17488--2338 has a black hole of 1.3$\times$10$^9$ solar masses, a bolometric luminosity of 7$\times$10$^{46}$\,erg\,s$^{-1}$ and an Eddington ratio of 0.3, suggesting that it is powerful enough to produce the large structure observed in the radio. The source is also remarkable for other properties, among which are its X-ray absorption, at odds with its type 1 classification, and the presence of a strong iron line, a feature not often observed in radio galaxies. | Powerful extragalactic radio sources are galaxies (and/or quasars) hosting active galactic nuclei (AGNs), which produce jets and extended radio emitting regions (lobes) of plasma. Some of them are characterised by giant structures and are known as giant radio galaxies (GRG), formally those with linear sizes larger than 0.7\,Mpc (e.g. \citealt{lara:2001,Ishwara-Chandra:1999}, scaled for the cosmology adopted here of H$_0$= 71 km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm m}$=0.27, $\Omega_{\Lambda}$=0.73). These objects represent the largest and most energetic single entities in the Universe and it is possible that they play a special role in the formation of large-scale structures. They generally belong to the FR II \citep{fanaroff:1974} radio morphology (edge brightened), have relatively high radio power ($\log P_{\rm 1.4GHz}\,({\rm W\,Hz^{-1}}) \gtrsim 24.5$; \citealt{owen:1989}) and reside in elliptical galaxies and quasars. GRGs are very useful for studying many astrophysical issues, such as understanding the evolution of radio sources, probing the intergalactic medium at different redshifts, and investigating the nature of their central AGN. There are various scenarios which try to explain this phenomenon. For example, GRGs could be very old sources that have had enough time to evolve to such large sizes. Alternatively, they could grow in an intergalactic medium whose density is smaller than that surrounding smaller radio sources, or, instead, their AGNs are extremely powerful and/or long-lived and thus able to produce such large scale structures. Because of their large sizes and only moderate radio power, the surface brightness of GRGs is low. This is why they are so difficult to find even in radio surveys and why the finding of a new member of the class is interesting and useful. Here we report on the discovery and subsequent analysis of the \emph{INTEGRAL} source IGR J17488--2338, which we have identified as an FR II radio galaxy with a linear size of 1.4\,Mpc, thus well above the threshold for it to be a GRG.
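As an illustrative cross-check (our own arithmetic, not a calculation from the paper), the angular scale implied by the adopted cosmology can be computed with astropy; the sketch below back-computes the angular extent corresponding to the quoted 1.4\,Mpc at $z=0.24$:
\begin{verbatim}
from astropy.cosmology import LambdaCDM
import astropy.units as u

cosmo = LambdaCDM(H0=71, Om0=0.27, Ode0=0.73)  # cosmology adopted here
z = 0.24
scale = cosmo.kpc_proper_per_arcmin(z)         # ~226 kpc/arcmin at z = 0.24
theta = (1.4 * u.Mpc / scale).to(u.arcmin)     # angular size of 1.4 Mpc
print(scale, theta)                            # ~6 arcmin end to end
print(1.4 > 0.7)                               # True: satisfies GRG criterion
\end{verbatim}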
In optical terms, the galaxy is a Seyfert 1.5 at redshift 0.24 and the source is also remarkable for other properties, among which are its absorption characteristics, at odds with its type 1 classification, and the presence of an iron line in the X-rays. In particular, the core of this giant radio galaxy is extremely powerful in the X/gamma-rays, is very massive and is able to accrete very efficiently: it is possible that these extreme properties provide the necessary conditions (high jet power or long activity time) to produce the giant radio structure observed in this newly discovered radio galaxy. | We have uncovered an AGN with a large scale radio structure in the newly discovered \emph{INTEGRAL} source IGR J17488--2338. The source, which is clearly an FR II of intermediate power, has a linear size of 1.4\,Mpc, which fully qualifies it as a giant radio galaxy. The source is also remarkable for other properties, among which are its X-ray absorption characteristics, at odds with its type 1 classification, and the presence of an iron line in the X-rays. It is still unclear what reasons or conditions lead to the formation of giant radio galaxies. It could be special external conditions (such as the low density of the intergalactic medium) or uncommon internal properties of the source central engine (like a high jet power or a long activity time). It is likely that none of the mentioned reasons is sufficient in itself and several conditions must actually be satisfied to provide the large scale structures seen in some radio galaxies. In the particular case of IGR J17488--2338, the properties of the central AGN are quite exceptional, suggesting that it may be capable of producing a highly powerful jet or of maintaining the activity over a long period of time; either possibility provides the conditions to form a large scale radio structure. The source is extremely bright in the X/gamma-rays: among a set of 25 radio galaxies detected so far by \emph{INTEGRAL}, this is the brightest object in the sample and also one of the most efficient accretors (Molina et al., in prep.). Like Cygnus A and 4C 74.26, also included in the \emph{INTEGRAL} sample of radio galaxies, IGR J17488--2338 hosts a black hole with a mass greater than 10$^9$ solar masses; coupled with the source extension, this value fits perfectly with the linear relation found by \citet{Kuzmicz:2013} for GRGs, based on the observed linear extensions and the black hole masses derived from the H$\alpha$ emission line. It is interesting to note that 4C 74.26 is also a giant radio galaxy \citep{Ishwara-Chandra:1999} with an extension of 1.9\,Mpc, i.e. very similar to that of IGR J17488--2338. It is therefore possible that the hard X-ray selection made available by \emph{INTEGRAL} and/or \emph{Swift/BAT} allows the detection of the brightest AGN in the sky, and consequently also of the most powerful radio galaxies, i.e. those that are able to produce large scale radio structures. Indeed, among the sample of \emph{INTEGRAL}-detected radio galaxies, 6 (or 24\%) qualify as giant radio galaxies; this fraction is higher than what is generally found using radio surveys, which report fractions in the range 6--11\%, depending on the survey used (\citealt{laing:1983}, \citealt{saripalli:2012}). This suggests that hard X-ray observations can provide a much more efficient way to find giant radio galaxies than radio ones, at least in the local Universe.
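As a rough, illustrative significance check of this last statement (again our own arithmetic, not from the paper), an exact binomial tail gives the chance of finding 6 or more GRGs among 25 objects if the true fraction matched the radio surveys:
\begin{verbatim}
from math import comb

def p_at_least(k, n, p):
    """Exact binomial tail: P(X >= k) for n trials at success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.06, 0.11):
    print(f"P(>=6 of 25 | p = {p:.2f}) = {p_at_least(6, 25, p):.3f}")
\end{verbatim}
The tail probabilities are small ($\approx$0.003 at 6\,\% and $\approx$0.05 at 11\,\%), so the excess is suggestive, though not overwhelming given the sample size.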
A complete analysis of \emph{INTEGRAL} radio galaxies is underway and the results regarding this issue will be presented in a forthcoming dedicated paper. | 14 | 3 | 1403.1400 |
1403 | 1403.6827_arXiv.txt | {We present a new, semi-analytical model describing the evolution of dark matter subhaloes. The model uses merger trees constructed using the method of Parkinson et al. (2008) to describe the masses and redshifts of subhaloes at accretion, which are subsequently evolved using a simple model for the orbit-averaged mass loss rates. The model is extremely fast, treats subhaloes of all orders, accounts for scatter in orbital properties and halo concentrations, and uses a simple recipe to convert subhalo mass to maximum circular velocity. The model accurately reproduces the average subhalo mass and velocity functions in numerical simulations. The inferred subhalo mass loss rates imply that an average dark matter subhalo loses in excess of 80 percent of its infall mass during its first radial orbit within the host halo. We demonstrate that the total mass fraction in subhaloes is tightly correlated with the `dynamical age' of the host halo, defined as the number of halo dynamical times that have elapsed since its formation. Using this relation, we present universal fitting functions for the evolved and unevolved subhalo mass and velocity functions that are valid for any host halo mass, at any redshift, and for any $\Lambda$CDM cosmology. | \label{Sec:Introduction} Numerical $N$-body simulations have shown that when two dark matter haloes merge, the less massive progenitor halo initially survives as a self-bound entity, called a subhalo, orbiting within the potential well of the more massive progenitor halo. These subhaloes are subjected to tidal forces and impulsive encounters with other subhaloes, causing tidal heating and mass stripping, and to dynamical friction that causes them to lose orbital energy and angular momentum to the dark matter particles of the `host' halo. Depending on its orbit, density profile, and mass, a subhalo therefore either survives to the present day or is disrupted; the operational distinction being whether a self-bound entity remains or not. Characterizing the statistics and properties of dark matter subhaloes is of paramount importance for various areas of astrophysics. First of all, subhaloes are believed to host satellite galaxies, and the abundance of satellite galaxies is therefore directly related to that of subhaloes. This basic idea underlies the popular technique of subhalo abundance matching (e.g., Vale \& Ostriker 2004; Conroy \etal 2006, 2007; Guo \etal 2011; Hearin \etal 2013) and has given rise to two problems in our understanding of galaxy formation: the ``missing satellite" problem (Moore \etal 1999; Klypin \etal 1999) and the ``too big to fail" problem (Boylan-Kolchin \etal 2011). Secondly, substructure is also important in the field of gravitational lensing, where it can cause time delays (e.g., Keeton \& Moustakas 2009) and flux-ratio anomalies (Metcalf \& Madau 2001; Brada\v{c} \etal 2002; Dalal \& Kochanek 2002), and for the detectability of dark matter annihilation, where the clumpiness due to substructure is responsible for a `boost factor' (e.g., Diemand \etal 2007; Pieri \etal 2008; Giocoli \etal 2008b). Finally, the abundance and properties of dark matter substructure control the survivability of fragile structures in dark matter haloes, such as tidal streams and/or galactic disks (T\'oth \& Ostriker 1992; Taylor \& Babul 2001; Ibata \etal 2002; Carlberg 2009).
The most common statistic used to describe the substructure of dark matter haloes is the subhalo mass function (hereafter SHMF), $\rmd N/\rmd\ln(m/M)$, which expresses the (average) number of subhaloes of mass $m$ per host halo of mass $M$, per logarithmic bin of $m/M$. Following van den Bosch, Tormen \& Giocoli (2005), we will distinguish two different SHMFs: the {\it unevolved} SHMF, where $m$ is the mass of the subhalo {\it at accretion}, and the {\it evolved} SHMF, where $m$ reflects the mass of the surviving, self-bound entity at the present day, which is reduced with respect to that at accretion due to mass stripping. The SHMFs of dark matter haloes have been studied using two complementary techniques: $N$-body simulations (e.g., Tormen 1997; Tormen, Diaferio \& Syer 1998; Moore \etal 1998, 1999; Klypin \etal 1999a,b; Ghigna \etal 1998, 2000; Stoehr \etal 2002; De Lucia \etal 2004; Diemand, Moore \& Stadel 2004; Gill \etal 2004a,b; Gao \etal 2004; Reed \etal 2005; Kravtsov \etal 2004; Giocoli \etal 2008a, 2010; Weinberg \etal 2008) and semi-analytical techniques based on the extended Press-Schechter (EPS; Bond \etal 1991) formalism (e.g., Taylor \& Babul 2001, 2004, 2005a,b; Benson \etal 2002; Taffoni \etal 2003; Oguri \& Lee 2004; Zentner \& Bullock 2003; Pe\~{n}arrubia \& Benson 2005; Zentner \etal 2005; van den Bosch \etal 2005; Gan \etal 2010; Yang \etal 2011; Purcell \& Zentner 2012). Both methods have their own pros and cons. Numerical simulations have the virtue of including all relevant gravitational physics related to the assembly of dark matter haloes, and the evolution of the subhalo population. However, they are also extremely CPU intensive, and the results depend on the mass- and force-resolution adopted. In addition, there is some level of arbitrariness in how to identify haloes and subhaloes in the simulations. In particular, different (sub)halo finders applied to the same simulation output typically yield subhalo mass functions that differ at the 10-20 percent level (Knebe \etal 2011, 2013; Onions \etal 2012) or more (van den Bosch \& Jiang 2014). Semi-analytical techniques, on the other hand, don't suffer from issues related to subhalo identification or force resolution, and are significantly faster, but their downside is that the relevant physics is only treated approximately. All semi-analytical methods require two separate ingredients: halo merger trees, which describe the hierarchical assembly of dark matter haloes, and a treatment of the various physical processes that cause the subhalo population to evolve (dynamical friction, tidal heating and stripping, impulsive encounters). Properly accounting for all these processes, which depend strongly on the orbital properties, requires a detailed integration over all individual subhalo orbits. This is complicated by the fact that the mass of the parent halo evolves with time. If the mass growth rate is sufficiently slow, the evolution may be considered adiabatic, thus allowing the orbits of subhaloes to be integrated analytically despite the non-static nature of the background potential. This principle is exploited in many of the semi-analytical models listed above. In reality, however, haloes grow hierarchically through (major) mergers, making the actual orbital evolution highly non-linear.
In order to sidestep these difficulties, van den Bosch, Tormen \& Giocoli (2005; hereafter B05) considered the {\it average} mass loss rate of dark matter subhaloes, where the average is taken over the entire distribution of orbital configurations (energies, angular momenta, and orbital phases). This removes the requirement to actually integrate individual orbits, allowing for an extremely fast calculation of the evolved subhalo mass function. B05 adopted a simple functional form for the average mass loss rate, which had two free parameters that they calibrated by comparing the resulting subhalo mass functions to those obtained using numerical simulations. In a subsequent paper, Giocoli, Tormen \& van den Bosch (2008; hereafter G08) directly measured the average mass loss rate of dark matter subhaloes in numerical simulations. They found that the functional form adopted by B05 adequately describes the average mass loss rates in the simulations, but with best-fit values for the free parameters that are substantially different. G08 argued that this discrepancy arises from the fact that B05 used the `N-branch method with accretion' of Somerville \& Kolatt (1999; hereafter SK99) to construct their halo merger trees, which results in an unevolved subhalo mass function that is significantly different from what is found in the simulations. This was recently confirmed by the authors in a detailed comparison of merger tree algorithms (Jiang \& van den Bosch 2014a; hereafter JB14). Note that this same SK99 method has also been used by most of the other semi-analytical models for dark matter substructure, including Taylor \& Babul (2004, 2005a,b), Zentner \& Bullock (2003), Zentner \etal (2005) and even the recent study by Purcell \& Zentner (2012). In this series of papers we use an overhauled version of the semi-analytical method pioneered by B05 to study the statistics of dark matter subhaloes in unprecedented detail. In particular, we extend and improve upon B05 by (i) using halo merger trees constructed with the more reliable method of Parkinson, Cole \& Helly (2008), (ii) evolving subhaloes using the improved mass-loss model of G08 and accounting for stochasticity in the mass-loss rates due to the scatter in orbital properties and halo concentrations, (iii) considering the entire hierarchy of dark matter subhaloes (including sub-subhaloes, sub-sub-subhaloes, etc.), and (iv) predicting not only the masses of subhaloes but also their maximum circular velocities, $V_{\rm max}$. In this paper, the first in the series, we present the improved semi-analytical model, followed by a detailed study of the (average) subhalo abundance as a function of mass and maximum circular velocity, including a presentation of universal fitting functions. In Paper II (van den Bosch \& Jiang 2014) we present a more detailed comparison of the model predictions with simulation results, paying special attention to the large discrepancies among different simulation results that arise from the use of different subhalo finders. Finally, in Paper III (Jiang \& van den Bosch; in preparation) we exploit our semi-analytical model to quantify the halo-to-halo variation of populations of dark matter subhaloes. This paper is organized as follows.
We start in \S\ref{Sec:Model} with a detailed description of our semi-analytical model, including the construction of halo merger trees (\S\ref{Sec:Trees}), an updated model for the average mass loss rate of subhaloes (\S\ref{Sec:MassLoss}), and a description of how we convert (sub)halo masses to their corresponding $V_{\rm max}$ (\S\ref{Sec:Vmax}). In \S\ref{Sec:Test} we demonstrate that the model can accurately reproduce both the subhalo mass and velocity functions obtained from numerical simulations, after tuning our single free model parameter, and we discuss the scalings with host halo mass and redshift. \S\ref{Sec:Universal} presents accurate, universal fitting functions for the average subhalo mass and velocity functions that are valid for any host halo mass, redshift and $\Lambda$CDM cosmology. In \S\ref{Sec:Discussion} we discuss implications of our inferred subhalo mass loss rates, and we summarize our results in \S\ref{Sec:Summary}. Throughout we use $m$ and $M$ to refer to the masses of subhaloes and host haloes, respectively, use $\ln$ and $\log$ to indicate the natural logarithm and 10-based logarithm, respectively, and express units that depend on the Hubble constant in terms of $h = H_0/(100\kmsmpc)$. | \label{Sec:Discussion} The results presented in this paper show that the orbit-averaged subhalo mass-loss rates are accurately described by \begin{equation}\label{mydecay_repeat} \dot{m} = - \calA \, {m \over \tau_{\rm dyn}} \left({m\over M}\right)^{\zeta}, \end{equation} with $\tau_{\rm dyn}$ the halo's (instantaneous) dynamical time, $\zeta = 0.07$, and $\calA$ a random variable that follows a log-normal distribution with median $\bar{\calA} = 1.34$ and dispersion $\sigma_{\log\calA} = 0.17$. This implies that, in an orbit-averaged sense, dark matter subhaloes evolve as \begin{equation}\label{mt} m(t) = m_{\rm acc} \, \left[1 + \zeta \, \calA \, \left({m_a \over M_0}\right)^{\zeta} \, \left\{\tilde{N}_{\tau}(t_{\rm acc}) - \tilde{N}_{\tau}(t)\right\} \right]^{-1/\zeta}, \end{equation} where $t$ is {\it lookback time}, $t_{\rm acc}$ and $m_{\rm acc}$ are the lookback time and subhalo mass at accretion, $M_0$ is the present day mass of the host halo, and \begin{equation}\label{tildeNtau} \tilde{N}_\tau(t) \equiv \int_0^t \left[{M(t') \over M_0}\right]^{-\zeta} \, {\rmd t' \over \tau_{\rm dyn}(t')} \end{equation} is a measure of the number of dynamical times that have elapsed in an evolving dark matter halo since lookback time $t$. It is interesting to see what this implies for the amount of mass that is stripped from a typical subhalo during its first radial orbit. In units of the mass of the subhalo at infall, this is given by \begin{equation}\label{Mstrip} {\Delta m \over m_{\rm acc}} \equiv {{m_{\rm acc} - m(t_{\rm acc} - T_\rmr)} \over m_{\rm acc}}, \end{equation} with $T_\rmr$ the radial orbital period given by Eq.~(\ref{Tr}). Without loss of generality, we set $t_{\rm acc} = T_\rmr$, so that the subhalo has just completed its first radial orbit at the present day. In this case, we have that \begin{equation}\label{Mstripdet} {\Delta m \over m_{\rm acc}} = 1 - \left[1 + \zeta \calA \left({m_a \over M_0}\right)^{\zeta} \, \tilde{N}_{\tau}(T_\rmr)\right]^{-1/\zeta}. \end{equation} Using the toy model described in \S\ref{Sec:ToyModel}, we find that the distribution of $T_\rmr$ at $z=0$ is close to uniform over the interval $[5,9]$ Gyr, which has its origin in the uniform distribution of $R_\rms$ ($T_\rmr$ depends strongly on $E$ but has very little dependence on $L$).
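Eq.~(\ref{Mstripdet}) is straightforward to evaluate numerically. A minimal Monte Carlo sketch in Python, sampling $\tilde{N}_{\tau}(T_\rmr)$ uniformly over $[2,4]$ (the rough range implied by $T_\rmr\in[5,9]$\,Gyr; see the discussion of Fig.~\ref{Fig:Ntau} below) and $\calA$ from the quoted log-normal, reproduces the stripped fractions discussed in the text:
\begin{verbatim}
import numpy as np

zeta, A_med, sig_logA = 0.07, 1.34, 0.17
rng = np.random.default_rng(0)

def stripped_fraction(m_acc_over_M0, n=200000):
    N_tau = rng.uniform(2.0, 4.0, n)                       # ~N_tau(T_r)
    A = A_med * 10.0**(sig_logA * rng.standard_normal(n))  # log-normal A
    m_over_macc = (1.0 + zeta*A*m_acc_over_M0**zeta*N_tau)**(-1.0/zeta)
    return 1.0 - m_over_macc                               # Eq. (Mstripdet)

for ratio in (1e-5, 1e-3, 1e-1):
    med = np.median(stripped_fraction(ratio))
    print(f"m_acc/M0 = {ratio:.0e}: median Dm/m_acc = {med:.2f}")
# medians rise from ~0.8 to ~0.95 with increasing m_acc/M0
\end{verbatim}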
The left-hand panel of Fig.~\ref{Fig:Ntau} plots $\tilde{N}_{\tau}$ as a function of lookback time, $t$, where we have assumed, for simplicity, that dark matter haloes grow in mass exponentially on a time scale $\tau_M$, i.e., $M(t) = M_0 \, \exp(-t/\tau_M)$. We also assumed a `Planck cosmology' with $\Omega_\rmm = 0.318$, $\Omega_{\Lambda} = 0.682$ and $h=0.671$, but we emphasize that the results are almost indistinguishable for other, similar cosmologies, such as those advocated by different data releases of the WMAP experiment. Results are shown for four different values of $\tau_M$, ranging from infinity (i.e., no evolution in host halo mass) to $\tau_M = 1$\,Gyr (i.e., host halo mass has grown by almost a factor of three during the last Gyr). This more than covers the range of growth rates of dark matter haloes in the mass range $10^{11} \Msunh < M_0 < 10^{15}\Msunh$. As is evident, the interval $T_\rmr \in [5,9]$\,Gyr translates roughly into $\tilde{N}_\tau(T_\rmr) \in [2,4]$, with very little dependence on $\tau_M$\footnote{This also implies that the results presented here are insensitive to deviations of $M(t)/M_0$ from an exponential.}. Hence, the typical radial orbital period of a subhalo following infall lasts roughly 2 to 4 dynamical times. This may sound somewhat counter-intuitive, but note that the dynamical time is an average for the entire halo, which is not representative of orbits at first infall. The right-hand panel of Fig.~\ref{Fig:Ntau} plots the distribution of $\Delta m/m_{\rm acc}$ for the same Planck cosmology, obtained using Eq.~(\ref{Mstripdet}) with $\zeta = 0.07$ and $\tau_M = 10$\,Gyr (roughly representative of a Milky-Way-sized dark matter halo, though the results only depend very weakly on $\tau_M$). The orbital periods, $T_\rmr$, are sampled from a uniform distribution covering the range from 5 to 9 Gyr, while the mass-loss rate normalization parameter, $\calA$, is sampled from the log-normal given by Eq.~(\ref{ProbA}) with $\bar{\calA} = 1.34$. Results are shown for five different values of $m_{\rm acc}/M_0$, as indicated. The medians of the distributions are indicated by arrows, and range from $0.80$ for $m_{\rm acc}/M_0 = 10^{-5}$ to $0.95$ for $m_{\rm acc}/M_0 = 10^{-1}$. Sampling $m_{\rm acc}/M_0$ from the actual unevolved SHMF for $m_{\rm acc}/M_0 \geq 10^{-5}$ yields a distribution that is intermediate between those for $m_{\rm acc}/M_0 = 10^{-4}$ and $m_{\rm acc}/M_0 = 10^{-5}$, with a median of $0.827$. Note that only a minute fraction of subhaloes is expected to hang on to more than 50 percent of their infall mass after one radial orbit. Hence, {\it subhaloes lose the vast majority (typically more than 80 percent) of their mass during their very first radial orbit}. We emphasize that most of this mass loss is likely to occur near pericenter (and hence, roughly a time $T_\rmr/2$ after infall), but we caution that our model only treats orbit-averaged mass-loss rates, and should therefore not be used to make predictions regarding mass-loss rates on significantly shorter time-scales. \begin{figure*} \centerline{\psfig{figure=Ntau.eps,width=0.95\hdsize}} \caption{{\it Left-hand panel:} the quantity $\tilde{N}_\tau$, defined by Eq.~(\ref{tildeNtau}), as a function of lookback time $t$ for four different values of the time-scale for halo mass growth, $\tau_M$, as indicated. As is apparent, $\tilde{N}_\tau$ is not very sensitive to how host halo masses grow over time.
This is a manifestation of the small value of $\zeta$, which indicates that subhalo mass loss rates depend only weakly on host halo mass. {\it Right-hand panel:} Distributions of the fractional subhalo mass lost during the first radial orbit after infall, $\Delta m/m_{\rm acc}$. Results are shown for five different values of $m_{\rm acc}/M_0$, as indicated. Arrows indicate the medians of the corresponding distributions. Note that subhaloes, on average, lose more than 80 percent of their infall mass during their first radial orbit. All these results are for a Planck cosmology with $(\Omega_{\rmm},h) = (0.318,0.671)$, but results are very similar for other $\Lambda$CDM cosmologies that are consistent with current observational constraints.} \label{Fig:Ntau} \end{figure*} The dependence of $\Delta m/m_{\rm acc}$ on $m_{\rm acc}/M_0$ is due to two effects: (i) the concentration-mass relation of dark matter haloes, which makes subhaloes with a lower value of $m_{\rm acc}/M_0$ relatively denser compared to their host halo, and therefore more resilient to tidal stripping, and (ii) dynamical friction, which will cause more massive subhaloes to lose more orbital energy and angular momentum, reducing their pericentric distance, and thus causing enhanced stripping. However, with the dramatic mass stripping rates revealed here, it is also clear that dynamical friction cannot play a very important role after the first pericentric passage; as a rule of thumb, the dynamical friction time is only shorter than the Hubble time if $m/M \gta 0.1$ (e.g., Mo, van den Bosch \& White 2010). Even if a subhalo is that massive at infall, it is very likely to be stripped below this limit after its first pericentric passage. Hence, mass stripping is a far more important process for the evolution of dark matter subhaloes than dynamical friction (see also Taffoni \etal 2003; Taylor \& Babul 2004; Pe\~narrubia \& Benson 2005; Zentner \etal 2005; Gan \etal 2010), and one does not make large errors by ignoring dynamical friction altogether. We have presented a new semi-analytical model that uses EPS merger trees to generate evolved subhalo populations. The model is based on the method pioneered by B05, and evolves the masses of dark matter subhaloes using a simple model for the {\it orbit-averaged} subhalo mass loss rate. This avoids having to integrate individual subhalo orbits, as done in other semi-analytical models for dark matter substructure (e.g., Taylor \& Babul 2004, 2005a,b; Benson \etal 2002; Taffoni \etal 2003; Oguri \& Lee 2004; Zentner \& Bullock 2003; Pe\~{n}arrubia \& Benson 2005; Zentner \etal 2005; Gan \etal 2010). We have made a number of improvements and extensions with respect to the original B05 model; in particular, we \begin{enumerate} \item use Monte Carlo merger trees constructed using the method of P08, which, as demonstrated in JB14, yields results in much better agreement with numerical simulations than the Somerville \& Kolatt (1999) method used by B05. \item construct and use complete merger trees, rather than just the mass assembly histories of the main progenitor. This allows us to investigate the statistics of subhaloes of different orders. \item adopt a new mass loss model that is calibrated against numerical simulations and also accounts for the scatter in subhalo mass loss rates that arises from scatter in orbital properties (energy and angular momentum) and (sub)halo concentrations.
\item include a method for converting halo mass to maximum circular velocity, thus allowing us to study subhalo velocity functions as well as subhalo mass functions. \end{enumerate} In this paper, the first in a series that addresses the statistics of dark matter subhaloes, we have mainly focussed on the {\it average} subhalo mass and velocity functions, where the average is taken over large numbers of Monte Carlo realizations for a certain host halo mass, $M_0$, redshift, $z_0$, and cosmology. Our model has only one free parameter, which sets the overall normalization of the orbit-averaged mass loss rates of dark matter subhaloes. After tuning this parameter such that the model reproduces the normalization of the evolved SHMF in the numerical simulations of G08, the same model can accurately reproduce the evolved subhalo mass and velocity functions in numerical simulations for host haloes of different mass, in different $\Lambda$CDM cosmologies, and for subhaloes of different orders, without having to adjust this parameter. The inferred orbit-averaged mass loss rates are consistent with the simulation results of G08, and imply that an average dark matter subhalo loses in excess of 80 percent of its infall mass during its first radial orbit within the host halo. More massive subhaloes, in units of the normalized mass, $m/M$, lose their mass more rapidly due to (i) the concentration-mass relation of dark matter haloes, which causes subhaloes with smaller $m/M$ to be more resilient to tidal stripping, and (ii) dynamical friction, which causes more massive subhaloes to lose more orbital energy and angular momentum, resulting in enhanced stripping. According to our mass loss model, subhaloes with an infall mass that is 10 percent of the host halo mass will lose on average more than 95 percent of their infall mass during their first radial orbital period. One of the main findings of this paper is that the average subhalo mass and velocity functions, both evolved and unevolved, can be accurately fit by a simple Schechter-like function of the form \begin{equation} \label{fitgeneral} {\rmd N \over \rmd \ln \psi} = \gamma \, (\psi)^{\alpha} \, \exp\left[-\beta(\psi)^\omega\right]\,, \end{equation} where, depending on which function is being considered, $\psi$ is $m/M_0$, $m_\rma/M_0$, $V_{\rm max}/V_{\rm vir,0}$, or $V_{\rm acc}/V_{\rm vir,0}$. In particular, restricting ourselves to $\Lambda$CDM cosmologies with parameters that are consistent with recent constraints within a factor of roughly two, we find that \begin{itemize} \item The {\it unevolved} SHMF is (close to) universal, with the parameters $(\alpha, \beta, \gamma, \omega)$ independent of host halo mass, redshift and cosmology (see also B05; Li \& Mo 2009; Yang \etal 2011). We emphasize, though, that although the functional form of Eq.~(\ref{fitgeneral}) can adequately describe this universal unevolved SHMF, it is more accurately described by the double-Schechter-like function presented in JB14. \item The {\it evolved} SHMF has a universal shape (i.e., fixed $\alpha$, $\beta$ and $\omega$), which is accurately described by Eq.~(\ref{fitgeneral}), but with a normalization, $\gamma$, that depends on host halo mass, redshift and cosmology. We have demonstrated that $\gamma$ is tightly correlated with the `dynamical age' of the host halo, defined as the number of halo dynamical times that have elapsed since its formation (i.e., since the redshift $z_{1/2}$ at which the host halo's main progenitor reaches a mass equal to $M_0/2$).
Using this relation, we have presented a universal fitting function for the average, evolved SHMF that is valid for any host halo mass, at any redshift, and for any $\Lambda$CDM cosmology. The corresponding power-law slopes, $\alpha$, are $-0.78$, $-0.93$ and $-0.82$ for first-order subhaloes, second-order subhaloes (i.e., sub-subhaloes), and for subhaloes of all orders, respectively; these are significantly shallower than what has been claimed in numerous studies based on numerical simulations (see Paper~II for a detailed discussion). \item Unlike the unevolved mass function, the {\it unevolved} SHVF is not universal, in that the parameter $\beta$ is found to depend on host mass, redshift and cosmology. This has its origin in the concentration-mass-redshift relation of dark matter haloes, and can be accounted for by replacing $\psi$ in Eq.~(\ref{fitgeneral}) with $a\psi$, where $a$ is a (universal) scale factor given by $a \propto V_{\rm vir}(M_0/40,z_0)/V_{\rm max}(M_0/40,z_{0.25})$. When using this simple rescaling, one obtains a universal fitting function for the unevolved SHVF whose parameters $\alpha$, $\beta$, $\gamma$ and $\omega$ are independent of host mass, redshift and cosmology. Note that this unevolved SHVF is one of the key ingredients in the popular method of subhalo abundance matching. \item Taking into account both the `dynamical age'-dependence of the normalization of the evolved SHMF and the `$a$'-scaling of the unevolved SHVF also yields a universal fitting function for the {\it evolved} SHVF. In this case we find that the power-law slope for the evolved SHVF of all orders is equal to $\alpha = -2.6$. \end{itemize} The various universal fitting functions for the subhalo mass and velocity functions presented here, and summarized in Appendix~A, can be used to quickly compute the average abundance of subhaloes of given mass or maximum circular velocity, at any redshift, and for any (reasonable) $\Lambda$CDM cosmology, without having to run and analyze high resolution numerical simulations. In the second paper in this series (van den Bosch \& Jiang 2014), we compare subhalo mass and velocity functions obtained from different simulations and with different subhalo finders, with one another, and with predictions from our semi-analytical model. We demonstrate that our model is in excellent agreement with simulation results that analyze their data with halo finders that use the full 6D phase-space information (e.g., {\tt ROCKSTAR}), or that use temporal information (e.g., {\tt SURV}). Results obtained using subhalo finders that only rely on the densities in configuration space are shown to dramatically underpredict the abundance of massive subhaloes, by more than an order of magnitude. In the third paper in this series (Jiang \& van den Bosch, in preparation), we use our model to investigate, in unprecedented detail, the halo-to-halo variance of dark matter substructure, which is important, among other things, for assessing the severity of the `too-big-to-fail' problem (see also Purcell \& Zentner 2012). | 14 | 3 | 1403.6827
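For reference, the fitting form of Eq.~(\ref{fitgeneral}) above is straightforward to evaluate and integrate numerically. In the sketch below, only the slope $\alpha=-0.82$ (subhaloes of all orders) is taken from the text; the values of $\beta$, $\gamma$ and $\omega$ are placeholders for illustration, not the calibrated parameters of the paper's Appendix~A:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def dN_dlnpsi(psi, alpha=-0.82, beta=50.0, gamma=0.1, omega=4.0):
    """Schechter-like form of Eq. (fitgeneral): dN/dln(psi)."""
    return gamma * psi**alpha * np.exp(-beta * psi**omega)

def N_above(psi_min, **pars):
    """Mean number of subhaloes with psi = m/M0 above psi_min."""
    val, _ = quad(lambda lnpsi: dN_dlnpsi(np.exp(lnpsi), **pars),
                  np.log(psi_min), 0.0)
    return val

print(N_above(1e-4))  # cumulative abundance for the placeholder parameters
\end{verbatim}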
1403 | 1403.2918_arXiv.txt | {A spin-1 Z' particle as a single dark matter candidate is investigated by assuming that it does not directly couple to the Higgs boson and standard model fermions and does not mix with the photon and Z boson. The remaining dominant vertices are quartic $Z'Z'ZZ$ and $Z'Z'W^+W^-$, which can induce effective $Z'Z'q\bar{q}$ couplings through standard-model gauge-boson loops. We discuss constraints from the cosmological thermal relic density, and direct and indirect-detection experiments, and find that a dark Z' can only exist above the W boson mass threshold, and that the effective quartic coupling of $Z'Z'VV$ is bounded in the region of $10^{-3}$--$10^{-2}$. | After the discovery of the 125~GeV Higgs boson, the standard model (SM) of particle physics has become a complete theory; within the SM, the remaining tasks are the precision measurement of various Higgs properties, in particular its couplings, and further narrowing down the possible parameter space of new physics. Although the hierarchy and metastable-vacuum problems remain for the SM Higgs on the theoretical side, the absence of new-physics signals at the LHC to date implies that extensions of the SM still only need to rely on the traditional particle-physics facts of non-zero neutrino masses and the baryon asymmetry. Given these circumstances, the presence of dark matter (DM) in our Universe becomes an even more important piece of leading empirical evidence for the existence of new physics, because no SM particle can account for DM. Cosmology and astrophysics tell us that almost 85$\%$ of matter in our universe is dark, i.e., neutral, non-luminous and non-baryonic. The fact that the abundance of DM is comparable to that of ordinary visible matter seems to imply that DM may have the same or similar origins and properties as ordinary matter. If we accept the conclusion of quantum field theory (QFT) that all matter should be made of particles, then an unambiguous, non-gravitational signal of DM must appear in particle physics experiments. This has driven the particle physics community to try harder to unravel DM's still enigmatic properties. Because details of the particle properties of DM are lacking, the best investigative strategy for theorists is to try to cover as much ground as possible. Considering that QFT classifies particles according to their spin (integer or half-integer), elementary particles discovered so far all have low spins. Most DM candidates discussed so far in the literature have been assumed to be spin-0 scalars \citep{ScalarDM1,*ScalarDM2,*ScalarDM3,*ScalarDM4,*ScalarDM5,*ScalarDM6,*ScalarDM7} or spin-1/2 spinors \citep{SpinorDM3,*SpinorDM4}. Whereas a scalar DM has a relatively simple structure and provides possible intimate interplay with the 125~GeV Higgs, a spinor DM extends the traditional observation that matter is composed of spin-1/2 particles. The heavy sterile neutrino \citep{SpinorDM1} and the lightest neutralino in supersymmetric models \citep{SpinorDM2} are DM candidates belonging to this type. Apart from scalar and spinor DM, the next level of higher-spin candidate particles comprises spin-1 vectors. If we limit ourselves to the simplest vector particle scenario in particle physics, a single extra neutral vector particle, usually denoted by Z', is sufficient. We shall discuss this possibility in the present paper. A higher spin case, spin-3/2 DM, has been discussed in Ref.\cite{spin3over2}.
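The `almost 85\%' figure quoted above is a one-line computation with representative Planck parameters ($\Omega_{\rm m}\approx0.32$, $\Omega_{\rm b}\approx0.049$; these particular values are our assumption for illustration):
\begin{verbatim}
Om, Ob = 0.32, 0.049           # total matter and baryon density parameters
dark_fraction = (Om - Ob) / Om
print(f"{dark_fraction:.0%}")  # ~85% of matter is non-baryonic (dark)
\end{verbatim}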
A vector particle Z' can be viewed as a gauge boson that mediates an extra $U(1)$ gauge force beyond the conventional SM strong $SU(3)_c$ force and electroweak $SU(2)_L\otimes U(1)_Y$ forces. For as yet unknown reasons, this additional $U(1)$ gauge symmetry is spontaneously broken, thus yielding a massive Z'. SM plus Z' is a minimal and well-motivated generalisation of the SM; many new-physics models have such a $Z'$ boson (for details see review Refs. \cite{LangackerRMP2008,*LangackerPRL2008,*Han,*Zwirner}) as a necessary constituent and remnant for new-physics interactions. Before July 4, 2012, the Higgs was the superstar of particle physics searches and a Z' only played a supporting role. With the discovery of the 125~GeV Higgs, a Z' now becomes one of the hot new-physics candidate particles and the LHC is actively searching for it in various channels, with the model-dependent lower mass bound already reaching the TeV-energy region depending on the final state it is assumed to decay into. Now, if we further take the Z' as a DM candidate, thus changing the Z' from visible to invisible, the interactions between the invisible Z' and SM particles will be strongly reduced, and the corresponding search strategies (such as direct detection, indirect detection, and collider experiments) will change with respect to those for a visible Z'. Various constraints must therefore be re-examined. In the literature, the invisible Z' has been intensively discussed as a messenger between the visible sector (which contains the SM particles) and a hidden sector (to which DM belongs) \citep{DMZ1,*DMZ2,*DMZ3,*DMZ4,*DMZ5,*DMZ6,*DMZ7,*DMZ8,*DMZ9,*DMZ10,*DMZ11,*DMZ12}: in such a scenario, the SM particles can be either charged under the additional gauge symmetry or not. In the event that SM particles are neutral with respect to the extra $U(1)$ symmetry, the interaction occurs via effective operators connecting the Z' directly to the SM sector. The simplest case is the kinetic mixing term between the SM hypercharge field strength and the new Abelian field strength \cite{mixing}. The underlying reason for adopting Z' as a portal to the hidden sector stems from the traditional mediating role of gauge bosons. In this type of DM model, there are too many unknowns concerning the hidden sector, a situation that is not helpful in DM searches. In this paper, we consider an alternative simple approach by ignoring the conventional messenger role of Z', and instead treat it as pure matter. This approach is similar to the minimal darkon model \citep{ScalarDM1,*ScalarDM2,*ScalarDM3,*ScalarDM4,*ScalarDM5,*ScalarDM6} where the SM is minimally expanded with the addition of a dark scalar (SM+D), except now we replace the scalar darkon D with a single vector DM candidate Z'. The change from the traditional Z' portal model to our present single dark Z' approach is similar to that from the Higgs portal model (where a scalar is taken as a messenger between the visible and hidden sectors) \citep{HiggsPortal} to the darkon model. After the reduction, because of the unique choice of DM candidate, we can ignore the uncertainties arising from the arbitrary hidden sector in the traditional Z' or Higgs portal models. The difference in the present approach with respect to scalar DM is that our dark Z' is a vector particle, which behaves not like a scalar or Higgs boson, but very much like the Z boson of the SM, and will have a relatively complex interaction structure owing to its polarisation.
A spin-1 dark matter candidate appears in models with one extra dimension~\cite{KK5} and has been widely studied in this context~\cite{*KKDM1,*KKDM2}. Note that this is not a generic prediction of extra dimensions, as in more than five dimensions the candidate is a scalar~\cite{*KK6a,*KK6b,*KK6c}, and a scalar is again found in models of pseudo-Goldstone Higgs in warped space~\cite{frigerio} and technicolor~\cite{sannino}. This paper is organised as follows. In Section II, in terms of the model-independent extended electroweak chiral Lagrangian and the six assumptions needed to keep Z' dark, we determine the necessary operators that couple our dark Z' to SM particles. In Section III, we calculate the relic density produced from our single dark Z', and derive a constraint on the effective coupling of the dark Z' pair to $W$ or $Z$ pairs. Section IV looks at the direct-detection constraint, where we compute the SM gauge-boson-loop-induced $Z'Z'\bar{q}q$ vertex and discuss direct detection. Section V examines indirect-detection constraints and includes discussions of the Pamela, AMS02, and FermiLAT experiments. In Section VI, we discuss the combined results and some other possible DM-related issues. Section VII presents a summary. Some necessary results for Section II are to be found in Appendix A. | In this paper, we have investigated a rarely discussed possibility where a Z' boson is the sole DM candidate. We considered an extended chiral Lagrangian with an additional U(1) gauge symmetry, with the following additional assumptions: \begin{enumerate} \item dark Z' is higgsphobic, i.e., it does not directly couple to the Higgs, \item dark Z' is fermiophobic, i.e., it does not directly couple to SM fermions, \item there are no CP-violating Z' couplings, \item there are no anomalous Z' couplings, \item dark Z' does not mix with $\gamma$ and the $Z$ boson, and \item there is no Z' interaction linear in the Z' field. \end{enumerate} The remaining quartic vertices, $Z'Z'ZZ$ and $Z'Z'W^+W^-$, then dominate the Z' physics, which involves four independent effective coupling constants $g_1,g_2,g_3,g_4$. We found that the mass of this dark Z' is not allowed below the W boson mass threshold, due to a combination of strong constraints from the relic density and those from direct-detection experiments. For mass $M_{Z'}>100$~GeV, from the relic density and direct and indirect-detection experiments, where effective $Z'Z'q\bar{q}$ couplings are induced from SM gauge-boson loops, we produce five different coupling scenarios that are in the region $10^{-3}$--$10^{-2}$ (for the universal case, the result is given in Fig.~14). This range of coupling can be relaxed beyond the five cases analyzed for direct-detection experiments, but cannot be changed for indirect-detection experiments. To explain the FermiLAT $\gamma$-ray spectra with our dark Z', we require a boost factor of 300 to 8000. We checked that even if our dark Z' mass lies within the low-energy region, it cannot reduce the tensions between the possible observed DM signals and other null-result experiments. The bounds we extracted are therefore rather robust and model-independent. | 14 | 3 | 1403.2918
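As a rough numerical companion to the relic-density constraint discussed above (standard thermal-relic lore, not a result of this paper), the canonical annihilation cross-section follows from $\Omega h^2 \approx 3\times10^{-27}\,{\rm cm^3\,s^{-1}}/\langle\sigma v\rangle$:
\begin{verbatim}
omega_h2 = 0.12             # observed DM relic abundance
sigma_v = 3e-27 / omega_h2  # cm^3/s, standard approximate relation
print(f"<sigma v> ~ {sigma_v:.1e} cm^3/s")  # ~2.5e-26, canonical WIMP value
\end{verbatim}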
1403 | 1403.2129_arXiv.txt | {A pair of giant gamma-ray bubbles has been revealed by the {\it Fermi} LAT. In this paper we investigate their formation mechanism. Observations have indicated that the activity of the supermassive black hole located at the Galactic center, Sgr A*, was much stronger in the past than at the present time. Specifically, one possibility is that while Sgr A* was also in the hot accretion regime, the accretion rate should have been $10^3-10^4$ times higher during the past $\sim 10^7$ yr. On the other hand, recent MHD numerical simulations of hot accretion flows have unambiguously shown the existence of strong winds and obtained their properties. Based on this knowledge, by performing three-dimensional hydrodynamical simulations, we show in this paper that the Fermi bubbles could be inflated by winds launched from the ``past'' hot accretion flow in Sgr A*. In our model, the active phase of Sgr A* is required to last for about 10 million years and it was quenched no more than 0.2 million years ago. The Central Molecular Zone (CMZ) is included and it collimates the wind orientation towards the Galactic poles. Viscosity suppresses the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and results in the smoothness of the bubble edge. The main observational features of the bubbles can be well explained. Specifically, the {\it ROSAT} X-ray features are interpreted as the shocked interstellar medium and the interaction region between winds and CMZ gas. The thermal pressure and temperature obtained in our model are in good agreement with the recent {\it Suzaku} observations. | Observations have shown that there exists a supermassive black hole, Sgr A*, located at the Galactic Center (GC). The mass of the black hole is about $4~\times ~10^6 \msun$ (\citealt{Schodel2002}; \citealt{Ghez2005,Ghez2008}; \citealt{Gillessen2009a,Gillessen2009b}). Because of its proximity, Sgr A* is regarded as the best laboratory for studying black hole accretion. Numerous observations have been conducted and abundant data has been obtained (see recent reviews by \citealt{Genzel2010}; Falcke \& Markoff 2013; Yuan \& Narayan 2014). The source is quite dim currently, with a bolometric luminosity of only about $10^{36}~\ergs~\sim 3\times 10^{-9}~L_{Edd}$. The mass accretion rate at the Bondi radius has been estimated by combining the {\it Chandra} observation and the Bondi accretion theory; it is $\sim 10^{-5} \msunyr$ (\citealt{Baganoff2003}). The bolometric luminosity would be 5 orders of magnitude higher if the accretion were in the mode of the standard thin disk. A large number of theoretical studies in the past 20 years have revealed that the advection-dominated accretion flow (ADAF) can explain this puzzle (\citealt{Yuan2003}). Specifically, the low luminosity of Sgr A* is due to two reasons. One is the intrinsically low radiative efficiency of the ADAF because of energy advection (\citealt{Narayan1994,Narayan1995b}; \citealt{Xie2012}). Another important reason is the existence of strong winds (or outflows), i.e., $\sim 99\%$ of the matter captured at the Bondi radius is lost (\citealt{Yuan2012b}; \citealt{Narayan2012}; \citealt{LiOS2013}). The existence of winds has been confirmed by the radio polarization observations (e.g., \citealt{Aitken2000}; \citealt{Bower2003}; \citealt{Marrone2007}), and more recently by the {\it Chandra} observation of the emission lines from the accretion flow in Sgr A* (\citealt{Wang2013}).
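An order-of-magnitude check of the numbers quoted above (our own arithmetic, using standard constants):
\begin{verbatim}
M_BH, L_bol = 4e6, 1e36        # BH mass [Msun], bolometric luminosity [erg/s]
L_edd = 1.26e38 * M_BH         # Eddington luminosity [erg/s]
print(f"L_bol/L_Edd ~ {L_bol/L_edd:.1e}")    # ~2e-9, i.e. a few x 1e-9

# Thin-disk expectation for Mdot ~ 1e-5 Msun/yr at 10% efficiency:
Msun_g, yr_s, c = 1.989e33, 3.156e7, 3e10
Mdot = 1e-5 * Msun_g / yr_s                  # g/s
L_thin = 0.1 * Mdot * c**2                   # erg/s
print(f"L_thin/L_bol ~ {L_thin/L_bol:.0e}")  # ~6e4: roughly the quoted
                                             # '5 orders of magnitude'
\end{verbatim}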
Yuan \& Narayan (2014) presented the most recent review on the hot accretion flow and its various astrophysical applications, including on Sgr A*. One particularly interesting thing is that many lines of observational evidence show that the activity of Sgr A* was very likely much stronger in the past than at the current stage. These observations suggest that Sgr A* has perhaps undergone multiple past epochs of enhanced activity on different timescales. Here we only focus on relatively long timescales. These lines of evidence were summarized in \citet{Totani2006}, and later discussed in other works (e.g., \citealt{Bland-Hawthorn2013}; \citealt{Ponti2013}; \citealt{Kataoka2013}). They include: 1) orders of magnitude higher X-ray luminosity (compared to the present value) required to explain the fluorescent X-ray emission reflected from cold iron atoms in the giant molecular cloud Sgr B2 (\citealt{Koyama1996}; \citealt{Murakami_B22000,Murakami_B22001}; \citealt{Revnivtsev2004}); 2) a new X-ray reflection nebula associated with Sgr C detected by {\it ASCA} (\citealt{Murakami_C2001}); 3) the ionized halo surrounding Sgr A* (Maeda et al. 2002); 4) Galactic Center Lobe (GCL, \citealt{Bland-Hawthorn2003}); 5) Expanding Molecular Ring (EMR, \citealt{Kaifu1972}; \citealt{Scoville1972}); 6) North Polar Spur (NPS, \citealt{Sofue2000}; \citealt{Bland-Hawthorn2003}); 7) the 8 keV diffuse X-ray emission in the center (\citealt{Muno2004}); 8) the excess of H$\alpha$ emission of the Magellanic Stream (\citealt{Bland-Hawthorn2013}); 9) the {\it Suzaku} observations of the NPS (\citealt{Kataoka2013}). \citet{Totani2006} found that to explain the first seven observations mentioned above, the characteristic X-ray luminosity of Sgr A* should be $\sim (10^{39}-10^{40})~\ergs\sim 2\times (10^{-6}-10^{-5})~L_{\rm Edd}$ several hundred years ago, and such activity should last for $\sim 10^7$ yr. For such a luminosity, the accretion should be well in the regime of hot accretion rather than the standard thin disk (Yuan \& Narayan 2014). Correspondingly, the mass accretion rate should be $10^3-10^4$ times higher than the present value (\citealt{Totani2006}). Other possibilities for the past activity have also been proposed. For example, the bolometric luminosity in the past few million years estimated by \citet{Bland-Hawthorn2013} based on the 8th evidence mentioned above is much higher, $\sim 0.03-0.3~L_{\rm Edd}$. The timescale of the activity is shorter, and it was active 1$-$3 Myr ago. Yet another possibility is as follows. A star formation event has been observed and it is believed to have occurred $\sim 6\times 10^6$ yr ago on scales of $\sim$ 0.03$-$0.5 pc from the SMBH (e.g., \citealt{Genzel2003}; \citealt{Paumard2006}). If the past activity of Sgr A* occurred concurrently with this event, this would imply that strong activity of Sgr A* occurred $\sim 6$ Myr ago (\citealt{Zubovas2011}). In summary, so far we still lack a consensus on the past activity of Sgr A*. Perhaps yet another piece of evidence for the past activity of Sgr A* is the {\it Fermi} bubbles recently detected. Using the {\it Fermi}-LAT, \citet{Su10} discovered two giant gamma-ray bubbles located above and below the Galactic plane (also refer to \citealt{YangRZ2014} for the recent observations). In Galactic coordinates $(l,b)$, the height of each bubble is about $50^{\circ}$, and the width is about $40^{\circ}$. The surface brightness looks uniform, and the edges look sharp. The total luminosity of the bubbles is $4 \times 10^{37}~\ergs$ in the 1$-$100 GeV band.
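For orientation, the angular sizes just quoted translate into physical dimensions of order 10\,kpc if the bubbles are centred on the GC; the Sun--GC distance of 8.5\,kpc below is our assumption for illustration, not a value from the paper:
\begin{verbatim}
import numpy as np

R0 = 8.5                                   # kpc, assumed Sun-GC distance
height = R0 * np.tan(np.radians(50.0))     # |b| ~ 50 deg  -> ~10 kpc
width = 2 * R0 * np.tan(np.radians(20.0))  # ~40 deg wide  -> ~6 kpc
print(f"height ~ {height:.0f} kpc, width ~ {width:.0f} kpc per bubble")
\end{verbatim}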
The total energy of the two bubbles is estimated to be $10^{55}-10^{56}$ erg. Many theoretical models have been proposed since the discovery of the {\it Fermi} Bubbles. In the ``hadronic'' model, the formation is explained as due to a population of relic cosmic ray protons injected by processes associated with extremely long time scale and high areal density star formation in the Galactic center (\citealt{Crocker2011,Crocker2012,Crocker2013}). In the ``leptonic'' scenario the $\gamma$-ray emission comes from the inverse Compton scattering between relativistic electrons (also often called cosmic rays) and seed photons. The seed photons may be the cosmic microwave background, but the origin of the relativistic electrons is different in different models. They can come from first-order Fermi acceleration at shock fronts formed in the periodic star capture processes by Sgr A* (\citealt{Cheng2011}), second-order Fermi acceleration through stochastic scattering by plasma instabilities (\citealt{Mertsch2011}), directly from the jet (\citealt{Guo1}; \citealt{Guo2}; \citealt{Yang2012}; \citealt{Yang2013}), or from outflows driven by the past star formation (\citealt{Carretti2013}). Among these, two models are physically most relevant to the one we propose in the present paper. They are the ``jet'' model (\citealt{Guo1}; \citealt{Guo2}) and the ``quasar outflow'' model (\citealt{Zubovas2011}; \citealt{Zubovas2012}). In the former, it is suggested that the bubbles are created by an AGN jet event that occurred about 2 Myr ago. After that, cosmic rays (CRs) carried by the jet diffuse to form today's morphology. Yang et al. (2012, 2013) developed the jet model by including magnetic fields. They showed that the suppression of the diffusion of CRs along the direction across the edge is caused by the magnetic field configuration. This is because inside the bubbles, the magnetic field is mainly radial, but just outside of the bubble and close to the edge, the field is mainly in the parallel direction. One problem, as pointed out by \citet{Zubovas2011}, is that they must require the jet direction to be perpendicular to the plane of the Galaxy, which seems to be unlikely, given the general absence of correlation between the direction of jets and galaxy planes and the observed direction of the stellar disk in the Galaxy. In addition, the velocity required in the jet model is as low as $\leq0.1c$ and the mass loss rate in the jet is in general as high as super-Eddington. Another model is the ``quasar outflow'' model proposed in \citet{Zubovas2011} and \citet{Zubovas2012}. In this model, Sgr A* is again assumed to be very active in the past, with a mildly super-Eddington accretion rate 6 Myr ago and an activity duration of 1 Myr. Under such a high luminosity, a quasi-spherical outflow will be driven by the strong radiation pressure from this quasar (\citealt{KingPounds2003}), which can result in the formation of the {\it Fermi} bubbles. In this model, the existence of the well-known central molecular zone (CMZ) in the GC region plays an important role in collimating the outflow and forming the morphology of the bubbles. \citet{Kataoka2013} pointed out that the expansion velocity derived by the {\it Suzaku} observation is lower than the values advocated by the jet and quasar outflow models by factors of 5 and 2, respectively.
Assuming that Sgr A* was in an active state as suggested by Totani (2006), in this paper we perform numerical simulations to investigate whether the {\it Fermi} bubbles can be inflated by the wind launched from the hot accretion flow. In \S2, we briefly introduce some background on the accretion flow and wind, and present an analytical solution for the interaction between the winds and the ISM, which will be used to understand our numerical simulation results. The numerical simulation approach and the results are presented in \S3 and \S4, respectively. We then summarize in \S5. | We have performed hydrodynamical numerical simulations to study the formation mechanism of the {\it Fermi} bubbles detected by {\it Fermi}-LAT. Our main aim in the present paper is to explain the morphology and the thermodynamical properties of the bubbles, leaving the study of the production of $\gamma$-ray photons and the explanation of the spectrum to our next work. While Sgr A* is quite dim at the present stage, many lines of observational evidence indicate that this source should have been much more active in the past. Specifically, one possibility suggested by a previous work is that the mass accretion rate of the hot accretion flow in Sgr A* should be $10^3-10^4$ times higher than the present value and this activity lasted for several Myr (Totani 2006). Based on this scenario, we show that the observed {\it Fermi} bubbles can be well formed by the interaction between the winds launched from the ``past'' hot accretion flow and the ISM. In our model, the winds last for $10^7$ yr and the activity of Sgr A* was quenched no more than $0.2$ Myr ago. The properties of the winds, such as the mass flux and velocity, are not free parameters but are obtained from previous MHD numerical simulations of hot accretion flows. Viscosity and thermal conduction are included, which suppress various instabilities and make the gas inside the bubble uniform. The required power of the winds is $\sim 2\times 10^{41}~\ergs$, which is fully consistent with previous studies of the past activity of Sgr A*. The edge of the bubbles corresponds to the contact discontinuity, which is the boundary between the shocked interstellar medium and the shocked winds. Properties of the bubbles such as the morphology and the total energy are consistent with observations. The limb-brightened {\it ROSAT} X-ray structure can be interpreted as the shocked ISM behind the forward shock, while the conical-like X-ray structure close to the Galactic center is interpreted as the interaction region between the wind gas and the CMZ gas. Our model can also quantitatively explain both the thermal pressure and the temperature of the X-ray structure at high latitudes ($\ga +40^{\circ}$) revealed by the recent {\it Suzaku} observations. In addition to winds, jets should also co-exist with the hot accretion flow (Yuan \& Narayan 2014). In our model, we do not include the jet. We assume that the interaction between the jet and the interstellar medium is negligible because, by definition, a jet must be well collimated and move at close to the speed of light. In this case, we expect that the jet will simply drill through the ISM, with almost no interaction with the ISM in the Galaxy. We have also calculated the energy transfer efficiency in our model. We find that at $r\sim10$ kpc, $\sim$ 60\% of the total energy of winds injected from Sgr A* is transported into the ISM. Obviously, such a high efficiency is due to the large opening angle of the winds.
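The energy bookkeeping implied by these numbers is easily verified (an illustrative back-of-envelope computation, not output from the simulations):
\begin{verbatim}
P_wind = 2e41              # erg/s, required wind power
t_active = 1e7 * 3.156e7   # s, ~10 Myr active phase
E_wind = P_wind * t_active
E_ism = 0.6 * E_wind       # ~60% transferred to the ISM at r ~ 10 kpc
print(f"E_wind ~ {E_wind:.1e} erg, E_ISM ~ {E_ism:.1e} erg")
# ~6e55 and ~4e55 erg: consistent with the 1e55-1e56 erg bubble energy
\end{verbatim}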
This result suggests that we may consider the role of winds in solving the cooling flow problem in some elliptical galaxies and galaxy clusters. Usually, the heating of the ISM or intracluster medium by jets is considered (see, e.g., \citealt{Vernaleo2006} and references therein). However, numerical simulations have found that jets may only be able to deposit their energy at $r>100$ kpc and are thus not very efficient (\citealt{Vernaleo2006}). Some solutions have been suggested, e.g., the precession of a jet, or motions of the intracluster medium (see \citealt{Vernaleo2006} and \citealt{Heinz2006}). But another possible way is to invoke winds, whose existence has been firmly established by both observational and theoretical studies. Given our successful explanation of the formation of the {\it Fermi} bubbles by the wind model, it is also worthwhile to study whether the X-ray cavities observed in galaxy clusters (e.g., \citealt{Fabian2012}), which have a morphology similar to that of the {\it Fermi} bubbles, can be produced by winds. | 14 | 3 | 1403.2129 |
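A quick order-of-magnitude check of the numbers quoted above (our own arithmetic, assuming the quoted wind power is sustained over the full activity time): a kinetic power $L_w \simeq 2\times 10^{41}$ erg s$^{-1}$ maintained for $\Delta t \simeq 10^{7}$ yr $\simeq 3.2\times10^{14}$ s injects
\begin{equation}
E_w \simeq L_w \, \Delta t \simeq 6\times10^{55}~{\rm erg},
\end{equation}
and with the quoted $\sim 60\%$ transfer efficiency about $4\times10^{55}$ erg ends up in the ISM, consistent with the $10^{55}-10^{56}$ erg estimated for the bubbles. The size is also of the right order: for a classical energy-driven wind bubble, $R(t)\simeq 0.76\,(L_w t^{3}/\rho_0)^{1/5}$ (Weaver et al. 1977), which for an assumed ambient halo density $\rho_0 \sim 10^{-3}\,m_p$ cm$^{-3}$ gives $R\sim5$ kpc after $10^{7}$ yr, comparable to the observed bubble scale.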
1403 | 1403.0589_arXiv.txt | The $Herschel$ Fornax Cluster Survey (HeFoCS) is a deep, far-infrared (FIR) survey of the Fornax cluster. The survey is in 5 $Herschel$ bands (100 - 500 \micron) and covers an area of 16 deg$^2$ centred on NGC\,1399. This paper presents photometry, detection rates, dust masses and temperatures using an optically selected sample from the Fornax Cluster Catalogue (FCC). Our results are compared with those previously obtained using data from the $Herschel$ Virgo Cluster Survey (HeViCS). In Fornax, we detect 30 of the 237 (13\,\%) optically selected galaxies in at least one $Herschel$ band. The global detection rates are significantly lower than in Virgo, reflecting the morphological make-up of each cluster - Fornax has a lower fraction of late-type galaxies. For galaxies detected in at least 3 bands we fit a modified blackbody with a $\beta = 2$ emissivity. Detected early-type galaxies (E\,/\,S0) have a mean dust mass, temperature, and dust-to-stars ratio of $\log_{10}(<M_{dust}>/\mathrm{M_{\odot}}) = 5.82 \pm 0.20$, $<T_{dust}> = 20.82 \pm 1.77$\,K, and $\log_{10}(M_{dust}/M_{stars}) = -3.87 \pm 0.28$, respectively. Late-type galaxies (Sa to Sd) have a mean dust mass, temperature, and dust-to-stars ratio of $\log_{10}(<M_{dust}>/\mathrm{M_{\odot}}) = 6.54 \pm 0.19$, $<T_{dust}> = 17.47 \pm 0.97$\,K, and $\log_{10}(M_{dust}/M_{stars}) = -2.93 \pm 0.09$, respectively. The different cluster environments seem to have had little effect on the FIR properties of the galaxies, and so we conclude that any environment-dependent evolution has taken place before the cluster was assembled. | The Fornax cluster is a nearby example of a poor but relatively relaxed cluster. It has a recession velocity of 1379\,km\,s$^{-1}$, a distance of 17.2\,Mpc, a mass of 7$ \times$10$^{13}$ M$_{\odot}$ and a virial radius of 0.7\,Mpc~\citep{drinkwater01}. It is located away from the Galactic plane, with a Galactic latitude of $-53.6^{\circ}$, in an area of relatively low Galactic cirrus. This makes it ideal for study at all wavelengths. ~\citet{drinkwater01} showed that despite Fornax's apparent state of relaxation, it still contains substructure, e.g. a small, in-falling group centred on NGC\,1316, 3$^{\circ}$ to the southwest. However, compared to the Virgo cluster, Fornax is very centrally concentrated and probably at a much later epoch of formation. This is also suggested by the strong morphological segregation that has taken place, leaving the cluster almost entirely composed of early-type galaxies.~\citet{drinkwater01} also noted that there exist two different populations, suggesting that while the giant galaxies are virialised, the dwarf population is still in-falling. Morphological segregation is not the only indicator of evolution in the cluster; the interstellar medium (ISM) of the galaxies also seems to have been affected by the cluster environment.~\citet{schroder01} found that 35 Fornax cluster galaxies were extremely \hi-deficient in comparison to a field sample. \hi\ is generally loosely bound to galaxies and as such is a good indicator of the effects of environmental processes. Dust, another constituent of the ISM, is also affected by the environment.~\citet{cortese10,cortese10b} showed that within a cluster like Virgo, dust can be stripped from the outskirts of a galaxy, truncating the dust disk. Dust is crucial for the lifecycle of a galaxy, as it allows atomic hydrogen to transform on its surface into molecular hydrogen and is thus essential for star formation. 
Around half the energy emitted by a galaxy is first radiated by stars, then reprocessed by dust and re-emitted between 1\,$\mu m$ and 1\,mm~\citep{driver08}. Thus, to better understand the physical processes affecting galaxies it is crucial that we observe and understand the complete `stellar' spectral energy distribution (SED). In contrast to Virgo, Fornax has only very weak X-ray emission~\citep{fornaxxray,virgoxray}, which traces the hot intra-cluster gas. Compared to Virgo, this lack of an intra-cluster medium (ICM), along with a lower velocity dispersion ($\sim$\,300\,km\,s$^{-1}$), reduces the efficiency of mechanisms such as ram pressure stripping. We can estimate the efficiency of ram pressure stripping using $E \propto t_{cross} \delta v^{2} \rho _{gas}$~\citep{gunn72}, where $E$ is the stripping efficiency of a cluster with velocity dispersion $\delta v$, central gas density $\rho _{gas}$ and crossing time $t_{cross}$. Both Virgo and Fornax have a similar crossing time, $t_{cross}\sim 10^{9}$\,yr, which is much less than their relaxation time t$_{relax} \sim 10^{10}$\,yr~\citep{boselli06}. Virgo has a velocity dispersion which is $\sim$\,4$\times$ greater and an ICM $\sim$\,2$\times$ as dense as Fornax~\citep{chen07, boselli06}, indicating that Fornax may be $\sim$\,32$\times$ less efficient than Virgo in removing a galaxy's ISM via ram pressure stripping (at fixed crossing time, the efficiency ratio is $4^{2} \times 2 = 32$). Fornax's higher galaxy density, lower ICM density and lower velocity dispersion suggest that galaxy-galaxy tidal interactions will play a more important role than in a more massive cluster like Virgo~\citep{combes88,kenney95}. Whilst the near-infrared (NIR, 1 - 5\,$\micron$) and mid-infrared (MIR, 5 - 20\,$\micron$) emission from a galaxy is dominated by the old stellar population and complex molecular line emission, respectively, the far-infrared (FIR, 20 - 500\,$\micron$) and sub-mm regime (500 - 1000\,$\micron$) is dominated by dust emitting as a modified blackbody. Although there are a few small windows in the earth's atmosphere, most of the infrared spectrum is absorbed and is either impractical or impossible to observe from the ground, so the infrared wavelength regime is best studied from space-based observatories. In 1983, the $IRAS$~\citep{IRAS} (10\,-\,100\,$\mu$m) all-sky survey opened up the extragalactic infrared sky for the first time. Of particular interest to us is the first detection of FIR sources associated with the Fornax cluster.~\citet{wang91} found 5 $IRAS$ sources matching known Fornax galaxies inside the bounds of our survey, located preferentially towards the outskirts of the cluster. Since $IRAS$, very little further study of the Fornax cluster has been undertaken in the MIR or FIR. However, other cluster observations with $ISO$~\citep{ISO} (10\,-\,70\,$\mu$m) indicated that MIR emission, originating from hot dust ($\sim$60\,K), correlates well with \hii\ regions, implying that it is heated primarily by star formation (SF)~\citep{popescu02}. In contrast, FIR emission from cold dust ($\approx $20\,K) had a nonlinear correlation with H$_{\alpha}$ luminous regions, indicating a link to the older, more diffuse stellar population. Most significantly, they found a cold dust component that, in some cases, was less than 10\,K, though $ISO$ lacked the longer wavelength photometric coverage to constrain the Rayleigh-Jeans blackbody tail of this `cold dust emission'. The $Spitzer$ Space Telescope~\citep{spitzer} (3\,-\,160\,$\mu$m) was launched in 2003. 
Using the MIPS instrument, \citet{edwards11} showed that SF is suppressed in the Coma cluster and that this suppression decreases with distance from the cluster core. All the instruments described above lacked photometric coverage at the wavelengths needed to constrain the temperature and mass of cold dust ($T < 20$\,K). The $Herschel$ Space Observatory~\citep{pilbratt10} rectified this problem, as it was able to survey large areas of sky at longer FIR wavelengths and with superior resolution and sensitivity. The $Herschel$ Fornax Cluster Survey~\citep[HeFoCS;][]{davies12} observations discussed in this paper make use of the superior observational characteristics of the $Herschel$ Space Observatory to address the problems highlighted above. This paper is one in a series of papers in which we compare the properties of galaxies in both the Virgo and Fornax clusters. Other papers in this series are: Paper I~\citep{davies10} examined the FIR properties of galaxies in the Virgo cluster core; Paper II~\citep{cortese10} studied the truncation of dust disks in Virgo cluster galaxies; Paper III~\citep{clemens10} constrained the lifetime of dust in early-type galaxies; Paper IV~\citep{smith10} investigated the distribution of dust mass and temperature in Virgo's spirals; Paper V~\citep{grossi10} examined the FIR properties of Virgo's metal-poor, dwarf galaxies; Paper VI~\citep{baes10} presented a FIR view of M87; Paper VII~\citep{delooze10} detected dust in dwarf elliptical galaxies in the Virgo cluster; Paper VIII~\citep{davies12a} presented an analysis of the brightest FIR galaxies in the Virgo cluster; Paper IX~\citep{magrini11} examined the metallicity dependence of the molecular gas conversion factor; Paper X~\citep{corbelli12} investigated the effect of interactions on the dust in late-type Virgo galaxies; Paper XI~\citep{pappalardo12} studied the effect of environment on dust and molecular gas in Virgo's spiral galaxies; Paper XII~\citep{auld12} examined the FIR properties of an optically selected sample of Virgo cluster galaxies; Paper XIII~\citep{alighieri13} investigated the FIR properties of early-type galaxies in the Virgo cluster; Paper XIV~\citep{delooze13} studied Virgo's transition-type dwarfs; and Paper XVI~\citep{Davies14} presented an analysis of metals, stars, and gas in the Virgo cluster. Six further papers~\citep{Boselli10,hrslate,boquien12,ciesla12,smith12,eales12} discuss the HeViCS galaxies along with other galaxies observed as part of the Herschel Reference Survey (HRS). | We have undertaken the deepest FIR survey of the Fornax cluster using the $Herschel$ Space Observatory. Our survey covers over 16 deg$^2$ in 5 bands and extends to the virial radius of the cluster, including 237 of the 340 FCC galaxies. We have used the optical positions and parameters of these FCC galaxies to fit appropriate apertures to measure the FIR emission. We have detected 30 of 237 (13\,\%) cluster galaxies in the SPIRE 250\,$\micron$ band, a significantly lower detection rate than in the Virgo cluster~\cite[34\,\%; see][]{auld12}. In order to better understand the global detection rate we separated Fornax and Virgo galaxies into 4 morphological categories: dwarf (dE\,/dS0), early (E\,/\,S0), late (Sa\,/\,Sb/\,Sc\,/\,Sd), and irregular (BCD\,/\,Sm\,/\,Im\,/\,dS). We examined the detection rate for each morphological group in the 250\,$\micron$ band, as it has the highest detection rate of all the $Herschel$ bands. 
In Fornax we detect 6\%, 21\%, 90\%, and 31\% of dwarf, early-type, late-type, and irregular galaxies, respectively. These results agree with the fraction of detected galaxies in each morphological category in the Virgo cluster, indicating that the lower global detection rate in Fornax is due to its lower fraction of late-type galaxies. For galaxies detected in at least 3 bands we fit a modified blackbody with a fixed beta emissivity index of 2, giving dust masses and temperatures for 22 Fornax galaxies. Fornax's early-type galaxies show lower dust masses and hotter temperatures than late-type galaxies. When comparing early-type galaxies from the Fornax cluster to their counterparts in the Virgo cluster, their FIR properties are statistically identical. The same is true for the late-type galaxies. This may suggest that the effect of the cluster is more subtle than previously thought and that the evolution of the ISM components has mostly taken place before the cluster was assembled. We observe dust mass to be well correlated with stellar mass for late-type galaxies. We suggest that this correlation has its origins in the mass-metallicity relation~\citep{lequeux79, tremonti04,lara10,hughes13}, as the ratio between the mass of metals in the dust and the gas has been found to be 0.5~\citep{meyer98,Davies14}. It therefore follows that any correlation with gas phase metallicity should also be observed between stellar and dust mass. We find early-type galaxies to have a very large range of dust-to-stars ratios, $-1.3 \ge \log_{10}(M_{dust}/M_{star}) \ge -6.2$. We argue that this supports a scenario where the dust in early-type galaxies has an external origin, as has been previously suggested by other authors~\citep{smith12}. As the FIR properties are statistically identical between environments, so must be the balance between dust input/creation and removal/destruction. However, this conclusion is perplexing, as mergers are thought to be far less common in clusters when compared to groups or the field~\citep{mihos04}, and dust destruction is largely regulated internally~\citep{clemens10}, and thus invariant with respect to environment. | 14 | 3 | 1403.0589 |
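To make the fitting step quoted above concrete, the following is a minimal sketch (our own illustration, not the authors' pipeline) of a single-temperature modified-blackbody fit with a fixed emissivity index $\beta=2$, $S_\nu = M_{dust}\,\kappa_0 (\nu/\nu_0)^{\beta} B_\nu(T_{dust})/D^2$. The opacity normalisation $\kappa_0$ and the flux values below are assumptions chosen only for illustration.
\begin{verbatim}
# Minimal sketch: single-temperature modified blackbody with beta = 2,
# S_nu = M_dust * kappa0 * (nu/nu0)^beta * B_nu(T) / D^2  (SI units)
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
Msun = 1.989e30
D = 17.2 * 3.086e22                      # Fornax distance [m]
nu0 = c / 350e-6                         # reference frequency (350 um)
kappa0 = 0.192                           # assumed dust opacity [m^2/kg]

def greybody(nu, logM, T, beta=2.0):
    B = 2*h*nu**3/c**2 / np.expm1(h*nu/(k*T))   # Planck function
    return 10**logM * Msun * kappa0*(nu/nu0)**beta * B / D**2

# illustrative 100-500 um fluxes (1 Jy = 1e-26 W m^-2 Hz^-1), not real data
lam  = np.array([100., 160., 250., 350., 500.]) * 1e-6
flux = np.array([0.80, 1.20, 0.90, 0.50, 0.20]) * 1e-26
p, cov = curve_fit(greybody, c/lam, flux, p0=[6.0, 20.0])
print("log10(M_dust/Msun) = %.2f, T_dust = %.1f K" % tuple(p))
\end{verbatim}
With fluxes of this order for a galaxy at the Fornax distance, the fit returns a dust mass near $10^{6.5}$ M$_{\odot}$ and a temperature near 18-20\,K, i.e. the regime reported above for late-type galaxies.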
1403 | 1403.7360_arXiv.txt | We determine the oxygen density in the central zone of nine type IIP supernovae (SN~IIP) at the nebular stage using the oxygen doublet [O\,I] 6300, 6364 \AA. Combined with two available estimates, these data indicate that the oxygen densities on day 300 are distributed in a rather narrow range, $(2.3\pm1)\times10^9$ cm$^{-3}$. The result does not depend on the distance, extinction, or model assumptions. We demonstrate that the recovered density distribution suggests that the explosion energy of SN~IIP increases with the stellar mass. | Type IIP supernovae are caused by an explosion related to the core collapse of massive stars. The theory of stellar evolution predicts that SN~IIP progenitors reside in the mass ($M$) range of $9-25~M_{\odot}$ \citep{heg03}; the bounds may be in error by 20\%. The ejected mass ($M_e$) is lower than the progenitor mass by the mass of the neutron star and the mass lost to the stellar wind. How SN~IIP explode is a question that still remains unresolved. Two major mechanisms are discussed in this context: neutrino deposition \citep{cow66} and magnetorotational explosion \citep{bik71}. A third interesting scenario suggests the rotational fragmentation of the protoneutron star followed by the explosion of a neutron mini-star of $0.1~M_{\odot}$ \citep{ims92}. Of particular interest for the observational verification of the explosion mechanisms could be the relation between the explosion energy and the progenitor mass. Common sense arguments suggest that the explosion energy should increase with the star binding energy. The latter increases with the progenitor mass \citep{woo02}, so one expects in this case that the explosion energy (i.e., the kinetic energy at infinity) should rise with the progenitor mass. However, analyses of the neutrino mechanism in the framework of 2D hydrodynamics with a simplified neutrino transfer imply that the explosion energy, on the contrary, should decrease with rising stellar mass, at least in the progenitor mass range of $15-25~M_{\odot}$ \citep{fry99}. Recent numerical experiments using one-dimensional hydrodynamics with an analytical description of the neutrino luminosity predict a non-monotonic $E(M)$ relation in the mass range of $10-28~M_{\odot}$, with the energy varying in the range of $(0.5-2)\times10^{51}$ erg \citep{ugl12}. As to the magnetorotational mechanism, \citet{moi12} show that the explosion energy monotonically increases with the core mass, if one admits a constant ratio of the rotational-to-gravitational energy. A phenomenological relation between the energy and mass can be inferred directly from the hydrodynamic modelling of a large sample of SN~IIP. This sort of study for eight SN~IIP indicates a correlation between the energy and the progenitor mass \citep{utr13}. The problem is that the explored sample comprises only progenitors with masses $>15~M_{\odot}$; whether this stems from observational selection or from mass overestimation remains an open issue. A different conclusion is made by \citet{nad03} using estimates based on analytical relations between the observables and the supernova parameters: a sample of 14 SN~IIP does not show any correlation between the explosion energy and the mass (but see \citet{ham03}). At present, therefore, the issue of whether the energy of SN~IIP depends on the progenitor mass remains unclear from both the observational and theoretical points of view. In the present paper we study the issue of the energy-mass relation for SN~IIP using model-independent arguments. 
For the freely expanding supernova envelope, the density in the central zone depends on the ejecta mass and energy as $\rho\propto M_e/(vt)^3\propto t^{-3}M_e^{5/2}E^{-3/2}$, where the characteristic expansion velocity scales as $v\propto(E/M_e)^{1/2}$. This relation $\rho(E,M_e)$ implies that the presence or absence of a correlation between $E$ and $M_e$ could be checked by means of a density measurement at some fixed stage. There exists a simple and efficient method for the density determination in the SN~IIP envelope at the nebular stage using the [O\,I] 6300, 6364 \AA\ doublet. The red-to-blue flux ratio in the optically thin case is R/B=1/3. However, in the inner zone of SN~IIP, where most of the synthesised oxygen resides, the optical depth in the [O\,I] 6300 \AA\ line may be large. In this case the doublet ratio R/B can be larger than 1/3. This effect was observed originally in SN~1987A and was used to estimate the density and filling factor of oxygen in this supernova \citep{spy91,chu88}. It is noteworthy that the doublet ratio can be affected by Thomson scattering, which gives rise to a red wing of the [O\,I] 6300 \AA\ line and thus an increase of the R/B ratio at the early nebular stage ($t<200$ d) up to R/B$>1$ \citep{chu92}. To determine the oxygen density, therefore, one should use late-time nebular spectra. On the other hand, at very late epochs the doublet ratio converges to the limit R/B=1/3, in which case the optical depth in the 6300 \AA\ line is impossible to recover. The favorable conditions for the oxygen density determination occur at ages of 250--400 days. Surprisingly, although the significance of the density diagnostics based on the [O\,I] 6300, 6364 \AA\ doublet was recognized long ago, until now, apart from SN~1987A, this method has been applied only to SN~1988A and SN~1988H \citep{spy91}. In this paper we wish to measure the oxygen density for a sample of SN~IIP with nebular spectra of good quality using [O\,I] 6300, 6364 \AA. As a result, we hope to recover the distribution function of the oxygen density $p(<n)$ for this category of supernovae. The analysis of this distribution in terms of the supernova energy and mass will hopefully permit us to draw a conclusion on the energy-mass relation and to answer the question posed by the paper title. We start with the conditions in the oxygen line-emitting zone and the method of oxygen density measurement. We then find the oxygen density for a sample of SN~IIP and finally present results for the analysis of the density distribution function. | The goal of the paper was to determine the oxygen density for a sample of SN~IIP using the [O\,I] doublet in nebular spectra and then to study the relation between energy and progenitor mass on the basis of the recovered density distribution. It was found unexpectedly that the range of oxygen number density on day 300 is very narrow, $(2.3\pm1)\times10^9$ cm$^{-3}$. Remarkably, this result does not depend on distance, extinction, or any assumptions. The modelling of the recovered density distribution implies that the explosion energy of SN~IIP increases with the progenitor mass. This result reflects an important property of the explosion mechanism of SN~IIP that should be used to constrain explosion models. It should be emphasised that we do not include in the SN~IIP family events similar to SN~1994W \citep{sol98,chu04} and SN~2009kn \citep{kan12}, which mimic SN~IIP by their light curves but differ essentially by their spectra. 
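For reference, the dependence of the doublet ratio on the line optical depth exploited above can be written down explicitly. In the Sobolev approximation, and treating the two lines as sharing a common upper level with $A_{6300}/A_{6364}\approx3$ (so that $\tau_{6300}=3\tau_{6364}$), the escape probability $\beta(\tau)=(1-e^{-\tau})/\tau$ gives
\begin{equation}
\frac{R}{B}=\frac{F(6364)}{F(6300)}=\frac{1-e^{-\tau}}{1-e^{-3\tau}}\,,\qquad \tau\equiv\tau_{6364},
\end{equation}
which recovers the optically thin limit R/B=1/3 for $\tau\ll1$ and saturates at R/B=1 for $\tau\gg1$. A measured R/B between these limits thus fixes $\tau$ and, through it, the oxygen density in the line-emitting zone.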
The power index of the $E(M)$ relation found from the distribution $p(<n)$ is close to the slope of the scatter plot of the hydrodynamic parameters of SN~IIP in the $\log E-\log M$ plane. The oxygen density distribution thus confirms the conclusion about the increase of the explosion energy with the mass that is indicated by the parameters of the hydrodynamic models \citep{utr13}. On the other hand, we find that the $E(M)$ relations obtained from the hydrodynamic parameters and from the distribution $p(<n)$ differ in the values of $E$ and/or $M$. It is interesting, in this respect, that the observed distribution $p(<n)$ is not consistent with the model in which the lower boundary is $15~M_{\odot}$, i.e. equal to the lower limit of the sample of SN~IIP studied hydrodynamically. One of the reasons for this inconsistency could be that the masses derived from hydrodynamic models are overestimated. An independent argument in favor of this possibility stems from the fact that the progenitor masses of hydrodynamic models lie in the range $>15~M_{\odot}$, notably larger than the lower boundary ($\approx9~M_{\odot}$) which is theoretically associated with SN~IIP \citep{heg03}. This disparity is strengthened by the fact that the lower boundary of progenitor masses ($\approx8~M_{\odot}$) extracted from archival images \citep{sma09} is close to the theoretical lower boundary. The conclusion that the explosion energy of SN~IIP grows with the progenitor mass can serve as a good observational test of the explosion theory for this category of supernovae. The current state of the theory, however, does not permit us to use this test with all its power. The energy-mass relation in the framework of the neutrino mechanism recovered by \citet{ugl12} using a one-dimensional model predicts a distribution $p(<n)$ strongly unlike the observed one. Yet the existing uncertainties of the neutrino mechanism do not rule out a monotonic increase of the explosion energy in the mass range $10<M<M_{up}$, where $M_{up}$ is a poorly known value, probably lying in the range of $20-25~M_{\odot}$ (T. Janka, private communication). In the mechanism of the mini-neutron star the energy release is invariant, $\approx 10^{51}$ erg \citep{ims92}. Since the binding energy grows with the mass, this mechanism predicts a decrease of the explosion energy with the mass and therefore cannot be universal for SN~IIP. \vspace{1cm} We are grateful to Lina Tomasella for sending us spectra of SN~2012A. \medskip | 14 | 3 | 1403.7360 |
1403 | 1403.0898_arXiv.txt | Measurements of the growth index of linear matter density fluctuations $\gamma(z)$ provide a clue as to whether Einstein's field equations encompass gravity also on large cosmic scales, those where the expansion of the universe accelerates. We show that the information encoded in this function can be satisfactorily parameterized using a small set of coefficients $\gamma_i$, in such a way that the true scaling of the growth index is recovered to better than $1\%$ in most dark energy and dark gravity models. We find that the likelihood of current data, given this formalism and the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) expansion model of $Planck$, is maximal for $\gamma_0=0.74^{+0.44}_{-0.41}$ and $\gamma_1=0.01^{+0.46}_{-0.46}$, a measurement compatible with the $\Lambda$CDM predictions ($\gamma_0=0.545$, $\gamma_1=-0.007$). In addition, data tend to favor models predicting slightly less growth of structures than the $Planck$ $\Lambda$CDM scenario. The main aim of the paper is to provide a prescription for routinely calculating, in an analytic way, the amplitude of the growth indices $\gamma_i$ in relevant cosmological scenarios, and to show that these parameters naturally define a space where predictions of alternative theories of gravity can be compared against growth data in a manner which is independent of the expansion history of the cosmological background. As the standard $\Omega$-plane provides a tool to identify different expansion histories $H(t)$ and their relation to various cosmological models, the $\gamma$-plane can thus be used to locate different growth rate histories $f(t)$ and their relation to alternative models of gravity. As a result, we find that the Dvali-Gabadadze-Porrati gravity model is rejected at the $95\%$ confidence level. By simulating future data sets, such as those that a Euclid-like mission will provide, we also show how to tell apart $\Lambda$CDM predictions from those of more extreme possibilities, such as smooth dark energy models, clustering quintessence or parameterized post-Friedmann cosmological models. | \label{sec:introduction} Measurements of the expansion rate history $H(t)$ of the universe, when interpreted within the standard model of cosmology, convincingly indicate that the universe has recently entered a phase of accelerated expansion \cite{Perlmutter:1998np,Riess:1998cb, astier, marpairs,mod1, anderson, Bel:2013ksa, san,Ade:2013zuv}. Most of this unexpected evidence is provided via geometric probes of cosmology, that is, by constraining the redshift scaling of the luminosity distances $d_L(z)$ of cosmic standard candles (such as Supernovae Ia), or of the angular diameter distance $d_A(z)$ of cosmic standard rulers (such as the sound horizon scale at the last scattering epoch). Despite much observational evidence, little is known about the physical mechanism that drives cosmic acceleration. As a matter of fact, virtually all the attempts to make sense of this perplexing phenomenon without invoking a new constant of nature (the so-called cosmological constant) call for exotic physics beyond current theories. 
For example, it is speculated that cosmic acceleration might be induced by a non-clustering, non-evolving, non-interacting and extremely light vacuum energy $\Lambda$ \cite{Peebles:2002gy}, or by a cosmic field with negative pressure, and thus repulsive gravitational effect, that changes with time and varies across space (the so-called dark energy fluid) \cite{lucashin, cope, wett:1988, Caldwell:1997ii, ArmendarizPicon:2000dh,Binetruy:2000mh,Uzan:1999ch,Riazuelo:2001mg, Gasperini:2001pc}, if not by a break-down of Einstein's theory of gravity on cosmological scales (the so-called dark gravity scenario) \cite{costas, DeFelice:2010aj, DGP, Deffayet:2001pu, Arkani-Hamed:2002fu,Capozziello:2003tk, Nojiri:2005jg, deRham:2010kj, Piazza:2009bp,GPV,JLPV,BFS}. This last, extreme eventuality is made somewhat less far-fetched by the fact that a large variety of nonstandard gravitational models, inspired by fundamental physics arguments, can be finely tuned to reproduce the expansion rate history of the successful standard model of cosmology, the $\Lambda$CDM paradigm. Although different models make indistinguishable predictions about the amplitude and scaling of background observables such as $d_L, d_A$ and $H,$ the analysis of the clustering properties of matter on large linear cosmic scales is in principle sufficient to distinguish and falsify alternative gravitational scenarios. Indeed, a generic prediction of modified gravity theories is that the Newton constant $G$ becomes a time (and possibly scale) dependent function $G_{\rm eff}$. Therefore, dynamical observables of cosmology which are sensitive to the amplitude of $G$, such as, for example, the clustering properties of cosmic structures, provide a probe for resolving geometrical degeneracies among models and for properly identifying the specific signature of nonstandard gravitational signals. Considerable phenomenological effort is thus devoted to engineering and applying methods for extracting information from dynamical observables of the inhomogeneous sector of the universe \cite{Bel:2012ya,TurHudFel12,DavNusMas11,BeuBlaCol12,PerBurHea04,SonPer09,RosAngSha07,CabGaz09,SamPerRac12, ReiSamWhi12,ConBlaPoo13,GuzPieMen08,TorGuzPea13}. Indeed, thanks to large and deep future galaxy redshift surveys, such as for example Euclid \cite{euclid2}, the clustering properties of matter will soon be characterized with a `background level' precision, thus providing us with stringent constraints on the viability of alternative gravitational scenarios. Extending the perimeter of precision cosmology beyond zeroth order observables into the domain of first order perturbative quantities critically depends on observational improvements but also on the refinement of theoretical tools. Among the quantities that are instrumental in constraining modified gravity models, the linear growth index $\gamma$, \begin{equation} \label{defuno} \gamma(a) \equiv \big( \ln \Omega_{\rm m} (a) \big)^{-1} \ln \Big( \frac{d \ln \delta_{\rm m}(a)}{d \ln a} \Big) \end{equation} \noindent where $a$ is the scale factor of the universe, $\Omega_{\rm m}=(8\pi G \rho_{\rm m})/(3H^2)$ is the reduced density of matter and $\delta_{\rm m} =\rho_{\rm m}/\bar{\rho}_{\rm m} -1$ the dimensionless density contrast of matter, has attracted much attention. Despite being in principle a function, this quantity is often, and effectively, parameterized as being constant \cite{pee80}. Among the various appealing properties of such an approximation, two in particular deserve to be mentioned. 
First, the salient features of the growth rate history of linear structures can be compressed into a single scalar quantity which can be easily constrained using standard parameter estimation techniques. Just as parameters such as $H_0$, $\Omega_{\rm m,0}$, etc. incorporate all the information contained in the expansion rate function $H(t)$, so it is extremely economical to label and identify different growth histories $\delta_{\rm m}(t)$ with the single book-keeping index $\gamma$. Moreover, since the growth index parameter takes distinctive values for distinct gravity theories, any deviation of its estimated amplitude from the reference value $\gamma_0=6/11$ (which represents the exact asymptotic early value of the function $\gamma(a)$ in a $\Lambda$CDM cosmology \cite{WanSte98}) is generically interpreted as a potential signature of new gravitational physics. However useful in quantifying deviations from standard gravity predictions, this index must also be precise to be of any practical use. As a rule of thumb, the systematic error introduced by approximating $\gamma(a)$ with $\gamma_0$, which depends on $\Om$, must be much smaller than the precision with which future experiments are expected to constrain the growth index over a wide redshift range ($\sim 0.7\%$ \cite{euclid2}). Notwithstanding, already within a standard $\Lambda$CDM framework with $\Omega_{\rm m,0}=0.315$, the imprecision of the asymptotic approximation is of order $2\%$ at $z=0$. More subtly, the expansion kinematics are expected to leave time-dependent imprints in the growth index. The need to model the redshift evolution of the growth index, especially in view of the large redshift baseline that will be surveyed by future data, led to the development of more elaborate parameterizations \cite{Gong, GanMorPol09, FuWuYu2009,pg,gp}. Whether their justification is purely phenomenological or theoretical, these formulas aim at locking the expected time variation of $\gamma(a)$ into a small set of scalar quantities, the so-called growth index parameters $\gamma_i$. For example, some authors (e.g. \cite{pg,gp}) suggest using the Taylor expansion $\gamma(z)\,=\,\gamma_0\,+\,\big[\frac{d \gamma}{dz}\big]_{z=0}\, z$ for data fitting purposes. Indeed, this approach has the merit of great accuracy at the present epoch, but it becomes too inaccurate at the intermediate redshifts ($z\sim 0.5$) already probed by current data. On top of precision issues, there are also interpretational concerns. Ideally, we would like the growth index parameter space to be also in one-to-one correspondence with predictions of specific gravitational theories. In other terms, we would like to use likelihood contours in this plane to select/reject specific gravitational scenarios. This is indeed a tricky issue. For example, it is rather involved to link the amplitude of the growth index parameters to predictions of theoretical models if the growth index fitting formula has a phenomenological nature. More importantly, it is not evident how to extract growth information ($\delta_{\rm m}(a)$) from a function, $\gamma$, which, as equation (\ref{defuno}) shows, is degenerate with background information (specifically $\Om(a)$). In other terms, the growth index parameters are model-dependent quantities that can be estimated only after a specific model for the evolution of the background quantity $\Om(a)$ is supplied. 
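The numbers quoted above are easy to reproduce. The following is a minimal sketch (our own illustration) that integrates the standard growth-rate equation on a flat $\Lambda$CDM background and evaluates the growth index of Eq. (\ref{defuno}); with $\Omega_{\rm m,0}=0.315$ it returns $\gamma(z=0)\simeq0.556$, about $2\%$ above the asymptotic value $6/11\simeq0.545$, in line with the statement above.
\begin{verbatim}
# Sketch: growth index gamma(z) in flat LCDM.
# df/dlna = 1.5*Om - f^2 - (2 + dlnH/dlna)*f, with dlnH/dlna = -1.5*Om
import numpy as np
from scipy.integrate import odeint

Om0 = 0.315
Om = lambda a: Om0 / (Om0 + (1.0 - Om0) * a**3)

def dfdlna(f, lna):
    om = Om(np.exp(lna))
    return 1.5*om - f**2 - (2.0 - 1.5*om)*f

lna = np.linspace(np.log(1e-3), 0.0, 2000)   # deep matter era: f -> 1
f = odeint(dfdlna, 1.0, lna).ravel()
gamma = np.log(f) / np.log(Om(np.exp(lna)))
print("gamma(z=0) = %.4f  vs  6/11 = %.4f" % (gamma[-1], 6.0/11.0))
\end{verbatim}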
Therefore it is not straightforward to use the likelihood function in the $\gamma$-plane to reject dark energy scenarios for which the background quantities do not scale as in the fiducial. Because of this, up to now, growth index measurements in a given fiducial were used to rule out only the null hypothesis that the fiducial correctly describes large scale structure formation processes. Growth index estimates were not used to gauge the viability of a larger class of alternative gravitational scenarios. A reverse argument also holds and highlights the importance of working out a growth index parameterization which is able to capture the finest details of the exact numerical solution while establishing, at the same time, the functional dependence on background observables. Indeed, once a given gravitational paradigm is assumed as a prior, the degeneracy of growth index measurements with background information can be exploited to constrain the background parameters of the resulting cosmological model directly using growth data. Therefore, by expressing the growth index as a function of specific dark energy or dark gravity parameters, one can test for the overall coherence of cosmological information extracted from the joint analyses of smooth and inhomogeneous observables. In this paper we address some of these issues by means of a new parameterization of the growth index. The main virtues of the approach are that the proposed formula is {\it a)} flexible, i.e.~it describes predictions of a wide class of cosmic acceleration models, {\it b)} accurate, i.e.~it performs better than alternative models in reproducing exact numerical results, {\it c)} `observer friendly', i.e.~accuracy is achieved with a minimum number of parameters and {\it d)} `theorist friendly', i.e.~the amplitude of the fitting parameters can be directly and mechanically related, in an analytic way, to predictions of theoretical models. The paper is organized as follows. We define the parameterization for the growth index in section \S \ref{sec:parametrizing}, and we discuss its accuracy in describing various dark energy models such as smooth and clustering quintessence in \S \ref{sec:standard}. In \S \ref{sec:modified} we apply the formalism to modified gravity scenarios. In particular, we discuss the DGP \cite{DGP} and the Parameterized Post Friedmann \cite{FerSko10} scenarios. In \S \ref{sec:constraining} we confront the studied models with current (and simulated future) data. Conclusions are drawn in \S \ref{sec:conclusions}. Throughout the paper, if not specified differently, the flat Friedmann-Lema\^itre-Robertson-Walker cosmology with $Planck$ parameters $\Omo =0.315, \sigma_{8,0}=0.835$ \cite{Ade:2013zuv} is referred to as the {\it reference} $\Lambda$CDM model. | \label{sec:conclusions} The observational information about the growth rate history $f(t)$ of linear cosmic structures can be satisfactorily encoded into a small set of parameters, the growth indices $\gamma_i$, whose amplitude can be analytically predicted by theory. Their measurement allows us to explore whether Einstein's field equations encompass gravity also in the infrared, i.e.~on very large cosmic scales. 
In order for this to be accomplished, {\it a}) an optimal scheme for compressing the growth rate function into the smallest possible set of discrete scalars $\gamma_i$, without sacrificing accuracy, and {\it b}) a prescription for routinely calculating their amplitude in relevant theories of gravity, in order to explore the largest region in the space of all possible models, must be devised. In this paper we have explored a promising approach towards this goal. We have demonstrated both the precision and the flexibility of a specific parameterization of the growth index, that is, the logarithmic expansion \eqref{eq:gamma_Taylor}. If the fiducial gravitational model is not too different from standard GR, i.e.~possible deviations in both the background and perturbed sectors can be interpreted as first-order corrections to the Friedmann model, then the proposed parameterization scheme allows us to match numerical results on the redshift dependence of the growth index with a relative error which is lower than the nominal precision with which the next generation of redshift surveys is expected to fix the scaling of this function. The performance is demonstrated by comparing, for various fiducial gravitational models, the accuracy of our proposal against that of different parameterizations available in the literature. Besides accuracy, the formalism features two other critical merits, one practical and one conceptual. First, we supply a simple way of routinely calculating the amplitude of the growth indices in any gravitational model in which the master equation for the growth of density perturbations reduces to the form of Eq. \ref{eq:matter_density_fluctuations}. For this purpose it is enough to specify three characteristic functions of this equation, namely the expansion rate $H(t)$ and the damping and response coefficients $\nu(t)$ and $\mu(t)$, to calculate the parameters $\gamma_i$ up to any desired order $i$. Moreover, since the parameterization of the growth rate does not have a phenomenological nature, but is constructed as a series expansion of the exact solutions of the differential equation which rules the growth of structures (cf. Eq. \ref{eq:f_H_General}), one can easily interpret empirical results about the amplitude of the growth indices in terms of fundamental gravitational models. Since the growth index is a model-dependent quantity, it has traditionally been used only to reject, statistically, the specific model adopted to analyze growth data. We have shown, instead, that the growth index parameter space $\gamma_0-\gamma_1$ provides a diagnostic tool to discriminate among a large class of models, even those presenting background evolution histories different from the fiducial model adopted in the data analysis. In other terms, a detection of a present-day growth index amplitude $\neq 0.55$ would not only indicate a deviation from $\Lambda$CDM predictions but could be used to disentangle different alternative explanations of the cosmic acceleration in a straightforward way. The key to this feature is the mapping of Eq.~\ref{eq:f_approximation2}, which allows one to factor out the effect of expansion from the analysis of growth rate histories. As the standard $\Omo-\Omega_{\Lambda,0}$ plane identifies different expansion histories $H(t)$, the $\gamma_0-\gamma_1$ plane can thus be used to locate different growth rate histories $f(t)$. 
We have illustrated the performance of the growth index plane in relation to modified gravity model selection/exclusion by using current data as well as forecasts from future experiments. We have shown that the likelihood contours in the growth index plane $\gamma_0 - \gamma_1$ can be used to tell apart a clustering quintessence component \cite{SefVer11} from a smooth dark energy fluid, to fix the parameters of viable Parameterized Post Friedmann gravitational models \cite{FerSko10}, or to exclude specific gravitational models such as, for example, DGP \cite{DGP}. The performance of the analysis tool presented in this paper is expected to be enhanced, should the formalism be coupled to models parameterizing the large class of possible gravitational alternatives to standard GR available in the literature. In particular, various approaches have been proposed to synthetically describe all the possible gravitational laws generated by adding a single scalar degree of freedom to Einstein's equations~\cite{GPV,JLPV, BFS, BFPW}. Besides quintessence, scalar-tensor theory and $f(R)$ gravity, this formalism also allows one to describe covariant Galileons~\cite{NRT}, kinetic gravity braiding \cite{deffa1} and Horndeski/generalized Galileons theories~\cite{hor,Deffayet:2009wt}. Interestingly, the cosmological perturbation theory of this general class of models can be parameterized so that a direct correspondence between the parameterization and the underlying space of theories is maintained. In a different paper \cite{PSM} we have already explored how the effective field theory formalism of \cite{GPV} can be used to interpret the empirical constraints on $\gamma_i$ directly in terms of fundamental gravity theories. | 14 | 3 | 1403.0898 |
1403 | 1403.2586_arXiv.txt | Remote sensing of the atmosphere is conventionally done via a study of extinction / scattering of light from natural (Sun, Moon) or artificial (laser) sources. Cherenkov emission from extensive air showers generated by cosmic rays provides one more natural light source distributed throughout the atmosphere. We show that Cherenkov light carries information on the three-dimensional distribution of clouds and aerosols in the atmosphere and on the size distribution and scattering phase function of cloud/aerosol particles. Therefore, it could be used for atmospheric sounding. The new atmospheric sounding method could be implemented via an adjustment of the technique of imaging Cherenkov telescopes. The atmospheric sounding data collected in this way could be used both for atmospheric science and for the improvement of the quality of astronomical gamma-ray observations. | Knowledge of the optical properties of clouds and aerosols is important in a wide range of scientific problems, from atmospheric and climate science \cite{ipcc13} to astronomical observations across wavelength bands \cite{beniston02,font12,chaves13}. Clouds reflect and absorb radiation from the Sun, thus regulating the intake of solar energy by the Earth. The study of scattering and absorption of light by clouds is, therefore, a key element for understanding the physics of the Earth's atmosphere \cite{ipcc13,stephens05}. Aerosols work as condensation centres for the formation of cloud water droplets and ice crystals. Understanding the relation between clouds and aerosols is one of the main challenges of atmospheric science \cite{ipcc13,haywood00}. Probes of the properties of clouds and aerosols are done using in situ measurements and remote sensing techniques \cite{stephens07}, including imaging from space or from the ground \cite{king92}, observations of transmitted light from the Sun or Moon \cite{bovensmann99} and sounding of the clouds with radiation beams \cite{winker09}. LIght Detection And Ranging (LIDAR) sounding techniques (Fig. \ref{fig:principle}) probe the vertical structure of clouds and aerosols via the timing of the backscatter signal from a laser beam \cite{winker09}. The presence of clouds perturbs astronomical observations in the Very-High-Energy (VHE) \gr\ (photons with energies 0.1-10~TeV) band and the operation of Cosmic Ray (CR) experiments which use the Earth atmosphere as a giant high-energy particle detector \cite{font12,chaves13}. Imaging Atmospheric Cherenkov Telescope (IACT) arrays\footnote{HESS telescopes: http://www.mpi-hd.mpg.de/hfm/HESS/; MAGIC telescopes: https://magic.mpp.mgeste.de; VERITAS telescopes: http://veritas.sao.arizona.edu.}, as well as air fluorescence telescopes for detection of Ultra-High-Energy CRs\footnote{Pierre Auger Observatory: http://www.auger.org; Telescope Array: http://www.telescopearray.org; JEM-EUSO: http://jemeuso.riken.jp/en/.}, detect cosmic high-energy particles via imaging of the Cherenkov and fluorescence emission from the particle Extensive Air Showers (EAS) initiated by the primary cosmic particles. Information on the presence and properties of the clouds and aerosols is essential for the proper interpretation of the data collected in this way. Gamma-ray / CR observations affected even by optically thin clouds are normally excluded from data sets, because the properties of the clouds are not known sufficiently well to allow correction for the effects of scattering of light by the atmospheric features. 
Here we show that Cherenkov light produced by the EAS could be used as a tool for remote sensing of the atmosphere. We show that this tool allows the characterisation of the three-dimensional cloud / aerosol coverage above the observation site and provides information on the physical properties of cloud and aerosol particles. | In this paper we have proposed a novel approach for the remote sounding of the atmosphere using the UV Cherenkov light generated by the cosmic ray induced EAS throughout the atmospheric volume. This approach allows the detection of atmospheric features, such as cloud and aerosol layers, and the characterisation of their geometrical and optical properties. Noticing an analogy between the UV light pulse produced by the EAS and the laser light pulse commonly used in LIDAR devices, we demonstrated that the principles of the measurement of the properties of clouds and aerosols based on the imaging and timing of the EAS signal are very similar to those used by the LIDAR. In fact, equations (\ref{eq:shower_eq1}), (\ref{eq:shower_eq2}) are the direct analogs of the well-known ``LIDAR equation'' commonly used in the analysis and interpretation of LIDAR data. There are, however, important differences between the EAS and laser light pulses, which make the new approach based on the EAS light complementary to the LIDAR approach. Most importantly, the Cherenkov light is continuously ``regenerated'' all along the EAS track from the top to the bottom of the atmosphere, while the laser light is generated once in a single location (e.g. at the ground level for a ground-based LIDAR). Another important difference is that the Cherenkov light has a continuum spectrum spanning the visible and UV bands, while the laser light of LIDARs is monochromatic. We have shown that the difference in the properties of the light used by LIDARs and by the proposed EAS + Cherenkov telescope setup potentially provides new possibilities for the measurement of physical characteristics of the cloud / aerosol particles, such as the size distribution and the scattering phase function. Thus, the proposed technique is expected to provide data useful in the context of atmospheric physics. Existing IACT systems use a range of atmospheric monitoring tools to characterise weather conditions at their observation sites, including infrared / visible cameras and conventional LIDARs. The atmospheric monitoring data are collected with the aim of controlling the quality of the astronomical gamma-ray data, which are the data on the UV Cherenkov emission from the EAS induced by gamma-rays coming from high-energy astronomical sources. We have demonstrated that the IACTs themselves could serve as powerful atmospheric monitoring tools, providing atmospheric data complementary to those of the LIDARs and visible / infrared cameras. The atmospheric sounding data could be partially extracted from the background cosmic ray data of \gr\ observations by existing IACTs. Their collection does not require interruptions of the planned astronomical observation schedule. Moreover, the atmospheric data could be collected in cloudy sky conditions when astronomical observations are difficult or impossible. The availability of detailed simultaneous atmospheric sounding data should allow a better control of the quality of the astronomical \gr\ data taken by existing IACTs, e.g. via a better definition of the ``clear sky'' conditions. 
Besides, this should also open the possibility of observations in the borderline situation where moderately optically thin clouds and aerosols are present. | 14 | 3 | 1403.2586 |
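For reference (our addition; the paper's own Eqs. (\ref{eq:shower_eq1})-(\ref{eq:shower_eq2}) are not reproduced in this excerpt), the standard single-scattering elastic-backscatter LIDAR equation, to which the EAS analysis is compared above, reads
\begin{equation}
P(r)=P_0\,\frac{c\,\tau_p}{2}\,A\,\frac{\beta(r)}{r^{2}}\,\exp\left[-2\int_0^{r}\alpha(r')\,dr'\right],
\end{equation}
where $P(r)$ is the power received from range $r$, $P_0$ the emitted power, $\tau_p$ the pulse duration, $A$ the receiver aperture, $\beta$ the backscatter coefficient and $\alpha$ the extinction coefficient. Roughly speaking, in the EAS case the roles of $P_0$ and of the $1/r^2$ dilution are played by the Cherenkov light yield along the shower track and by the telescope to scattering-point geometry.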
1403 | 1403.7195_arXiv.txt | We present new astrometry for the young (12--21 Myr) exoplanet $\beta$~Pictoris~b taken with the Gemini/NICI and Magellan/MagAO instruments between 2009 and 2012. The high dynamic range of our observations allows us to measure the relative position of $\beta$ Pic b with respect to its primary star with greater accuracy than previous observations. Based on a Markov Chain Monte Carlo analysis, we find the planet has an orbital semi-major axis of 9.1$^{+5.3}_{-0.5}$ AU and orbital eccentricity $<0.15$ at 68\% confidence (with 95\% confidence intervals of 8.2--48 AU and 0.00--0.82 for semi-major axis and eccentricity, respectively, due to a long narrow degenerate tail between the two). We find that the planet has reached its maximum projected elongation, enabling higher precision determination of the orbital parameters than previously possible, and that the planet's projected separation is currently decreasing. With unsaturated data of the entire $\beta$ Pic system (primary star, planet, and disk) obtained thanks to NICI's semi-transparent focal plane mask, we are able to tightly constrain the relative orientation of the circumstellar components. We find the orbital plane of the planet lies between the inner and outer disks: the position angle (PA) of nodes for the planet's orbit (211.8$\pm$0.3$^\circ$) is 7.4$\sigma$ greater than the PA of the spine of the outer disk and 3.2$\sigma$ less than the warped inner disk PA, indicating the disk is not collisionally relaxed. Finally, for the first time we are able to dynamically constrain the mass of the primary star $\beta$~Pic to 1.76$^{+0.18}_{-0.17}$ M$_{\sun}$. | $\beta$ Pic is a young ($\sim$12--21 Myr, \citealt{barrado99,zsbw01,binks14}), nearby (19.44 pc, \citealt{newhip}) A6 star that hosts one of the most prominent known debris disks (e.g. \citealt{betapicdisk,wahhaj03,weinberger03,golimowski06,lagrange12}). The disk midplane is warped, with a 4$^\circ$ offset between the inner warped disk and the outer main disk, suggesting the influence of a giant planet on the disk \citep{mouillet97}. This planet, $\beta$~Pic~b, was first detected in data from 2003, and the planet reappeared on the other side of the star in 2009 \citep{betapicb,betapic2}. $\beta$ Pic b is one of the first planets to be directly imaged and has the smallest projected physical separation of any imaged planet to date. With a contrast of 9 magnitudes at $K_S$-band and a current projected separation of $\sim$0.4'', the planet is challenging to detect even with state-of-the-art adaptive optics. Orbital properties of directly imaged planets can encode clues to their formation. The eccentricities of these planets, for example, may trace migration or planet-planet interactions (e.g., \citealt{takeda05,juric08,wang11}). $\beta$ Pic b represents the longest-period exoplanet whose full orbital parameters can be determined with present observations. The estimated orbital period of $\sim$20 years gives us the opportunity to determine the orbit of this planet, whereas most other directly imaged planets will require many more decades of observations for a robust orbit determination. The orbital parameters of $\beta$ Pic b are of particular interest since they allow us to study the relationship between the planet and the debris disk. The Gemini NICI Planet-Finding Campaign was a 4-year survey to detect extrasolar planets, conducted between 2008 and 2012 \citep{liunici,niciastars,debris,moving_groups}. 
In addition to detecting a number of brown dwarf companions \citep{pztel,cd35,hd1160}, we detected the planet $\beta$ Pic b at multiple epochs over the course of the Campaign. We combine these observations with new data from Magellan MagAO and previous work to determine the orbit of the planet. | We have examined the orbit of $\beta$ Pic b given five new epochs of data taken with Gemini/NICI and Magellan/MagAO, finding a semi-major axis of 9.1$^{+5}_{-0.5}$~AU and a period of 21$^{+21}_{-2}$ years. The astrometric record of $\beta$ Pic b is now long enough to be able to remove the assumption of the total system mass, which was needed by all previous fits to this orbit. When we solve for the mass of $\beta$ Pic itself we find a value of 1.76$^{+0.18}_{-0.17}$~M$_{\sun}$, consistent with the expected value of 1.75~M$_{\sun}$. The position angle of nodes for our fixed-mass orbit is offset from the observed position angles of the inner warped disk (at 3.2$\sigma$ significance) and from the outer disk (at 7.4$\sigma$ significance), suggesting that the disk is not collisionally relaxed. Numerous degenerate orbital solutions, in particular among semi-major axis, period, and eccentricity, exist for astrometric data that show minimal acceleration during the timeframe of the observations. Observing significant acceleration, and in particular the reversal of direction at maximum elongation, greatly reduces these degeneracies. Our orbital fit indicates that the planet has reached maximum elongation and is currently moving back toward the star, crossing to the other side of the star by $\approx$2018. $\beta$~Pic~b has been observed extensively since its reappearance in 2009, and the current window for studying the planet will remain open for just a few more years before the planet is undetectable behind the star again. Advanced planet-finding instruments such as GPI and SPHERE will likely allow for orbital monitoring of the planet closer to the star, so the time of lost contact is likely to be significantly shorter than it was between 2003 and 2009. The window for the next transit is between 2017.41 and 2018.18 at 68\% confidence, and future astrometric monitoring will provide a more precise prediction to guide photometric monitoring. We thank Jessica Lu and Adam Kraus for helpful discussions. B.A.B. was in part supported by Hubble Fellowship grant HST-HF-01204.01-A awarded by the Space Telescope Science Institute, which is operated by AURA for NASA under contract NAS 5-26555. This work was supported in part by NSF grants AST-0713881 and AST-0709484 awarded to M. Liu. The Gemini Observatory is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNPq (Brazil), and CONICET (Argentina). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. {\it Facilities:} \facility{Gemini:South (NICI)}, \facility{Magellan II (MagAO+Clio2)}, \facility{Magellan II (MagAO+VisAO)}. | 14 | 3 | 1403.7195 |
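As an illustration of how orbital elements map onto the measured astrometry (our own sketch; the element values below are placeholders of the same order as the fit quoted above, not the published posterior), the on-sky separation and position angle of a companion follow from a Keplerian orbit as:
\begin{verbatim}
# Sketch: on-sky separation rho and position angle PA from Keplerian elements
import numpy as np

def kepler_E(M, e, n_iter=50):
    E = M.copy()
    for _ in range(n_iter):                 # Newton's method
        E -= (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
    return E

def sky_position(t, a, e, inc, omega, Omega, T0, P):
    """a in arcsec; angles in radians; t, T0, P in years."""
    E = kepler_E(2*np.pi*(np.atleast_1d(t) - T0)/P, e)
    nu = 2*np.arctan2(np.sqrt(1+e)*np.sin(E/2), np.sqrt(1-e)*np.cos(E/2))
    r = a*(1 - e*np.cos(E))
    north = r*(np.cos(Omega)*np.cos(omega+nu)
               - np.sin(Omega)*np.sin(omega+nu)*np.cos(inc))
    east  = r*(np.sin(Omega)*np.cos(omega+nu)
               + np.cos(Omega)*np.sin(omega+nu)*np.cos(inc))
    return np.hypot(north, east), np.degrees(np.arctan2(east, north)) % 360.0

# placeholder elements of the right order of magnitude for beta Pic b
rad = np.radians
rho, pa = sky_position(np.array([2011.0, 2012.0]), a=9.1/19.44, e=0.05,
                       inc=rad(89.0), omega=rad(0.0), Omega=rad(211.8),
                       T0=2007.0, P=21.0)
print(rho, pa)   # arcsec, degrees E of N
\end{verbatim}
An MCMC fit such as the one described above then amounts to sampling these elements (plus the parallax and the total mass through Kepler's third law) against the measured positions.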
1403 | 1403.5229_arXiv.txt | \noindent Since the advent of the accelerating homogeneous universe model, other explanations for the supernova Ia dimming have been explored, among which are inhomogeneous models constructed with exact $\Lambda = 0$ solutions of Einstein's equations. They have been used either as one-patch models or to build Swiss-cheese models. The most studied ones have been the Lema\^itre-Tolman-Bondi (LTB) models. However, since these models are spatially spherical, they are not well designed to reproduce the large scale structures, which exhibit clusters, filaments and non-spherical voids. This is the reason why Szekeres models, which are devoid of any symmetry, have recently come into play. In this paper, we give the equations and an algorithm to compute the redshift-drift for the most general quasi-spherical Szekeres (QSS) models with no dark energy. We apply it to a QSS model recently proposed by Bolejko and Sussman (BSQSS model), who averaged their model to reproduce the density distribution of the Alexander and collaborators' LTB model, which is able to fit a large set of cosmological data without dark energy. They concluded that their model represents a significant improvement over the description of the observed cosmic structure by spherical LTB models. We show here that this QSS model is ruled out by a negative cosmological redshift, i.e. a blueshift, which is not observed in the Universe. We also compute a positive redshift and the redshift-drift for the Alexander et al. model and compare this redshift-drift to that of the $\Lambda$CDM model. We conclude that the process of averaging an unphysical QSS model can lead to a physical model able to reproduce our observed local Universe with no need for dark energy, and that the redshift-drift can discriminate between this model and the $\Lambda$CDM model. For completeness, we also compute the blueshift-drift of the BSQSS model. | \label{sec1} In 1998 the SN Ia observations revealed that their observed luminosity was lower than what was expected in the cold dark matter (CDM) model \cite{riess98,permutter99}. In other words, the SN Ia were found to be at distances farther than those predicted by the CDM model. Also, the deceleration parameter inferred from the data was found to be negative, at odds with the CDM model. A negative deceleration parameter implies that the Universe expansion rate is accelerating. This can be explained in FLRW models only if a fluid with negative pressure is assumed to fill the Universe. Such an exotic fluid is named dark energy. Since this discovery, there have been many dark energy models proposed in the literature, but none of them satisfactorily addresses the question of its origin and nature. However, there have been attempts to explain these observations without assuming any dark energy component. The main attempts can be broadly divided into two categories: inhomogeneous models and modified gravity. As the names suggest, the first category abandons the space homogeneity assumption and the second category works with modified Einstein's equations (see, e.g., Refs.~\cite{fuzfa06,MS12,moffat06}, and Ref.~\cite{TC12} for a review). In this article, we limit ourselves to the study of inhomogeneous models. The two inhomogeneous solutions of Einstein's equations which have been most frequently used in the literature can be divided into two classes: Lema\^itre-Tolman-Bondi \cite{GL33,RCT34,HB47} (LTB) models and Szekeres \cite{PS75} models. 
The LTB metric is a spatially spherical dust solution of the Einstein equations, while the Szekeres metric is a dust solution of these equations with no symmetry, i.e., no Killing vector \cite{WBB76}. One can find in the literature many LTB models and a few Szekeres models which claim to explain the cosmological observations without assuming dark energy (see, e.g., Ref. \cite{KB11} for a review and also Ref. \cite{AN11} for a study of a particular Szekeres model not included in this previous review). Since these solutions consider only dust as a gravitational source, they are valid only in the region of the Universe where the radiation effect is negligible, i.e., between the last scattering surface and our current location. We will use them to study our local Universe, where dark energy is supposed to have the strongest effect. In this paper, we are interested in the study of the Szekeres model proposed by Bolejko and Sussman \cite{BS11}, which, once spatially averaged, qualitatively reproduces the density profile of the LTB model of Alexander and collaborators \cite{ABNV09}. This LTB model is a very good fit to the SN Ia data and is also consistent with the WMAP 3-year data and local measurements of the Hubble parameter. However, for models reproducing cosmological data measured on our past light cone, the discrimination between inhomogeneous models and $\Lambda$CDM models is impossible. The problem is completely degenerate. This is the reason why several tests using effects outside the light cone have been proposed, one of these being the drift of the source redshift as the observer's proper time elapses \cite{AS62,GMV62}. In a previous paper \cite{MCS2012}, we calculated the redshift-drift for the axially symmetric Szekeres model of Ref. \cite{BC10} and compared it to the redshift-drift in some LTB models found in the literature and to that of the $\Lambda$CDM model. We found that the redshift-drift is indeed able to distinguish between these different models. Here, our first purpose was to compute the redshift-drift for the most general Szekeres model of Bolejko and Sussman, which displays no symmetry, and for the LTB model of Alexander et al., to see whether the redshift-drift changes significantly upon averaging, and then to compare these redshift-drifts to that of the $\Lambda$CDM model. We have thus derived the equations and written a code able to compute, among other features, the redshift and the redshift-drift of the most general quasi-spherical Szekeres (QSS) model. We have applied this code to the Bolejko and Sussman quasi-spherical Szekeres (BSQSS) model. We have also computed these quantities for the Alexander et al. model with the same recipe used in our previous paper \cite{MCS2012}. However, we found that the BSQSS model exhibits a negative cosmological redshift, i.e., a blueshift, which is not observed in the Universe. This is in itself enough to rule out the model; nevertheless, for completeness, we have computed the blueshift-drift for this model. The structure of the present paper is as follows. In Sec.~\ref{sec2}, we present the Szekeres models and the particular QSS subclass used here. In Sec.~\ref{sec3} we display the differential equations for the redshift and the redshift-drift in the most general QSS models and an algorithm to numerically integrate them. In Sec.~\ref{sec4} we compute the redshift and the redshift-drift in the model proposed by Bolejko and Sussman \cite{BS11}.
In Sec.~\ref{sec5}, we display our results for the redshift and redshift-drift computation in the LTB model studied by Alexander et al. \cite{ABNV09}. In Sec.~\ref{sec6}, we present our conclusions. | \label{sec6} The type Ia supernova data, when analyzed in a FLRW framework, seem to reveal that the expansion of our Universe is accelerating from redshifts that correspond to non-linear structure formation. In the standard $\Lambda$CDM cosmological model, this is attributed to the effect of a dark energy component which, up to now, is not understood. Among other explanations, the use of exact inhomogeneous models with no dark energy to reproduce the cosmological data has become widespread in the literature. The first models used have been of the LTB class. These are spherically symmetric dust models and have been used either to build one patch models or to construct Swiss-cheese models (see, e.g., Ref. \cite{KB11} for a review). However, we observe that the structures in the Universe are not spherically symmetric. Therefore, $\Lambda=0$ Szekeres models with no symmetry are now coming into play (see, e.g., \cite{BC10,KB11,AN11,BS11,MCS2012}), the ones most frequently used being of the quasi-spherical class \cite{BKHC09}. Now, these Szekeres models are much more complicated to deal with, and the first authors who used them as cosmological models added some symmetry, e.g., axial \cite{BC10,MCS2012}. Then, other studies have been made with Szekeres models with no symmetries \cite{BS11,IPT13}. However, it is very tricky to directly reproduce cosmological data with such models. This is the reason why, in Ref.~\cite{BS11}, the authors have considered a very general quasi-spherical Szekeres model, then spatially averaged it and obtained the LTB MV model density profile of Ref.~\cite{ABNV09}. Since this MV model reproduces the SN Ia data and is consistent with the 3-yr WMAP data and the local Hubble parameter measurements, the Szekeres model of Ref.~\cite{BS11} can be considered as a proper inhomogeneous model which, once coarse-grained and averaged, is consistent with these data sets. This strengthens the argument proposed in Ref.~\cite{MNC12} that void model spherical symmetry is but a mathematical simplification of an energy density smoothed out over angles around us. Now, models which reproduce the same cosmological data as the $\Lambda$CDM model on the observer's past light cone cannot be distinguished from it. The problem is completely degenerate. This is the reason why we have been interested in calculating the redshift-drift of both models, in order to compare them with each other and with that of the $\Lambda$CDM model. We have therefore, for the first time in the literature to our knowledge, given two equation sets and an algorithm to compute the redshift-drift in the most general QSS model. Then, we have applied them to the BSQSS model of Ref.~\cite{BS11}. One of the steps to obtain the redshift-drift is to calculate the redshift and, in doing this for the BSQSS model, we have found that this redshift was negative, i.e., a blueshift. We observe this blueshift because the observer's location is not at the origin in this model. Actually, the origin is at the last scattering surface. Since, in this model, the universe is expanding away from this origin, the sources are moving towards the observer, who is located at $t=t_0$ and $r_0=100$ Mpc. Hence, the light rays are blueshifted.
Since such a cosmological blueshift is not observed in the Universe, this means that the non-averaged BSQSS model is ruled out as a cosmological model. However, we cannot claim this is a generic feature of all quasi-spherical Szekeres models. Nevertheless, for completeness, and to test our recipe and our code, we have calculated the redshift-drift (blueshift-drift) for the BSQSS model. We have found that this redshift-drift is negative, that its amplitude increases with the blueshift, and that it is a very tiny effect. Indeed, for a ten year observation, and around a blueshift of $z=-0.7$, the blueshift variation amplitude is $|\delta z| \sim 10^{-12}$. However, since the model is already ruled out by its blueshift, the redshift-drift consideration is purely theoretical. It has been shown in Ref.~\cite{BS11} that, once spatially averaged, the BSQSS model qualitatively reproduces the density profile of the LTB MV model of Ref.~\cite{ABNV09} with a central observer. We have thus calculated the MV model redshift to see what becomes of the BSQSS blueshift once the model is averaged. We have found that this blueshift becomes a cosmological redshift and then, to discriminate it from the $\Lambda$CDM model, we have computed the MV model redshift-drift. This redshift-drift appeared to be negative, with an amplitude increasing with redshift. On the contrary, in the redshift range of interest, the $\Lambda$CDM model redshift-drift is positive, which, in principle, would allow one to discriminate between both models by measuring their drift. However, these redshift-drifts are also very tiny effects, since the void border is only at a small redshift of $z \sim 0.085$. At this redshift, the redshift variation amplitude of the MV model, for a ten year observation, is merely $|\delta z| \sim 2\times10^{-11}$. This will not be measurable by the future experiments dedicated to the redshift-drift measurement in the Universe, like CODEX/EXPRESSO \cite{codex07,JL08,QA10}, or by the gravitational wave observatories DECIGO/BBO \cite{YNY12}. However, the model proposed in Ref.~\cite{BS11} is a mere toy model, only reproducing a single void in a FLRW background. The important results of our paper are to show that, even if a QSS model of this kind exhibits a cosmological blueshift, the averaging process transforms it into a cosmological redshift, in accordance with observations, and that the redshift-drift can, in principle, allow us to discriminate between the averaged model and the $\Lambda$CDM model while both reproduce the same cosmological data on the observer's past light cone. It might happen that, in the future, more elaborate inhomogeneous models with no dark energy, such as Swiss-cheese models where the patches could be QSS without any symmetry and whose average might be LTB Swiss-cheeses reproducing the cosmological data, or QSS Swiss-cheese models reproducing the data themselves, will be proposed in the literature. In this case, our work could serve as a recipe to calculate the redshift, and a then-measurable redshift-drift, in these models. It has indeed been shown in Ref.~\cite{JL08} that a 42-m telescope is capable of unambiguously detecting the redshift-drift over a 20 year period at redshifts $2<z<5$. Therefore, if one constructs a QSS Swiss-cheese model of the kind described above reaching a redshift of at least $z=2$, the comparison with measured redshift-drifts might become possible in the future. | 14 | 3 | 1403.5229
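For reference, the drift amplitudes quoted above can be checked against the standard homogeneous (Sandage-Loeb) expression, $\delta z = \left[(1+z)H_0 - H(z)\right]\,\delta t_0$, valid for a comoving FLRW observer and source. The following minimal sketch (Python, with illustrative flat $\Lambda$CDM parameters $\Omega_{\rm m}=0.27$ and $H_0=70$ km/s/Mpc, which are assumptions of this example) is not the QSS algorithm of the paper, only the homogeneous benchmark against which the inhomogeneous drifts are compared:

\begin{verbatim}
import numpy as np

# Sandage-Loeb drift in flat LCDM: dz = [(1+z)*H0 - H(z)] * dt0.
H0_KM_S_MPC = 70.0          # illustrative value
OMEGA_M = 0.27              # illustrative value
SEC_PER_YR = 3.156e7
KM_PER_MPC = 3.086e19

def hubble(z):
    """H(z) in km/s/Mpc for a flat LCDM model."""
    return H0_KM_S_MPC * np.sqrt(OMEGA_M * (1.0 + z)**3 + 1.0 - OMEGA_M)

def redshift_drift(z, years=10.0):
    """Drift dz accumulated over 'years' of observer proper time."""
    h0 = H0_KM_S_MPC / KM_PER_MPC      # H0 in 1/s
    hz = hubble(z) / KM_PER_MPC        # H(z) in 1/s
    return ((1.0 + z) * h0 - hz) * years * SEC_PER_YR

for z in (0.085, 0.7, 2.0):
    print(f"z = {z:5.3f}  ->  dz(10 yr) = {redshift_drift(z):+.2e}")
\end{verbatim}

At $z\sim0.085$ this gives $\delta z \sim +3\times10^{-11}$ over ten years, of the same order as, but opposite in sign to, the MV model value quoted above, illustrating why the drift can in principle discriminate between the models.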
1403 | 1403.2409_arXiv.txt | We use the SDSS/DR8 galaxy sample to study the radial distribution of satellite galaxies around isolated primaries, comparing to semi-analytic models of galaxy formation based on the Millennium and Millennium-II simulations. SDSS satellites behave differently around high- and low-mass primaries: those orbiting objects with $M_* >10^{11}M_\odot$ are mostly red and are less concentrated towards their host than the inferred dark matter halo, an effect that is very pronounced for the few blue satellites. On the other hand, less massive primaries have steeper satellite profiles that agree quite well with the expected dark matter distribution and are dominated by blue satellites, even in the inner regions where strong environmental effects are expected. In fact, such effects appear to be strong only for primaries with $M_* > 10^{11}M_\odot$. This behaviour is not reproduced by current semi-analytic simulations, where satellite profiles always parallel those of the dark matter and satellite populations are predominantly red for primaries of all masses. The disagreement with SDSS suggests that environmental effects are too efficient in the models. Modifying the treatment of environmental and star formation processes can substantially increase the fraction of blue satellites, but their radial distribution remains significantly shallower than observed. It seems that most satellites of low-mass primaries can continue to form stars even after orbiting within their joint halo for 5 Gyr or more. | \label{sec:intro} Satellite galaxies can contribute substantially to our understanding of galaxy formation. In the current structure formation paradigm, galaxies form by the cooling and condensation of gas at the centres of an evolving population of dark matter halos that are an order of magnitude larger in both mass and linear size than the visible galaxies \citep{White_Rees1978}. Comparable contributions to the growth of such halos come from smooth accretion of diffuse matter and from mergers with other halos spread over a very wide range in mass \citep{Wang2011}. The more massive accreting halos will normally have their own central galaxies, and after infall these become ``satellites'' of the galaxy at the centre of the dominant halo, orbiting it within their own ``subhalos''. Later, the satellites may merge with the central galaxy and so contribute to its growth. High-resolution cosmological simulations predict not only the masses, positions and velocities of dark matter halos but also those of the subhalos they contain \citep[e.g.][]{Moore1999, Gao2004a, Springel2008, Gao2008,Gao2011}. Linking such data over time then allows construction of the assembly history of every system in the simulated volume. In combination with a model for galaxy formation, such halo/subhalo merger trees can be used to predict the development of the full galaxy population in the region considered. This can be compared directly with properties of observed populations such as abundances, scaling relations, clustering and evolution \citep[e.g.][]{Springel2001,Bower2006,Croton2006,Guo2011}. A particular strength of such ``semi-analytic'' population simulations is that they enable evaluation of the relative sensitivity of these observables to cosmological and to galaxy formation parameters \citep[e.g.][]{Wang2008, Guo2013}. Satellite galaxies play an important role in such work because they are particularly sensitive to environmental effects and to the assembly history of halos. 
In \citet[][Paper I hereafter]{Wang_White2012} we used the Sloan Digital Sky Survey (SDSS) to study the luminosity, mass and colour distributions of satellite galaxies as a function of the properties of their host. A comparison of our observational results to semi-analytical galaxy formation simulations within the concordance $\Lambda$CDM cosmology showed good overall agreement for satellite abundances, inspiring some confidence in the realism of the particular galaxy formation model used \citep[from][]{Guo2011}, but large discrepancies for satellite colour distributions confirmed earlier demonstrations that such models substantially overestimate the environmental suppression of star formation \citep[e.g.][]{Font2008, Weinmann2009}. In this paper we extend our earlier work through a detailed analysis of the radial distribution of satellites around their hosts. This enables further exploration both of the successes and of the failures of the galaxy formation model. The observational study of satellite number density profiles benefited enormously from the advent of wide-angle spectroscopic surveys such as the Two Degree Field Galaxy Redshift Survey \citep[2dFGRS,][]{Colless2001} and the SDSS \citep{York2000}. The availability of redshift measurements for almost all objects above some apparent magnitude limit allows the full three-dimensional distribution of objects to be studied (although in ``redshift space'' rather than true position space), greatly facilitating the identification of host/satellite systems. Several studies concluded that the mean radial satellite distribution in such spectroscopic samples can be fit (in projection) by a power-law $\Sigma_{\rm sat} \propto r^{-\alpha}$, although the range of indices quoted is quite broad, $\alpha \sim 0.9$ to $1.7$ \citep{Sales2005,vandenBosch2005,Chen2006, Chen2008}. There is some indication that this index correlates with the properties of the primaries and/or satellites under consideration, but results are also rather noisy because of the relatively bright lower limit on the luminosity of the satellites, which is enforced by the spectroscopic apparent magnitude limit. In addition to being restricted to relatively bright objects, spectroscopic satellite samples are also subject to selection effects such as redshift incompleteness due to fibre-fibre collisions and survey geometry constraints, which particularly affect their coverage of close pairs. In this context, photometric samples offer an interesting alternative, since they are complete at all separations and to apparent magnitude limits which are typically $3-4$ magnitudes fainter than the corresponding spectroscopic surveys. For example, the SDSS/DR8 data are effectively complete to $r$-band magnitudes $m_r=17.7$ and $m_r=21$ for the spectroscopic and photometric catalogues, respectively \citep{Aihara2011}. Inspired by this, several groups have recently analyzed primary/satellite samples, where the primary galaxies are selected from spectroscopic surveys, ensuring their distances and environments are well characterized, but their satellite populations are identified in deeper photometric data and so must be corrected statistically for the inevitable foreground and background contamination \citep[e.g. ][]{Lares2011, GuoQuan2012, Nierenberg2011, Nierenberg2012, Jiang2012,Tal2012}. This approach is reminiscent of the pioneering work in this field, where satellites were identified on photographic plates around relatively bright primary samples \citep{Holmberg1969, Lorrimer1994}.
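The statistical correction for foreground and background contamination mentioned above is, in essence, an annulus subtraction: companion counts around the stacked primaries are compared with the surface density of unrelated objects, estimated for instance around random positions. A minimal sketch of this step (Python; all input names are hypothetical, not those of any specific pipeline):

\begin{verbatim}
import numpy as np

def satellite_profile(r_proj, r_edges, sigma_bg, n_primaries):
    """Background-subtracted projected satellite number density.

    r_proj      : projected primary-companion separations (kpc),
                  stacked over all primaries (hypothetical input)
    r_edges     : radial bin edges (kpc)
    sigma_bg    : surface density of unrelated galaxies per kpc^2,
                  e.g. estimated around random positions
    n_primaries : number of stacked primaries
    """
    counts, _ = np.histogram(r_proj, bins=r_edges)
    area = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2)  # annulus areas
    sigma_sat = counts / (n_primaries * area) - sigma_bg
    # Poisson error on the raw counts only
    sigma_err = np.sqrt(counts) / (n_primaries * area)
    return sigma_sat, sigma_err
\end{verbatim}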
The projected satellite profiles measured in these hybrid studies are also consistent with power laws $\Sigma_{\rm sat} \propto r^{-\alpha}$ with $\alpha \sim 0.9$--$1.2$, and again correlations are seen between the slope of the profiles and the colour/mass/type of the primaries and satellites. Despite this superficial agreement, there are large discrepancies between recently published studies of satellite radial distributions. Some authors find satellite profiles to be steeper than the NFW profile \citep{Navarro1996,Navarro1997} predicted for the dark matter \citep{Watson2012,Tal2012,GuoHong2014}; others consider them as good tracers of the dark matter \citep{Nierenberg2012}; yet others find them to be less concentrated than the dark halos they inhabit \citep{Budzynski2012, Wojtak2013}. The trends found with intrinsic properties of satellites/primaries also disagree between studies. For instance, whereas Watson et al. and Tal et al. find that bright satellites have steeper profiles, Guo et al. and Budzynski et al. conclude that faint companions are more strongly concentrated. Nierenberg et al. find no variation in profile slope with satellite mass. Some of this disagreement can plausibly be traced to differing sample definitions. For example, \cite{Tal2012} studied satellite profiles around Luminous Red Galaxies (LRGs) at $0.28<z<0.4$, whereas both \cite{Watson2012} and \cite{GuoQuan2012} used SDSS Main Sample galaxies at lower redshifts. The redshift range probed by \cite{Nierenberg2012} is $0.1<z<0.8$, based on data from the deeper COSMOS survey, but their samples are relatively small, so trends may be masked by counting noise. Furthermore, \cite{Tal2012} and \cite{Watson2012} studied satellite radial profiles down to very small separations ($r_p\sim 30$\,kpc), and their inference of a steeper-than-NFW distribution depends on these scales. Careful photometric corrections are needed in such work, because satellite magnitudes are systematically biased by their proximity to a much brighter central galaxy. Deblending and background estimation effects can be substantial in this situation and are quite uncertain \citep[e.g.][]{Mandelbaum2006}. Not all authors apply such corrections \citep[e.g.][]{GuoQuan2012}, and in consequence their results at the smallest separations may be compromised. Variations in the radial distribution of satellites as a function of primary or satellite properties provide important clues to the processes driving galaxy evolution, in particular, to the influence of environmental effects. Tidal disruption and ram-pressure stripping are believed to be the main agents of structural change in satellites once they have fallen into their host halos. Extended reservoirs of gas may be removed, causing the satellites to run out of fuel for star formation, or gas and stars may be removed directly from the visible regions of the galaxies. As a result, satellites are predicted to be less active and redder than otherwise similar galaxies in the field. There is clear observational evidence for effects of this kind. Studies of galaxy correlations show enhanced clustering of red objects at fixed stellar mass \citep[see e.g.][]{LiCheng2006, Zehavi2011} and there is a consensus among authors that the fraction of red and passive satellite galaxies is larger than for central galaxies of similar mass \citep[see e.g. ][]{vandenBosch2008, Yang2009, Weinmann2009}.
There are also, however, clear indications that this increased red fraction among satellites is a function of the stellar or halo mass of the primary, suggesting that environmental effects are weak or even negligible for satellites orbiting low-mass primaries \citep[e.g.][]{Weinmann2006a,Prescott2011,Wetzel2012}. Theoretical predictions based on semi-analytical models successfully reproduce several of these trends, but typically overproduce the fraction of red satellites \citep[e.g.][]{Coil2008}. Recent improvements in the modelling of gas removal and tidal stripping have improved the situation \citep{Font2008,Guo2011}, but a significant problem still persists \citep{Weinmann2011}. The radial distributions of red and blue satellites and their relation to the properties of the primary galaxy give additional information about environmental influences on satellites, complementing the information provided by the relative abundances of the two populations. Despite difficulties in matching the observed colour distribution, simulations have proven useful for interpreting the observed properties of satellite galaxies. \citet{Kravtsov2004} and \citet{Gao2004b} used N-body simulations to argue that the observed radial distribution of luminous satellites is more easily understood if these objects populate the most massive subhalos at the time of infall, rather than the most massive today. The distribution of luminous satellites in both hydrodynamical and semi-analytic simulations suggests that they may be reasonable tracers of the underlying dark matter distribution of their host halo \citep[e.g. ][]{Gao2004b,Nagai2005,Sales2007c}. Numerical simulations also show that the time of infall of satellites onto their host is correlated with their current distance from halo centre \citep[e.g.][]{Gao2004a}, a relation that becomes tighter if we consider satellite orbital binding energy \citep{Rocha2012}. Thus the radial distribution of satellites encodes information about the assembly of dark matter halos that is not otherwise observationally accessible. In this paper we study the radial distribution of satellites in a hybrid primary/satellite sample selected from the spectroscopic + photometric SDSS/DR7 and DR8 catalogues. We go beyond previous work by comparing our results with a mock-galaxy catalogue generated from the Millennium and Millennium-II simulations \citep{Springel2005a, Boylan-Kolchin2009} using the semi-analytical model of \citet{Guo2011}. The mock sample allows an improved assessment of the projection and sample selection effects, facilitating the physical interpretation of the observed profiles. At the same time, we are able to test the galaxy formation model by contrasting its predictions with observables it was not tuned to reproduce. This paper follows naturally from the analysis presented in Paper I which focused on the abundance and mass spectrum of satellites around isolated primaries. This paper is organized as follows: our data sources and the selection criteria we apply to observed and simulated catalogues are described in Sec.~\ref{sec:data}. We report the trends found in the radial distribution of satellites according to primary/satellite colours and masses in Sec.~\ref{ssec:pricolor} and \ref{ssec:satcolor}, while we discuss the implications for environmental modulation of star formation in Sec.~\ref{sec:tinf}. We summarize and discuss our main conclusions in Sec.~\ref{sec:concl}. 
Throughout this paper we adopt the cosmology of the original Millennium simulations ($H_0=73 ~\mathrm{km~s^{-1}} ~\mathrm{Mpc}^{-1}$, $\Omega_{\rm m}=0.25$, $\Omega_\Lambda=0.75$, $n=1$). A discussion of the effect of cosmology on the satellite properties presented in this paper is included in Sec.~\ref{sec:tinf}. \begin{table*} \caption{Average halo virial radius $r_{\rm vir}$ (following G11), scale radius $r_s$ (following Zhao et al. 2009), inner radius $r_{\mathrm{inner}}$ and the $(g-r)$ colour cut separating blue from red satellites for the five primary stellar mass bins considered in our study. The final row gives the number of red and blue SDSS primaries in each of these bins.} \begin{center} \begin{tabular}{lrrrrr}\hline\hline $\log M_*/M_\odot$ & \multicolumn{1}{c}{11.4-11.7} & \multicolumn{1}{c}{11.1-11.4} & \multicolumn{1}{c}{10.8-11.1} & \multicolumn{1}{c}{10.5-10.8} & \multicolumn{1}{c}{10.2-10.5} \\ \hline $r_{\rm vir}$ [kpc]& 725 & 430 & 270 & 210 & 170 \\ $r_s$ [kpc] & 156.6 & 74.4 & 39.3 & 27.1 & 21.0 \\ $r_{\mathrm{inner}}$ [kpc] & 50 & 50 & 30 & 20 & 10 \\ $(g-r)_{\rm SDSS}$ & 0.840 & 0.830 & 0.820 & 0.811 & 0.801 \\ $(g-r)_{\rm mock}$ & 0.627 & 0.618 & 0.609 & 0.600 & 0.591 \\ $N_\mathrm{SDSS}$ [red, blue] & 1651, 35 & 6170, 731 & 8518, 4142 & 5453, 5953 & 1625,3764 \\ \hline \label{tbl:primaries} \end{tabular} \end{center} \end{table*} | \label{sec:concl} We study the mean number density profiles of satellites around isolated primary galaxies selected from the spectroscopic catalogue of SDSS/DR7. We select satellites from the full photometric catalogue of SDSS/DR8, correcting statistically for contamination by unrelated foreground and background galaxies. Our sample contains about 41,000 isolated primaries with $\sim 7,000,000$ photometric companions (including background) projected within 500~kpc. We explore the dependence of these profiles on the stellar mass and colour of both primaries and satellites. Our results can be summarized as follows: \begin{itemize} \item The radial distribution of satellites depends on primary stellar mass. Satellites around massive primaries $\rm \log M_*/M_\odot>11.1$ have slightly shallower profiles than are predicted for the dark matter in their host halos, whereas for less massive primaries $10.2 <\log M_*/M_\odot< 11.1$ satellites follow quite closely the predicted dark matter profiles. \item We find the shape of satellite number density profiles to depend at most weakly on satellite stellar mass. \item Red primaries have more satellites than blue primaries of the same stellar mass, at least for $\log M_*/M_\odot> 10.8$. \item Observed satellite number density profiles depend on satellite colour and behave differently for high- and low-mass primaries. For primaries with $\log M_*/M_\odot<11.1$, the blue and red populations have profiles of similar shape, consistent in both cases with that predicted for the dark matter distribution. Blue satellites dominate at all radii. Around more massive primaries ($\log M_*/M_\odot>11.1$) the blue population has a shallower profile and is subdominant at all radii. \end{itemize} We compare these observational results with satellite samples selected from the galaxy population simulation of \citet{Guo2011}. The number density profiles of the whole satellite population always parallel the dark matter profile of the host halo, regardless of primary mass; this disagrees with the SDSS result for massive primaries. This may reflect the need for more efficient tidal disruption of satellites in the model. 
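The dark matter comparison profiles referred to above are of NFW form, whose shape at fixed mass is set by the concentration $c = r_{\rm vir}/r_s$; the virial and scale radii in Table~\ref{tbl:primaries} therefore fix the expected profile for each primary mass bin. A minimal sketch (Python) evaluating the unnormalized NFW form and its mean logarithmic slope between $0.1\,r_{\rm vir}$ and $r_{\rm vir}$, the range most relevant for the satellite comparison; the pairing of radii follows the table:

\begin{verbatim}
import numpy as np

# (r_vir, r_s) in kpc for the five stellar mass bins of Table 1,
# from log M*/Msun = 11.4-11.7 down to 10.2-10.5.
BINS = [(725, 156.6), (430, 74.4), (270, 39.3), (210, 27.1), (170, 21.0)]

def nfw_density(r, r_s):
    """NFW profile rho(r) ~ 1/[(r/r_s)(1 + r/r_s)^2], unnormalized."""
    x = r / r_s
    return 1.0 / (x * (1.0 + x)**2)

for r_vir, r_s in BINS:
    c = r_vir / r_s
    r1, r2 = 0.1 * r_vir, r_vir
    slope = (np.log(nfw_density(r2, r_s) / nfw_density(r1, r_s))
             / np.log(r2 / r1))
    print(f"c = {c:4.1f}   mean log-slope (0.1-1 r_vir) = {slope:5.2f}")
\end{verbatim}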
Satellite colours remain the most important challenge to these theoretical models, however. In the model the fraction of blue satellites is too small, particularly around low-mass primaries, and the few remaining blue satellites have an almost flat radial profile, in clear disagreement with the observations. Given that observed satellites of low-mass primaries are predominantly blue at all projected radii, the distributions of time since infall that we find for simulated satellites imply that real satellites can remain actively star-forming for as much as 5 Gyr after they have fallen into their current host halo, even when their orbit takes them into its inner regions. This seems qualitatively consistent with earlier work reporting that the decline of star formation in satellites occurs over extended periods of time \citep[e.g.][]{WangLi2007,Weinmann2009, Wetzel2013,Trinh2013,Wheeler2014}. The significantly shorter timescales implied by our model ($\sim 0.9$ Gyr) are responsible for its overabundance of red satellites. This indicates that the environmental suppression of star formation is overestimated by the model, particularly around low-mass primaries, and perhaps that the environmental stimulation of star formation needs to be included. Indeed, from a series of experiments with differing treatments of environmental processes, we find that although the combined effect of suppressing ram-pressure stripping in low-mass halos ($M_\mathrm{vir}<10^{14}M_\odot$) and decreasing the density threshold for star formation can bring the overall blue fraction into agreement with SDSS, the {\it shape} of the blue satellite profile remains much flatter than observed. The fact that SDSS satellites are still predominantly blue even a few tens of kpc from low-mass primaries suggests that processes which enhance star formation during close encounters need to be introduced into the models. Progress in this area will require a better understanding of tidally or shock-induced star formation, as well as observational studies which resolve the structure of star-forming regions in typical satellite galaxies. | 14 | 3 | 1403.2409
1403 | 1403.3063_arXiv.txt | Using a large N-body cosmological simulation combined with a subgrid treatment of galaxy formation, merging, and tidal destruction, we study the formation and evolution of the galaxy and cluster population in a comoving volume $(100\,\rm Mpc)^3$ in a $\Lambda$CDM universe. At $z=0$, our computational volume contains 1788 clusters with mass $M_{\rm cl}>1.1\times10^{12}\msun$, including 18 massive clusters with $M_{\rm cl}>10^{14}\msun$. It also contains $1\,088\,797$ galaxies with mass $M_{\rm gal}\geq2\times10^9\msun$ and luminosity $L>9.5\times10^5\lsun$. For each cluster, we identified the brightest cluster galaxy (BCG). We then computed two separate statistics: the fraction $f_{\rm BNC}$ of clusters in which the BCG is not the closest galaxy to the center of the cluster in projection, and the ratio $\Delta v/\sigma$, where $\Delta v$ is the difference in radial velocity between the BCG and the whole cluster, and $\sigma$ is the radial velocity dispersion of the cluster. We found that $f_{\rm BNC}$ increases from 0.05 for low-mass clusters ($M_{\rm cl}\sim10^{12}\msun$) to 0.5 for high-mass ones ($M_{\rm cl}>10^{14}\msun$), with very little dependence on cluster redshift. Most of this turns out to be a projection effect, and when we consider 3D distances instead of projected distances, $f_{\rm BNC}$ increases only to 0.2 at high cluster mass. The values of $\Delta v/\sigma$ vary from 0 to 1.8, with median values in the range 0.03--0.15 when considering all clusters, and 0.12--0.31 when considering only massive clusters. These results are consistent with previous observational studies, and indicate that the central galaxy paradigm, which states that the BCG should be at rest at the center of the cluster, is usually valid, but exceptions are too common to be ignored. We built merger trees for the 18 most massive clusters in the simulation. Analysis of these trees reveals that 16 of these clusters have experienced one or several major or semi-major mergers in the past. These mergers leave each cluster in a non-equilibrium state, but eventually the cluster settles into an equilibrium configuration, unless it is disturbed by another major or semi-major merger. We found evidence that these mergers are responsible for the off-center positions and peculiar velocities of some BCGs. Our results thus support the merging-group scenario, in which some clusters form by the merger of smaller groups in which the galaxies have already formed, including the galaxy destined to become the BCG. Finally, we argue that $f_{\rm BNC}$ is not a very robust statistic, being very sensitive to projection and selection effects, but that $\Delta v/\sigma$ is a more robust one. Still, both statistics exhibit a signature of major mergers between clusters of galaxies. | Clusters of galaxies contain hundreds or thousands of galaxies with a full range of luminosities, going from low-luminosity dwarf galaxies to $L^*$ galaxies and beyond. If clusters are dynamically relaxed systems, we naturally expect the brightest galaxies, which are presumably the most massive ones, to be concentrated in the central regions of clusters, since this is the most stable configuration. In particular, in each cluster, we expect to find the brightest cluster galaxy (BCG) at rest at the center.\footnote{In the literature, this galaxy is called either the {\it Brightest Halo Galaxy\/} (BHG), {\it Brightest Cluster Galaxy\/} (BCG), or {\it Brightest Cluster Member\/} (BCM). All terms are equivalent.
In this paper, we use BCG.} \cite{vdbetal05} refer to this assumption as the ``central galaxy paradigm.'' This paradigm has played an important role in the development of semi-analytical models of galaxy formation over the past twenty years. In the early model of \citet{kwg93}, each dark matter halo can host a central galaxy plus a number of satellite galaxies. In the initial state, halos contain only a mixture of cold and hot gas, with no galaxy. Eventually, a central galaxy forms at the center of each halo. Then, when a merger between several halos takes place, the central galaxy of the most massive progenitor becomes the central galaxy of the new halo, while all other galaxies become satellite galaxies. In this model, the brightest galaxy in a halo is always the central one. Many other semi-analytical models of galaxy formation have been developed since, and the central galaxy paradigm remains a key ingredient for most of them \citep{coleetal00,hattonetal03,baugh06,mft07,somervilleetal08}, though some models locate the central galaxy at the minimum of the gravitational potential \citep{springeletal01,crotonetal06,guoetal11}, which can be off-center if the halo contains substructures. Halo occupation modeling \citep{scoccimarroetal01,shethetal01,ymv03,zehavietal05, zhengetal05,cooray05,phlepsetal06,vdbetal07,tinkeretal08,rs09,matsuokaetal11, richardsonetal12} and large N-body simulations of structure formation in CDM universes \citep{tb04,springeletal05,dlb07} also rely on the assumption that the brightest galaxy is located at the center of the parent halo. Several studies assume a phenomenological model in which a massive ``central'' galaxy lies at the center of the host dark matter halo, while ``satellites'' constitute the remaining galaxies in the halo (as in \citealt{kwg93}). Observations attempt to quantify the correlations and differences between the properties (like SFR, color) of central and satellite galaxy populations, and their dependence on the environment \citep{weinmannetal06,azzaroetal07, kimmetal09,prescottetal11,wetzeletal13,wooetal13,yangetal13}. Also, many observational techniques are based on the assumption that the central galaxy paradigm is valid. These include measurement of halo masses by satellite kinematics \citep{mckayetal02,vdbetal04,moreetal09,romanowskyetal09,duttonetal10, watsonetal12}, weak lensing \citep{mandelbaumetal06,johnstonetal07,cacciatoetal09,sheldonetal09, pastormiraetal11,vanuitertetal12,lietal13}, and strong lensing \citep{kochanek95,cohnetal01,kt03,rusinetal03,oguri06, killedaretal12,moreetal12}, as well as automated identification of groups and clusters in redshift surveys \citep{yangetal05,yangetal07,berlindetal06,koesteretal07}. Observational studies of galaxy clusters have been performed in order to test the validity of the central galaxy paradigm \citep{bg83,malumuthetal92,zabludoffetal93,bird94,pl95,zm98, oh01,yjb03,lm04,vdl07,bildfelletal08,hl08,ses09,cozioletal09, skibbaetal11}. Two different approaches are used in these studies. The first one consists of measuring the difference in radial velocity between the BCG and the cluster itself, and comparing it to the velocity dispersion of the cluster. The second one consists of measuring the projected distance between the BCG and the center of the cluster, estimated either from the distribution of galaxies or from the peak X-ray luminosity.
The overall conclusion is that the central galaxy paradigm is usually valid, that is, most BCGs are at rest at the center of their host cluster, but many of them are not, too many to be dismissed as peculiar objects. In two of the most recent studies, \citet{cozioletal09} studied a large sample of clusters containing 1426 candidate BCGs, and found that a significant number of BCGs have large peculiar velocities, the median value being 32\% of the radial velocity dispersion of the cluster. \citet{skibbaetal11} studied a sample of $334\,010$ galaxies from the {\sl Sloan Digital Sky Survey\/} (SDSS), and found that the fraction $f_{\rm BNC}$ of clusters in which the brightest galaxy is not the central one varies from 0.25 for low-mass clusters to 0.40 for high-mass ones. In both papers, the authors suggest that major mergers between clusters might explain their results. The central galaxy paradigm is based on the assumption that the galaxies inside a parent cluster either formed concurrently with the cluster, or later, after the distribution of dark matter and gas in the cluster had settled into an equilibrium configuration \citep{ot75,ho78,merritt84,malumuth92}. One alternative scenario is that the cluster formed by the merger of smaller groups \citep{malumuth92,ellingson03,mihos04,adamietal05, ac07,cp07}, and that the galaxy that will eventually become the BCG already existed in one of these groups \citep{merritt85,bird94,zm98,pimbbletetal06}. If the cluster has not yet reached equilibrium by the present, this could explain the off-center location and peculiar velocity of the BCG. Our goal is to test this {\it Merging-Group Scenario\/}, as \citet{cozioletal09} call it. We performed a numerical simulation of the formation and evolution of the galaxy and cluster populations inside a large cosmological volume, in a $\Lambda$CDM universe. This is a challenging task: to obtain statistically meaningful results, we need to simulate a volume sufficiently large to contain several massive clusters. At the same time, we need to describe the formation and evolution of the galaxy population down to low-mass galaxies. To achieve this, we combine a large N-body cosmological simulation with a semi-analytical subgrid treatment of galaxy formation, merging, and tidal destruction. The objectives of this work are (1) to determine if the observational results reported by \citet{cozioletal09}, \citet{skibbaetal11}, and others can be reproduced using a numerical simulation, (2) to investigate the role played by major mergers in the build-up of clusters, and determine if the merging-group scenario constitutes a valid explanation for the observational results, and (3) to check the robustness of the various statistics used to assess the success or failure of the central galaxy paradigm. The remainder of this paper is organized as follows. In section~2, we describe our algorithm for simulating the formation and evolution of the galaxy and cluster populations. Results are presented in Section~3. Summary and conclusions are presented in Section~4. | Using a cosmological N-body simulation combined with a sub-grid treatment of galaxy formation, merging, and tidal destruction, we simulated the evolution of the galaxy and cluster population in a comoving volume of size $100\,\rm Mpc$, in a $\Lambda$CDM universe. In the final state of the simulation, at $z=0$, we identified 1788 clusters, including 18 massive ones ($M_{\rm cl}>10^{14}\msun$).
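Both of the statistics discussed in this paper are straightforward to compute from a cluster member catalogue; the following minimal sketch (Python, with hypothetical input arrays, the BCG taken as the most luminous member) makes the definitions explicit:

\begin{verbatim}
import numpy as np

def bcg_statistics(lum, v_rad, r_center):
    """Delta v / sigma and the 'BCG not closest' flag for one cluster.

    lum      : member luminosities (hypothetical array)
    v_rad    : member radial velocities (km/s)
    r_center : member distances to the cluster center
               (projected or 3D, in Mpc)
    """
    i_bcg = int(np.argmax(lum))                 # brightest member
    sigma = np.std(v_rad)                       # radial velocity dispersion
    dv = abs(v_rad[i_bcg] - np.mean(v_rad))     # BCG peculiar velocity
    bcg_not_closest = int(np.argmin(r_center)) != i_bcg
    return dv / sigma, bcg_not_closest

# f_BNC for a sample is the mean of the boolean flags, e.g.:
# f_bnc = np.mean([bcg_statistics(*c)[1] for c in clusters])
\end{verbatim}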
We then investigated the location and velocity of the BCG in each cluster, in order to test the validity of the central galaxy paradigm. The fraction $f_{\rm BNC}$ of clusters for which the BCG is not the closest galaxy to the center increases with cluster mass. The same trend is seen in the results of \citet{skibbaetal11} and in the predictions of the semi-analytical models. Furthermore, our results, within error bars, match the results of \citet{skibbaetal11} at the high-mass end. However, at the low-mass end, our results predict that $f_{\rm BNC}$ decreases, in agreement with the semi-analytical models, but not with the results of \citet{skibbaetal11}, which predict a plateau at $f_{\rm BNC}\sim 0.25$. We agree with the general conclusion of \citet{skibbaetal11} that many BCGs do not reside at the center of their host cluster. However, we found that $f_{\rm BNC}$ is not a very robust statistic. Its determination is affected by projection effects, which may lead to an overestimate of $f_{\rm BNC}$, and selection effects, which may lead to an underestimate of $f_{\rm BNC}$. Uncertainties in the determination of the center of the clusters can also be a problem. We also calculated the ratio $\Delta v/\sigma$. This is a more robust statistic, since it is not affected by projection effects, and only weakly affected by selection effects, through the estimation of $\sigma$. The distributions of $\Delta v/\sigma$ extend from 0 to 1.8, and are very skewed toward low values. The median values of $\Delta v/\sigma$ are in the range 0.03--0.08, significantly lower than the value 0.32 reported by \citet{cozioletal09}. However, when we consider only clusters with masses $M>10^{14}\msun$, the distributions become wider, and the median values rise to 0.15 at $z=0$ and 0.28 at $z=0.5$. This indicates that low-mass clusters and nearby clusters are more likely to be in equilibrium than high-mass or distant ones. We selected the 18 most massive clusters in our simulation, with masses $M>10^{14}\msun$, and performed a detailed study of the history of their formation, focussing on the period $z=1.5$ to $z=0$ (that is, the last 9.4 Gyr). For each cluster, we built a merger tree, and followed the location and velocity of the BCG along the main family line of each cluster. A general pattern emerges. The brightest galaxy is initially the closest to the center of the cluster, and remains the closest until the cluster experiences a major or semi-major merger with another cluster of comparable mass. Immediately after that merger, the brightest galaxy can find itself at one Mpc or even more from the center of the new cluster, and in this case is no longer the closest to the center. However, the new cluster, immediately after the merger, is out of equilibrium. During the time it takes for the cluster to reach equilibrium, the brightest galaxy migrates toward the center, until it finds itself the closest to the center again. The whole situation can be described in terms of two sets of timescales. First, we have the timescale for the clusters to form by the merger of smaller progenitors, versus the timescale for galaxies to form inside these progenitors. If the former timescale is the shorter one, which is the basic assumption behind the central galaxy paradigm, then the galaxies will form inside a system in equilibrium, and the BCG will settle at rest at the center of the cluster. But if the latter timescale is the shorter one, the galaxy destined to become the BCG is already present in one of the progenitors.
The second set of timescales then comes into play: the timescale for the cluster to reach equilibrium after a major or semi-major merger, versus the timescale between such mergers. If the former timescale is the shorter one, then the cluster will reach equilibrium by $z=0$, after the last merger. But if the latter timescale is the shorter one, then the cluster will be constantly disturbed by mergers, will never have sufficient time to reach equilibrium, and therefore will still be out of equilibrium at $z=0$. These two limits are illustrated by the solid and open circles, respectively, in Figure~\ref{last}. We conclude that a brightest galaxy not being at the center of its host cluster, and having a large velocity, is a transient phenomenon, closely associated with major mergers between clusters. If the last major merger took place at large redshift, $z\gtrsim0.3$ (or if no such merger ever took place), the cluster has time to reach equilibrium before the present. But if the last major merger took place recently, the cluster will still be out of equilibrium at $z=0$. This explains why $f_{\rm BNC}$ increases with cluster mass: low-mass clusters are the ones that have not experienced any major merger in their recent history. They formed at high redshift, and were ``left alone'' until the present, giving them time to reach equilibrium. | 14 | 3 | 1403.3063
1403 | 1403.3580_arXiv.txt | NaI(Tl) large crystals are applied in the search for galactic dark matter particles through their elastic scattering off the target nuclei in the detector by measuring the scintillation signal produced. However, the energies deposited in the form of nuclear recoils are small, which, added to the low efficiency for converting that energy into scintillation, means that events at or very near the energy threshold, attributed either to radioactive backgrounds or to spurious noise (non-bulk NaI(Tl) scintillation events), can compromise the sensitivity goals of such an experiment. The DAMA/LIBRA experiment, using a 250\,kg NaI(Tl) target, reported the first evidence of the presence of an annual modulation in the detection rate compatible with that expected for a dark matter signal just in the region below 6\,keVee (electron equivalent energy). Within the framework of the ANAIS (Annual modulation with NaI Scintillators) dark matter search project, a long and intensive effort has been carried out in order to understand the origin of events at very low energy in large sodium iodide detectors and to develop convenient filters to reject those not attributable to scintillation in the bulk NaI(Tl) crystal. $^{40}K$ is probably the most relevant radioactive contaminant in the bulk for NaI(Tl) detectors because of its important contribution to the background at very low energy. The ANAIS goal is to achieve levels at or below 20\,ppb of natural potassium. In this paper we will report on our effort to determine the $^{40}K$ contamination in several NaI(Tl) crystals, by measuring in coincidence between two (or more) of them. Results obtained for the $^{40}K$ content of crystals from different providers will be compared and prospects of the ANAIS dark matter search experiment will be briefly reviewed. \\ \\ {\bf Keywords:} sodium iodide; scintillation; potassium; dark matter search; annual modulation. \\ {\bf PACS numbers:} 29.40.Mc; 29.40.Wk; 95.35.+d | \label{sec:Intro} The ANAIS project aims at the study of the annual modulation signal attributed to galactic dark matter particles using 250\,kg NaI(Tl) scintillators at the Canfranc Underground Laboratory (LSC), in Spain. NaI(Tl) large crystals have been applied for a long time in the search for galactic dark matter particles through their elastic scattering off the target nuclei in the detector by measuring the weak scintillation signal produced \cite{DAMA,LIBRA,DM32,JAPAN1,PSD_Gerbier,UKDMS,JAPAN2,DM-ice}. However, the energies deposited in the form of nuclear recoils are small, which, added to the low efficiency for converting that energy into scintillation, means that events at or very near the energy threshold, attributed either to radioactive backgrounds or to spurious noise (non-bulk NaI(Tl) scintillation events), can compromise the sensitivity goals of such an experiment. The DAMA experiment, at the Laboratori Nazionali del Gran Sasso, in Italy, using a 100\,kg NaI(Tl) target, reported the first evidence of the presence of an annual modulation in the detection rate compatible with that expected for a dark matter signal just in the region below 6~keVee (electron equivalent energy), with a high statistical significance \cite{DAMA}. This signal was further confirmed by the LIBRA experiment, using 250\,kg of more radiopure NaI(Tl) detectors \cite{LIBRA}. Using the same target as the DAMA/LIBRA experiment, which has by now accumulated fourteen annual cycles, makes it possible for ANAIS to confirm the DAMA/LIBRA results in a model-independent way.
To achieve such a goal, the ANAIS detectors should be as good as (or better than) those of DAMA/LIBRA in terms of energy threshold and radioactive background below 10\,keVee (electron equivalent energy). In this paper we will present some of the past and recent efforts to determine and reduce the background related to $^{40}K$ contamination in several prototypes, as well as to determine the achievable threshold, profiting from the conveniently tagged population of low energy events that such a contamination provides. We will start by presenting the ANAIS project and the different crystals and experimental set-ups studied; then, we will move on to the importance of the background due to $^{40}K$ contamination and the technique used to determine its content in the different crystals, in particular in the ANAIS-25 modules, as well as the results derived. | \label{sec:results} \subsection{$^{40}K$ bulk content} \label{sec:resultsK} In this section, we will present the potassium bulk content results for all the studied crystals. The first step in order to derive the $^{40}K$ activity in one crystal is to select the corresponding energy windows containing the 1460.8\,keV energy depositions in the other crystal(s) sharing the experimental space. The $^{40}K$ activity of every crystal can be estimated with the area of the 3.2\,keV peak {\it (Area)} identified in the coincident spectra with the high energy window chosen in the other detector(s), the total available live time {\it (t)}, the crystal mass {\it (m)}, the efficiency of the coincidence, determined by simulation using the Geant4 package {\it ($\epsilon$)}, and the fraction of events effectively selected by the coincidence window chosen {\it (F)}: \begin{equation} Activity (Bq/kg) = \frac{Area (counts)}{t(s)\cdot m(kg)\cdot \epsilon \cdot F} \label{eq:Activity} \end{equation} Only statistical errors coming from the 3.2\,keV peak area determination are taken into account in the derivation of the activity errors shown in the following. First of all, eleven NaI(Tl) crystals from BICRON, of 10.7\,kg mass each, were measured: six and seven detectors were placed in a similar configuration in runs {\it a} and {\it b}, respectively, using two of them in both runs as a cross-check. The results for the $^{40}K$ activity corresponding to all of the old BICRON crystals are presented in Table~\ref{tab:PotassiumBICRON}. It can be observed that the activity is very similar for all of them, ranging from 13 to 21\,mBq/kg, which corresponds to 0.42 to 0.68\,ppm of natural potassium in the bulk of the crystals. The procedure followed to derive these activity values is equivalent to that explained in detail in section\,\ref{sec:resultsK_a} for the ANAIS-0 and PIII crystals, as well as for both ANAIS-25 modules.
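In practice, eq.~(\ref{eq:Activity}) is a one-line computation once the peak area and the simulated efficiency are known. A minimal sketch (Python), propagating only the statistical error of the peak area, as in the text; all numerical inputs below are placeholders for illustration, not the actual exposures of Table~\ref{tab:setups}:

\begin{verbatim}
def k40_activity(area, area_err, live_time_s, mass_kg, eff, frac):
    """K-40 activity (Bq/kg) from the coincident 3.2 keV peak area,
    following eq. (1): A = Area / (t * m * eps * F).

    area, area_err : fitted 3.2 keV peak area and statistical error
    live_time_s    : total live time (s)
    mass_kg        : crystal mass (kg)
    eff            : coincidence efficiency from the simulation
    frac           : fraction selected by the coincidence window
    """
    norm = live_time_s * mass_kg * eff * frac
    return area / norm, area_err / norm

# Placeholder example: a 700-event peak, ~70 days of live time,
# a 10.7 kg crystal, eff ~ 1.3e-3 and a 1-sigma window (F ~ 0.68):
act, err = k40_activity(700.0, 26.0, 6.0e6, 10.7, 1.3e-3, 0.68)
print(f"A = {act * 1e3:.1f} +/- {err * 1e3:.1f} mBq/kg")
\end{verbatim}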
\begin{table}[ht] \begin{center} \caption{Results for $^{40}K$ bulk activity of the BICRON\,-\,10.7\,kg NaI(Tl) crystals.} \vspace{0.3cm} {\begin{tabular}{@{}cc@{}} \hline Detector & $^{40}K$ Activity (mBq/kg)\\ \hline EP054 & $13.7 \pm 0.3$\\ EP055 & $15.2 \pm 0.1$\\ EP056 & $18.8 \pm 0.2$\\ EP057 & $20.9 \pm 0.4$\\ EP058 & $16.2 \pm 0.3$\\ EP059 & $16.6 \pm 0.2$\\ EL214 & $17.9 \pm 0.4$\\ EM301 & $21.2 \pm 0.4$\\ EL604 & $16.5 \pm 0.3$\\ EL603 & $14.5 \pm 0.2$\\ EL607 & $15.7 \pm 0.5$\\ \hline \end{tabular} \label{tab:PotassiumBICRON}} \end{center} \end{table} \subsubsection{ANAIS-0 and Prototype III.} \label{sec:resultsK_a} The first step in order to derive the $^{40}K$ activity in both crystals is to select the corresponding energy windows containing the 1460.8\,keV energy depositions. Different window widths ($1\,\sigma$\footnote{$\sigma$ will refer for each detector to the corresponding value of the standard deviation obtained in the gaussian fit of the 1460.8\,keV line.}, $2\,\sigma$ and $3\,\sigma$) have been considered for the selection of coincident events in both detectors. High energy spectra for the two phases of measurement with the ANAIS-0 and PIII detectors are shown in Fig.~\ref{fig:40KPIV_HE} (left), together with a zoom around the 1460.8\,keV line highlighting the three windows considered (right). The low energy spectra in coincidence with the $1\,\sigma$ window around the 1460.8\,keV line in the other detector are shown in Fig.~\ref{fig:40KPIV_LE}. The gain of PIII during phase II was probably not stable enough and the 1460.8\,keV line is clearly distorted; this could lead to a decrease in the efficiency of the coincidence in an undetermined way, compromising the validity of the derived result. \begin{figure}[h!] \centerline{\includegraphics[width=0.95\textwidth]{Fig4.png}} \caption{High energy spectra of the PIII and ANAIS-0 detectors. Left: the whole spectra; right: a zoom showing the 1460.8\,keV gamma line following $^{40}K$ EC decay for the two considered phases. The $1\,\sigma$ (red), $2\,\sigma$ (blue) and $3\,\sigma$ (green) coincidence windows are also shown.} \label{fig:40KPIV_HE} \end{figure} \begin{figure}[h!] \centerline{\includegraphics[width=0.95\textwidth]{Fig5.png}} \caption{Low energy spectra (in counts/channel) in coincidence with $1\sigma$ windows around the 1460.8\,keV line in the other crystal, shown in Figure~\ref{fig:40KPIV_HE}, for the ANAIS-0 and PIII detectors in the two measurement phases considered in this study. The 3.2\,keV peak is clearly visible above the baseline noise peak.} \label{fig:40KPIV_LE} \end{figure} Only a small fraction of the events selected by the coincidence are attributable to $^{40}K$ decay, but they are, as expected, distributed in a peak around 3.2\,keV; the rest are mostly baseline noise and a few fortuitous coincidences. A simple selection of $^{40}K$ events is done by considering only those events above a threshold. Table~\ref{tab:40KPIVTH} shows the analysis thresholds chosen for each phase and detector. The choice is done just by visual inspection, and it must be remarked that this is not the threshold of the experiment. \begin{table}[h!]
\begin{center} \caption{Threshold (in channels) considered for the selection of $^{40}K$ events in the low energy spectra for the two phases for the ANAIS-0 and PIII crystals.} \vspace{0.3cm} {\begin{tabular}{@{}ccc@{}} \hline Detector& Phase & Threshold \\ & & (QDC channel)\\ \hline ANAIS-0 &I & 115 \\ &II & 115 \\ PIII &I & 125\\ &II & 120 \\ \hline \end{tabular} \label{tab:40KPIVTH}} \end{center} \end{table} In order to check the $^{40}K$ origin of the low energy events selected, as previously explained, the effect of changing the high energy window above and below the 1460.8\,keV position has been studied. For this purpose, the coincidence is done with $1\,\sigma$ width windows centered $2\,\sigma$ above and below the 1460.8\,keV position. Another, more energetic window is also selected, centered $11\,\sigma$ above the 1460.8\,keV position (see results in Fig.~\ref{fig:40KPIV_windows}). Assuming a gaussian shape for the 1460.8\,keV line, the corresponding percentage of real $^{40}K$ events selected in each window should be 68\% for $\mu\pm\sigma$, 16\% for $\mu+2\sigma\pm\sigma$ and 0 for $\mu+11\sigma\pm\sigma$. The highest window should only present fortuitous coincidences and could allow us to estimate their contribution in the other windows. In Table~\ref{tab:windows} the results are presented considering only events above the thresholds shown in Table~\ref{tab:40KPIVTH}. As expected, events at low energy coincident with the $\mu+11 \sigma$ window correspond to fortuitous coincidences, and the peak is not seen. The $\mu +2\sigma$ window presents results compatible with the expected 16\% of the total number of coincident events. Events found in coincidence with the $\mu -2 \sigma$ window are much more numerous than expected for a pure gaussian peak contribution, but this is probably due to the presence of multi-Compton events with partial energy deposition from the $^{40}K$ gamma line. The numbers shown in Table~\ref{tab:windows} do not allow a direct estimate of the fortuitous coincidence rate contribution, because this is strongly related to the total rate in the high energy coincidence window (much lower, for instance, in the $\mu+11\sigma$ window); hence, conclusions derived from this table are only qualitatively valid. \begin{figure}[h!] \centering{\includegraphics[width=0.95\textwidth]{Fig6.png}} \caption{Low energy spectra (in counts/channel) in coincidence with high energy windows in the other detector (all of them having the same width, $\pm\sigma$) are shown for the different phases and detectors: centered at the 1460.8\,keV peak ($\mu$) in black, centered $2\sigma$ above ($\mu+2\sigma$) in blue, centered $2\sigma$ below ($\mu-2\sigma$) in red, and centered further above ($\mu+11\sigma$) in green.} \label{fig:40KPIV_windows} \end{figure} \begin{table}[h!] \begin{center} \caption{Number of events selected by the coincidence above the thresholds, shown in Table~\ref{tab:40KPIVTH}, in the different windows studied (all of them having the same width $\pm\sigma$).
The effective exposure corresponding to each phase is given in Table~\ref{tab:setups}.} \vspace{0.3cm} {\begin{tabular}{@{}cccccc@{}} \hline Detector & {Phase}& $\mu$ & $\mu +2 \sigma$ & $\mu -2 \sigma$& $\mu + 11 \sigma$\\ &&\multicolumn{4}{c}{Events}\\ \hline ANAIS-0 &I & 726 & 95& 312 & 8\\ & II & 713 & 142& 253 & 26\\ PIII &I & 706 & 146 & 358 & 22\\ &II & 766 & 137& 205 & 12\\ \hline \end{tabular}\label{tab:windows}} \end{center} \end{table} The set-up has then been simulated with Geant4, version geant4.9.1.p02 \cite{Geant4}, in order to evaluate the probability that, after a $^{40}K$ disintegration in one crystal, the 1460.8\,keV photon escapes and releases its full energy in the other detector\footnote{This simulation has been performed specifically for every experimental set-up.}. 500000\,photons of 1460.8\,keV have been simulated, assuming a homogeneous distribution of the contaminant in the bulk of the ANAIS-0 and PIII crystals. The absolute branching ratio for the $^{40}K$ K-shell EC followed by the emission of the 1460.8\,keV photon is 0.0803, as given by Geant4~\cite{Geant4}. The numbers of events with the full gamma energy absorbed in the PIII or ANAIS-0 crystal, when the photon is emitted in ANAIS-0 or PIII, are 8258 and 6998, respectively. Thus, the efficiencies for the observation of the respective coincidences per $^{40}K$ decay are $1.33\cdot10^{-3}$ and $1.13\cdot10^{-3}$. The area of the 3.2\,keV peak (Area) is obtained by fitting a Gaussian to the events above the threshold. The fits are shown in Fig.~\ref{fig:40KPIV_fit}. The $^{40}K$ activity is calculated following eq.\,\ref{eq:Activity} for each phase individually and using all the available data for the three different coincidence window widths ($1\sigma$, $2\sigma$ and $3\sigma$) around the 1460.8\,keV energy. Results for each phase and detector are shown in Table~\ref{tab:fits}. \begin{figure}[h!] \centering{\includegraphics[width=0.8\textwidth]{Fig7a.png} \includegraphics[width=0.8\textwidth]{Fig7b.png} \includegraphics[width=0.8\textwidth]{Fig7c.png}} \caption{Low energy coincident events (in counts/channel) for the $1\sigma$, $2\sigma$ and $3\sigma$ coincidence windows, and Gaussian fits of the events above the threshold.} \label{fig:40KPIV_fit} \end{figure} \begin{table}[h!] \begin{center} \caption{$^{40}K$ activity calculated for ANAIS-0 and PIII using coincidence windows of different widths. Combined values derived from the first two phases for each detector are also shown. } \vspace{0.3cm} {\begin{tabular}{@{}ccccc@{}} \hline Detector & Phase &\multicolumn{3}{c}{$^{40}K$ Activity (mBq/kg)}\\ & &$1\,\normalfont\sigma$ & $2\,\normalfont\sigma$ & $3\,\normalfont\sigma$ \\ \hline ANAIS-0 &I & $14.3 \pm 0.8$ & $15.1 \pm 0.9 $& $17.2 \pm 1.1$\\ &II & $11.1\pm 0.5$ & $12.4 \pm 0.5 $& $13.4 \pm 0.6 $\\ &I and II &$ 12.7 \pm 0.5$ & $13.6 \pm 0.5$ &$15.2 \pm 0.6 $\\ &III & $14.5 \pm 0.6$ & $14.3 \pm 0.5 $& $15.1 \pm 0.5$\\ \hline PIII &I& $13.5 \pm 0.9$ & $16.8 \pm 1.1 $& $20.1 \pm 1.4 $\\ &II&$13.9\pm 0.9$ & $16.1 \pm 1.1 $& $19.0 \pm 1.4 $\\ &I and II&$ 13.7 \pm 0.6$ & $16.4 \pm 0.7$&$19.5\pm 1.0 $\\ \hline \end{tabular} \label{tab:fits}} \end{center} \end{table} Results derived for the different windows and phases analyzed are mostly compatible. As expected, larger windows have a larger contribution from fortuitous coincidences and Compton events. Hence, the results of the $1\,\sigma$ window have been taken in the following as the most reliable.
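For reference, the efficiency figures quoted above follow from simple arithmetic on the quoted simulation outputs. The following minimal check (a Python sketch using only numbers stated in the text; it is not part of the original analysis chain) reproduces them:
\begin{verbatim}
# Coincidence efficiency per 40K decay = (full-energy fraction) x BR(gamma)
N_SIM = 500000       # simulated 1460.8 keV photons per configuration
BR_GAMMA = 0.0803    # absolute BR of the 1460.8 keV gamma (Geant4 value)

for label, n_full in (("ANAIS-0 -> PIII", 8258), ("PIII -> ANAIS-0", 6998)):
    eff = n_full / N_SIM * BR_GAMMA
    print(label, "%.2e" % eff)
# -> 1.33e-03 and 1.12e-03 (the latter quoted as 1.13e-3 after rounding)
\end{verbatim}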
Phase II presented a non-Gaussian shape for the 1460.8\,keV gamma line of PIII, which might have been caused by gain instabilities. However, phase I showed a larger discrepancy between the results obtained with the different sigma windows, and in Fig.~\ref{fig:40KPIV_fit} it can be seen that the 3.2\,keV peak is wider in phase I, so a larger contribution from fortuitous coincidences is expected. In PIII both phases gave similar results. Therefore, the average of the phase I and II results for each detector was taken as the final result of our analysis and used, for instance, in Ref.\,\citen{ANAISbkg}. We checked that these results were compatible with the intensity of the 1461-1464\,keV gamma line seen in the ANAIS-0 background. Bulk crystal potassium contamination contributes to this line with different energy depositions: from K-shell EC decay of $^{40}K$, producing a 1464.0\,keV (1460.8\,keV\,+\,3.2\,keV) total energy release, but also from L and M-shell EC decays, with energy depositions that cannot be distinguished from the 1460.8\,keV line. Moreover, any other external $^{40}K$ contamination would also contribute to the 1460.8\,keV line. We chose data from ANAIS-0 operating without PIII, to minimize contributions to the photopeak from external components contaminated with $^{40}K$, and fitted the line with a Gaussian, comparing its area with the prediction of our Geant4 simulation (see Fig.\,\ref{fig:A02_40K}). For 500000 isotropic 1460.8\,keV photons simulated, the ANAIS-0 crystal detects 135294 photons in the photopeak (27.1\%). Taking into account that only in 10.55\% of the $^{40}K$ decays a high energy gamma is emitted, and given that the measured intensity of this 1461-1464\,keV peak is $32.38\pm0.62$\,cpd/kg, the derived $^{40}K$ activity is $13.11\pm0.25$\,mBq/kg, assuming that only $^{40}K$ in the crystal bulk contributes. This result is compatible with the activity derived from the coincidence measurement and implies that the ANAIS-0 background is dominated by $^{40}K$ in the bulk, as confirmed by the background model proposed and simulated in Ref.\,\citen{ANAISbkg}. \begin{figure}[h!] \centering{\includegraphics[width=0.8\textwidth]{Fig8.png}} \caption{High energy spectrum for the ANAIS-0 module used to derive the $^{40}K$ bulk content from the intensity of the background line at 1461-1464\,keV, assuming negligible contributions from external $^{40}K$ sources.} \label{fig:A02_40K} \end{figure} Finally, the temporal distribution of the $^{40}K$ events at low energy selected by the coincidence above the threshold is shown in Fig.~\ref{fig:40KPIV_rate}. Average values of $9.3\pm3.6$\,counts/day for the ANAIS-0 and of $8.5\pm3.1$\,counts/day for the PIII crystals are reported, without significant fluctuations. \begin{figure}[h!] \centering{\includegraphics[width=0.8\textwidth]{Fig9.png}} \caption{Rate of the 3.2\,keV events selected by the coincidence above the threshold. Average values of $9.3\pm3.6$\,counts/day for ANAIS-0 and of $8.5\pm3.1$\,counts/day for PIII crystals are shown as horizontal lines in both plots.} \label{fig:40KPIV_rate} \end{figure} After this analysis was completed, a new estimate of the $^{40}K$ content of the ANAIS-0 crystal was derived using data from the phase III $^{40}K$ coincidence set-up. The corresponding results are shown in Fig.~\ref{fig:40KPhIII} and Table~\ref{tab:fits}.
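As a numerical cross-check of the background-line estimate above, the quoted conversion can be verified in a few lines; this sketch assumes nothing beyond the numbers stated in the text:
\begin{verbatim}
# 40K activity from the 1461-1464 keV background line intensity
rate = 32.38 / 86400.0             # 32.38 cpd/kg -> counts/s/kg
eff_photopeak = 135294 / 500000.0  # Geant4 full-energy efficiency (27.1%)
br_he_gamma = 0.1055               # fraction of 40K decays with the HE gamma

activity = rate / (eff_photopeak * br_he_gamma)   # Bq/kg
print("%.2f mBq/kg" % (activity * 1e3))
# -> 13.13 mBq/kg; using the rounded 27.1% gives the quoted 13.11, and
#    scaling the 0.62 cpd/kg uncertainty gives the quoted +/- 0.25 mBq/kg.
\end{verbatim}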
Very stable operation can be reported, and the results are compatible with those of phase I and slightly higher than those of phase II, pointing to some loss of coincident events in phase II attributable to instabilities in the PIII high energy data. Our conclusion is that the $^{40}K$ activity we assumed for ANAIS-0 (derived from the coincident 3.2\,keV peak intensity, by averaging the estimates from phases I and II) is underestimated by about 10\%. The estimate of the $^{40}K$ content derived from the 1460.8\,keV gamma line in the background is harder to reconcile with such a higher $^{40}K$ bulk content, but systematics in the Geant4 simulations could be responsible for such an underestimate: for instance, possible energy loss mechanisms could affect the conclusions derived from our analysis at about that level. \begin{figure}[h!] \centering{\includegraphics[width=0.8\textwidth]{Fig10a.png} \includegraphics[width=0.8\textwidth]{Fig10b.png}} \caption{Top: High energy spectrum of PIII corresponding to phase III of the $^{40}K$-coincidence set-up (left) and a zoom showing the 1460.8\,keV line and the $1\,\sigma$ (red), $2\,\sigma$ (blue) and $3\,\sigma$ (green) coincidence windows (right). Bottom: Low energy coincident events (in counts/channel) for the $1\sigma$, $2\sigma$, and $3\sigma$ coincidence windows, and Gaussian fits of the events above the threshold.} \label{fig:40KPhIII} \end{figure} \subsubsection{ANAIS-25} \label{sec:resultsK_b} The potassium content of the ANAIS-25 modules has been carefully analyzed by directly applying the same technique. The $^{40}K$ gamma events at 1460.8\,keV in one detector are selected considering different window widths ($1\,\sigma$, $2\,\sigma$ and $3\,\sigma$), as done with the ANAIS-0 and PIII data. High energy spectra of both detectors are shown in Fig.~\ref{fig:A25_40K_HE}. The poor resolution observed is a consequence of instabilities in the PMT gains, especially in detector~1, where only data from PMT~1 have been considered for the first weeks of data, because a fast estimate of the potassium content was important and no data were discarded. Excellent performance of the set-up has been demonstrated later on. The low energy spectra in coincidence with the $1\,\sigma$ window around the 1460.8\,keV line in the other detector are shown in Fig.~\ref{fig:A25_40K_LE}. The thresholds chosen to select the events attributable to $^{40}K$ decay are channel~60 for D0 and channel~70 for D1. The effect of moving the high energy window above and below the 1460.8\,keV position has also been studied; see the results in Figure~\ref{fig:A25_40K_LEw}. The numbers of counts over these thresholds for every coincidence window are shown in Table~\ref{tab:A25wcts}. \begin{figure}[h] \centering{\includegraphics[width=0.8\textwidth]{Fig11.png}} \caption{High energy spectra of the ANAIS-25 modules corresponding to 70.4\,days (left), and a zoom showing the 1460.8\,keV gamma line on the right.
The $1\,\sigma$ (red), $2\,\sigma$ (blue) and $3\,\sigma$ (green) coincidence windows are also shown.} \label{fig:A25_40K_HE} \end{figure} \begin{figure}[h] \centering{\includegraphics[width=0.8\textwidth]{Fig12.png}} \caption{Low energy spectra (in counts/channel) in coincidence with $1\,\sigma$ windows around the 1460.8\,keV line (shown in Figure~\ref{fig:A25_40K_HE}) in the other crystal for ANAIS-25 D0 (left) and D1 (right).} \label{fig:A25_40K_LE} \end{figure} \begin{figure}[h] \centering {\includegraphics[width=0.8\textwidth]{Fig13.png}} \caption{Low energy spectra (in counts/channel) in coincidence with high energy windows in the other detector (all of them having the same width, $\pm\sigma$) are shown for ANAIS-25 D0 (left) and D1 (right): centered at the 1460.8\,keV peak ($\mu$) in black, centered $2\sigma$ above ($\mu+2\sigma$) in blue, centered $2\sigma$ below ($\mu-2\sigma$) in red, and centered further above ($\mu+11\sigma$) in green.} \label{fig:A25_40K_LEw} \end{figure} \begin{table}[h] \begin{center} \caption{Events above the analysis threshold (channel 60 for D0 and channel 70 for D1), among those selected by the coincidence with an event in a window centered at, below, or above the 1460.8\,keV gamma position (1\,$\sigma$ width).} \vspace{0.3cm} {\begin{tabular}{@{}ccccc@{}} \hline Detector& $\mu$&$\mu\,-2\,\sigma$ & $\mu\,+2\,\sigma$&$\mu\,+11\,\sigma$ \\ \hline 0 & 82 & 25 & 16& 7\\ 1& 78 & 22 & 21&10\\ \hline \end{tabular} \label{tab:A25wcts}} \end{center} \end{table} The probability that, after a $^{40}K$ disintegration in one crystal, the 1460.8\,keV photon escapes and releases its full energy in the other detector has been estimated with Geant4, in this case using version geant4.9.4.p01. The corresponding efficiency for the coincidences between both ANAIS-25 modules is $1.08\cdot10^{-3}$, slightly lower than that obtained for ANAIS-0 and PIII because of the higher mass, which decreases the probability that the gamma escapes without losing any energy. Then, the activities of $^{40}K$ for each ANAIS-25 crystal have been estimated for the different coincidence window widths (see equation \ref{eq:Activity}). The Gaussian fits performed to derive the peak areas are shown in Fig.~\ref{fig:A25_40K_LEfit}. Results on the $^{40}K$ content for each detector are shown in Table~\ref{tab:A25_40KA}. Good agreement between the results derived for both detectors is observed, as expected. Averaging the $1\,\sigma$ window results for the two crystals, we can conclude that the ANAIS-25 crystals have a $^{40}K$ content of $1.25\pm0.11$\,mBq/kg ($41.7\pm3.7$\,ppb of potassium), much lower than that estimated for the ANAIS-0 crystal; see Fig.~\ref{fig:A25_40K_LEcn}. \begin{figure}[h!] \centering{\includegraphics[width=0.8\textwidth]{Fig14a.png} \includegraphics[width=0.8\textwidth]{Fig14b.png}} \caption{Low energy spectra (in counts/channel) in coincidence with $1\,\sigma$, $2\,\sigma$, and $3\,\sigma$ windows around the 1460.8\,keV line in the other crystal for ANAIS-25 D0 and D1.} \label{fig:A25_40K_LEfit} \end{figure} \begin{table}[h!]
\begin{center} \caption{$^{40}K$ activity calculated for the two ANAIS-25 crystals using coincidence windows of different widths.} \vspace{0.3cm} {\begin{tabular}{@{}cccc@{}} \hline Detector &\multicolumn{3}{c}{$^{40}K$ Activity (mBq/kg)}\\ & $1\,\sigma$ &$2\,\sigma$ &$3\,\sigma$ \\ \hline 0 & $1.34\pm0.13$ & $1.16\pm0.11$ &$1.31\pm0.11$\\ 1 & $1.15\pm0.18$ & $1.08\pm0.16$ &$1.21\pm0.20$\\ \hline \end{tabular} \label{tab:A25_40KA}} \end{center} \end{table} \begin{figure}[h!] \centering{\includegraphics[width=0.5\textwidth]{Fig15.png}} \caption{Low energy spectra in coincidence with $1\,\sigma$ windows around the 1460.8\,keV line in the other crystal for ANAIS-0 (black), ANAIS-25 detector 0 (blue), and ANAIS-25 detector 1 (red).} \label{fig:A25_40K_LEcn} \end{figure} NaI(Tl) crystals from different manufacturers have been characterized in terms of their bulk potassium content by a measurement in coincidence, and an improvement of one order of magnitude in the potassium content can be reported for the ANAIS-25 detectors, built in collaboration with Alpha Spectra. However, the 20\,ppb goal has not yet been achieved and, before ordering the additional 18 modules required to complete the ANAIS total detection mass, a careful analysis of the situation is under way in collaboration with Alpha Spectra, aiming to further purify the starting NaI powder. \subsection{Trigger efficiency at 3.2\,keV} Moreover, having a tagged population of bulk scintillation events at 3.2\,keV is very useful for many other purposes related to the DM search. As an example, in the ANAIS-25 set-up we used these $^{40}K$ events to estimate the trigger efficiency of the experiment. Fig.\,\ref{fig:trigger_eff} shows how many of the low energy events identified by the coincidence with the high energy window around 1460.8\,keV, and hence corresponding to the decay of $^{40}K$, actually triggered our acquisition. A good trigger efficiency can be reported: 99\% of the events above 1.5\,keV trigger in D1, and 97\% in D0. It is worth noting that fortuitous coincidences in D0 extend up to higher energies (which is related to the higher dark current of the PMTs used), and that in D1 the baseline occasionally triggers. \begin{figure}[h!] \centering{\includegraphics[width=0.8\textwidth]{Fig16.png}} \caption{$^{40}K$ events at low energy, identified by the coincidence with a high energy gamma for ANAIS-25 D0 (left) and D1 (right). Events with T=2 (T=1) have not triggered in D0 (D1), whereas events with T=3 have triggered in both detectors.} \label{fig:trigger_eff} \end{figure} | 14 | 3 | 1403.3580
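For reference, the conversion used above between $^{40}K$ activity (mBq/kg) and potassium mass fraction (ppb) requires only the specific activity of natural potassium. That constant, roughly 30--32 Bq per gram of natural K depending on the adopted half-life and isotopic abundance, is the one external assumption in this sketch (the half-life and abundance values below are assumed, not taken from the text):
\begin{verbatim}
import math

T_HALF_S = 1.25e9 * 3.156e7   # 40K half-life in seconds (assumed value)
F_40K = 1.17e-4               # 40K isotopic abundance (assumed value)
SA_NATK = (math.log(2) / T_HALF_S) * 6.022e23 / 39.10 * F_40K
print("%.1f Bq per g of natural K" % SA_NATK)        # -> ~31.7

for act_mBq_kg in (1.25, 13.7):   # ANAIS-25 average; BICRON EP054
    ppb = act_mBq_kg * 1e-3 / SA_NATK * 1e6
    print("%.2f mBq/kg -> %.0f ppb K" % (act_mBq_kg, ppb))
# -> ~39 ppb and ~433 ppb; the quoted 41.7 ppb for 1.25 mBq/kg
#    corresponds to adopting ~30 Bq/g, within the spread of constants.
\end{verbatim}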
1403 | 1403.1585_arXiv.txt | It has been argued that the specific star formation rates of star forming galaxies inferred from observational data decline more rapidly below $z = 2$ than is predicted by hierarchical galaxy formation models. We present a detailed analysis of this problem by comparing predictions from the \galform semi-analytic model with an extensive compilation of data on the average star formation rates of star-forming galaxies. We also use this data to infer the form of the stellar mass assembly histories of star forming galaxies. Our analysis reveals that the currently available data favour a scenario where the stellar mass assembly histories of star forming galaxies rise at early times and then fall towards the present day. In contrast, our model predicts stellar mass assembly histories that are almost flat below $z = 2$ for star forming galaxies, such that the predicted star formation rates can be offset with respect to the observational data by factors of up to $2-3$. This disagreement can be explained by the level of coevolution between stellar and halo mass assembly that exists in contemporary galaxy formation models. In turn, this arises because the standard implementations of star formation and supernova feedback used in the models result in the efficiencies of these process remaining approximately constant over the lifetime of a given star forming galaxy. We demonstrate how a modification to the timescale for gas ejected by feedback to be reincorporated into galaxy haloes can help to reconcile the model predictions with the data. | \label{Introduction} Understanding the star formation history of the Universe represents an important goal of contemporary astronomy, both in theoretical modelling and from observations of the galaxy population. Traditionally, the main diagnostic used to characterise the cosmic star formation history is the volume averaged star formation rate (SFR) density \cite[e.g.][]{Lilly96,Madau96,Hopkins06}. This quantity encompasses the combined effect of all the physical processes that are implemented in a given theoretical model of galaxy formation. The lack of a complete theory of how these processes operate within galaxies means that these models are typically designed to be flexible, utilising simple parametrisations with adjustable model parameters. The cosmic star formation rate density, along with other global diagnostics used to assess the plausibility of a given model, is sensitive to all of these model parameters. Hence, while simply selecting a set of parameters to define a viable model is already challenging, the problem is compounded by the possibility of degeneracies between different model parameters. This has prompted the use of statistical algorithms as tools to explore and identify the allowed parameter space of contemporary galaxy formation models \citep{Bower10,Henriques13,Lu13a,Mutch13,Ruiz13}. An alternative to attempting to ``solve'' the entire galaxy formation problem from the top down is to try to find observational diagnostics that are sensitive to some specific physical processes but not to others. A promising area in this regard revolves around the discovery of a correlation between the star formation rate (SFR) and the stellar mass of star forming galaxies, forming a sequence of star forming galaxies \cite[e.g.][]{Brinchmann04, Noeske07, Daddi07, Elbaz07}. 
This is most convincingly demonstrated in the Sloan Digital Sky Survey \cite[SDSS; ][]{York00} which exhibits a clear star forming sequence with relatively small scatter and a power-law slope which is slightly below unity \cite[e.g.][]{Brinchmann04, Salim07, Peng10, Huang12}. The discovery of the star forming sequence in the local Universe has motivated a series of studies which try to establish whether the sequence is in place at higher redshifts \cite[e.g.][]{Noeske07b}. This task is challenging because of the difficulties in reliably measuring the star formation rates of galaxies. Beyond the local Universe, star formation tracers that do not require the application of uncertain dust corrections are typically available for only the most actively star forming galaxies. This makes it difficult to prove whether or not there is a clear bimodality between star forming and passive galaxies in the SFR-stellar mass plane. On the other hand, it has been demonstrated that star forming and passive galaxies can be separated on the basis of their colours over a wide range of redshifts \cite[e.g.][]{Daddi04,Wuyts07,Williams09,Ilbert10,Whitaker11,Muzzin13}. This technique can then be combined with stacking in order to measure the average SFR of star forming galaxies as a function of both stellar mass and redshift. However, the extent to which these convenient colour selection techniques can truly separate galaxies that reside on a tight star forming sequence from the remainder of the population remains uncertain. The significance of the star forming sequence as a constraint on how galaxies grow in stellar mass has been discussed in a number of studies \cite[e.g.][]{Noeske07b,Renzini09,Firmani10a,Peng10,Leitner12,Heinis14}. The small scatter of the sequence implies that the star formation histories of star forming galaxies must, on average, be fairly smooth. This has been taken as evidence against a dominant contribution to the star formation history of the Universe from star formation triggered by galaxy mergers \cite[e.g.][]{Feulner05a,Noeske07b,Drory08}. This viewpoint is supported by studies that demonstrate that the contribution from heavily star forming objects that reside above the star forming sequence represents a negligible contribution to the number density and only a modest contribution to the star formation density of star forming galaxies \cite[e.g.][]{Rodighiero11, Sargent12}. Various studies have shown that a star forming sequence is naturally predicted both by theoretical galaxy formation models \cite[e.g.][]{Somerville08,Dutton10,Lagos11a,Stringer12,Ciambur13,Lamastra13,Lu13b} and by hydrodynamical simulations of a cosmologically representative volume \cite[e.g][]{Dave08,Kannan14,Torrey14}. These models have reported a slope and scatter that is generally fairly consistent with observational estimates. However, there have been a number of reported cases where it appears that the evolution in the normalisation of the sequence predicted by galaxy formation models is inconsistent with observational estimates \cite[e.g.][]{Daddi07,Dave08,Damen09,Santini09,Dutton10,Lin12,Lamastra13,Genel14,Gonzalez14,Kannan14,Torrey14}. This disagreement is often quantified by comparing model predictions with observational estimates of the specific star formation rates of galaxies of a given stellar mass as a function of redshift. 
This comparison can also be made for suites of hydrodynamical zoom simulations, which exchange higher resolution for a loss of statistical information about the predicted galaxy population \cite[][]{Aumer13,Hirschmann13,Hopkins13b,Obreja14}. These studies find that it is possible to roughly reproduce the observed specific star formation rate evolution, greatly improving over earlier simulations. However, upon closer inspection, it appears that in detail they may suffer from a problem similar to that of larger simulations and semi-analytic models in reproducing the observed evolution of the star forming sequence, as noted by \cite{Aumer13}, \cite{Hopkins13b} and \cite{Obreja14}. It is important to be aware that below $z \approx 2$, comparisons of specific star formation rates can yield different constraints on theoretical models depending on whether or not star forming galaxies are separated from passive galaxies. In principle, if star forming galaxies are successfully separated, any disagreement in the evolution of their average specific star formation rates between models and observational data should be independent of ``quenching'' caused by environmental processes or AGN feedback. Hence, testing the model using the evolution in the normalisation of the star forming sequence potentially offers a significant advantage, as compared to more commonly used diagnostics such as the cosmic star formation rate density, luminosity functions and stellar mass functions. In particular, the reduced number of relevant physical processes makes the problem more tractable and offers a way to improve our understanding of galaxy formation without having to resort to exhaustive parameter space searches, where arriving at an intuitive interpretation of any results can be challenging. This is particularly pertinent if the simple parametrisations used in theoretical galaxy formation models for processes such as feedback are not suitable to capture the behaviour seen in the observed galaxy population. Here, we use the \galform semi-analytic galaxy formation model along with an extensive literature compilation of observations of the star forming sequence to explore the shape of the star formation histories of galaxies within the context of a full hierarchical galaxy formation model. Our aim is to understand the origin of any discrepancies between the predicted and observed evolution in the normalisation of the star forming sequence and to demonstrate potential improvements that could be made in the modelling of the interplay between star formation, stellar feedback and the reincorporation of ejected gas. The layout of this paper is as follows. In Section~\ref{GALFORM_Section}, we describe the relevant features of the \galform galaxy formation model used for this study. In Section~\ref{Sequence_Section}, we present model predictions for the star forming sequence of galaxies and provide a comparison with a compilation of observational data extracted from the literature. In Section~\ref{SFH_Section}, we compare the predicted stellar mass assembly histories of star forming galaxies with the average stellar mass assembly histories inferred by integrating observations of the star forming sequence. We also explore the connection between stellar and halo mass assembly, highlighting the role of different physical processes included in the model. In Section~\ref{Modifications_Section}, we explore modifications that can bring the model into better agreement with the data.
We discuss our results and present our conclusions in Section~\ref{Discussion_Section} and Section~\ref{Summary_Section} respectively. Appendix~\ref{MSI_Appendix} provides a detailed introduction and exploration of how the stellar mass assembly histories of star forming galaxies can be inferred from observations of the star forming sequence. Appendix~\ref{Invariance_Section} discusses the impact of changing various parameters in the \galform model. Appendix~\ref{H12_section} presents a short analysis of how well the various models presented in this paper can reproduce the evolution in the stellar mass function inferred from observations. | \label{Discussion_Section} The focus of this study has been on using the observed evolution of the star forming sequence as a constraint on galaxy formation models. The disagreement in this evolution between models and observational data is undoubtedly related to the problems with reproducing the correct evolution in the low mass end of the stellar mass function which has recently received considerable attention in the literature \cite[e.g.][]{Avila-Reese11,Weinmann12,Henriques13,Lu13a,Lu13b}. Specifically, there is a general finding that models and simulations overpredict the ages of low mass galaxies and consequently underpredict evolution in the low mass end of the stellar mass function at low redshift. \cite{Weinmann12} interpret this problem as an indication that the level of coevolution between halo and stellar mass assembly needs to be reduced, broadly in agreement with our results. However, part of the reason why they arrive at this conclusion is because they identify the prediction of a positive correlation between specific star formation rate and stellar mass as a key problem with respect to the data. We note that in contrast, \galform naturally predicts a slightly negative correlation for star forming galaxies and that this is also true for many other models and simulations presented in the literature \cite[e.g.][]{Santini09,Dutton10,Lamastra13,Torrey14}. \cite{Henriques13} show that there is no combination of parameters for their standard galaxy formation model that can reconcile the model with the observed evolution in the stellar mass and luminosity functions. This is consistent with the findings of \cite{Lu13a}, who use a similar methodology but for a different model. \cite{Lu13b} compare three different models of galaxy formation and find that they all predict very similar stellar mass assembly histories and suffer from predicting too much star formation at high redshift in low mass haloes. We note that the models presented in \cite{Lu13b} are all very similar to \galform in many respects and that therefore the similarity of the predictions from their three models makes sense in the context of the discussion we present in Appendix~\ref{Invariance_Section}. \cite{Henriques13} go one step further to suggest an empirical modification to the reincorporation timescale within their model that reduces the rate of star formation at early times in low mass haloes. In this respect, their equation $8$ uses the same scaling between reincorporation timescale and halo mass which we introduce in Eqn.~\ref{reincorporation_modified} for the same reason. However, our modification diverges from their suggestion in that we also require an additional redshift dependence that lengthens the reincorporation timescale towards low redshift. The modification suggested by \cite{Henriques13} can be compared to our modification in Fig.~\ref{tret_modified}. 
The difference between the two suggested modifications stems from the fact that our analysis indicates that it is not simply that stars form too early in the model. Instead, we find that it is the precise shape of the stellar mass assembly history which is inconsistent with the currently available data, which favour a peak of activity at intermediate times. This highlights how the differences in methodology between different studies can lead to different conclusions. Our analysis is designed to reduce the number of relevant physical processes by focusing only on the normalisation of the star forming sequence. In principle, this approach can provide a more direct insight into how the implementation of different physical processes within galaxy formation models needs to be changed, provided that the uncertainty in the relevant observations can be correctly accounted for. On the other hand, as discussed in Appendix~\ref{H12_section}, our modified reincorporation model does not reproduce the evolution in the stellar mass function inferred from recent observations. We again emphasise that the focus of this study is on the evolution of the normalisation of the star forming sequence and that the stellar mass function can be affected by the quenching processes which we have not considered in our analysis. Nonetheless, it may well be the case that our methodology is limited by the lack of a consensus on the slope of the star forming sequence in observations. Alternatively, there could be some inconsistency between observations of the star forming sequence and observations of the evolution in the stellar mass function. We note that the latter possibility is disfavoured by recent abundance matching results \cite[e.g.][]{Behroozi13,Moster13}. \subsection{Do the stellar mass assembly histories of star forming galaxies rise and then fall?} Our suggestion that the reincorporation timescale needs to be increased at low redshift stems from our inference from observations that the stellar mass assembly histories of star forming galaxies rise to a peak before falling towards the present day. As discussed in Appendix~\ref{MSI_Obs_Section}, this inference is consistent with the findings of \cite{Leitner12}, who use a similar methodology, albeit with the caveat that we find that evidence of a strong downsizing trend in the purely star forming population is not conclusive. Instead, we find that the considerable uncertainty that remains in the power-law slope of the star forming sequence means that, overall, the observational data are also consistent with no downsizing, such that the shapes of the stellar mass assembly histories of star forming galaxies are independent of the final stellar mass. Clearly, any improvements in measuring the form of the star forming sequence as a function of lookback time would greatly increase the constraining power of the MSI technique with respect to galaxy formation models. If the slope of the sequence, $\beta_{\mathrm{sf}}$, can be conclusively shown to be significantly below zero as advocated, for example, by \cite{Karim11}, then even larger modifications than those considered here towards separating stellar and halo mass assembly would be required. Another methodology that can be used to infer the shape of the stellar mass assembly histories of galaxies is to employ abundance matching to make an empirical link between the dark matter halo population predicted by theory and the observed galaxy population \cite[e.g.][]{Behroozi13,Moster13,Yang13}.
Comparison with stellar mass assembly histories of the star forming galaxies that are discussed in this study is complicated by the fact that abundance matching has only been used so far to predict the average star formation histories of all galaxies (including passive galaxies) and as a function of halo mass. On average, the haloes hosting the galaxies which we consider in this study have median masses of $\log(M_{\mathrm{H}} / \mathrm{M_\odot}) < 12$, where the fraction of passive central galaxies relative to star forming centrals is predicted to be negligible. However, because there is substantial scatter between stellar mass and halo mass for central galaxies, the fraction of passive galaxies at a given stellar mass is not negligible for most of the stellar mass bins which we consider in this study. For example, the fraction of central galaxies with $\log(M_\star / \mathrm{M_\odot}) = 10$ that are passive is predicted to be $25 \%$ at $z=0$ in our fiducial \galform model. Furthermore, the star forming galaxies considered in this study and in \cite{Leitner12} are hosted by haloes that reside within a fairly narrow range of halo mass. If we ignore these issues, then qualitatively speaking, it is apparent that the shape of stellar mass assembly histories inferred by \cite{Behroozi13} and \cite{Yang13} are broadly consistent with what we and \cite{Leitner12} infer from the data, in that there is a rise with time towards a peak at some intermediate redshift before a fall towards the present day. \cite{Moster13} show qualitative agreement with this picture for $\log(M_{\mathrm{H}} / \mathrm{M_\odot}) = 12$ haloes, but find a constant rise from early to late times in the stellar mass assembly rates of galaxies that reside within haloes with $\log(M_{\mathrm{H}} / \mathrm{M_\odot}) = 11$. Finally, we also note that \cite{Pacifici13} find that the spectral energy distributions (SEDs) of massive star forming galaxies are well described by models that feature initially rising then declining star formation histories. However, for lower mass galaxies they find that the SEDs are best reproduced using star formation histories that monotonically rise towards the present day, in qualitative agreement with the results from \cite{Moster13}. However, their galaxy sample does not include any galaxies observed below $z=0.2$, corresponding to a lookback time of $t_{\mathrm{lb}} \approx 3 \, \mathrm{Gyr}$. It is therefore unclear whether their analysis disfavours a drop in the star formation rates of lower mass galaxies at late times. \subsection{Modifications to galaxy formation models} The parametrisations for star formation and feedback that are implemented in most galaxy formation models can reproduce the shape of the local luminosity and stellar mass functions. However, as observational data that characterises the evolution of the galaxy population has improved, it has now been demonstrated that either one or more of these parametrisations is inadequate or alternatively that another important physical process has been neglected in the models entirely. The assumption that the reincorporation timescale for ejected gas scales with the dynamical timescale of the host halo is common to various semi-analytic galaxy formation models \citep[e.g.][]{Bower06,Croton06,Somerville08,Lu11}. 
If the reincorporation timescale is set to exactly the dynamical timescale, the associated physical assumption is that ejected gas simply behaves in a ballistic manner, ignoring any possible hydrodynamical interaction between the ejected gas and the larger scale environment. In practice, these models (including ours) typically introduce a model parameter such that the reincorporation timescale is not exactly equal to the dynamical timescale, reflecting the considerable uncertainty on predicting this timescale. Nonetheless, the assumption that this uncertainty can be represented by a single parameter and that there is no additional scaling with other galaxy or halo properties is clearly naive. Comparison with hydrodynamical simulations will clearly be useful in this respect, provided that the reincorporation rates can be clearly defined and measured from the simulations and that the effect of the assumptions made in sub-grid feedback models can be understood. While we and \cite{Henriques13} show that a modification to the reincorporation timescale for gas ejected by feedback can be one solution, we could equally change the parametrisation for the mass loading factor, $\beta_{\mathrm{ml}}$, or the star formation law introduced in \cite{Lagos11a}. In this analysis, we found that the physically motivated parametrisation for the mass loading factor of SNe driven winds presented in \cite{Lagos13} fails to reconcile the model with the data. However, it should be noted that unlike the fiducial model we consider for this study, the supernova feedback model presented in \cite{Lagos13} relies heavily upon correctly predicting the evolution in the sizes of galaxies. In principle, if the predicted sizes evolved differently in our model, it is possible that using the \cite{Lagos13} supernova feedback model could help to reconcile model predictions for the stellar mass assembly histories of galaxies with the observational data. As for modifying the star formation law, the implementation used in \galform is derived from direct empirical constraints. Furthermore, changing the star formation law will have little impact on the stellar mass assembly histories of star forming galaxies as long as the characteristic halo accretion timescale is longer than the disk depletion timescale. Of course, an alternative to the physically motivated \cite{Lagos13} model is simply to implement an ad hoc modification to the mass loading, similar to that given by Eqn.~\ref{reincorporation_modified} for the reincorporation timescale. We note that by doing this, we find it is possible to produce a model that almost exactly matches the predictions made by the modified reincorporation model presented in this paper. It therefore suffers from the same problems as the modified reincorporation model in reproducing the observed evolution of the stellar mass function and the decline in the specific star formation rates of the most massive star forming galaxies at a given redshift. Many other suggestions for changing the stellar mass assembly histories predicted by models and simulations have been made recently in the literature, typically focusing on reducing the fraction of stars that form at high redshift. For example, \cite{Krumholz12} argue that early star formation is reduced once the dependence of star formation on metallicity is properly implemented in hydrodynamical simulations. 
\cite{Gabor14} suggest that if galaxies at high redshift accrete directly from cold streams of gas, the accreted gas injects turbulent energy into galaxy disks, increasing the vertical scaleheight and consequently lowering the star formation efficiency in these systems by factors of up to $3$. \cite{Lu14} demonstrate that if the circum-halo medium can be preheated at early times up to a certain entropy level, the accretion of baryons onto haloes can be delayed, reducing the amount of early star formation. Various authors \cite[e.g][]{Aumer13,Stinson13,Trujillo-Gomez13} find that implementing a coupling between the radiation emitted by young stars and the surrounding gas into their simulations can significantly reduce the levels of star formation in high redshift galaxies. \cite{Hopkins13a} and \cite{Hopkins13b} echo these findings and emphasise the highly non-linear nature of the problem once sufficient resolution is obtained to start resolving giant molecular cloud structures. They argue that radiative feedback is essential to disrupt dense star forming gas before SNe feedback comes into effect to heat and inject momentum into lower density gas, avoiding the overcooling problem as a result. It remains to be seen at this stage whether the emergent behaviour from such simulations, once averaged over an entire galaxy disk or bulge, can be captured in the parametrisations that are used in semi-analytic galaxy formation models. | 14 | 3 | 1403.1585 |
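To make the reincorporation-timescale discussion above concrete, the sketch below contrasts the standard dynamical-time scaling with a halo-mass-scaled timescale plus an extra redshift factor of the kind advocated in that paper. Only the $t_{\rm dyn}\sim 0.1/H(z)$ relation is standard; the normalisations, the $1/M_{\rm halo}$ pivot and the exponent $n$ are purely illustrative placeholders, not the fitted values of either that paper or Henriques et al. (2013):
\begin{verbatim}
import math

H0 = 70.0 / 3.086e19        # Hubble constant in s^-1 (70 km/s/Mpc, assumed)
OM, OL, YR = 0.3, 0.7, 3.156e7

def t_dyn(z):               # halo dynamical time ~ R_vir/V_vir ~ 0.1/H(z), in yr
    return 0.1 / (H0 * math.sqrt(OM * (1 + z)**3 + OL)) / YR

def t_reinc_standard(z, alpha_reinc=1.0):
    # standard choice: reincorporation tied to the halo dynamical time
    return t_dyn(z) / alpha_reinc

def t_reinc_modified(m_halo, z, t0=1.8e10, n=1.0):
    # ~1/M_halo scaling plus a factor lengthening t_reinc towards low z;
    # t0 and n are hypothetical, chosen only to illustrate the trend
    return t0 * (1e10 / m_halo) * (1 + z)**(-n)

for z in (0.0, 2.0, 5.0):
    print(z, "%.2e" % t_reinc_standard(z),
             "%.2e" % t_reinc_modified(1e11, z))
\end{verbatim}
The qualitative point is that the standard scaling shortens $t_{\rm reinc}$ at high redshift for all haloes alike, whereas the mass- and redshift-dependent form suppresses early star formation in low-mass haloes while delaying reincorporation at late times.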
1403 | 1403.1066_arXiv.txt | The star formation rates for the 230 nearest Markarian galaxies with radial velocities $V_{LG}<$3500 km/s have been determined from their far ultraviolet fluxes obtained with the GALEX satellite. We briefly discuss the observed relationship between the star formation rate and other integral parameters of these galaxies: stellar mass, hydrogen mass, morphological type, and activity index. On average, the Markarian galaxies have reserves of gas that are a factor of two smaller than those of field galaxies of the same stellar mass and type. Despite their elevated activity, the specific rate of star formation in the Markarian galaxies, $SFR/M_*$, does not exceed a limit of $\sim$dex(-9.4) [yr$^{-1}$]. {\em Keywords: galaxies: Markarian galaxies: star formation} | In 1963 B. E. Markarian published [1] a list of 41 galaxies for which a discrepancy had been observed between their color and morphological type, in the sense that the central parts of these galaxies emit bluer light than normal galaxies of the same Hubble type. Thus, it was proposed [1] that the emission from the nuclei of some galaxies is non-thermal in nature and should manifest itself as an excess of ultraviolet radiation ($UV$ excess) in the central parts. Markarian, later joined by colleagues, conducted a spectral survey of the northern sky in 1965-1981 with the 40-52$"$ Schmidt telescope at the Byurakan Observatory, using a low-dispersion prism. The result of these many years of work was the publication of 15 lists of galaxies with ultraviolet continua, which were then compiled into the First Byurakan Survey, a catalog of galaxies with a $UV$ continuum [2]. The observation process, the principles for the selection and classification of the objects, and the general characteristics of the Catalog are described in detail there. In the worldwide literature, the objects of the catalog [2] have come to be known as Markarian galaxies. A catalog of Markarian galaxies [3] was prepared almost simultaneously with the first survey [2]. Another version of the catalog of Markarian galaxies, supplemented with new observational data, has been published recently [4]. The results [2-4] can be summarized as follows: 1. The term “Markarian galaxies” encompasses galaxies of widely different morphological types -- elliptical, lenticular, spiral, blue compact, and irregular dwarf, as well as bright HII regions in spiral and irregular galaxies. A comparison of Markarian galaxies [2] with objects in other catalogs and lists shows that a substantial fraction of them can be identified with compact and post-eruptive galaxies [5,6], interacting Vorontsov-Velyaminov galaxies [7,8], and other peculiar objects. 2. Since almost all Markarian galaxies now have measured radial velocities, it is possible to compare their positions relative to known clusters, groups, and sparsely populated systems, all the way down to isolated galaxies. It turns out that the Markarian galaxies lie in systems with different multiplicities, while no more than 2\% of them are isolated galaxies. 3. Markarian galaxies manifest different degrees of nuclear activity. Their spectra include signs of quasars (QSO), Seyfert galaxies (Sy) of types 1, 2, and intermediate types, so-called “Wolf-Rayet” (WR) galaxies, as well as galaxies with starburst activity or with spectra similar to the spectra of HII regions. Some of the galaxies are characterized by ordinary emission spectra (e) and a very few have absorption spectra (a).
Note that the existing diagnostic diagrams make it possible to assign galaxies to one or another activity class in more than one way. The features of Markarian galaxies listed above explain their value for solving various problems related to the origin and internal evolution of galaxies, as well as the influence of interactions on active processes in galaxies. Observational data on a large number of Markarian galaxies can be found in the citations given in the catalogs [2-4], as well as in conference proceedings [9] and other papers. Modern surveys make it possible to analyze different properties of Markarian galaxies in bulk. We have made use [10] of the far ultraviolet ({\em FUV}) fluxes obtained with the GALEX satellite [11] to determine the star formation rate in nearby isolated galaxies from the LOG catalog [12]. Here we examine the star formation characteristics of Markarian galaxies contained within the same volume of the Local Supercluster and its surroundings as the LOG galaxies and compare the two samples. | Numerous observations have established that star formation in galaxies in past epochs ($z \geq 1$) was an order of magnitude more intense than in the contemporary epoch ($z <$ 0.1) [27-29]. At present the major processes for the conversion of gas into stars take place in the disks of spiral and irregular galaxies. The distinctive feature of star formation in disks is its protracted time scale, $(\dot M_{*}/M_{*})^{-1} = SSFR^{-1} \sim 10^{10}$ yr, which is comparable to the age of the universe, $T=H_{0}^{-1}=1.37\cdot 10^{10}$ yr. The reason for the slow rate of star formation in disks is probably the existence of a tight feedback in this process, in which an excessively high rate of formation of young hot stars suppresses further star formation or even entirely exhausts the reserves of neutral gas. In an analysis of star formation in approximately 600 galaxies of the Local volume with measurements of $H_\alpha$ and FUV fluxes, Karachentsev and Kaisina [23] noted the existence of an upper bound $\lim(\log SSFR) = -9.4$ [yr$^{-1}$] which encompasses all the galaxies within a volume of radius 10 Mpc. Karachentsev et al. [10] determined the star formation rate of 520 especially isolated galaxies in the volume of the Local Supercluster of radius $\sim$50 Mpc and also noticed the existence of this upper bound on $\log SSFR$. This fact might seem trivial, since the evolution of isolated galaxies proceeds without the significant tidal influence from neighbors that provokes star formation outbursts. Nevertheless, as we have shown in this paper, the same upper bound on the star formation rate holds for active objects, i.e., Markarian galaxies. It should be noted that the large GAMA sample (N$\sim$70000) includes objects with specific star formation rates $\log SSFR\sim$-8.5 [yr$^{-1}$] [30-32]. In samples of galaxies from the ALFALFA survey [33] and galaxies with especially low metallicity [34] it is still possible to find galaxies with extreme values of $\log SSFR \sim [-8.0,-7.5]$. We assume, however, that these cases are artifacts arising from a large underestimate of the stellar masses of these galaxies in the photometric data from the automated SDSS sky survey. In a study of the galaxies in the Local volume, Johnson et al. [35] found no objects with a specific star formation rate exceeding $\log SSFR = -9.2$. A similar limit was found by Gavazzi et al.
[36] for galaxies from the ALFALFA survey in the region of the “Great Wall,” and in [37] for satellites surrounding massive galaxies of the same type as the Milky Way. Clearly, verification of the cases with anomalously high estimates of the specific star formation rate, and confirmation of an upper bound on $SSFR$, will make it possible to better understand aspects of the conversion of gas into stars. In this regard, we plan to extend the approach used in this paper to all the objects in the Markarian catalog. This work was supported by grants RFFI 13-02-90407-Ukr-f-a, GFFI (Ukraine) F53.2/15, and RFFI 12-02-91338-NNIO. We have used the databases HyperLEDA (http://leda.univ-lyon1.fr), NED (http://nedwww.ipac.caltech.edu), and SDSS (http://sdss.eso.org), as well as data from the Galaxy Evolution Explorer satellite (GALEX). | 14 | 3 | 1403.1066
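As a closing numerical note on the $SSFR$ bound discussed above, the quoted limit translates directly into a minimum stellar-mass-doubling timescale; this sketch uses only values quoted in the text:
\begin{verbatim}
# The bound lim(log SSFR) = -9.4 [1/yr] as a minimum mass-doubling time
t_min = 10 ** 9.4                  # yr, = 1/SSFR_max
T_H = 1.37e10                      # yr, the Hubble time H0^-1 quoted above
print("%.1f Gyr, i.e. %.0f%% of 1/H0" % (t_min / 1e9, 100 * t_min / T_H))
# -> 2.5 Gyr (~18% of the Hubble time); the disputed outliers at
#    log SSFR ~ -8.0 would instead imply doubling times of only ~0.1 Gyr.
\end{verbatim}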
1403 | 1403.6470_arXiv.txt | We investigate the gas-phase metallicity and Lyman Continuum (LyC) escape fraction of a strongly gravitationally lensed, extreme emission-line galaxy at $z=3.417$, $J1000+0221S$, recently discovered by the CANDELS team. We derive ionization- and metallicity-sensitive emission-line ratios from H+K band LBT/LUCI medium resolution spectroscopy. $J1000+0221S$ shows high ionization conditions, as evidenced by its enhanced [\oiii]/[\oii] and [\oiii]/\hb\ ratios. Consistently, strong-line methods based on the available line ratios suggest that $J1000+0221S$ is an extremely metal-poor galaxy, with a metallicity of {12$+\log$(O/H)$<$7.44} ($Z$\,$<$\,0.05\,$Z_{\odot}$), placing it among the most metal-poor star-forming galaxies at $z\ga$\,3 discovered so far. In combination with its low stellar mass (2$\times$10$^{8}$\,M$_{\odot}$) and high star formation rate (5\,M$_{\odot}$\,yr$^{-1}$), the metallicity of $J1000+0221S$ is consistent with the extrapolation to low masses of the mass-metallicity relation traced by Lyman-break galaxies at $z\ga$\,3, but it is {0.55} dex lower than predicted by the fundamental metallicity relation at $z\la$\,2.5. These observations suggest the picture of a rapidly growing galaxy, possibly fed by the massive accretion of pristine gas. Additionally, deep LBT/LBC imaging in the \textit{UGR} bands is used to derive a limit to the LyC escape fraction, thus allowing us to explore for the first time the regime of sub-$L^{*}$ galaxies at $z>$\,3. We find a 1$\sigma$ upper limit to the escape fraction of 23\%, which adds a new observational constraint to recent theoretical models predicting that sub-$L^{*}$ galaxies at high-$z$ have high escape fractions and thus are responsible for the reionization of the Universe. | \label{s1} Recently, \citet[][hereafter vdW13]{vanderWel2013} presented the serendipitous discovery of the first strong galaxy lens at $z_{\rm lens}>$\,1, $J100018.47+022138.74$. This quadrupole lens system was found in the COSMOS field covered by the CANDELS \citep{Grogin2011,Koekemoer2011} survey. Using \textit{Hubble Space Telescope} (HST) near-infrared (NIR) imaging from CANDELS and NIR spectroscopy from the Large Binocular Telescope (LBT), the authors reported a record lens redshift $z_{L}=$\,1.53\,$\pm$\,0.09 and a strongly magnified (40x) source at redshift $z_S=$\,3.417\,$\pm$\,0.001 (hereafter $J1000+0221S$). While the lens is a quiescent and relatively massive galaxy, the magnified source was found to be a low-mass (M$_{\star} \sim$\,10$^{8}$\,M${_{\odot}}$), extreme emission-line galaxy (EELG) with an unusually high rest-frame [\oiii]$\lambda$\,5007\AA\ equivalent width ($EW_0 \sim 1000$\AA). The scarcity of strong galaxy lenses at high redshift makes the discovery of $J100018.47+022138.74$ especially remarkable. Strikingly, the probability of finding an EELG lensed by another galaxy appears to be very low, unless these galaxies become significantly more abundant at high-$z$ (vdW13). Consistently, a large number of low-mass EELGs at $z\sim$\,2 have started to emerge from deep surveys \citep[e.g.][]{vanderWel2011,Atek2011,Guaita2013,Maseda2013,Maseda2014} and recent observational evidence points to their ubiquity at $z\sim$\,5-7 \citep{Smit2013}.
Low-mass galaxies with extreme nebular content at lower redshift are mostly chemically unevolved systems, characterized by their compactness, high \textit{specific} star formation rates (SFR), high ionization and low metallicities, which make them lie offset from the main sequence of galaxies in fundamental scaling relations between mass, metallicity and SFR \citep[e.g.][]{Amorin2010,Atek2011,Nakajima2013,Ly2014,Amorin2014b}. At high redshift, however, the full characterization of these properties in intrinsically faint galaxies requires an enormous observational effort \citep[e.g.][]{Erb2010,Maseda2013} and detailed studies are mostly restricted to those sources subject to strong magnification by gravitational lensing \citep[e.g.][]{Fosbury2003,Richard2011,Christensen2012,Brammer2012,Belli2013,Wuyts2012}. The aim of this \textit{Letter} is to fully characterize the lensed EELG $J1000+0221S$ at $z=3.417$ presented by vdW13. This unique galaxy will serve to investigate two key issues. Using the deepest available LBT photometry and spectroscopy, we will first derive robust estimates of the ionization and metallicity properties of $J1000+0221S$ through strong emission line ratios. This provides additional hints on the evolutionary stage of the galaxy and allows us to place new constraints on the low-mass end of the mass-metallicity-SFR relation at $z\sim$\,3.4. Finally, $J1000+0221S$ will offer the opportunity to derive, for the first time, a limit to the Lyman Continuum (LyC) escape fraction at $z>3$ in the sub-$L^*$ regime, as suggested by \citet{vanzella12}. \begin{table}[t!] \caption{Main derived properties of $J100018.47+022138.74$} \label{Tab1} \centering \begin{tabular}{lc | lc } \hline\hline \noalign{\smallskip} $RA$ ($J2000$) & 150.07697 & $z$ & 3.417 \\[3pt] $DEC$ ($J2000$) & $+$2.36076& [\oii]$\lambda\lambda$\,3727,3729/\hb &$<$\,0.30 \\[3pt] $M_{\rm B}$ & -17.8$\pm$0.3 & [\oiii]$\lambda$\,4959/\hb & 1.44$\pm$1.35 \\[3pt] $E(B-V)_{\star}$ & $0.0\substack{+0.2 \\ -0.0}$ & [\oiii]$\lambda$\,5007/\hb & 4.47$\pm$1.25 \\[3pt] $\log$\,M$_{\star}$ [M$_{\odot}$] & $8.41\substack{+0.25 \\ -0.30}$ & [\neiii]$\lambda$\,3868/\hb &$<$\,0.20 \\[3pt] SFR [M$_{\odot}$\,yr$^{-1}$] & 5$\pm$2 & $12+\log({\rm O/H})$ & $<$\,$7.44\substack{+0.20 \\ -0.17}$ \\[3pt] \noalign{\smallskip} \hline \hline \end{tabular} \begin{list}{}{} \item Notes: {$B$-band absolute magnitude, stellar reddening, star formation rate and stellar mass were derived from the SED fitting after correction for magnification \citep{vanderWel2013}. Line fluxes are given relative to F(\hb)=1.} \end{list} \end{table} | \label{s4} \subsection{The low-mass end of the mass metallicity relation at z$\sim$3.4} \label{s4.1}
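Before discussing these relations in detail, the FMR offset quoted in the abstract can be checked from the values in Table~\ref{Tab1}. This sketch assumes the low-mass extension of the FMR from \citet{Mannucci2011}, $12+\log({\rm O/H}) = 8.93 + 0.51\,(\mu_{32}-10)$ for $\mu_{32}<9.5$, with $\mu_{32}=\log M_{\star}-0.32\log SFR$; that relation is an input of this check, not restated elsewhere in the text:
\begin{verbatim}
import math

log_mstar = 8.41            # log(M*/Msun), Table 1
sfr = 5.0                   # Msun/yr, Table 1
z_obs = 7.44                # upper limit on 12+log(O/H), Table 1

mu32 = log_mstar - 0.32 * math.log10(sfr)
z_fmr = 8.93 + 0.51 * (mu32 - 10.0)
print("mu32 = %.2f, FMR prediction = %.2f, offset >= %.2f dex"
      % (mu32, z_fmr, z_fmr - z_obs))
# -> mu32 = 8.19, prediction = 8.01, offset >= 0.57 dex, consistent
#    with the ~0.55 dex deficit quoted (small differences reflect
#    rounding in the adopted relation).
\end{verbatim}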
{In Figure~\ref{fig:MZR} we use our metallicity limit and the stellar mass\footnote{{Stellar masses have been derived following \citet{Finkelstein2012} by fitting models accounting for nebular (line plus continuum) emission to the lens-subtracted SED of the source in four HST bands, which correspond to the rest-frame UV. Photometry at longer wavelengths (rest-frame optical) was not included as the source plus lens emission could not be deblended. However, we note that the sum of the best-model luminosities in the rest-frame optical for lens and source appears consistent, being only slightly lower than the total observed luminosities after de-magnification (see Fig.~3 in vdW13).}} derived by vdW13 to show the good agreement found between the position of $J1000+0221S$ and the extrapolation to low stellar masses of the MZR traced by more massive LBGs at $z \ga$\,3.} Compared to the other few galaxies of similar or slightly higher masses and redshift, the upper limit in metallicity of $J1000+0221S$ is $\sim$\,0.5 dex lower. The scatter and normalization of the MZR at low-$z$ have been associated with the star formation activity and with the presence of intense gas flows in a tight relation between mass, metallicity and SFR, the so-called ``Fundamental Metallicity Relation'' \citep[FMR, ][]{Mannucci2010}. According to the FMR, at a given stellar mass, galaxies with higher SFR have lower metallicities \citep[see also][]{Perez-Montero2013}. In contrast to the MZR, the FMR has been found to persist in galaxies out to redshift $z \sim$\,2.5 \citep[][]{Mannucci2011,Belli2013}. However, at $z\ga$\,3 most LBGs studied so far, e.g. those in the AMAZE and LSD samples, are found to be more metal-poor than predicted by the FMR. This may suggest a change in the mechanisms giving rise to the FMR or a strong selection effect at these redshifts \citep{Troncoso2013}. Alternatively, it may suggest that metallicity calibrations based on local galaxies may not apply to high-$z$ galaxies due to their comparatively higher excitation/ionization conditions \citep{Kewley2013}. Also, recent studies of local galaxies using integral field spectroscopy have questioned the validity of the FMR, attributing it to aperture biases \citep{Sanchez2013}. In Fig.~\ref{fig:MZR} we reproduce the results by \citet{Troncoso2013} {for the FMR} and include the position of $J1000+0221S$ {using the extrapolation of the FMR to low stellar masses by \citet{Mannucci2011}.} Clearly, the metallicity of $J1000+0221S$ is at least 0.55 dex lower than predicted by the FMR. The very low metallicity, high specific SFR ($\sim$\,10$^{-8}$\,yr$^{-1}$) and extremely high EWs of $J1000+0221S$ are indicative of a rapidly growing galaxy in an early stage of its evolution. The offsets from the local MZR and FMR, of at least 1 dex and 0.5 dex respectively, suggest the action of massive gas flows \citep[e.g.][]{Dayal2013}. One interesting possibility is that the recent star formation in $J1000+0221S$ is being fed by massive accretion of pristine gas in the cold-flow mode, predicted by cosmological simulations to be the main mode of galaxy build-up at these redshifts \citep[e.g.][]{Dekel2009} and supported by observational evidence in some low-metallicity starbursts \citep{Cresci2010,Sanchez-Almeida2014}. \begin{figure}[t!]
\centering
\includegraphics[angle=90,width=8.7cm]{fig3a_rev.eps} \\
\includegraphics[angle=90,width=8.7cm]{fig3b_rev.eps}
\caption{\textit{Upper panel:} Mass-metallicity relation at $z\geq$\,3. \textit{Bottom panel:} deviation in metallicity ($\Delta$\,FMR$=$\,$Z_{\rm obs}-Z_{\rm FMR}$) from the FMR \citep{Mannucci2011} as a function of $\mu_{32}$. Dotted, dashed, dot-dashed and solid lines in the upper panel show the MZR at $z$\,$\sim$\,0.07, 0.7, 2.2 and 3.4 presented by \citet{Troncoso2013}. The solid line in the lower panel indicates a perfect match between the measured metallicities and the ones derived through the FMR. LBGs from \citet[][triangles]{Troncoso2013} and the low-mass lensed galaxy ``The Sextet'' at $z=3.04$ from \citet[][blue cross]{Belli2013} are shown for comparison.}
\label{fig:MZR}
\end{figure}

\subsection{The first estimate of the Lyman continuum escape fraction in the very low luminosity regime $L=0.04 L^{\ast}$}

The discovery of $J1000+0221S$ also offers the opportunity to derive one of the first limits on the LyC escape fraction at $z>3$ in a regime of very low intrinsic luminosity ($L \sim$\,0.05$L^{*}$), as suggested by \citet{vanzella12}. The photometry of this object has been derived by \citet{boutsia13} from deep LBC data in the $UGR$ filters used in that work to search for LBGs at $2.7\le z\le 3.4$. This lensed galaxy has AB magnitudes $R=24.15\pm 0.02$, $G=25.22\pm 0.04$ and an upper limit $U\ge 28.93$ at 1$\sigma$. Note that this galaxy was not selected as an LBG candidate in \citet{boutsia13} because of its color $G-R=1.07$, which is slightly redder than the typical color cut $G-R\le 1$ adopted. This is due to the fact that, as shown in vdW13, the SED of this galaxy is contaminated by the lens galaxy, an elliptical at $z=1.53$.

Using the LBC photometry alone, we can derive a limit on the LyC escape fraction for the lensed galaxy, adopting the same technique used in \citet{boutsia11}. To remove the light contamination by the elliptical at $z=1.53$, we have checked the photometry in the ACS band $V_{606}$, which is the closest HST band to our $R$ filter. The total magnitude of the lens+source system is $V_{606}=24.15$, equal to our $R$-band magnitude. Thus we can safely assume that the contribution of the lens corresponds to $R=26.4$ and that the corrected magnitude of $J1000+0221S$ is $R=24.3$. Adopting the upper limit in the $U$ band as an estimate of the maximum flux emitted by the lensed source and using the corrected $R$-band flux, we derive the relative escape fraction simply as
\begin{equation}
f^{\rm rel}_{\rm esc}=\frac{L_{1500}/L_{900}}{{\rm flux}_R/{\rm flux}_U}\,\exp(\tau_{\rm IGM}) \, .
\end{equation}
As in \cite{boutsia11}, we adopt a value of 3 as an estimate of the intrinsic ratio $L_{1500}/L_{900}$. Following \citet{Prochaska2009}, we derive a correction for the IGM transmission of $\exp(-\tau_{\rm IGM})=0.1811$ at the redshift of the source, $z=3.417$. We thus obtain an upper limit on the escape fraction of 23.2\% at the 1$\sigma$ confidence level. While this limit is less stringent than other estimates in the literature at $z=3$ \citep[e.g.][]{boutsia11,vanzella10,mostardi13}, it is nonetheless important since we are probing an intrinsic luminosity regime unexplored before this work. The source magnitude corrected for lensing is $R=28.3$, corresponding to an absolute magnitude of $-17.4$, or equivalently to $L_{1500}=0.036 L^{\ast}(z=3)$, assuming a typical value $L^{\ast}(z=3)=-21.0$.
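All the ingredients of Eq.~(1) are quoted above, so the limit can be reproduced in a few lines of Python. The sketch below is illustrative only; small rounding differences in the magnitudes or in the IGM transmission shift the result at the level of a few tenths of a percent.
\begin{verbatim}
# LBC photometry of the lensed source.
R_mag = 24.3            # lens-subtracted R-band magnitude
U_lim = 28.93           # 1-sigma upper limit in U

# Non-ionizing to ionizing observed flux ratio from the magnitudes.
flux_ratio = 10**(-0.4 * (R_mag - U_lim))    # ~71

L1500_L900 = 3.0        # intrinsic ratio adopted from boutsia11
T_igm = 0.1811          # IGM transmission exp(-tau) at z = 3.417

f_esc = (L1500_L900 / flux_ratio) / T_igm
print(f"f_esc(rel) < {100 * f_esc:.1f}%")    # ~23% (text quotes 23.2%)

# De-magnified magnitudes: an intrinsic R = 28.3 implies ~4 mag of
# magnification, so the U limit probes an intrinsic depth of ~32.9 mag.
dm = 28.3 - R_mag
print(f"intrinsic U limit = {U_lim + dm:.1f} mag")
\end{verbatim}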
This is equivalent to a depth of 32.9 magnitudes in the $U$ band, after correcting our upper limit for the magnification factor.

A number of theoretical models \citep[e.g.][]{Nakajima2013,paard13,ferrara13,Dijkstra2014} investigate the processes at work at the end of the so-called ``Dark Ages''. The common thread in all these models is that the sources responsible for re-ionizing the Universe are dwarf galaxies (sub-$L^{\ast}$) at $z\sim z_{reion}\sim 7$. Some of these works also assume LyC escape fractions greater than 30-50\% even at lower redshifts \citep[e.g.][]{Nakajima2013,Dijkstra2014}. Because IGM absorption increases with redshift, the LyC escape fraction can only be directly measured up to $z\sim 3$. For this reason our limit on $f_{\rm esc}$ from an ultra-faint $z=3.4$ galaxy provides an interesting input to reionization model predictions, under the assumption that faint galaxies such as $J1000+0221S$ are representative of the whole faint galaxy population at $z>3$. In fact, our 23\% limit is at the lowest boundary of the range $f_{\rm esc}\sim20-30$\% which, according to recent observational evidence \citep{Finkelstein2012,Grazian2012}, allows star-forming galaxies to keep the IGM ionized at $z>6$.

Our current results for $J1000+0221S$ highlight the power of strong lensing techniques for studying the properties of low-mass star-forming galaxies at $z\ga$\,3. If, as expected, analogs of $J1000+0221S$ are ubiquitous at higher redshifts, forthcoming data from the CANDELS survey and the HST Frontier Fields Initiative, together with their spectroscopic follow-ups, will likely provide the statistically significant number of such systems needed to derive more stringent limits on the escape fraction of LyC photons from ultra-faint galaxies and to study in greater detail the mechanisms driving the early phases of galaxy formation. | 14 | 3 | 1403.6470
1403 | 1403.6745.txt | The high luminosity of Very Massive Stars (VMS) means that radiative forces play an important, dynamical role both in the structure and stability of their stellar envelope, and in driving strong stellar-wind mass loss. Focusing on the interplay of radiative flux and opacity, with emphasis on key distinctions between continuum vs.\ line opacity, this chapter reviews instabilities in the envelopes and winds of VMS. Specifically, we discuss how: 1) the iron opacity bump can induce an extensive inflation of the stellar envelope; 2) the density dependence of the mean opacity leads to strange-mode instabilities in the outer envelope; 3) desaturation of line opacity by acceleration of near-surface layers initiates and sustains a line-driven stellar wind outflow; 4) an associated line-deshadowing instability leads to extensive small-scale structure in the outer regions of such line-driven winds; 5) a star with super-Eddington luminosity can develop extensive atmospheric structure from photon-bubble instabilities, or from stagnation of flow that exceeds the ``photon tiring'' limit; 6) the associated porosity leads to a reduction in opacity that can regulate the extreme mass loss of such continuum-driven winds. Two overall themes are the potential links of such instabilities to Luminous Blue Variable (LBV) stars, and the potential role of radiation forces in establishing the upper mass limit of VMS. | An overall theme of this chapter is that, because of their very high luminosity, radiative forces play an important, dynamical role in the stability of the envelopes and winds of VMS. A key issue is the nature of the opacity that links the radiation to the gas, and in particular the distinction between line vs.\ continuum processes.
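Before turning to that distinction in detail, it helps to fix the scale of the radiative forces at play. The short script below evaluates the electron-scattering Eddington ratio and the photon-tiring mass-loss bound $\dot{M}_{\rm tir} \approx L R / (G M)$ discussed later in the chapter; the stellar parameters and the opacity $\kappa_e = 0.34$~cm$^2$\,g$^{-1}$ (solar-like hydrogen abundance) are illustrative assumptions, not fits to any particular star.
\begin{verbatim}
import numpy as np

G, c = 6.674e-8, 2.998e10                   # cgs
Msun, Lsun, Rsun = 1.989e33, 3.846e33, 6.957e10
kappa_e = 0.34                              # cm^2/g, electron scattering

def gamma_edd(L, M):
    """Eddington ratio for electron scattering; L, M in solar units."""
    return kappa_e * L * Lsun / (4 * np.pi * G * M * Msun * c)

def mdot_tiring(L, M, R):
    """Photon-tiring limit L*R/(G*M) in Msun/yr (terminal speed ~ 0)."""
    mdot = L * Lsun * R * Rsun / (G * M * Msun)     # g/s
    return mdot * 3.156e7 / Msun

# An illustrative very massive star near the Eddington limit ...
print(f"Gamma_e = {gamma_edd(3e6, 100):.2f}")                  # ~0.8
# ... and an illustrative super-Eddington eruption state.
print(f"Mdot_tir ~ {mdot_tiring(1e7, 100, 100):.1f} Msun/yr")  # ~0.3
\end{verbatim}
The point of these placeholder numbers is only that $\Gamma_e$ of order unity, and tiring-limited rates within an order of magnitude of a solar mass per year, arise naturally for VMS parameters.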
Line opacity can in principle be much stronger, but in the stellar envelope the saturation of the radiative flux within the line means that the flux-weighted line force depends on an inverse or harmonic mean (a.k.a.\ Rosseland mean). This only becomes moderately strong (a factor of ten above electron scattering) in regions of strong line overlap, most particularly the so-called iron bump near 150,000~K. This iron bump can cause a strong, even runaway, inflation of the stellar envelope, leading to an iron-bump Eddington limit that might be associated with S Doradus-type LBVs. In near-surface layers, the desaturation of the lines leads to a much stronger line force that drives a strong stellar wind, with a well-defined mass-loss rate regulated by the level of line saturation at the sonic-point base. Away from the wind base, there develops a strong ``line-deshadowing instability'' that induces extensive clumping and associated porosity in the outer wind. For stars that exceed the classical Eddington limit, much stronger mass loss can be driven by the continuum opacity, even approaching the ``photon tiring'' limit, in which the full stellar energy flux is expended to lift and accelerate the mass outflow. A key issue here is the regulation of the continuum driving by the porosity that develops from instability and flow stagnation of the underlying stellar envelope. For a simple power-law model of the porous structure, the derived mass-loss rates seem capable of explaining the giant-eruption LBVs, including the 1840s eruption of $\eta$~Carinae. Two key remaining issues are the cause of the super-Eddington luminosity, and whether the response might be better modeled as an explosion vs.\ a quasi-steady mass-loss eruption.

\begin{acknowledgement}
This work was supported in part by NASA ATP grant NNX11AC40G, NASA Chandra grant TM3-14001A, and NSF grant 1312898 to the University of Delaware. I thank M. Giannotti for sharing his Mathematica notebook for the OPAL opacity tables, and N. Shaviv for many helpful discussions and for providing figure 12. I also acknowledge numerous discussions with G. Graefener, N. Smith, J. Sundqvist, J. Vink and A.J. van Marle.
\end{acknowledgement} | 14 | 3 | 1403.6745
|
1403 | 1403.1299_arXiv.txt | The Large Underground Xenon (LUX) dark matter experiment aims to detect rare low-energy interactions from Weakly Interacting Massive Particles (WIMPs). The radiogenic backgrounds in the LUX detector have been measured and compared with Monte Carlo simulation. Measurements of LUX high-energy data have provided direct constraints on all background sources contributing to the background model. The expected background rate from the background model for the 85.3 day WIMP search run is $(2.6\pm0.2_{\textrm{stat}}\pm0.4_{\textrm{sys}})\times10^{-3}$~events~keV$_{ee}^{-1}$~kg$^{-1}$~day$^{-1}$ in a 118~kg fiducial volume. The observed background rate is $(3.6\pm0.4_{\textrm{stat}})\times10^{-3}$~events~keV$_{ee}^{-1}$~kg$^{-1}$~day$^{-1}$, consistent with model projections. The expectation for the radiogenic background in a subsequent one-year run is presented. | The LUX experiment \cite{LUXNIM,LUXPRL} uses 370~kg of liquid Xe to search for nuclear recoil (NR) signatures from WIMP dark matter \cite{Blumenthal1984,Davis1985,Clowe2006}. The LUX detector reconstructs event energy, position, and recoil type through its collection of scintillation (S1) and electroluminescence (S2) signals. LUX seeks sensitivity to rare WIMP interactions at energies on the order of several keV. The extremely low WIMP interaction rate necessitates precise control of background event rates in the detector.

A particle that produces a WIMP search background in LUX must mimic a WIMP signature in several ways. WIMPs are expected to interact with Xe nuclei in the active region, creating an NR event. WIMP interactions will be single-scatter (SS) events, distributed homogeneously in the active region. The LUX WIMP search energy window is defined in the range 3.4--25~keV$_{nr}$, where the ``nr'' subscript denotes that the energy was deposited by a nuclear recoil \cite{LUXPRL}. This window captures 80\% of all WIMP interactions, assuming a WIMP mass of 100~GeV and standard galactic dark matter halo parameters as described in \cite{LUXPRL}.

The dominant background in the LUX WIMP search, which principally constrains the experimental sensitivity published for the 85.3~day run \cite{LUXPRL}, is low-energy electron recoil (ER) signatures in the Xe target. These events are generated through electromagnetic interactions from photons or electrons. The energy window for ER events differs from that of NR events due to differences in scintillation and ionization yield for each type of event. The 3.4--25~keV$_{nr}$ NR energy range has an S1 yield range equivalent to 0.9--5.3~keV$_{ee}$, where the ``ee'' subscript denotes an energy calibration for ER events. The ER energy range 0.9--5.3~keV$_{ee}$ is therefore taken as the WIMP search background range for ER events.

ER events are created mainly by $\gamma$~rays interacting in the 250~kg active volume. Gamma~rays are generated from the decay of radioisotope impurities in detector construction materials, with typical energies ranging from $\sim$100~keV to several~MeV. The dense liquid Xe target (2.9~g~cm$^{-3}$) attenuates $\gamma$~rays of these energies at the outer edge of the active region, with a mean free path on the order of several cm. Gamma~rays generated outside of the detector are suppressed below significance by the use of a 300~tonne water shield and a 20~tonne external steel shield. The total water shielding thickness on all sides is $>$2.5~meters~water~equivalent (m.w.e.).
ER events are also generated by radioisotope decays within the Xe target itself. These isotopes are referred to as ``intrinsic.'' Intrinsic isotopes generate $\beta$~rays or X-rays that are fully absorbed within mm of the decay site. These isotopes are thoroughly mixed by convection and diffusion, and are distributed homogeneously in the active region. The energies of these $\beta$~rays or X-rays can fall within the 0.9--5.3~keV$_{ee}$ WIMP search energy range.

A subdominant background is expected from NR signatures from neutron scatters. Neutrons are generated internally in the detector through ($\alpha$,n) interactions in construction materials, and from spontaneous fission of $^{238}$U. These neutrons are generated with energies on the scale of MeV, with a mean free path of order 10~cm in liquid Xe. Neutrons are also generated from muon interactions in the laboratory and water shield. Muon-induced neutrons have energies at the GeV scale, with a mean free path in liquid Xe much longer than the size of the detector.

LUX uses S1 and S2 signal characteristics for multiple background rejection techniques. Scattering vertex positions in the detector are reconstructed with cm accuracy in XY, and mm accuracy in Z. This allows rejection of multiple scatter (MS) events, and enables the use of an inner fiducial region in which to conduct the WIMP search. The fiducial region excludes background events at the detector edges and maximizes WIMP signal-to-noise. Due to the limited $\gamma$~ray mean free path, together with the detector dimensions of 54~cm in height and 49~cm in diameter and the use of an inner fiducial volume, the number of single-scatter $\gamma$~rays passing through the fiducial volume is four orders of magnitude less than the number of $\gamma$~rays with shallow penetration. The ratio of S2 to S1 also provides 99.6\%~discrimination against ER events on average over the WIMP search energy range.

This work details modeling and measurements of the LUX background rate from both electromagnetic and neutron sources. Monte Carlo simulation studies of all background components and direct measurement of signatures of these components in data are described in Sec.~\ref{sec:Background-Modeling}. The characterization of ER background rejection using the S2/S1 ratio is described in Sec.~\ref{sec:ER-NR-Disc}. Comparison of expected and measured low-energy background spectra is described in Sec.~\ref{sec:Comparison-with-lowE-data}.
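Throughout what follows, rates quoted in mDRU$_{ee}$ ($10^{-3}$~events~keV$_{ee}^{-1}$~kg$^{-1}$~day$^{-1}$) translate into expected event counts through simple exposure bookkeeping. A minimal sketch with this run's numbers, where the 0.4\% leakage is the complement of the 99.6\% ER discrimination quoted above:
\begin{verbatim}
# Exposure and analysis window for the 85.3 day WIMP search run.
rate_mdru = 3.6e-3        # measured ER rate, events/keVee/kg/day
mass_kg = 118.0           # fiducial mass
live_days = 85.3
window_kevee = 5.3 - 0.9  # ER-equivalent WIMP search window

er_events = rate_mdru * mass_kg * live_days * window_kevee
leakage = er_events * (1.0 - 0.996)   # events below the NR centroid

print(f"ER events in window: {er_events:.0f}")              # ~160
print(f"projected NR-like leakage: {leakage:.2f} events")   # ~0.64
\end{verbatim}
The 0.64 leakage events quoted in the conclusions below follow from exactly this bookkeeping.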
Cosmogenic production of $^{60}$Co in Cu contributes a $\gamma$~ray rate three times higher than expected based on initial screening results. Measurements of $\alpha$~particle energy depositions in the detector provide a model for radon daughter decays in the fiducial volume. Alpha decay rates, combined with high-energy spectrum measurements, provide a constraint on $^{214}$Pb rates within a factor of 2. $^{85}$Kr backgrounds are calculated from direct measurements of $^{\textrm{nat}}$Kr in LUX Xe.

The LUX 85.3~day WIMP search run background rate was elevated above expectations due to the presence of cosmogenically produced $^{127}$Xe. This isotope creates a low-energy ER background through the coincidence of low-energy X-ray generation and high-energy $\gamma$~ray de-excitation, where the $\gamma$~ray escapes detection by leaving the active region. This isotope decays with a 36~day half-life, and contributes an extra 0.5~mDRU$_{ee}$ to the 85.3~day WIMP search run backgrounds. The backgrounds generated by this isotope will not be present in future dark matter search runs.

Neutron emission rates from ($\alpha$,n) reactions, $^{238}$U fission, and high-energy muon interactions are predicted to create a subdominant NR background in LUX. A search was performed for low-energy MS events in the detector, as such events would be a signature of neutron scattering. No NR-like MS events below the 50\% NR acceptance mean were found during the 85.3~day run, consistent with predicted neutron emission rates. Neutron scatter rates within the WIMP search fiducial and energy regions are projected to be comparable between internal and external sources.

The ER S2/S1 band was characterized by high-statistics $^3$H calibration. The measured ER discrimination factor in LUX is 99.6\%, where NR events are characterized as falling below the NR S2/S1 band centroid. Measured low-energy background rates are within 1$\sigma$ of expectation. An additional transient background during the first half of the WIMP search run was measured, in excess of expectations from $^{127}$Xe. The average background rate during the WIMP search run was $3.6\pm0.4$~mDRU$_{ee}$. A total of 0.64~events are projected to fall below the NR centroid in the 85.3~day WIMP search data set, based on measured ER rates. One event was observed at the NR centroid, with none falling below. The data taken during the 85.3~day run show an overall agreement with the background-only model, with a p~value of 35\%.

The projected background rate for the 2014 one-year $\times$ 100~kg WIMP search run is $1.7\pm0.3$~mDRU$_{ee}$. The projected one-year run background rate is reduced by 55\% relative to the 85.3~day rate due to the decay of all transient backgrounds, as well as the use of a smaller fiducial volume. Further reductions in background are expected in particular from optimization of the shape of the fiducial volume to minimize position-dependent background contributions. The model predicts a strong WIMP discovery potential for LUX for the upcoming one-year WIMP search run. | 14 | 3 | 1403.1299
1403 | 1403.1250_arXiv.txt | The middle-aged supernova remnant (SNR) W44 has recently attracted attention because of its relevance regarding the origin of Galactic cosmic rays. The gamma-ray missions AGILE and Fermi have established, for the first time for an SNR, the spectral continuum below 200 MeV, which can be attributed to neutral pion emission. Confirming the hadronic origin of the gamma-ray emission near 100 MeV is then of the greatest importance. Our paper is focused on a global re-assessment of all available data and models of particle acceleration in W44, with the goal of determining on firm ground the hadronic and leptonic contributions to the overall spectrum. We also present new gamma-ray and CO NANTEN2 data on W44, and compare them with recently published AGILE and Fermi data. Our analysis strengthens previous studies and observations of the W44 complex environment and provides new information for a more detailed modeling. In particular, we determine that the average gas density of the regions emitting 100 MeV--10 GeV gamma-rays is relatively high ($n \sim 250-300$ cm$^{-3}$). The hadronic interpretation of the gamma-ray spectrum of W44 is viable, and supported by strong evidence. It implies a relatively large value for the average magnetic field ($B \geq 10^{2}$ $\mu$G) in the SNR surroundings, a sign of field amplification by shock-driven turbulence. Our new analysis establishes that the spectral index of the proton energy distribution function is $p_{1}=2.2\pm0.1$ at low energies and $p_{2}=3.2\pm0.1$ at high energies. We critically discuss hadronic versus leptonic-only models of emission, simultaneously taking into account radio and gamma-ray data. We find that the leptonic models are disfavored by the combination of radio and gamma-ray data. Having determined the hadronic nature of the gamma-ray emission on firm ground, a number of theoretical challenges remain to be addressed. | Cosmic-rays (CRs) are highly energetic particles (with kinetic energies up to $E=10^{20}$ eV) mainly composed of protons and nuclei with a small percentage of electrons (1$\%$). Currently, the CR origin is one of the most important problems of high-energy astrophysics, and the issue is the subject of very intense research \citep{fermi49,ginzburg64,berezinskii90}. For recent reviews see \citet{helder12} and \citet{aharonian12}. Focusing on CRs produced in our Galaxy (energies up to the so-called ``knee'', $E=10^{15}$~eV), strong shocks in Supernova Remnants (SNRs) are considered the most probable CR sources \citep[e.g.,][]{ginzburg64}. This hypothesis is supported by several ``indirect'' signatures indicating the presence of ultra-high energy electrons \citep[recent review in][]{vink12}. However, the final proof of the origin of CRs up to the knee can only be obtained through two fundamental signatures. The first one is the identification of sources emitting a photon spectrum up to PeV energies. The second one is the detection of a clear gamma-ray signature of $\pi^{0}$ decay in Galactic sources. Both indications are quite difficult to obtain. The ``Pevatron'' sources are notoriously hard to find \citep[see][for a review]{aharonian12}, and the neutral pion decay signature is not easy to identify because of the possible contribution from co-spatial leptonic emission.
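The kinematics behind the sought-after signature are worth recalling: in the pion rest frame each photon from $\pi^{0}\rightarrow\gamma\gamma$ carries $m_{\pi^0}c^2/2$, so an isotropic $\pi^{0}$ population of any energy produces a photon spectrum symmetric in $\log E$ about that value, which is why hadronic spectra turn over below $\sim$70 MeV (the ``pion bump''). A two-line check of the relevant numbers (using only the pion and proton masses):
\begin{verbatim}
m_pi0, m_p = 134.98, 938.27   # MeV/c^2

# Pivot energy of the pion bump:
print(f"pivot energy: {m_pi0 / 2:.1f} MeV")            # 67.5 MeV

# Threshold proton kinetic energy for p p -> p p pi0:
T_th = 2 * m_pi0 * (1 + m_pi0 / (4 * m_p))
print(f"pp -> pi0 threshold: {T_th:.0f} MeV")          # ~280 MeV
\end{verbatim}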
Hadronic (expected to produce the $\pi^{0}$ decay spectral signature) and leptonic components can in principle be distinguished in the 50-200 MeV energy band, where they are expected to show different behaviors.\\ Over the last five years AGILE and Fermi, together with ground telescopes operating in the TeV energy range (HESS, VERITAS and MAGIC), have collected a large amount of data from SNRs \citep{abdo09_W51,abdo10_CasA,abdo10_IC443,abdo10W9b,abdo10_W44,abdo10_W28,abdo11_1713,acciari09_IC443,tavani10_IC443,acciari10_CasA,acciari11_tycho,aharonian01_CasA,aharonian07_1713,aharonian08_W28,aleksic12_W51,giordano12_tycho,giuliani10_W28,hewitt12_PuppisA,katsuta12_S147,lemoine12_RCW86}, providing important information and challenging theoretical models. For example, most of the observed SNRs appear to have a spectrum steeper than the one expected from linear and non-linear diffusive shock acceleration models (DSA) of index near 2 \citep[and possibly convex spectrum][]{bell87,malkov01,blasi05}. W44 is one of the most interesting SNRs observed so far; it is a middle-aged SNR, bright at gamma-ray energies and quite close to us. Its gamma-ray spectral index (indicative of the underlying proton/ion distribution in the hadronic model) is $p\sim3$, in apparent contradiction with DSA models. W44 is therefore an ideal system to study CR acceleration in detail. The AGILE data analysis of this remnant provided for the first time information below $E=200$~MeV, showing the low-energy steepening in agreement with the hadronic interpretation \citep{GiuCaTa11}. Recently, an analysis of Fermi-LAT data confirmed these results \citep{ackermannW44}.\\ In this paper, we present a new analysis of AGILE data together with a re-assessment of CO and radio data on W44. We also compare our results with those obtained from Fermi-LAT data. In section~\ref{SNRW44}, we summarize the most relevant facts about W44, and in section~\ref{newAGILE}, we present an updated view on the AGILE gamma-ray data and on the CO and radio data of this SNR. In section~\ref{modeling}, we discuss hadronic and leptonic models in the light of our refined analysis. The implications of this work are discussed in section~\ref{Discussion}. We provide relevant details about our modeling in the Appendices.

\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=1.8]{W44_AGILEnew_Ferminew_bold.eps}
\end{center}
\caption{The new AGILE gamma-ray spectrum of SNR~W44 (red data points) superimposed on the Fermi-LAT data from \cite{ackermannW44} (blue data points).}
\label{spectra}
\end{figure*} | \label{Conclusions} W44 is a crucial SNR providing important information on the CR origin in our Galaxy. However, several characteristics of this SNR deduced from a multifrequency approach (gamma-ray spectral indices, large magnetic field) are challenging. As discussed in this paper, W44 is a relatively close and quite bright gamma-ray source. Therefore, an excellent characterization of its gamma-ray spectrum in the range 50-200 MeV has been possible because of the good statistics achieved by AGILE and Fermi-LAT. In this paper we re-analyzed the spectral properties and the likelihood of interpreting the decrement below 200 MeV as a ``pion bump''. We performed a re-analysis of the AGILE data, together with revisiting radio and CO data of W44. We showed the implausibility of leptonic-only models in their most natural form: electron distributions constrained by radio data cannot fit the broad-band W44 spectrum. On the other hand, we find that both gamma-ray and radio data can be successfully modeled by different kinds of hadronic models (H1, H2, H3).\\
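The proton distribution behind such fits is, in essence, a broken power law with the indices derived in this paper ($p_1=2.2$, $p_2=3.2$). A minimal parametrization follows; the break energy, normalization and break sharpness are free parameters of any actual fit, and the values below are placeholders only:
\begin{verbatim}
import numpy as np

def proton_spectrum(E, p1=2.2, p2=3.2, E_break=20.0, norm=1.0, s=1.0):
    """Smoothly broken power law dN/dE; E, E_break in GeV.

    p1, p2 are the indices established in this paper; E_break,
    norm and the sharpness s are illustrative placeholders.
    """
    return norm * E**(-p1) * (1.0 + (E / E_break)**((p2 - p1) / s))**(-s)

E = np.logspace(-1, 3, 200)                # 0.1 GeV - 1 TeV
dNdE = proton_spectrum(E)
# Asymptotic slopes recovered numerically:
slope = np.gradient(np.log(dNdE), np.log(E))
print(f"low-E slope ~ {slope[5]:.2f}, high-E slope ~ {slope[-5]:.2f}")
\end{verbatim}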
Our results regarding the spectral properties of the proton/ion population accelerated by the W44 shock are in qualitative agreement with those of \citet{GiuCaTa11}. We provided in this paper a broader discussion of alternatives, and specified the role played by leptons alone and jointly with protons. In what follows, we summarize the most important physical characteristics of this source.
\begin{itemize}
\item \textbf{Neutral pion signature -} W44 is the first SNR clearly showing the so-called ``pion bump'' that we expect at $E\geq67$ MeV from $\pi^{0}$-decay photons. The low-energy gamma-ray spectral index in our best model is $p_{1}=2.2\pm0.1$. This value is similar to those found in young SNRs, indicating that the proton injection spectrum is affected by non-standard mechanisms of acceleration.
\item \textbf{High density of the surrounding environment -} We determined that the average density in the W44 shell is $n_{av} \sim 300 \, \rm cm^{-3}$, with $n\geq 10^{3}$~cm$^{-3}$ in correspondence with the CO peaks (see medium panels in Fig.~\ref{maps}). This feature was also found in other middle-aged SNRs, such as W51c and IC443 \citep{koo10,castelletti11}, and it explains the high gamma-ray flux detected from these sources. In the SNR~W28, the average density is lower, $n_{av}\approx 5$~cm$^{-3}$ \citep{gabici09}, but gamma-ray emission was detected in good correlation with the two MC complexes where $n\approx 10^{3}$~cm$^{-3}$ \citep{giuliani10_W28}.
\item \textbf{High magnetic field -} In W44 our best hadronic models imply a magnetic field $B\geq 100$ $\mu$G, which is lower than the post-shock magnetic field estimated by \cite{claussen97} from Zeeman splitting in the OH masers, and substantially higher than the equipartition magnetic field \citep{castelletti07}. However, in most SNRs, magnetic field estimates give values $B\sim10-10^{2}$ $\mu$G that are much higher than the average diffuse galactic value [see, for example, \cite{morlino12} for Tycho, and \cite{koo10} and \cite{tavani10_IC443} for W51c and IC443, respectively]. This is hardly surprising, since magnetic field compression due to the shock interaction with the ISM leads to its amplification. We then need to consider a non-linear scenario with a back-reaction of the accelerated particles at the shock \citep{bell01}. The large value of the magnetic field in W44 may be linked to the environment density value, $n_{av}\sim 300 \, \rm cm^{-3}$, given by NANTEN2. We note that for a lower density value, the electron density can be enhanced, making a lower magnetic field plausible.
\item \textbf{Steepness of the high-energy index -} As in \cite{abdo10_W44}, G11, and A13, W44 shows, for energies above 1~GeV, a spectral index $p_{2}\sim3$ that is steeper than the values found in other middle-aged SNRs. Alfv\'en damping in a dense environment \citep{malkovW44} is a mechanism for explaining this behavior, but other possibilities exist \citep[e.g.,][]{blasi12a,blasi12b}. This is a point requiring deeper investigation in the future.
\end{itemize}
Our final conclusion is that W44 stands out as a crucial SNR whose gamma-ray emission can be firmly demonstrated to be of hadronic origin. A complete understanding of the W44 features requires modeling physical processes beyond DSA. Future investigations will have to address these issues, as well as the understanding of W44 within the context of other SNRs.
| 14 | 3 | 1403.1250 |
1403 | 1403.2190_arXiv.txt | The Hantzsche-Wendt space is one of the 17 multiply connected spaces of the three-dimensional Euclidean space $\mathbb{E}^3$. It is a compact and orientable manifold which can serve as a model for a spatially finite universe. Since it possesses far fewer matched back-to-back circle pairs on the cosmic microwave background (CMB) sky than the other compact flat spaces, it can escape detection by a search for matched circle pairs. The suppression of temperature correlations $C(\vartheta)$ on large angular scales on the CMB sky is studied. It is shown that the large-scale correlations are of the same order as for the 3-torus topology but exhibit a much larger variability. The Hantzsche-Wendt manifold provides a topological possibility with reduced large-angle correlations that can hide from searches for matched back-to-back circle pairs. | \label{sec:intro} An interesting aspect of cosmology concerns the global spatial structure of our Universe, that is, the question of its topology. For review papers on cosmic topology and for discussions concerning topological tests, see \cite{Lachieze-Rey_Luminet_1995,Luminet_Roukema_1999,Levin_2002,Reboucas_Gomero_2004,Luminet_2008,Mota_Reboucas_Tavakol_2010,Mota_Reboucas_Tavakol_2011,Fujii_Yoshii_2011}. Since the standard $\Lambda$CDM concordance model of cosmology is based on spatially flat space, the topological question might be restricted to the space forms admissible in the three-dimensional Euclidean space $\mathbb{E}^3$. There are 18 possible space forms, which are denoted as $E_1$ to $E_{18}$ in \cite{Lachieze-Rey_Luminet_1995,Riazuelo_et_al_2004,Fujii_Yoshii_2011}. The space $E_{18}$ is the usual simply connected Euclidean space without compact directions. The remaining 17 space forms possess compact directions and are thus multiply connected. The usual three-dimensional Euclidean space $\mathbb{E}^3$ can be considered their universal cover, which is tessellated by the multiply connected space forms into spatial domains that have to be identified. Eight of the 17 space forms are non-orientable manifolds, which are usually not taken into account in cosmology. Thus there are 9 orientable multiply connected manifolds, and 6 of them are compact. The focus is usually put on the 6 compact space forms $E_1$ to $E_6$. Of these, the 3-torus topology $E_1$ has attracted the most attention, and its cosmological implications are well understood. The $E_1$ space has the simplifying property that the statistical cosmological properties are independent of the position of the observer for which the statistics, e.\,g.\ of the cosmic microwave background (CMB) radiation, are computed. This simplification is not possible for the other compact orientable multiply connected manifolds $E_2$ to $E_6$, and the statistics of the CMB simulations, on which this paper focuses, have to be computed for a sufficiently large number of observer positions on the manifold in order to obtain a representative result for such an inhomogeneous manifold. A method to detect a non-trivial topology of our Universe is the search for the circles-in-the-sky (CITS) signature \cite{Cornish_Spergel_Starkman_1998b}. Since the space is multiply connected, the sphere from which the CMB radiation is emitted towards a given observer can overlap with another sphere belonging to a position in the universal cover which is, due to the topology, to be identified with that of the considered observer.
The intersection of such spheres leads to circles on the CMB sky where the temperature fluctuations are correlated according to the assumed topology. The simplest situation is realised by two circles whose centres are antipodal on the CMB sky. Such circle pairs are called ``back-to-back''. This is the type of matched circle pair that is easiest to discover in CMB sky maps. The non-back-to-back matched circle pairs have two further degrees of freedom due to the position of the centre of the second circle. This significantly increases the numerical effort in the CMB analysis and, in addition, increases the background of accidental correlations which can swamp the true signal of a matched circle pair. Many papers are devoted to the CITS search \cite{Roukema_1999,Roukema_2000,Cornish_Spergel_Starkman_Komatsu_2003,Roukema_et_al_2004,Aurich_Lustig_Steiner_2005b,Then_2006a,Key_Cornish_Spergel_Starkman_2007,Aurich_Janzer_Lustig_Steiner_2007,Bielewicz_Banday_2011,Vaudrevange_Starkman_Cornish_Spergel_2012,Rathaus_BenDavid_Itzhaki_2013,Aurich_Lustig_2013,Planck_Topo_2013}. The result is that there is no convincing hint for a matched back-to-back circle pair in the CMB data with a radius above $25^\circ\dots 30^\circ$. Smaller back-to-back circle pairs are not detectable \cite{Aurich_Lustig_2013}. This leads to the question of whether space forms without back-to-back circle pairs can be missed by these searches. The non-back-to-back circle search in \cite{Vaudrevange_Starkman_Cornish_Spergel_2012} does not find hints of such a topology. However, it is shown in \cite{Aurich_Lustig_2013} that the search grid in \cite{Vaudrevange_Starkman_Cornish_Spergel_2012} is too coarse, such that even large back-to-back circles with radii up to $50^\circ$ are not found (see figure 4 in \cite{Aurich_Lustig_2013}). The analysis in \cite{Aurich_Lustig_2013} refers to back-to-back circle pairs, but the increased background in the general case worsens the detectability of non-back-to-back circles. Furthermore, due to foreground contamination in two regions of the CMB map, the search in \cite{Vaudrevange_Starkman_Cornish_Spergel_2012} excludes circles which intersect these regions. Thus it is safe to say that \cite{Vaudrevange_Starkman_Cornish_Spergel_2012} does not find a hint in favour of such a topology, but it cannot exclude one. Restricting attention to the Euclidean case with its 6 compact orientable space forms, one can ask which topology does not possess back-to-back circle pairs with radii above $25^\circ\dots 30^\circ$. The answer depends on the value of the parameters $L_i$, which define the size and shape of the Dirichlet cell, see equation (\ref{Eq:Def_Gamma}) below. Let us assume that these parameters are chosen in such a way that all spatial dimensions of the Dirichlet cell are of the same order. This ensures that the largest circle pairs arise from the identification of the faces of the Dirichlet cell. The topologies $E_1$ to $E_5$ identify at least two pairs of faces by pure translations, i.\,e.\ without an accompanying rotation. These translations lead to back-to-back circle pairs for all observer positions even in the case of an inhomogeneous manifold. For example, the spaces $E_4$ and $E_5$, belonging to a hexagonal tiling of the Euclidean space, possess three back-to-back circle pairs due to the three pairs of faces that are identified by pure translations.
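The geometry behind such pairs is simple: a clone separation $d$ produces back-to-back circles of angular radius $\vartheta$ with $\cos\vartheta = d/(2\chi_{\rm rec})$, where $\chi_{\rm rec}$ is the comoving distance to the surface of last scattering. A short illustrative sketch, where $\chi_{\rm rec}\approx 3.2$ Hubble lengths is our assumption for concordance parameters:
\begin{verbatim}
import numpy as np

chi_rec = 3.2   # comoving distance to last scattering, in Hubble lengths

def circle_radius(d):
    """Angular radius (deg) of a back-to-back pair for clone separation d."""
    x = d / (2.0 * chi_rec)
    return np.degrees(np.arccos(x)) if x < 1.0 else 0.0

for d in (2.0, 3.0, 4.0, 6.0):
    print(f"d = {d:.0f} L_H  ->  radius = {circle_radius(d):5.1f} deg")
\end{verbatim}
For a 3-torus with $L\simeq 3$ the translation length itself plays the role of $d$, giving large ($\approx 60^\circ$) back-to-back circles; it is precisely this signature that the Hantzsche-Wendt space evades.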
In this respect, the manifold $E_6$ is special, since every identification of a pair of faces is defined by a translation and a rotation by $\pi$, i.\,e.\ a so-called half-turn corkscrew motion. The space form $E_6$ is also called the Hantzsche-Wendt space \cite{Hantzsche_Wendt_1935} and is the topic of this paper. The Hantzsche-Wendt space is therefore a candidate for cosmic topology whose implications for the CMB sky are worth studying. The CMB angular power spectrum is computed for a single observer position in \cite{Scannapieco_Levin_Silk_1999} for low multipoles, and a suppression of temperature correlations on large angular scales is found. Besides this, there are no further CMB analyses in the literature, although \cite{Riazuelo_et_al_2004} describes the eigenmodes of that space form, allowing the computation of CMB fluctuations for an observer at the origin of the coordinate system. The aim of this paper is to provide a CMB analysis of the temperature correlations on large angular scales for a large sample of observer positions in order to allow a comparison with the CMB observations. | The observed low amplitudes of large-angle temperature correlations $C(\vartheta)$ could be explained naturally by multiply connected spaces if their sizes fit well within the surface of last scattering from which the CMB radiation originates. In the absence of convincing hints for matched circle pairs in the CMB sky, the explanation of the suppression of correlations as a consequence of a non-trivial cosmic topology is currently somewhat disfavoured. We thus devote this paper to the Hantzsche-Wendt manifold, which is a compact and orientable manifold that lives in the flat three-dimensional Euclidean space $\mathbb{E}^3$. It is shown that the regular Hantzsche-Wendt space with lengths $L\gtrsim 3$ in units of the Hubble length $L_{\hbox{\scriptsize H}}$ possesses a single matched back-to-back circle pair only for carefully selected observer positions. For general observers there will be none. There are non-back-to-back circle pairs, but they are much harder to detect due to the enhanced background of spurious signals. It is shown that Hantzsche-Wendt spaces with $L\simeq 3$ possess large-angle correlations which are reduced by around a factor of two or more in comparison to the $\Lambda$CDM concordance model. Furthermore, the amplitude of the correlations in the Hantzsche-Wendt topology is comparable to that of the 3-torus topology, if spaces of the same volume are considered. However, in contrast to the Hantzsche-Wendt space, the 3-torus has matched back-to-back circle pairs at that size. So we conclude that a Hantzsche-Wendt space around $L\simeq 3$ is a very interesting topology for the spatial structure of our Universe, which might escape detection by searches for matched circle pairs and nevertheless has low large-angle temperature correlations. | 14 | 3 | 1403.2190
1403 | 1403.7898_arXiv.txt | SGR J1745$-$2900 is a magnetar near the Galactic center. X-ray observations of this source found a decreasing X-ray luminosity accompanied by an enhanced spindown rate. This negative correlation between X-ray luminosity and spindown rate is hard to understand. The wind braking model of magnetars is employed to explain this puzzling spindown behavior. During the release of a magnetar's magnetic energy, a system of particles may be generated. Some of these particles remain trapped in the magnetosphere and may contribute to the X-ray luminosity. The rest of the particles can flow out and take away the rotational energy of the central neutron star. A smaller polar cap angle can account for the decreasing X-ray luminosity and enhanced spindown rate of SGR J1745$-$2900. This magnetar is expected to reach a maximum spindown rate shortly. | Magnetars are a special kind of pulsar. They are assumed to be neutron stars whose radiative activities are powered by their magnetic energy (Duncan \& Thompson 1992). Both the radiative and timing properties of magnetars vary with time. During the outburst of a magnetar, the star's X-ray luminosity increases significantly and then decays gradually (Rea \& Esposito 2011). The outburst may also be accompanied by timing events, e.g., a spin-up glitch (Kaspi et al. 2003), a spin-down glitch (Archibald et al. 2013; Tong 2014), and/or period derivative changes. Many magnetars show different degrees of period derivative variations (Woods et al. 2007; Dib \& Kaspi 2014). These timing variabilities imply that magnetic dipole braking in vacuum is a poor approximation in the case of magnetars, and magnetars may be wind braking (Tong et al. 2013). The twisted magnetosphere model tried to understand the radiative and timing behaviors of magnetars using an untwisting neutron star magnetosphere (Thompson et al. 2002; Beloborodov 2009). During the outburst a magnetar is expected to have a decreasing X-ray luminosity and a decreasing spindown rate. However, the timing and flux evolution of the Galactic center magnetar SGR J1745$-$2900 showed a negative correlation between X-ray luminosity and spindown rate (Kaspi et al. 2014). During the nearly four months of observations, the magnetar's X-ray luminosity decreased by a factor of two. Meanwhile, the spindown rate increased by a factor of 2.6, and it is still increasing. Kaspi et al. (2014) discussed changes in the open field line regions. However, there is no quantitative calculation at present. The structure of the open and closed field line regions of the magnetar magnetosphere has been calculated in the wind braking model (Tong et al. 2013). The puzzling spindown behavior of the Galactic center magnetar may be understandable in the wind braking model. The spindown rate of SGR J1745$-$2900 may contain some contribution from the Galactic center black hole (Rea et al. 2013). Therefore, understanding its spindown behavior is very important. In the wind braking model of magnetars (Tong et al. 2013), a particle outflow is generated during the decay of the magnetic field. Some of these particles may remain confined in the magnetosphere and contribute to the X-ray luminosity. The rest of the particles can flow out to infinity (i.e., a wind), and thus dominate the spindown of the magnetar. The particle outflow may be mainly confined to a specific polar cap region of the central neutron star. A smaller polar cap angle will result in a smaller X-ray luminosity and a larger spindown rate.
This may explain the puzzling spindown behavior of SGR J1745$-$2900. Calculations in the wind braking model for SGR J1745$-$2900 are given in Section 2. Discussions and conclusions are presented in Section 3. | In pulsar studies, the magnetic dipole braking assumption is often employed. However, it assumes an orthogonal rotator in vacuum (Shapiro \& Teukolsky 1983). A real pulsar must have a magnetosphere. The magnetosphere of magnetars may be twisted compared with normal pulsars (Thompson et al. 2002; Beloborodov 2009). However, the twisted magnetosphere model does not consider the rotation of the central neutron star (Thompson et al. 2002). All the field lines are closed in the twisted magnetosphere model. There is no open field line (i.e., no polar cap). Considering current modeling of normal pulsar spindown (wind braking, Xu \& Qiao 2001; Li et al. 2014), magnetars may also be spun down by a particle wind (Tong et al. 2013). The particle wind luminosity (powered by the magnetic energy) may be much higher than the rotational energy loss rate. This will make wind braking of magnetars qualitatively different from wind braking of normal pulsars (Tong et al. 2013). The particle luminosity can also vary dramatically, just as the X-ray luminosity does (since they are both powered by the magnetic energy). This may explain why magnetars can have so many timing events (for a summary see Tong \& Xu 2014). Compared with magnetic dipole braking, the wind braking model considers the existence of a neutron star magnetosphere. Compared with the twisted magnetosphere model, the wind braking model considers the rotation of the central neutron star from the starting point. The wind braking model of magnetars (Tong et al. 2013; and the calculations here) considered an aligned rotator and a uniform charge density over the polar cap. The spindown behavior is mainly determined by the total particle outflow. It is not very sensitive to the inclination angle and charge distribution. Even if the observed X-ray luminosity were solely due to other sources, it is unavoidable that a particle wind is generated during the decay of the magnetic field (Thompson \& Duncan 1996; Beloborodov \& Thompson 2007). This particle wind will have a luminosity comparable to that of the X-ray emission, $L_{\rm p} \sim L_{\rm x} \sim 10^{35} \,\rm erg \,s^{-1}$ (Thompson \& Duncan 1996; Duncan 2000; Tong et al. 2013). Using Equation (\ref{B0}), the corresponding surface dipole field is $B_0 \approx 6.8\times 10^{14} \,(\theta_{\rm s}/0.05)^2 \,\rm G$. It is similar to the surface dipole field in the case of magnetic dipole braking. Only a small fraction of the particles flows out to infinity. Most of them remain trapped in the magnetosphere. They will collide with the stellar surface, scatter X-ray photons, etc. (Thompson et al. 2002; Beloborodov \& Thompson 2007; Tong et al. 2010). Another X-ray component is generated which is comparable to the original X-ray component. Therefore, the outflowing particles must contribute a significant fraction of the X-ray luminosity. Previous studies favor a magnetospheric origin for the X-ray luminosity during the outburst (Beloborodov 2011). The upper limit of the quiescent X-ray luminosity may put strong constraints on the contribution from other persistent energy sources (Mori et al. 2013). In summary, it is reasonable to assume that the X-ray luminosity (at least half of it) may be dominated by the contribution of outflowing particles.
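The scaling quoted above is steep in the polar cap angle. Evaluating $B_0 \approx 6.8\times 10^{14}\,(\theta_{\rm s}/0.05)^2$~G for a few opening angles (the $\theta_{\rm s}$ values below are illustrative, not fitted) shows how strongly the inferred dipole field depends on this geometric assumption:
\begin{verbatim}
for theta_s in (0.03, 0.05, 0.10):
    B0 = 6.8e14 * (theta_s / 0.05)**2     # G, scaling from Eq. (B0)
    print(f"theta_s = {theta_s:.2f}  ->  B0 ~ {B0:.1e} G")
\end{verbatim}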
According to the numerical calculations, the total particle luminosity of SGR J1745$-$2900 is $L_{\rm p} =1.7\times 10^{35} \,\rm erg \,s^{-1}$. If all these particles can flow out, this corresponds to the maximum spindown case $L_{\rm w} = L_{\rm p}$ (Tong et al. 2013, Section 3.2). Therefore, SGR J1745$-$2900 has a well-defined maximum spindown rate. Using Equation (31) in Tong et al. (2013), the maximum period derivative is $\dot{P}_{\rm max} = 2.2 \times 10^{-11}$. It is about two times higher than the period derivative in the second ephemeris of SGR J1745$-$2900 (Kaspi et al. 2014). Using the period second derivative measurement (Kaspi et al. 2014), the time required to reach this maximum spindown state is $(\dot{P}_{\rm max} -\dot{P})/\ddot{P} \approx 240 \,\rm days$. Therefore, SGR J1745$-$2900 may reach a state with maximum spindown rate in about one year. If the particle luminosity decreases a little by that time, the corresponding period derivative will also be a little smaller. The X-ray luminosity during the maximum spindown state will be very small, since only a very small fraction of particles is trapped. Other sources of X-ray luminosity may only contribute a relatively small part of the X-ray luminosity (Mori et al. 2013). From previous experience of magnetar outbursts (Rea \& Esposito 2011), both the X-ray luminosity and spindown rate will decrease long after the outburst. The timing behavior of the magnetar Swift J1822.3$-$1606 is governed by the change of wind luminosity (Tong \& Xu 2013). On the other hand, the timing behavior of the magnetar SGR J1745$-$2900 may be dominated by the change of polar cap angle. In general, both the particle luminosity and the polar cap angle vary with time after the outburst. This may explain the different radiative and timing correlations in magnetars (Dib \& Kaspi 2014 and references therein). Combined with timing studies of pulsars (Li et al. 2014 and references therein), not only magnetars but also normal pulsars are wind braking (see Equation (\ref{Edotw})). One consequence of the wind braking model of magnetars is a magnetism-powered pulsar wind nebula (Tong et al. 2013). There is one piece of weak evidence (Younes et al. 2012). From pulsar wind nebulae observations in normal pulsars, the nebula luminosity is only about $10^{-4}$ times the total particle luminosity\footnote{The particle luminosity is equal to the rotational energy loss rate in the case of normal pulsars. } (Kargaltsev et al. 2013). For SGR J1745$-$2900, its particle luminosity is about $10^{35} \,\rm erg \,s^{-1}$. At a distance of $8\,\rm kpc$, with an X-ray efficiency of about $10^{-4}$, it is unlikely that the nebula can be observed using current telescopes (Kargaltsev et al. 2013). Furthermore, the particle wind in the case of magnetars may only exist for several years (the same duration as the outburst), and its luminosity also decreases with time. This will make its detection more difficult. Current non-detections of a wind nebula are not constraining (e.g., Archibald et al. 2013; Scholz et al. 2014). In conclusion, in the wind braking model of magnetars, a change of polar cap angle may cause a negative correlation between the X-ray luminosity and the spindown rate. A polar cap angle $0.5$ times the initial value will explain the decrease of X-ray luminosity (by a factor of two) and the enhancement of spindown rate (by a factor of 2.6) of SGR J1745$-$2900. SGR J1745$-$2900 is expected to reach a state with maximum spindown rate shortly. | 14 | 3 | 1403.7898
1403 | 1403.1536_arXiv.txt | Recently a weak X-ray emission line around $E_\gamma \simeq 3.5$ keV was detected in the Andromeda galaxy and in various galaxy clusters, including the Perseus galaxy cluster, but its source remains unidentified. The axino, the superpartner of the axion, with a mass of $2E_\gamma$ is suggested as a possible origin of the line, through an R-parity violating decay into a photon and a neutrino. Moreover, most of the parameter space is consistent with the recent observation by the BICEP2 experiment. | Two recent independent analyses~\cite{7keV1,7keV2} based on X-ray observation data show an emission line at $E \simeq 3.5$ keV in the spectra coming from various galaxy clusters and the Andromeda galaxy. The detection is statistically significant ($\sim 3\sigma-4 \sigma$) and, more importantly, the two analyses are quite consistent with each other in both the location of the line in the energy spectra and the signal flux. The observed flux and the best-fit energy are
\begin{eqnarray}
{\Phi}^{\rm MOS}_\gamma &=& 4.0^{+0.8}_{-0.8} \times 10^{-6}\, {\rm photons ~cm^{-2} ~s^{-1}}\,,\\
E^{\rm MOS}_\gamma &=& 3.57 \pm 0.02\, {\rm keV}\,,
\end{eqnarray}
where we take the values from the XMM-Newton MOS spectra; the results from the PN observations are similar~\cite{7keV1} and consistent with the measured values in the other analysis~\cite{7keV2}. No source of X-ray lines, including atomic transitions in thermal plasma, is known at this energy, which indicates that the observed line may point to the existence of a new source. It would be tantalizing if dark matter (DM) provided a possible source for the line signal. Indeed, a decaying DM candidate with a mass $m_{\rm DM} \simeq 2 E_\gamma \simeq 7$ keV and a lifetime $\tau_{{\rm DM} \to \gamma X} \simeq 10^{28}\, {\rm s}$ is immediately suggested to explain the observed line~\cite{7keV1,7keV2}. An annihilating DM candidate with a mass $m_{\rm DM} \simeq E_\gamma \simeq 3.5$ keV and an annihilation cross section $\langle \sigma v\rangle_{{\rm 2 DM} \to \gamma X} \sim 2 \Gamma_{\chi}/n_\chi \sim (10^{-31}-10^{-33})~{\rm cm^3 ~s^{-1}}$ can also account for the signal, where $n_\chi =\rho_\chi/m_\chi \sim (10^3-10^5)~{\rm cm^{-3}}$ is the DM number density of galaxy clusters. However, the realization of such an annihilating DM is very challenging, since the corresponding annihilation cross section is too small compared to a typical value for a thermal WIMP (weakly interacting massive particle) DM. Other annihilation channels are also limited by the small DM mass. Hereafter we focus on a decaying DM model. Possible DM candidates such as a sterile neutrino and a long-lived axion have been suggested as explanations of this signal \cite{Ishida:2014dlp,Finkbeiner:2014sja,Higaki:2014zua,Jaeckel:2014qea,Lee:2014xua,Abazajian,Krall}.\footnote{For the cases of decaying sterile neutrino and gravitino warm dark matter, the authors of Ref.~\cite{Abazajian:2001vt} estimated the expected X-ray fluxes from galaxy clusters and field galaxies.} To explain the 3.5 keV line with 7 keV axion DM \cite{Higaki:2014zua,Jaeckel:2014qea}, the required axion decay constant is $f_a \simeq 10^{14-15}$ GeV, which is much larger than the conventional values preferred by most axion models~\cite{Axion,AxionReview}. In this letter, as an alternative, we examine the axino ($\tilde{a}$) as a dark matter candidate and show how it can fit the observed data.
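The annihilation estimate quoted above follows directly from the decay interpretation: with $\Gamma_\chi \sim 1/\tau \sim 10^{-28}$~s$^{-1}$ and the quoted cluster number densities, $\langle \sigma v\rangle \sim 2\Gamma_\chi/n_\chi$ spans exactly the stated range. A two-line check:
\begin{verbatim}
gamma_chi = 1e-28             # s^-1, decay rate for tau ~ 1e28 s
for n_chi in (1e3, 1e5):      # cm^-3, cluster DM number density range
    sv = 2 * gamma_chi / n_chi
    print(f"n = {n_chi:.0e} /cm^3 -> <sigma v> ~ {sv:.0e} cm^3/s")
\end{verbatim}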
With an axion~\cite{Axion,AxionReview} as a solution of the strong CP problem, a light axino with a mass $m_{\tilde{a}}\sim \frac{M_{SUSY}^2}{f_a} \sim 7~{\rm keV}$ is an excellent DM candidate in supersymmetric models \cite{Covi:1999ty, Covi:2001nw, Covi:2009pq}. Moreover, it has been shown that the axino in the preferred mass range can be a warm dark matter (WDM) candidate satisfying the relic density constraint \cite{Covi:2009pq}, through thermal production via thermal scatterings and/or non-thermal production via out-of-equilibrium decays. WDM is known to provide a solution to the small-scale conflict between the observations and N-body simulations with cold dark matter (CDM), in which an overproduction of galactic substructures~\cite{Moore:1999nt}, local groups~\cite{Zavala:2009ms}, and local voids~\cite{Tikhonov:2008ss} compared to the observations has been found. A lower limit on the WDM mass is $m_{\rm WDM} > 3.3$ keV from the recent high-redshift Lyman-$\alpha$ forest data~\cite{Viel:2013fqw}. The small-scale behavior of WDM with $ m_{\tilde{a}} \gtrsim 4-5$ keV is not so different from that of CDM \cite{Maccio':2012uh,Schneider:2013wwa}. Consequently, the 7 keV axino can alleviate some of the small-scale problems of CDM. | The recent observation of the $E_\gamma \simeq 3.5$ keV X-ray line in galaxy clusters and the Andromeda galaxy opens a new way to probe dark matter particles in a light mass domain: $m_{DM} \simeq 3.5$ keV for an annihilating dark matter and $7$ keV for a decaying dark matter. In general, a long-lived particle that produces a sufficient number of photons could be a good candidate for the source of the X-rays. In this letter, we studied the axino decay through the bilinear R-parity violating interaction. We found that the parameter space which fits the observed line is naturally compatible with most axion models as well as with the recent observation by the BICEP2 experiment. Observation of a neutrino line at the same energy as in the X-ray data, $E_\nu =E_\gamma=m_{\tilde{a}}/2$, would corroborate the axino DM scenario. | 14 | 3 | 1403.1536
1403 | 1403.1193_arXiv.txt | We study the possibility that primordial magnetic fields generated in the transition between inflation and reheating possess magnetic helicity, $H_M$. The fields are induced by stochastic currents of scalar charged particles created during this transition. We estimate the rms value of the induced magnetic helicity by computing different four-point SQED Feynman diagrams. For any considered volume, the magnetic flux across its boundaries is in principle non null, which means that the magnetic helicity in those regions is gauge dependent. We use the prescription given by Berger and Field and interpret our result as the difference between two magnetic configurations that coincide in the exterior volume. In this case the magnetic helicity gives only the number of magnetic links inside the considered volume. We calculate a concrete value of $H_M$ for large scales and analyze the distribution of magnetic defects as a function of the scale. Those defects correspond to regular as well as random fields in the considered volume. We find that the fractal dimension of the distribution of topological defects is $D = 1/2$. We also study whether the regular fields induced on large scales are helical, finding that they are and that the associated number of magnetic defects is independent of the scale. In this case the fractal dimension is $D=0$. We finally estimate the intensity of fields induced at the horizon scale of reheating, and evolve them until the decoupling of matter and radiation under the hypothesis of inverse cascade of magnetic helicity. The resulting intensity is high enough and the coherence length long enough to have an impact on the subsequent process of structure formation. | Large scale magnetic fields are widespread in the Universe. From galaxies to clusters of galaxies coherent magnetic fields are detected, with intensities that range from $\mu$Gauss to tens of $\mu$Gauss. Our galaxy as well as nearby galaxies show magnetic fields coherent on the scale of the whole structure, while in galaxy clusters the coherence length is much less than the cluster's size \cite{carr-tay,bagchi-09}. A remarkable fact recently revealed by observations is that high redshift galaxies also possess coherent fields with the same intensities as present day galaxies \cite{high-z,wolfe-08,kronberg-apj-08}. This result challenges the generally accepted mechanism of magnetogenesis, namely the amplification of a primordial field of $\mathcal{O}\sim 10^{-31}-10^{-21}$ Gauss by a mean field dynamo \cite{moffatt,zeldovich,brand-02,bran-sub-05} acting during a time of the order of the age of the structure: either the primordial fields are more intense so the galactic dynamo saturates in a shorter time, or the dynamo does not work as it is currently thought. It is hoped that future observations of high redshift environments will shed more light on the features of primordial magnetic fields \cite{ska,lofar,edges}. In view of the lack of success in finding a primordial mechanism for magnetogenesis that produces a sufficiently intense field, either to feed an amplifying mechanism or to directly explain the observations (see Refs. \cite{kandus-11,ryu-12} as recent reviews), researchers began to delve into magnetohydrodynamical effects that could compensate the tremendous dilution of the field due to flux conservation during the expansion of the universe. Among the possibilities there is primordial turbulence \cite{son-99,gra-cal-02,cal-kan-10,giov-11}.
Possible scenarios for it are the reheating epoch, the phase transitions (at least the electroweak one) and possibly the epoch of reionization, all dominated by out of equilibrium processes. A key ingredient to produce stable, large scale magnetic fields in three-dimensional MHD turbulence is the transfer of magnetic helicity from small scales to large scales, at constant flux \cite{frish-75,pouquet-76} (see also Ref. \cite{mala-mull-13} and references therein). Magnetic helicity, $H_{M}$, is defined as the volume integral of the scalar product of the magnetic field $\mathbf{B}$ with the vector potential $\mathbf{A}$ \cite{berger,biskamp-03}. In three dimensions, and in the absence of ohmic dissipation, it is a conserved quantity that accounts for the non-trivial topological properties of the magnetic field \cite{berger}, such as the twists and links of the field lines. Unlike the energy, which performs a natural, direct cascade, i.e., from large scales toward small ones where it is dissipated, magnetic helicity has the remarkable property of \emph{inverse cascading}, that is, magnetic helicity stored at small scales evolves toward larger scales \cite{frish-75,pouquet-76}. The fact that the magnetic energy and magnetic helicity spectra are dimensionally related as $E_{k}^{M}\sim kH_{k}^{M}$ \cite{biskamp-03} produces a dragging of the former toward large scales, thus enabling the field to re-organize coherently at large scales \footnote{This mechanism however imposes severe constraints on the dynamo action. See Refs. \cite{bran-sub-05,blackman}}. It must be stressed that in a cosmological context, the inverse cascade mentioned above operates on scales of the order of the particle horizon or smaller. This is due to the fact that turbulence is a causal phenomenon. Magnetic helicity on the other hand can be induced at any scale; the topology of the fields then remains frozen if the scales are super-horizon and if there is no resistive decay. For subhorizon scales it is a sufficient condition for its conservation that the conductivity of the plasma be infinite \cite{biskamp-03}. The interpretation of $H_{M}$ as the number of twists and links must be considered with care because from its very definition it is clear that $H_{M}$ is gauge dependent. In their seminal work, Berger and Field \cite{berger} proved that if the field lines do not cross the boundaries of the volume of integration, i.e., the field lines close inside the considered volume, then $H_{M}$ as defined \emph{is} a gauge invariant quantity. These authors also addressed the case of open field lines, and wrote down a definition of gauge invariant magnetic helicity based on the difference of two such quantities for field configurations that have the same extension outside the considered volume. In this case the quantity obtained can be interpreted as the number of links inside the volume. In general it is not difficult to find Early Universe mechanisms that produce magnetic fields endowed with magnetic helicity: the generation of helical magnetic fields has already been addressed in the framework of electroweak baryogenesis \cite{corn-97,vachas-01,copi-08,chu-11} and of leptogenesis \cite{long-13}. The main problem remains the low intensities obtained in more or less realistic scenarios.
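Since $H_M=\int_V \mathbf{A}\cdot\mathbf{B}\,dV$ and the spectra obey $E_k^M \sim k H_k^M$, a maximally helical (Beltrami) mode makes the relation exact; a minimal numpy sketch (illustrative, not from the paper) verifying $E_M = (k/2)\,H_M$ for a single helical mode is:
\begin{verbatim}
import numpy as np

# Single maximally helical mode: B = curl A with A = B/k, so that
# H_M = B0^2 L / k and E_M = B0^2 L / 2, i.e. E_M = (k/2) H_M.
N, L, k, B0 = 256, 2*np.pi, 3.0, 1.0
z = np.linspace(0.0, L, N, endpoint=False)
Ax, Ay = (B0/k)*np.sin(k*z), (B0/k)*np.cos(k*z)
Bx, By = B0*np.sin(k*z), B0*np.cos(k*z)
dz = L/N                              # per unit transverse area
H_M = np.sum(Ax*Bx + Ay*By)*dz        # magnetic helicity
E_M = 0.5*np.sum(Bx**2 + By**2)*dz    # magnetic energy
print(E_M / (0.5*k*H_M))              # -> 1.0
\end{verbatim}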
The magnetic fields we consider in this work are induced by stochastic currents of scalar charges created gravitationally during the transition Inflation-Reheating \cite{ckm-98,kcmw-00,giov-shap-00} (see \cite{cal-hu-08} for more details), and such a field configuration is one of open lines. In the light of the analysis of Berger and Field, we shall discuss a criterion by which the result obtained can be considered as gauge invariant. The fields induced are random, the mean value of the magnetic helicity is zero, but not the corresponding rms deviation. We assume that those fields are weak enough to neglect their backreaction on the source currents, and show that the rms magnetic helicity can be written as the sum of four SQED Feynman graphs, one of them representing the mean value of $H_M$ and consequently identically null. The remaining three add to a non null value. We compute the value of the helicity for large scales and find that the number density of links scales with the distance $\kappa^{-1/2}$ from a given point as $\kappa^{5/2}$, which means that their fractal dimension is $D=1/2$. This number density takes into account defects due to both regular and random fields. We also calculate the value of $H_M$ due to regular fields on a large scale. In this case the number density scales as $\kappa^3$, the corresponding fractal dimension being $D=0$. Using the relation $B^2\left(\kappa\right)\propto H_M\left(\kappa\right)\kappa$, we compare the associated helical intensity to the one obtained by computing directly the correlation function of the magnetic field at the same scale $\kappa^{-1}$. We find that both expressions coincide, which means that the fields generated by the considered mechanism are indeed helical. We estimate the intensity of those smooth fields on a galactic scale, finding an intensity too small to seed the dynamo. We finally address the evolution of fields generated at scales of the order of the particle horizon at the end of reheating, through the inverse cascade of magnetic helicity mechanism, until matter-radiation equilibrium. This evolution is based on the assumption that during radiation dominance the plasma is in a (mild) turbulent state. We find that the number density of magnetic links scales as $\kappa$, the corresponding fractal dimension then being $D=4$. The field intensity as well as the scale of coherence are in a range that could have an impact on the process of structure formation \cite{ryu-12}. We work with signature $\left( -,+,+,+\right) $ and with natural units, i.e., $c=1=\hbar $, $e^2=1/137$. We use the Hubble constant during Inflation, $H$, which we assume constant, to give dimensions to the different quantities, i.e. we consider spacetime coordinates $\left[ x\right] = H^{-1}$, Lagrangian density $\left[ \mathcal{L}\right] =H^{4}$, four vector potential $\left[ A^{\mu }\right] =H$, field tensor $\left[ F^{\mu \nu }\right] =H^{2}$, scalar field $\left[ \Phi \right]=H$. The paper is organized as follows: Section \ref{sed} contains a brief description of scalar electrodynamics in curved spacetime. In Section \ref{mh} we define magnetic helicity and describe briefly its main properties. In Section \ref{mhrf} we develop the formalism to study the magnetic helicity of random fields and estimate its rms value in different scenarios: In Subsection \ref{demh} we compute the SQED Feynman graphs that describe the magnetic helicity two-point correlation function. In Subsection \ref{dim} we provide some physical quantities relevant for our study.
In Subsection \ref{hm-tir} we describe the transition Inflation-Reheating and quote some useful formulae for our work. In Subsection \ref{gd} we apply the analysis of Berger and Field to our fields and show the gauge invariance of our results. In Subsection \ref{mhls} we calculate the magnetic helicity rms value on large scales, and compute the density and fractal dimension of the distribution of defects. In Subsection \ref{hmgal} we compute the rms value of magnetic helicity due to solely smooth fields, and find that the fields induced by the mechanism considered in this work are completely helical, but very weak. Finally in Subsection \ref{hm-ss} we analyze the evolution of fields induced on scales of the order of the horizon at reheating along radiation dominance. By considering conservation of magnetic helicity and assuming full inverse cascade is operative, we find at decoupling a magnetic field of intensity and coherence that could have an impact on the process of structure formation. In Section \ref{dc} we summarize and discuss our results. We leave details of the calculations to the Appendices. | | 14 | 3 | 1403.1193
1403 | 1403.1470_arXiv.txt | Opacity is a property of many plasmas, and it is normally expected that if an emission line in a plasma becomes optically thick, its intensity ratio to that of another transition that remains optically thin should decrease. However, radiative transfer calculations undertaken both by ourselves and others predict that under certain conditions the intensity ratio of an optically thick to thin line can show an increase over the optically thin value, indicating an enhancement in the former. These conditions include the geometry of the emitting plasma and its orientation to the observer. A similar effect can take place between lines of differing optical depth. Previous observational studies have focused on stellar point sources, and here we investigate the spatially-resolved solar atmosphere using measurements of the I(1032 \AA)/I(1038 \AA) intensity ratio of \ion{O}{6} in several regions obtained with the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument on board the Solar and Heliospheric Observatory (SoHO) satellite. We find several I(1032 \AA)/I(1038 \AA) ratios observed on the disk to be significantly larger than the optically thin value of 2.0, providing the first detection (to our knowledge) of intensity enhancement in the ratio arising from opacity effects in the solar atmosphere. Agreement between observation and theory is excellent, and confirms that the \ion{O}{6} emission originates from a slab-like geometry in the solar atmosphere, rather than from cylindrical structures. | Opacity is a common property of many astrophysical and laboratory plasmas. In most circumstances, one would expect that opacity in an emission line would lead to a reduction in its intensity compared to the optically thin value. However, theoretical work by Bhatia and co-workers (Bhatia \& Kastner 1999; Bhatia \& Saba 2001; Kastner \& Bhatia 2001) using the escape factor method indicated that in certain circumstances the intensity of an emission line could be enhanced over its optically thin value due to the effects of opacity. This research did not explain how this (apparently counter-intuitive) result came about, which had to await the more sophisticated calculations of Kerr et al. (2004), who determined the radiation transport in the plasma using the CRETIN code (Scott 2001). The CRETIN results provided the origin of the line enhancement effect, namely that the ion in an upper state of a transition can be pumped in the optically thick case by photons traversing the plasma at many different angles. As a consequence, the line intensity enhancement effect, and its apparent magnitude, is very dependent both on the geometry of the plasma and the orientation of the observer (i.e. the line-of-sight to the plasma by which it is viewed). Subsequently, Kerr et al. (2005) extended this work by using an analytical approach to consider several different geometries. They found that the detection of line intensity enhancement could, in theory, discriminate between different plasma geometries and the orientation of the observer. This would in principle provide a powerful new diagnostic for astrophysical sources, many of which are spatially unresolved. Observationally, searches have been undertaken for line intensity enhancements in stellar spectra, using the ratio of lines of differing optical depth. Rose et al. 
(2008) found some evidence for the effect in the I(15.01 \AA)/I(16.78 \AA) ratio of \ion{Fe}{17} for the active cool dwarf EV Lac, with a measured value of 2.50$\pm$0.50 from XMM-Newton satellite observations compared to a theoretical optically thin result of $\leq$\,1.93. More recently, Keenan et al. (2011) analysed Far Ultraviolet Spectroscopic Explorer (FUSE) satellite spectra of the active late-type stars $\epsilon$ Eri, II Peg and Prox Cen, and measured several I(1032 \AA)/I(1038 \AA) ratios of \ion{O}{6} that were larger (by up to 30\%) than the optically thin value of 2.0. Although we are confident that the above detections are secure, they are very limited in number, and additionally are restricted to spatially unresolved (distant stellar) objects. In the present paper we therefore extend the work to search for \ion{O}{6} line intensity enhancements in a spatially resolved source, namely the Sun, and also build on our previous theoretical research for \ion{O}{6} (Keenan et al. 2011) to calculate \ion{O}{6} models for cylindrical as well as slab and spherical geometries. | We first note from Figure 1 that the ratios measured above the limb of the solar disk rapidly become much larger than the value of 2.0 predicted for an optically thin plasma in coronal steady-state. However, it is well established that the \ion{O}{6} ratio can show very large values (up to $\sim$\,4) in spectra obtained for solar regions that lie above the surface, where the electron density is low (see, for example, Nakagawa 2008). As a consequence, processes other than collisional excitation can make a significant contribution to the \ion{O}{6} line emission, including the resonant scattering of chromospheric \ion{O}{6} radiation and/or the absorption and subsequent re-emission of Doppler-shifted \ion{C}{2} 1036.3 and 1037.0 \AA\ photons by \ion{O}{6} 1038 \AA\ (Kohl \& Withbroe 1982). Hence we do not consider further any observations made above the solar limb, and focus solely on those on the disk, where the \ion{O}{6} line emission will be dominated by the high electron density collisional excitation component, and therefore should have an optically thin I(1032)/I(1038) intensity ratio of 2.0. For the ratio measurements on the solar disk, several show values significantly greater than 2.0, even allowing for observational uncertainties, supporting the detection of line intensity enhancement due to opacity in our dataset. Further support comes from a longitudinal analysis of the ratios in Figure 1. In such an analysis, we investigate if the changes in the observed ratios from one position to the next within a region are correlated --- i.e. we are witnessing real changes in the ratio due to a variation in position --- or are they simply random in nature. One would expect some correlation between ratio value and position, as the angle of observation $\theta$ and/or column density changes while moving from one location to another (Figures 3 and 4). The longitudinal analysis utilised a linear mixed effects model (Laird \& Ware 1982) to investigate the variation of line ratio with position within each region. We found that the within region covariance was highly significant, with a p-value of $<$\,0.0001. The p-value is the probability that, within a region, the correlation of line ratios is zero. Hence our analysis indicates that the probability of this is $<$\,0.01\%, providing strong evidence to support the existence of such a correlation between the line ratios. 
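A longitudinal test of this kind can be set up in a few lines; the sketch below uses statsmodels' linear mixed effects implementation, with hypothetical column names (ratio, position, region) and data file, since the actual SUMER data layout is not given here:
\begin{verbatim}
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per slit position, with the measured
# line ratio, its position index, and the region it belongs to.
df = pd.read_csv("sumer_ratios.csv")
model = smf.mixedlm("ratio ~ position", df, groups=df["region"])
result = model.fit()
print(result.summary())  # tests the within-region covariance structure
\end{verbatim}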
A comparison of the observed ratios in Figure 1 with the theoretical results in Figures 3 and 4 provides some useful information on the emitting plasma. We note that the largest observed ratio is R = 2.24$\pm$0.02, in exact agreement with the maximum theoretical value for the slab geometry in Figure 3 of R = 2.24, providing observational support for the accuracy of the calculations. However, it also indicates that the emitting plasma must be slab-like in nature, as the maximum theoretical ratio for the cylindrical geometry in Figure 4 is only R = 2.10. Such a slab geometry might be expected for \ion{O}{6} emission, which arises from the upper transition region, being formed at a temperature of 10$^{5.5}$ K in ionization equilibrium (Bryans et al. 2009). The smallest measured ratio, R = 1.51$\pm$0.01 near the solar limb, is also in good agreement with that predicted for the slab geometry in Figure 3, namely R = 1.48, while for the cylinder the lowest theoretical value is R = 1.41. Once again, this not only provides observational support for the theory but is consistent with what would be expected for a slab-like emission layer. At the limb, the line-of-sight to the plasma would be at a large angle $\theta$ to the perpendicular to a slab structure near the solar surface. For such large angles, the I(1032)/I(1038) intensity ratios are predicted to have their smallest values (Figure 3). By contrast, for a cylindrical geometry the line-of-sight could be at any angle to the cylinder surface (Figure 4), depending on the orientation of the cylinder to the observer. Both the largest and smallest ratio values in Figure 3 are predicted to occur at column densities n$_{e}${\em l} (where n$_{e}$ is electron density and {\em l} is pathlength) of around 10$^{17}$--10$^{17.5}$ cm$^{-2}$. Doschek et al. (1998) derived quiet Sun electron densities of n$_{e}$ $\simeq$ 10$^{9.7}$ cm$^{-3}$ from \ion{O}{5} diagnostic emission lines, which are formed at T$_{e}$ = 10$^{5.4}$ K (Bryans et al. 2009), similar to that for \ion{O}{6} (10$^{5.5}$ K). Hence the \ion{O}{5} density should reflect that in the \ion{O}{6} emitting region, which the solar atmospheric model of Avrett \&\ Loeser (2008) indicates has a thickness of around 500 km. Combining these plasma parameters yields a column density for the \ion{O}{6} region of n$_{e}${\em l} $\simeq$ 10$^{17.4}$ cm$^{-2}$, consistent with that predicted from Figure 3 to achieve values of the I(1032)/I(1038) ratio both significantly larger and smaller than the optically thin result of 2.0. However, we stress that our models do not assume constant n$_{e}$ nor {\em l}, with the theoretical line ratios only dependent on the product n$_{e}${\em l} (or more precisely $\tau_0$), and hence these parameters can (and indeed very likely do) vary between observations. A referee notes that calculations which include full geometry-dependent radiative transfer in realistic solar surface models (Wood \& Raymond 2000) indicate that the brightness of a low density region adjacent to a bright one can be significantly enhanced by scattered photons. If so, then one would expect an anticorrelation between values of I(1032)/I(1038) and the ratio of the local \ion{O}{6} intensity to that of the surrounding region, as positions with small local/surrounding intensity ratios (i.e. low brightness region adjacent to a brighter one) will show increased scattering and hence larger I(1032)/I(1038) ratios. 
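The quoted column density follows directly from the numbers above; a short check (not from the paper) is:
\begin{verbatim}
import math
n_e = 10**9.7    # cm^-3, electron density from the O V diagnostics
l = 500e5        # cm, ~500 km layer thickness (Avrett & Loeser 2008)
print(f"n_e * l = 10^{math.log10(n_e*l):.1f} cm^-2")  # -> 10^17.4
\end{verbatim}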
However, comparisons of I(1032)/I(1038) measurements with brightness ratios derived using 7 $\times$ 7 pixel and 7 $\times$ 20 pixel areas reveal no such anticorrelation, indicating that the scattering process is unlikely to be responsible for the observed \ion{O}{6} line enhancements. We also note that if another plasma process were responsible for the apparent line enhancement effect, then the excellent agreement between the present theory and SUMER measurements would be a major coincidence. In summary, we have (to our knowledge) provided the first definitive detection of intensity enhancement in solar transition region emission lines due to the effects of opacity. This detection not only confirms the presence of an interesting plasma process, but also illustrates how such observations can provide information on the physical conditions and geometry of the emitting plasma. The dependence of the line enhancement on plasma geometry is particularly important, as it may provide a way to diagnose, at least to some extent, the shape of an astrophysical source, most of which are spatially unresolved. For the future, it would be interesting to extend the work to obtain time-series spectra of a solar feature (e.g. active region) as it moves across the solar surface from the limb to disk center due to rotation, hence changing its orientation with respect to an observer and allowing a 3-dimensional map of the feature to be developed for comparison with theory. Additional observations of stellar and other potential time-varying astrophysical sources, once again obtained over significant amounts of time (e.g. a stellar rotation period), would also be of use to assess what geometrical information on the emitting plasma could be inferred from the line enhancement technique. On the theoretical side, we note that our opacity calculations assume a static solar atmosphere, when in reality it is highly dynamic. Hence, as noted by a referee, it would be useful if opacity calculations for \ion{O}{6} could be incorporated into a complex stellar atmosphere simulation code such as Bifrost (Gudiksen et al. 2011), which would allow a more realistic comparison of theoretical results with (dynamic) solar and stellar observations. In terms of our own theoretical work, our next steps will include more detail of surface variation and also the possibility of scattering involving different regions of the plasma. | 14 | 3 | 1403.1470 |
1403 | 1403.4852_arXiv.txt | We present up-to-date cosmological bounds on the sum of active neutrino masses as well as on extended cosmological scenarios with additional thermal relics, such as thermal axions or sterile neutrino species. Our analyses consider all the cosmological data available at the beginning of the year 2014, including the very recent and most precise Baryon Acoustic Oscillation (BAO) measurements from the Baryon Oscillation Spectroscopic Survey. In the minimal three active neutrino scenario, we find $\sum m_\nu < 0.22$~eV at $95\%$~CL from the combination of CMB, BAO and Hubble Space Telescope measurements of the Hubble constant. A non zero value for the sum of the three active neutrino masses of $\sim 0.3$~eV is significantly favoured at more than $3$ standard deviations when adding the constraints on $\sigma_8$ and $\Omega_m$ from the Planck Cluster catalog on galaxy number counts. This preference for non zero thermal relic masses disappears almost completely in both the thermal axion and massive sterile neutrino schemes. Extra light species contribute to the effective number of relativistic degrees of freedom, parameterised via $\neff$. We found that when the recent detection of B mode polarization from the BICEP2 experiment is considered, an analysis of the combined CMB data in the framework of LCDM+r models gives $\neff=4.00\pm0.41$, suggesting the presence of an extra relativistic relic at more than $95 \%$ c.l. from CMB-only data. | In standard cosmology, hot thermal relics are identified with the three light, active neutrino flavours of the Standard Model of elementary particles. The masses of these three neutrino states have an impact on the different cosmological observables, see Refs.~\cite{sergio,sergio2} for a detailed description. Traditionally, the largest effect caused by neutrino masses on the Cosmic Microwave Background (CMB) anisotropies is via the \emph{Early Integrated Sachs Wolfe effect (ISW)}. Light active neutrino species may turn non-relativistic close to the decoupling period, affecting the gravitational potentials and leaving a signature which turns out to be maximal around the first acoustic oscillation peak in the photon temperature anisotropy spectrum. More recently, the Planck satellite CMB data~\cite{planck} has opened the window to tackle the neutrino mass via gravitational lensing measurements: neutrino masses are expected to leave an imprint on the lensing potential (due to the higher expansion rate) at scales smaller than the horizon when neutrinos become non-relativistic~\cite{lensingnu}. However, the largest effect of neutrino masses on the several cosmological observables comes from the suppression of galaxy clustering at small scales. Neutrinos, being hot thermal relics, possess large velocity dispersions. Consequently, the non-relativistic neutrino overdensities will only cluster at wavelengths larger than their free streaming scale, reducing the growth of matter density fluctuations at small scales, see e.g.\ Refs.~\cite{Reid:2009nq,Hamann:2010pw,dePutter:2012sh,Giusarma:2012ph,Zhao:2012xw,Hinshaw:2012fq,Hou:2012xq, Sievers:2013wk,Archidiacono:2013lva,Giusarma:2013pmn,Archidiacono:2013fha,Riemer-Sorensen:2013jsa,Hu:2014qma}. Non-degenerate neutrinos have different free streaming scales and in principle, with perfect measurements of the matter power spectrum, the individual values of the neutrino masses could be identified. In practice, the former is an extremely challenging task.
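As a quick point of reference (a textbook relation, not quoted in this row), the summed mass maps onto the present-day neutrino energy density through $\Omega_\nu h^2 = \sum m_\nu/93.14\,{\rm eV}$:
\begin{verbatim}
# Standard conversion between summed neutrino mass and density.
for sum_mnu in (0.06, 0.22, 0.30):               # eV
    print(sum_mnu, "eV -> Omega_nu h^2 =",
          round(sum_mnu / 93.14, 5))
\end{verbatim}
For the $0.22$ eV bound quoted above this gives $\Omega_\nu h^2 \simeq 2.4\times 10^{-3}$.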
Cosmological measurements are, for practical purposes, only sensitive to the total neutrino mass, i.e. to the sum of the three active neutrino masses. CMB measurements from the Planck satellite, including the lensing likelihood and low-$\ell$ polarization measurements from WMAP 9-year data~\cite{Bennett:2012fp}, provide a limit on the sum of the three active neutrino masses of $\sum m_\nu<1.11$~eV at $95\%$~CL. When a prior on the Hubble constant $H_0$ from the Hubble Space Telescope~\cite{Riess:2011yx} is added in the analysis, the constraint is strongly tightened, being $\sum m_\nu<0.21$~eV at $95\%$~CL, due to the huge existing degeneracy between $H_0$ and $\sum m_\nu$, see Ref.~\cite{Giusarma:2012ph}. The addition of Baryon Acoustic Oscillation (BAO) measurements from the Sloan Digital Sky Survey (SDSS)-II Data Release 7~\cite{dr71,dr72}, from the WiggleZ survey~\cite{wigglez}, from the Baryon Oscillation Spectroscopic Survey (BOSS)~\cite{Dawson:2012va}, one of the four surveys of SDSS-III~\cite{Eisenstein:2011sa} Data Release 9~\cite{anderson}, and from 6dF~\cite{6df} to Planck CMB measurements also significantly improves the neutrino mass constraints, leading to $\sum m_\nu<0.26$~eV at $95\%$~CL (see also the recent work of \cite{dePutter:2014hza}). However, the former bounds are obtained assuming that neutrinos are the only hot thermal relic component in the universe. The existence of extra hot relic components, such as sterile neutrino species and/or thermal axions, will change the cosmological neutrino mass constraints, see Refs.~\cite{Hamann:2010bk,Giusarma:2011ex,Giusarma:2011zq,Hamann:2011ge,Giusarma:2012ph,Archidiacono:2013lva,Archidiacono:2013fha,Melchiorri:2007cd,Hannestad:2007dd,Hannestad:2008js,Hannestad:2010yi,Archidiacono:2013cha}. Massless, sterile neutrino-like particles arise naturally in the context of models which contain a dark radiation sector that decouples from the Standard Model. A canonical example is given by asymmetric dark matter models, in which the extra radiation degrees of freedom are produced by the annihilations of the thermal dark matter component~\cite{Blennow:2012de}, see also Refs.~\cite{Diamanti:2012tg,Franca:2013zxa} for extended weakly-interacting massive particle models. On the other hand, extra massive, light sterile neutrino species, whose existence is not forbidden by any fundamental symmetry in nature, may help in resolving the so-called neutrino oscillation anomalies~\cite{Abazajian:2012ys,Kopp:2013vaa}, see also Refs.~\cite{Melchiorri:2008gq,Archidiacono:2012ri,Archidiacono:2013xxa,Mirizzi:2013kva,Valentino:2013wha} for recent results on the preferred sterile neutrino masses and abundances considering both cosmological and neutrino oscillation constraints. Another candidate is the thermal axion~\cite{PecceiQuinn}, which constitutes the most elegant solution to the strong CP problem, i.e. why CP is a respected symmetry of Quantum Chromodynamics (QCD) despite the existence of a natural, four dimensional, Lorentz and gauge invariant operator which badly violates CP. Axions are the pseudo-Nambu-Goldstone bosons associated to a new global $U(1)_{PQ}$ symmetry, which is spontaneously broken at an energy scale $f_a$. The axion mass is inversely proportional to the axion coupling constant $f_{a}$: \bea m_a = \frac{f_\pi m_\pi}{ f_a } \frac{\sqrt{R}}{1 + R}= 0.6\ {\rm eV}\ \frac{10^7\, {\rm GeV}}{f_a}~, \label{eq:massaxion} \eea where $R=0.553 \pm 0.043 $ is the up-to-down quark masses ratio and $f_\pi = 93$ MeV is the pion decay constant.
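Plugging numbers into Eq.~(\ref{eq:massaxion}) confirms the quoted normalization; in this sketch $m_\pi = 135$ MeV is an assumed input, since the row quotes only $R$ and $f_\pi$:
\begin{verbatim}
import math
R, f_pi, m_pi = 0.553, 0.093, 0.135   # GeV units for f_pi and m_pi
f_a = 1e7                             # GeV
m_a = (f_pi * m_pi / f_a) * math.sqrt(R) / (1.0 + R)   # GeV
print(m_a * 1e9, "eV")                # -> ~0.60 eV for f_a = 1e7 GeV
\end{verbatim}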
Axions may be copiously produced in the early universe via thermal or non-thermal processes, therefore providing, in the thermal case, a possible hot relic candidate to be considered together with the standard relic neutrino background. Both extra sterile neutrino species and axions have an associated free streaming scale, reducing the growth of matter fluctuations at small scales. Indeed, it has been noticed by several authors~\cite{Hamann:2013iba,Wyman:2013lza} that the inclusion of Planck galaxy cluster number counts data~\cite{Ade:2013lmv} in the cosmological data analyses favours a non zero value for the sterile neutrino mass: the free streaming sterile neutrino nature will reduce the matter power at small (i.e. cluster) scales but will leave unaffected the scales probed by the CMB. A similar tendency for $\sum m_\nu>0$ appears, albeit to a smaller extent~\cite{Hamann:2013iba}, when considering CFHTLens weak lensing constraints on the clustering matter amplitude~\cite{Heymans:2013fya}. Extra dark radiation or light species such as neutrinos and axions will also contribute to the effective number of relativistic degrees of freedom $\neff$, defined as \begin{equation} \rho_{rad} = \left[1 + \frac{7}{8} \left(\frac{4}{11}\right)^{4/3}\neff\right]\rho_{\gamma} \, , \end{equation} where $\rho_{\gamma}$ is the present energy density of the CMB. The canonical value $\neff=3.046$ corresponds to the three active neutrino contribution. If there are extra light species at the Big Bang Nucleosynthesis (BBN) epoch, the expansion rate of the universe will be higher, leading to a higher freeze out temperature for the weak interactions, which translates into a higher primordial helium fraction. The most recent measurements of deuterium~\cite{Cooke:2013cba} and helium~\cite{Izotov:2013waa} light element abundances provide the constraint $\neff=3.50\pm 0.20$~\cite{Cooke:2013cba}. It is the aim of this paper to analyse the constraints on the three active neutrino masses, extending the analyses to possible scenarios with additional hot thermal relics, such as sterile neutrino species or axions, using the cosmological data available at the beginning of the year 2014. The data combination used here also includes the recent and most precise distance BAO constraints to date from the BOSS Data Release 11 (DR11) results~\cite{Anderson:2013vga}, see also Refs.~\cite{Samushia:2013yga,Sanchez:2013tga,Chuang:2013wga}. The structure of the paper is as follows. Section~\ref{sec:params} describes the different cosmological scenarios with hot thermal relics explored here and the data used in our numerical analyses. In Sec.~\ref{sec:results} we present the current limits using the available cosmological data in the three active neutrino massive scenario, and in this same scheme but enlarging the hot relic component firstly with thermal axions, secondly with additional dark radiation (which could be represented, for instance, by massless sterile neutrino species) and finally, with massive sterile neutrino species. We draw our conclusions in Sec.~\ref{sec:concl}. | Standard cosmology includes hot thermal relics, which refer to the three light, active neutrino flavours of the Standard Model of elementary particles.
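For reference, the $\neff$ definition above fixes the total radiation density relative to photons; a minimal numeric check is:
\begin{verbatim}
# rho_rad / rho_gamma = 1 + (7/8) * (4/11)**(4/3) * Neff
coeff = (7.0/8.0) * (4.0/11.0)**(4.0/3.0)   # ~0.2271
for Neff in (3.046, 3.50, 4.00):
    print(Neff, "->", round(1.0 + coeff*Neff, 4))
\end{verbatim}
so moving from the canonical $\neff=3.046$ to the $\neff=4.00$ suggested by the BICEP2 combination raises the radiation density by about $13\%$.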
The largest effect of neutrino masses on the different cosmological observables arises from their free streaming nature: the non-relativistic neutrino overdensities will contribute to clustering only at scales larger than their free streaming scale, suppressing the growth of matter density fluctuations at small scales. CMB measurements from the Planck satellite, including the lensing likelihood, low-$\ell$ polarization measurements from WMAP 9-year data and Baryon Acoustic Oscillation (BAO) measurements from a number of surveys lead to the bound $\sum m_\nu<0.26$~eV at $95\%$~CL. However, the existence of extra hot relic components, such as dark radiation relics, sterile neutrino species and/or thermal axions, will change the cosmological neutrino mass constraints. Dark radiation (i.e. purely massless species) may arise in several extensions of the Standard Model of elementary particles, as, for instance, in asymmetric dark matter models. On the other hand, the existence of extra massive species is well motivated by either the so-called neutrino oscillation anomalies (in the case of sterile neutrino species) or by the strong CP problem (in the case of thermal axions). Both extra sterile neutrino species and axions have an associated free streaming scale, reducing the growth of matter fluctuations at small scales. These extra species will also contribute to the effective number of relativistic degrees of freedom $\neff$, with $\neff=3.046$ being the standard value corresponding to the three active neutrino contribution. The existence of extra light species at the Big Bang Nucleosynthesis (BBN) epoch modifies the light element abundances, especially the primordial helium mass fraction. We have presented here the constraints on the masses of the different thermal relics in different scenarios using the cosmological data available at the beginning of the year 2014. The tightest limit we find in the minimal three active massive neutrino scenario is $\sum m_\nu < 0.22$~eV at $95\%$~CL from the combination of CMB data, BAO data and HST measurements of the Hubble constant. The addition of the constraints on $\sigma_8$ and $\Omega_m$ from the CFHTLens survey displaces the bounds on the neutrino mass to higher values. However, the constraint on $\sigma_8$ and $\Omega_m$ from the Planck-SZ cluster catalog on galaxy number counts favours a non zero value for the sum of the three active neutrino masses of $\sim 0.3$~eV at $4\sigma$, see also Refs.~\cite{Hamann:2013iba,Wyman:2013lza}. When considering simultaneously thermal axions and active massive neutrino species, and including CMB, BOSS BAO DR11, additional BAO measurements, WiggleZ power spectrum (full shape) information, the $H_0$ HST prior and BBN light element abundances, the $95\%$~CL bounds are $\sum m_\nu <0.25$~eV and $m_a<0.57$~eV ($\sum m_\nu <0.21$~eV and $m_a<0.61$~eV) using recent (previous) deuterium estimates from \cite{Cooke:2013cba} (\cite{fabio}) and helium constraints from Ref.~\cite{Izotov:2013waa}.
Neither the addition of weak lensing constraints on the $\sigma_8-\Omega_m$ relationship from the CFHTLens experiment nor that of the Planck SZ cluster number counts favours non-zero thermal relic masses, except for a few cases in which the Planck SZ cluster number counts information is considered together with the HST $H_0$ prior (or SNIa luminosity distances) and all the BAO measurements. Only in this case there exists a mild $\sim 2.2\sigma$ preference for a non zero axion mass of $0.6$~eV. Concerning neutrino masses, there exists evidence for a neutrino mass of $\sim 0.2$~eV at the $\sim 3\sigma$ level exclusively for the case in which CMB data is combined with BOSS BAO DR11 measurements and full-shape power spectrum information from the WiggleZ galaxy survey. In the case in which we consider both massive neutrinos and $\Delta \neff$ dark radiation species, the neutrino mass bounds are less stringent than in the standard three massive neutrino case due to the large degeneracy between $\sum m_\nu$ and $\neff$, and we find $\sum m_\nu < 0.31$~eV and $\neff=3.45_{-0.54}^{+0.59}$ at $95\%$~CL from the combination of CMB data and BOSS DR11 BAO measurements. Contrary to the massless dark radiation case, but similarly to the thermal axion scenario, the addition of the constraints on the $\sigma_8$ and $\Omega_m$ cosmological parameters from the Planck-SZ cluster catalog on galaxy number counts does not lead to a non zero value for the neutrino masses. After considering the addition of Planck SZ clusters and CFHTLens information to CMB data, BOSS DR11 BAO, additional BAO measurements and the HST $H_0$ prior, the $95\%$~CL bounds on the active and the sterile neutrino parameters are $\sum m_\nu < 0.39$~eV, $m^\textrm{eff}_s<0.59$~eV and $\neff<4.01$. Big Bang Nucleosynthesis constraints reduce both the mean value and the errors of $\neff$ significantly. After the addition of the most recent measurements of deuterium~\cite{Cooke:2013cba} and helium~\cite{Izotov:2013waa}, and using the theoretically derived fitting functions of Ref.~\cite{fabio}, we find $\sum m_\nu < 0.24$~eV and $\neff=3.25_{-0.24}^{+0.25}$ at $95\%$~CL from the analysis of CMB data, WiggleZ power spectrum measurements and the HST $H_0$ prior, with no evidence for $\neff>3$. If previous estimates of the primordial deuterium abundances are used in the analysis~\cite{fabio}, there exists a $4 (2.5)\sigma$ preference for $\neff>3$, with (without) HST data included in the numerical analyses. If the additional sterile neutrino states are considered as massive species, a $\sim 3.5 \sigma$ preference for $\neff>3$ still appears when considering BBN measurements (with previous estimates of the deuterium abundances from Ref.~\cite{fabio}) and the HST prior on the Hubble constant. The $2.5-4\sigma$ preference for $\neff> 3$ always appears for both the massless and the massive extra hot relic scenarios when considering the theoretical fitting functions of Refs.~\cite{Steigman:2012ve,Cooke:2013cba}, independently of the deuterium measurements used in the analyses. Accurate measurements as well as sharp theoretical predictions of the primordial deuterium and helium light element abundances are therefore crucial to constrain the value of $\neff$. Finally, we have considered the recent B-mode polarization measurements made by the BICEP2 experiment.
Assuming that this detection is produced by a primordial tensor component, we have found that in an LCDM$+r$ scenario the presence of extra relativistic particles is significantly suggested by current Planck+WP+BICEP2 data, with $N_{eff}=4.00\pm0.41$ at $68 \%$ c.l. An extra relativistic component therefore solves the current tension between the Planck and BICEP2 experiments on the amplitude of tensor modes. | 14 | 3 | 1403.4852
1403 | 1403.4255_arXiv.txt | We observed the cluster CIZA J2242.8+5301 with the Arcminute Microkelvin Imager at $16$ GHz and present the first high radio-frequency detection of diffuse, non-thermal cluster emission. This cluster hosts a variety of bright, extended, steep-spectrum synchrotron-emitting radio sources, associated with the intra-cluster medium, called radio relics. Most notably, the northern, Mpc-wide, narrow relic provides strong evidence for diffusive shock acceleration in clusters. We detect a puzzling, flat-spectrum, diffuse extension of the southern relic, which is not visible in the lower radio-frequency maps. The northern radio relic is unequivocally detected and measures an integrated flux of $1.2\pm0.3$ mJy. While the low-frequency ($<2$ GHz) spectrum of the northern relic is well represented by a power-law, it clearly steepens towards $16$ GHz. This result is inconsistent with diffusive shock acceleration predictions of ageing plasma behind a uniform shock front. The steepening could be caused by an inhomogeneous medium with temperature/density gradients or by lower acceleration efficiencies of high energy electrons. Further modelling is necessary to explain the observed spectrum. | Radio relics are diffuse, strongly-polarised, Mpc-wide synchrotron objects found at the periphery of disturbed galaxy clusters \citep[e.g.][]{2001A&A...373..106F}. Relics are thought to trace large-scale, fast, outward-travelling shock fronts (Mach numbers up to $4$) induced by major mergers between massive clusters \citep{1998A&A...332..395E, 2002ASSL..272....1S, 2012A&ARv..20...54F}. These objects usually extend perpendicularly to the merger axis of their host cluster and display narrow transverse sizes, resulting from a spherical-cap-shaped region of diffuse emission seen side-on in projection \citep{2012A&ARv..20...54F}. Integrated radio spectral indices of elongated relics below $1.2$ GHz range between $-1.6<\alpha<-1.0$ ($F_{\nu}\propto\nu^{\alpha}$) and the spectra display no curvature up to $\sim2$ GHz \citep{2012A&ARv..20...54F}. \citet{1998A&A...332..395E} suggest relics are formed through the diffusive shock acceleration mechanism \citep[DSA; e.g., ][]{1983RPPh...46..973D}. In this scenario, intra-cluster-medium (ICM) particles are accelerated by shocks to relativistic speeds in the presence of $\mu$G level magnetic fields at the outskirts of clusters \citep[e.g.][]{2009A&A...503..707B, 2010A&A...513A..30B}. Due to low acceleration efficiencies, mildly-relativistic (rather than thermal) electrons likely cross the shock surface multiple times by diffusing back through the shock after each passage. These re-accelerated electrons then exhibit synchrotron radio emission. CIZA J2242.8+5301 \citep[`Sausage' cluster;][]{2007ApJ...662..224K, 2010Sci...330..347V} hosts a remarkable example of double, Mpc-wide, narrow radio relics. Twin relics are thought to form after a head-on collision of two roughly equal-mass clusters \citep{1999ApJ...518..603R}. The northern relic (RN) is bright ($0.15$ Jy at $1.4$ GHz) with an integrated spectral index between $153$ MHz and $2.3$ GHz of $\alpha_\mathrm{int}=-1.06\pm0.04$ \citep{2013A&A...555A.110S}. RN displays spectral index steepening and increasing curvature from the outer edge of the relic towards the inner edge, thought to be due to synchrotron and inverse Compton losses in the downstream area of a shock with an injection spectral index of $\sim-0.65$.
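The quoted injection index maps onto a shock Mach number through the standard test-particle DSA relations (textbook relations, not quoted in this row):
\begin{verbatim}
import math
alpha_inj = -0.65
s = 1 - 2*alpha_inj           # electron energy index, N(E) ~ E^-s
r = (s + 2) / (s - 1)         # shock compression ratio
M = math.sqrt(3*r / (4 - r))  # Mach number for a gamma = 5/3 gas
print(f"s = {s:.2f}, r = {r:.2f}, M = {M:.1f}")  # -> M ~ 3.8
\end{verbatim}
consistent with the Mach numbers of up to $\sim4$ mentioned above for merger shocks.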
The cluster contains a fainter counter-relic towards the south, a variety of diffuse patches of emission and a number of radio head-tail galaxies \citep{2013A&A...555A.110S}. Relics have been primarily studied at low radio frequencies ($<1.5$ GHz), making accurate determination of the injection, acceleration and loss mechanisms difficult. Most of the $\sim40$ radio relics with published spectra \citep{2012A&ARv..20...54F} have measurements up to $2.3$ GHz, while only two relics have spectra derived up to $5$ GHz \citep[Abell 521, 2163;][]{2008A&A...486..347G, 2001A&A...373..106F}. The scarcity of high radio-frequency observations of relics is caused by two factors: (i) the steep spectrum means that relics are significantly fainter at high frequencies; (ii) there are few radio telescopes with the required compact uv coverage needed to detect relics. To begin to address this, we performed exploratory observations at $16$ GHz with the Arcminute Microkelvin Imager \citep[AMI;][]{2008MNRAS.391.1545Z} of the `Sausage' cluster. AMI is the only cm-wavelength radio telescope with the required capabilities for detecting Mpc-wide, low-redshift, diffuse targets at sub-arcminute resolution. In this letter, using two different AMI configurations, we image the `Sausage' cluster at high ($40$ arcsec) and low ($3$ arcmin) resolutions. By combining the data with measurements from the Giant Metrewave Radio Telescope (GMRT) and the Westerbork Synthesis Radio Telescope (WSRT), we derive the RN spectrum over the widest frequency coverage ever performed for a radio relic (between $153$ MHz and $16$ GHz) and compare our results with predictions from spectral-ageing models. At the redshift of the `Sausage' cluster, $z=0.192$, $1$ arcmin corresponds to a scale of $0.191$~Mpc. All images are in the J2000 coordinate system. \vspace{-10pt} | High radio-frequency observations of steep-spectrum, diffuse, cluster emission have not previously been made owing to a lack of suitable instrumentation. We have observed the `Sausage' merging cluster at $16$ GHz at low ($3$ arcmin) and high ($40$ arcsec) resolution with the AMI array and we successfully detect diffuse radio relic emission for the first time at frequencies beyond $5$ GHz. Our main results are: \begin{itemize} \item The northern relic measures an integrated flux density of $1.2\pm0.3$ mJy ($6\sigma$ peak detection in a uniformly-weighted map). We investigate in detail its integrated spectrum and conclude there are clear signs of spectral steepening at high frequencies. If thermal electrons are accelerated, the steepening can be caused by a lower acceleration efficiency for the high-energy ($\gamma>3\times10^4$) electrons, a negative ICM density/temperature gradient across the source or turbulent downstream magnetic fields amplifying the emission of electrons in the cut-off regime. However, these scenarios are unlikely because of low-acceleration efficiencies at weak-Mach-number shocks. Further theoretical modelling is required. \item We also detect a peculiar, flat-spectrum ($\alpha_\mathrm{int}\approx-0.5$) patch of diffuse emission towards the south-east of the cluster, which cannot be explained by the CI model. \end{itemize} The surprising high-frequency spectral steepening results and flat-spectra presented here suggest that the simple CI model, which has been widely used in the literature to explain the formation of radio relics, needs to be revisited. 
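The angular-to-physical scale conversion quoted above is easy to verify; the sketch below assumes a flat LCDM cosmology with $H_0=70$ km/s/Mpc and $\Omega_m=0.3$ (parameter values assumed here, not quoted in the row):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
print(cosmo.kpc_proper_per_arcmin(0.192))
# ~191 kpc / arcmin, i.e. 1 arcmin ~ 0.191 Mpc at z = 0.192
\end{verbatim}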
Furthermore, there is a clear need for high-quality, spatially resolved radio observations of relics at cm and mm wavelengths. | 14 | 3 | 1403.4255
1403 | 1403.4288_arXiv.txt | The extent to which angular momentum transport in accretion discs is primarily local or non-local, and what determines this, is an important avenue of study for understanding accretion engines. Taking a step along this path, we analyze simulations of the magnetorotational instability (MRI) by calculating energy and stress power spectra in stratified isothermal shearing box simulations in several new ways. We divide our boxes into two regions, disc and corona, where the disc is the MRI unstable region and the corona is the magnetically dominated region. We calculate the fractional power in different quantities, including magnetic energy and Maxwell stresses, and find that they are dominated by contributions from the lowest wave numbers. This is even more dramatic for the corona than the disc, suggesting that transport in the corona region is dominated by larger structures than the disc. By calculating averaged power spectra in one direction of $k$ space at a time, we also show that the MRI turbulence is strongly anisotropic on large scales when analyzed by this method, but isotropic on small scales. Although the shearing box itself is meant to represent a local section of an accretion disc, the fact that the stress and energy are dominated by the largest scales highlights that the locality is not captured within the box. This helps to quantify the intuitive importance of global simulations for addressing the question of locality of transport, for which similar analyses can be performed. | Angular momentum transport in accretion discs has been a long standing topic of research (for a review, see e.g. \cite{2008bhad.book.....K}, \cite{2009AnRFM..41..283S}, \cite{2011ppcd.book..283K}, \cite{2012PhyS...86e8202B}, \cite{2013LRR....16....1A}, \cite{2013arXiv1304.4879B}). Rapid variability observed in systems like Active Galactic Nuclei (AGN) hints at some kind of an enhanced mechanism for transport caused by turbulence. \cite{1973A&A....24..337S} constructed a 1-D model of an accretion disc with an $\alpha$ parametrization for such an enhanced diffusion mechanism that they used to approximate the turbulent transport. Because the formalism is of practical value, results of possible hydrodynamic and magnetohydrodynamic transport mechanisms are often quantified in terms of $\alpha$, but understanding the various mechanisms and the validity of quantifying them in such a simple parameterization are both topics of active research. One limitation of a ``local'' formalism for transport is that these accretion engines also commonly have jets and coronae, which are large scale phenomena. The question of what determines the fraction of local vs. non-local transport for a general accretor is an engaging avenue of research. The magnetorotational instability (MRI) (\cite{1991ApJ...376..214B}, \cite{1998RvMP...70....1B}) has emerged as a plausible solution to the long standing angular momentum transport problem, at least for highly ionized discs. The radial-azimuthal stresses from shearing box simulations, when averaged over the whole box, first show a definitive exponential growth, and subsequently saturate in a fully developed turbulent state. In interpreting the results from simulations, it is necessary to be careful in distinguishing conceptual lessons that can be learned from specific results that may depend on the choice of initial $B$ fields, boundary conditions, domain size, and resolution.
In particular, the {\it WFIRST} microlensing survey, without any modification, can yield 4500-6500 KBOs down to $H_\vega=28.2$. The last magnitude of such a search requires algorithmic and/or hardware development to carry out the computations in a timely manner. However, the more restricted search to $H_\vega=27.1$ (with 4500-5000 KBOs) can be carried out by simple brute force searches using today's technology. Because the detections arise from a near-continuous time series over much less than a year, and centered at quadrature, the statistical characterization of the orbital parameters is best understood in a 6-D cartesian framework. The same framework allows one to directly map the expected (or, more generally, allowed) orbit space into a cartesian search space. In particular, I find that for {\it WFIRST}, the period errors scale as $\sigma(P)/P\sim 0.09\%\times 10^{0.4(H_\vega-H_{\rm break})}$, where $H_{\rm break}=25.1$ is the break in the luminosity function. Binary companions that are separated by a few pixels can be found down to about $H_\vega\sim 29$, regardless of the limit of the primary search. The limit is deeper because the search space is smaller, implying fewer noise spikes. These binaries can provide statistical mass information, or if followed up by additional observations, individual mass measurements. Binary companions with separations down to 0.1 pixels (11 mas) can be found for roughly equal (but not exactly equal) masses for primaries $H\leq 23$ from the offset between centers of mass and light, and for larger subpixel separations down to $H\sim 25$. Exactly (or very nearly) equal-mass binaries at sub-pixel separations can be detected from image elongation. Analogs to essentially all binaries currently being found ($R\la 24$, $\theta_c\ga 0.01^{\prime\prime}$) will be found by {\it WFIRST}, but it will also probe a huge parameter space of binary companions that has not yet been explored. A side benefit of the fact that microlensing searches are carried out in the most crowded fields (prior to image subtraction) is the high probability of occultations. On average, each KBO at the break will occult 0.4 stars with $H_*<21$ (so reliably detected in the deep drizzled image) and with at least $4\,\sigma$ detections. Over 1000 occultations of detected KBOs will enable measurement of the KBO albedo as functions of orbital properties and absolute magnitude. Finally, using the same techniques outlined in this paper, it should be possible to find roughly 100 KBOs using current and soon-to-be-initiated ground-based microlensing surveys. | 14 | 3 | 1403.4241 |
1403 | 1403.4241_arXiv.txt | I show that the {\it WFIRST} microlensing survey will enable detection and precision orbit determination of Kuiper Belt Objects (KBOs) down to $H_{\rm vega}=28.2$ over an effective area of $\sim 17\,\rm deg^2$. Typical fractional period errors will be $\sim 1.5\%\times 10^{0.4(H-28.2)}$ with similar errors in other parameters for roughly 5000 KBOs. Binary companions to detected KBOs can be detected to even fainter limits, $H_{\rm vega}=29$, corresponding to $R\sim 30.5$ and effective diameters $D\sim 7\,$km. For KBOs $H\sim 23$, binary companions can be found with separations down to 10 mas. This will provide an unprecedented probe of orbital resonance and KBO mass measurements. More than a thousand stellar occultations by KBOs can be combined to determine the mean size as a function of KBO magnitude down to $H\sim 25$. Current ground-based microlensing surveys can make a significant start on finding and characterizing KBOs using existing and soon-to-be-acquired data. | Kuiper Belt Objects (KBOs) provide an extraordinary probe of the origin and history of the Solar System. When Pluto was discovered by Clyde Tombaugh \citep{slipher30,slipher30b} and was then found to be in a 3:2 resonance with Neptune, it was hardly guessed that it was only the largest of a vast class of such objects. Subsequent discovery of KBOs in 2:1 resonance, in various kinematic and composition subclasses, of binary KBOs, and of a break in the size distribution at $R\sim 26.5$ \citep{bernstein04} have placed extremely detailed constraints on early Solar System evolution, even leading to radical conjectures like the idea that Uranus and Neptune originally formed much closer to the orbits of Jupiter and Saturn \citep{nice}. I show that the {\it WFIRST} microlensing survey will, without any adjustment, yield a KBO survey that is both substantially deeper and three orders of magnitude wider and more precise than existing deep KBO surveys based on {\it Hubble Space Telescope (HST)} data. | Space-based microlensing surveys are an extremely powerful probe of KBOs basically because microlensing motivates very high cadence observations over long time baselines and fairly wide fields that happen by chance to lie near the ecliptic. The very large number of images allows one to construct essentially noiseless (compared to the individual images) templates, and so construct essentially blank images from the ``crowded'' fields via image subtraction.
For a detailed convergence study and review of previous work of both local and global simulations see \cite{2011ApJ...738...84H}. While the MRI is commonly thought of as a source of local transport, an important result of stratified MRI simulations (e.g. local: \cite{2000ApJ...534..398M}, global: \cite{2010MNRAS.408..752P}) in this context is that MRI turbulent discs lead to formation of a laminar coronal region where magnetic fields dominate thermal pressure at a few scale heights (typically 2 scale heights) above from the mid-plane thought to be the corona. The transport properties are expected to be different in the corona and the formation of such hints at the emergence of non-local processes. The main focus of our paper is to quantitatively assess the locality and anisotropy of MRI generated turbulence by calculating energy and stress spectra for a set of stratified shearing box simulations with different domain sizes and resolution. We are also explore whether the scale of the dominant transport structures are strongly affected by the numerical setup and the extent to which convergence among our simulations emerges. We describe our numerical setup in section 2. In section 3, we discuss our results based on spectral calculations. We synthesize the interpretation of our results with those of previous MRI literature in section 4. We conclude in section 5. | Using energy and stress power spectra and fractional power as a function of wave number, we have demonstrated that the MRI leads to predominantly anisotropic and non-local turbulent structures at least within a shearing box. These findings are broadly consistent with previous MRI studies that employ correlation functions. However, our computation of the fractional power spectrum in energy and stress helps assess the question of locality more directly than previous approaches. We find that not only that MRI leads to non-local structures but that these structures dominate transport. Because our simulations were conducted for a shearing box, we cannot assess how nonlocal the transport would be in a global simulation from the MRI but the method we have used to assess this would be applicable. Our findings based on study of a range of domain sizes do suggest that the anisotropy and non-locality would likely persist on non-local scales within a global disc. An important physical implication of this result is to highlight that the transport found in MRI simulations may not be congruent with the local, isotropic and radial-only transport model of \cite{1973A&A....24..337S}, further motivating the opportunities to improve the basic semi-analytic framework for modeling accretion discs. | 14 | 3 | 1403.4288 |
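
The fractional-power diagnostic described above (the share of a quantity's spectral power contained below a given wave number) is straightforward to compute from gridded simulation output. The Python sketch below is a minimal, self-contained illustration of the idea using a shell-summed FFT power spectrum; the random field, grid size, and shell binning are placeholders, not the paper's actual data or exact definitions.

import numpy as np

def cumulative_power_fraction(field):
    """Cumulative fraction of |FFT|^2 power at or below each integer wave number |k|."""
    fk = np.fft.fftn(field)
    power = np.abs(fk) ** 2
    modes = [np.fft.fftfreq(n, d=1.0 / n) for n in field.shape]  # integer mode numbers
    kx, ky, kz = np.meshgrid(*modes, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).round().astype(int)
    spectrum = np.bincount(kmag.ravel(), weights=power.ravel())
    return np.cumsum(spectrum) / spectrum.sum()

rng = np.random.default_rng(0)
demo = rng.standard_normal((32, 32, 32))  # stand-in for e.g. the Maxwell stress -Bx*By
frac = cumulative_power_fraction(demo)
print("fraction of power at |k| <= 4:", round(float(frac[4]), 3))
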
1403 | 1403.1652_arXiv.txt | We describe the design, construction, and initial validation of the variable-delay polarization modulator (VPM) designed for the PIPER cosmic microwave background polarimeter. The VPM modulates between linear and circular polarization by introducing a variable phase delay between orthogonal linear polarizations. Each VPM has a diameter of 39 cm and is engineered to operate in a cryogenic environment (1.5 K). We describe the mechanical design and performance of the kinematic double-blade flexure and drive mechanism along with the construction of the high precision wire grid polarizers. | A Variable-delay Polarization Modulator (VPM) changes the state of polarization of electromagnetic radiation via the introduction of a variable phase delay between two orthogonal polarization components.\cite{Chuss06} This leads to a transfer function in which an output Stokes parameter $U^\prime$, which is defined at a $45^\circ$ angle with respect to the polarization separation basis (defined by the VPM grid), is modulated according to \begin{align} U^\prime=U\cos{\phi}+V\sin{\phi}. \end{align} Here, $U$ is the linear Stokes parameter that is the difference between the linear polarization components oriented at $\pm45^\circ$ with respect to the wires, $V$ is the Stokes parameter corresponding to circular polarization, and $\phi$ is the introduced phase delay between the two orthogonal polarizations. In recent work, VPMs have been realized by the arrangement of a wire grid polarizer in front of and parallel to a moving mirror \cite{Krejny08} (see Fig.~\ref{fig:dia}). This phase delay is a function of the incidence angle, $\theta$, and the grid-mirror separation, $d$. In the limit where the wavelength is much larger than the wire, the phase delay can be approximated using the geometric path difference, \begin{equation} \phi\approx\frac{4\pi d}{\lambda}\cos{\theta}. \end{equation} \begin{figure} \includegraphics[width=2.5in]{VPM.pdf} \caption{\label{fig:dia}Overview of the VPM concept adapted from Chuss et al.\cite{Chuss12a} The geometric path difference between the two polarizations is highlighted by the thick solid lines.} \end{figure} For greater fidelity, a transmission line model has been used to connect the grid-mirror separation with the introduced phase delay as a function of the wire grid geometry.\cite{Chuss12a} In addition, metrology techniques for realizing high precision measurements of the VPM response have been developed.\cite{Eimer11} The Primordial Inflationary Polarization ExploreR (PIPER)\cite{Chuss10,Kogut12} and the Cosmology Large Angular Scale Surveyor (CLASS)\cite{Eimer12} will both employ VPMs as the first element of their optics in their measurement of the polarization of the cosmic microwave background on large angular scales. The motivation for using the VPM is twofold. First, the VPM can be made large enough to be placed at the primary aperture of a relevant telescope. This allows sky polarization to be modulated while leaving any subsequent instrumental contribution to the polarization unmodulated, thereby mitigating potential mixing between temperature and polarization. In addition, the small linear distances (a fraction of a wavelength) required for the grid-mirror separation variation allow a rapid ($\sim$few Hz) modulation in polarization that moves the signal out of the $1/f$ noise of the environment.
For cryogenic applications, devices employing small linear motions have potential advantages in both reliability and power dissipation over those that utilize large angular motions. The PIPER flexures were designed for more than $3\times10^6$ cycles, sufficient to survive 8 flights at 3 Hz operation. It is anticipated that similar designs could achieve much longer lifetimes. Superconducting bearings for wave plates \cite{Klein11} are a good solution for low-friction rotational operation; however, parasitic heat removal is a challenge for such non-contacting solutions. This paper describes the construction and initial validation of the PIPER VPMs. PIPER is a balloon-borne cosmic microwave background polarimeter that will operate at 4 frequencies between 200 and 600 GHz in separate flights. The instrument is enclosed in a large bucket dewar, and each of the elements of the two telescopes is cooled to 1.5 K by a combination of evaporating liquid helium and superfluid pumps.\cite{Singal11} Cooling the telescope reduces background radiation and mitigates the coupling to variable grid emission. Because of this, the VPM must be engineered to work at 1.5 K. The details of the optical design \cite{Eimer10} and detectors \cite{Benford10} for PIPER are described in other papers. This work focuses on the design, construction and initial validation of the VPMs. | We have designed, constructed, and validated a cryogenic variable-delay polarization modulator (VPM) for the PIPER suborbital cosmic microwave background polarimeter. The achieved specifications for the VPM are shown in Table~\ref{tab:vpmsum}. \begin{table}[htbp] \centering \begin{tabular}{@{} lcc @{}} \hline Property & Value & Units \\ \hline Maximum mirror throw & 1.0 & mm \\ Mirror tilt at maximum throw & 5 & arc seconds \\ Clear aperture & 39 & cm \\ Wire diameter & 40 & $\mu$m \\ Wire separation & 117.0 & $\mu$m \\ Wire separation error & 5.7 & $\mu$m \\ Grid flatness & 8.7 & $\mu$m \\ Min. wire resonance & 190 & Hz \\ Polarization efficiency & $>99$ & \% \\ \hline \end{tabular} \caption{The parameters of the VPM for PIPER.} \label{tab:vpmsum} \end{table} | 14 | 3 | 1403.1652
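
To make the modulation concrete: the transfer function and geometric phase delay quoted in the introduction above combine into a few lines of code. The Python sketch below is purely illustrative (the numbers are invented, and the paper's full treatment replaces the long-wavelength approximation with a transmission line model of the grid).

import numpy as np

def vpm_phase(d, wavelength, theta):
    """Geometric phase delay phi = (4*pi*d/lambda)*cos(theta) between polarizations."""
    return 4.0 * np.pi * d / wavelength * np.cos(theta)

def modulated_u(u, v, d, wavelength, theta):
    """Output Stokes U' = U cos(phi) + V sin(phi) for grid-mirror separation d."""
    phi = vpm_phase(d, wavelength, theta)
    return u * np.cos(phi) + v * np.sin(phi)

# e.g. 200 GHz (wavelength 1.5 mm), a 0.3 mm grid-mirror gap, 20 deg incidence
print(modulated_u(u=1.0, v=0.0, d=0.3e-3, wavelength=1.5e-3,
                  theta=np.radians(20.0)))
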
1403 | 1403.6244_arXiv.txt | We analyzed the temporal and spectral properties, focusing on the short bursts, of three anomalous X-ray pulsars (AXPs) and soft gamma repeaters (SGRs): \object{SGR 1806-20}, \object{1E 1048-5937} and \object{SGR 0501+4516}. Using data from \textit{XMM-Newton}, we located the short bursts with the Bayesian blocks algorithm. The short-burst duration distributions of the three sources were fitted by two lognormal functions. The spectra of the shorter bursts ($< 0.2~\rm s$) and longer bursts ($\geq 0.2~\rm s$) of \object{SGR 0501+4516} can be well fitted with a two-blackbody-component model or with an optically thin thermal bremsstrahlung model. We also found a positive correlation between the burst luminosity and the persistent luminosity, with a power law index $\gamma = 1.23 \pm 0.18$. The energy ratio of the persistent emission to the time-averaged short-burst emission is in the range $10 - 10^3$, comparable to the case of Type \uppercase\expandafter{\romannumeral1} X-ray bursts. | Anomalous X-ray pulsars (AXPs) and soft gamma repeaters (SGRs) are isolated neutron stars, now regarded as ``magnetars". As X-ray pulsars, their rotation periods vary from $\sim 2$ to $\sim 10\rm ~s$, while their spin-down rates cover $ 10^{-13} - 10^{-11} \rm ~ s~s^{-1}$ \citep{Mereghetti2008}. Except for some unusual magnetars (e.g. \object{SGR 0418+5729}, \citealt{Rea2013}), both of these parameters are larger than in normal radio pulsars, which results in an ultra-strong magnetic field exceeding the quantum critical value ($B_{\rm{QED}} = 4.4 \times 10^{13} ~ \rm{G}$) in AXPs/SGRs. Here it is assumed that AXPs/SGRs are braked by magnetic dipoles in a vacuum. During outburst, their persistent soft X-ray luminosity ($\sim 10^{34}-10^{36}~\rm erg~s^{-1}$) usually exceeds their rotational energy loss rate ($\sim 10^{33} \rm~erg~s^{-1}$) \citep{Mereghetti2008}. This characteristic is considered an important boundary between magnetars and normal pulsars. However, the discovery of \object{PSR J1846-0258} blurred this boundary, as this object has magnetar-like bursts and a persistent X-ray luminosity comparable with its rotational energy loss rate \citep{Gavriil2008}. AXPs/SGRs also show temporal activity on different time scales, such as glitches/anti-glitches (lasting several dozen days, including the recovery stage, \citealt{Archibald2013}), outbursts (lasting several months to years) and short bursts (lasting $\sim 0.1~\rm s$). \citet{Duncan1992} first presented the ``magnetar" concept and discussed the formation of a magnetar. They suggested that an $\alpha\Omega$ dynamo operating in a neutron star with initial period $P \sim 1 \rm~ ms$ could generate a dipole magnetic field much stronger than $10^{13}~\rm G$. \citet{Thompson1995} regarded SGRs as a class of magnetars and suggested that large scale reconnection or instability of the magnetic field could account for the short bursts and the giant flare of \object{SGR 0526-66}. \citet{Thompson1996} considered diffusive crust activity producing low amplitude Alfv{\' e}n waves in the magnetosphere as an effective way to transfer magnetic energy into persistent X-ray emission. \citet{Kouveliotou1998} measured the spin-down rate of \object{SGR 1806-20} and confirmed the ultra-strong magnetic field under the dipole magnetic field assumption. \citet{Kouveliotou1998}, combined with results from subsequent papers on other sources (e.g. \citealt{Marsden1999}, \citealt{Dib2009}), was regarded as substantial evidence for the magnetar model. \citet{Lyubarsky2005} considered magnetic reconnection in a relativistic treatment within the magnetar framework. \citet{Perna2011} performed a quantitative simulation of the triggering of short bursts by starquakes caused by the breaking of the neutron star crust. However, some challenges inevitably appeared, such as the existence of low magnetic field magnetars (\object{SGR 0418+5729}, \citealt{Rea2013}; \object{Swift J1822.3−1606}, \citealt{Rea2012}; \object{3XMM J185246.6+003317}, \citealt{Zhou2014}, \citealt{Rea2014}), and the predictions that magnetars should have large spatial velocities and energetic associated supernovae \citep{Duncan1992}, neither of which has been observed yet \citep{Vink2006, Mereghetti2008}. \citet{Chatterjee2000} developed an accretion disk model for AXPs/SGRs, whereby the emission is powered by accretion from a fossil disk. \citet{Wang2006} deduced from the spectral-energy distribution in the optical/infrared band that there might be a disk around \object{4U 0142+61}, possibly formed from supernova fallback. The accretion-based models were usually criticized for lacking a mechanism to explain the giant flares and bursts. Thus, these models require input from the magnetar model to become a ``hybrid" and complete model \citep{Mereghetti2008}. Nevertheless, combining the accretion model with a strange matter state, Xu and coworkers \citep{Xu2003, Zhou2004, Xu2006, Xu2007} suggested that solid quark stars, instead of neutron stars, could generate giant flares and bursts through accretion-induced starquakes. Massive white dwarfs, with a larger rotational energy release than neutron stars, have also been proposed as an alternative model for AXPs/SGRs \citep{Malheiro2012}. Comparing the magnetar model and the accretion model, the main difference is the origin of the energy. Magnetic energy release is responsible for the persistent and burst radiation in the magnetar model, while it is the gravitational energy of accreted matter or the elastic energy of solid matter \citep{Zhou2014a} which produces this emission in the accretion model. One way to distinguish the mechanisms of persistent and burst radiation is to analyze the spectra, including the continuum and emission or absorption lines. \citet{Ibrahim2003} and \citet{Ibrahim2007} conducted a series of studies of spectral features at $\sim 5~\rm keV$ and $\sim 20~\rm keV$ from \object{SGR 1806-20}. They regarded these features as evidence for proton-cyclotron resonance (PCR), which indicates that the surface magnetic field could reach $\sim 10^{15}\rm ~ G$. \citet{Bernardini2009} found a spectral feature at $\sim 1.1\rm~ keV$ in the AXP \object{XTE J1810-197} which requires a $\sim 10^{14}\rm~G$ magnetic field if it arises from PCR. \citet{Tiengo2013} discovered a phase-dependent feature with a ``V'' shape in the phase-resolved, persistent spectrum of \object{SGR 0418+5729}, and they interpreted this result as evidence for a twisted magnetic field. \citet{Vigano2014} showed that spectral features in the thermally dominated range could also be the result of inhomogeneous surface temperatures, without any dependence on the magnetic field. However, this interpretation does not adequately describe the phase-dependent feature of \object{SGR 0418+5729}.
The uncertainty in determining the emission mechanisms from line features arises mainly because the observations are not sufficient to distinguish between the theoretical models. Thus, previous studies have focused on the interpretation of continuum spectra, since these data are better able to constrain the models \citep{Fenimore1994}. With this fact in mind, \citet{Nakagawa2011} and \citet{Enoto2012} studied the continuum of the persistent radiation and weak burst spectra of \object{SGR J0501+4516} and \object{SGR J1550-5418} from \textit{Suzaku} observations. They found these spectra to have similar shapes, and thus they claimed the persistent emission has the same origin as the weak bursts (however, see \citealt{Lin2012} and \citealt{Lin2013}, who found the opposite using data from \textit{XMM-Newton} and \textit{Swift}). Analysis of the temporal properties is also an effective way to investigate the radiation mechanism. \citet{Cheng1996} discovered that the short bursts in \object{SGR 1806-20} and earthquakes have similar temporal characteristics: both have lognormal waiting time distributions as well as power law energy distributions $dN \propto E^{-1.6\pm0.2}dE$. Thus, they suggested that short bursts in SGRs may be powered by starquakes. More detailed analyses for \object{SGR 1900+14} \citep{Govgucs1999} and \object{SGR 1806-20} \citep{Govgucs2000} confirmed these former discoveries. \citet{Gotz2004} analyzed the spectral evolution of short bursts in \object{SGR 1806-20} using data from \textit{INTEGRAL} and found a negative relationship between hardness ratio and intensity. In subsequent work, \citet{Gotz2006} confirmed this correlation and analyzed the intensity distribution of short bursts. \citet{Nakagawa2007} presented the spectral and temporal properties of \object{SGR 1806-20} and \object{SGR 1900+14} using data from \textit{HETE-2}. \citet{Woods2005} claimed the existence of two classes of bursts in AXPs/SGRs, based on the existence of extended X-ray tails (tens to hundreds of seconds) and the correlation with pulses for some bursts. To summarize, the analysis of short bursts, whether spectral or temporal, is important for determining the mechanism of magnetar radiation. An effective way to locate bursts is the Bayesian blocks algorithm. This algorithm was developed by \citet{Scargle1998} and \citet{Scargle2013} to analyze the structures in photon counting data and to detect Gamma-ray Bursts. \citet{Lin2013} first used this algorithm to search for short bursts in SGRs, and they found the technique to be especially helpful in distinguishing dim bursts. They analyzed the morphological properties of the short bursts, fitted the duration distributions of \object{SGR 0501+4516} with two lognormal functions, and then verified the power law distribution of the fluence. As a Bayesian method, the Bayesian blocks algorithm inevitably has prior parameters to determine. Furthermore, this algorithm has time complexity $ O(n^2) $ \citep{Scargle2013}, so additional work is necessary to reduce the computing time. More details will be shown in Section 3. In this paper, we analyze the temporal and spectral properties of three AXPs/SGRs: \object{SGR 1806-20}, \object{1E 1048-5937} and \object{SGR 0501+4516}. We locate short bursts using the Bayesian blocks algorithm, and we analyze the spectral and temporal data with the aim of constraining the potential energy origins of the bursts. In Section 2, we describe the observations and data reduction.
The details of detecting short bursts using the Bayesian blocks algorithm are presented in Section 3. In Section 4, we present the short-burst duration distributions, the evolution of the flux, and the relationship between short bursts and persistent emission. We discuss the accretion model and the magnetar model, in light of our results, in Section 5. | In this paper, we presented a temporal and spectral analysis of short bursts in three AXPs/SGRs using the Bayesian blocks algorithm. The Bayesian blocks method checks each count recorded by the detector and determines whether it is a change point, which means that the time resolution for each block can reach the limit of the detector. Thus, the beginning and the end of each burst can also be determined to this precision, which makes it possible to analyze the duration of bursts precisely. We found that the duration distributions for AXPs/SGRs can be fitted by the sum of two lognormal functions. Among all three sources, the mean values of the two components are at $\sim 0.1 \rm~ s$ and $\sim 1 \rm~ s$, respectively. Phenomenologically, one of these sources is dominated by the longer component, while the other two are dominated by the shorter one. \citet{Govgucs2001} first presented the statistics of burst durations using \textit{RXTE} data and showed that the distributions peak at $\sim 100~\rm ms$ for \object{SGR 1806-20} and \object{SGR 1900+14}. They also divided these short bursts into two components, named ``single pulse burst" and ``multi-peaked burst". These two components peak at $88.1~\rm ms$ and $229.9~\rm ms$ in \object{SGR 1806-20}, and at $46.7~\rm ms$ and $148.9~\rm ms$ in \object{SGR 1900+14}. The longer components are much shorter than in our results ($\sim 1~\rm s$). This difference has two origins: the method used to detect short bursts and the way bursts are divided into two classes. These results demonstrate the ability of the Bayesian blocks algorithm to find bursts which are dim but sufficiently long, and the existence of a long time scale tail of short bursts in AXPs/SGRs. The ability of the Bayesian blocks algorithm to find dim bursts is apparently affected by the count rate of the source. We regard the modeling of the waiting times as possible evidence for this conclusion. \citet{Cheng1996} first presented the waiting time distribution of an SGR and compared it with the case of earthquakes. \citet{Govgucs1999} and \citet{Govgucs2000} showed that the waiting time distribution may be fitted by a lognormal function for \object{SGR 1900+14} and \object{SGR 1806-20}. However, there is an unexpected bump at the short time scale end of the distribution. They regarded this structure as a result of the uncertainty in determining the shape of bursts, which may split a multi-peaked burst into several single-pulse bursts with shorter waiting times. In our results, \object{SGR 1806-20} and \object{1E 1048-5937} also show a similar phenomenon, while \object{SGR 0501+4516} shows a better lognormal distribution. Comparing these two results, we regard undetectable weak bursts as the reason for the divergences in the distributions, which is notable in \object{SGR 1806-20} and \object{1E 1048-5937} because of their low count rates. \object{SGR 0501+4516} is the nearest of these three sources and it was also in its most luminous phase, which makes it easy to detect the dim bursts. In this case, we attribute the differences among the three sources in our sample to the undetected weak bursts in \object{SGR 1806-20} and \object{1E 1048-5937}.
Of course, the possibility cannot be ruled out that our samples are complete in this energy band ($1-10 ~\rm keV$) for \object{SGR 1806-20} and \object{1E 1048-5937}, i.e., that these sources simply do not have weaker bursts. However, this possibility is quite limited, considering the fact that the count rates for these two sources are only $\sim1~\rm cts~s^{-1}$, and that the waiting times we obtained for \object{SGR 1806-20} are $\sim 10$ times longer than those in \citet{Govgucs2000}. The spectra were also analyzed for the long and short time scale bursts, using the first observation of \object{SGR 0501+4516}. In our results, this observation contains the most burst photons and can be divided easily into two components with little overlap. We chose two models, two blackbodies and OTTB, for our burst spectral fitting. The two-blackbody model is one of the simplest models and is widely used in burst spectral fitting \citep{Feroci2004}. \citet{Olive2004} analyzed an intermediate burst from \object{SGR 1900+14} and found that a two-blackbody model could provide an acceptable fit to both the time resolved spectra and the integrated spectrum. They attributed the higher temperature component to a multi-temperature trapped fireball and regarded the lower one as emission from the stellar surface. \citet{Israel2008} suggested that the higher temperature component came from the surface of the neutron star, while the lower one was emitted from a magnetospheric region. Setting aside the mechanism, the double-blackbody model provides acceptable fits, with reduced $\chi^2 \sim 1.1$, to the burst spectra of both the shorter and longer bursts in our work. We also used an alternative model (OTTB) to fit the burst spectra. This model is also widely used in burst spectral fitting of AXPs/SGRs, but it is not always effective \citep{Feroci2004, Olive2004}. However, it works well in our burst spectral fitting too. Thus, we examined the chosen burst spectra using two widely used models, and both models work well, as judged by reduced $\chi^2 \sim 1.1$. These two models reflect different physical processes, and we cannot claim which one describes the true situation on the surface of AXPs/SGRs. Fortunately, in both models the characteristic parameters, the blackbody temperature or the plasma temperature, show negligible variation within the errors. In that case, we prefer to regard the two classes of bursts we identified as originating from the same source, although how bursts on two different time scales are generated is still an open issue. The relationships between the short bursts and the persistent emission were analyzed to find hints about the energy origin of AXPs/SGRs. We find a power law with $\gamma = 1.23\pm0.18$ between the persistent luminosity and the burst luminosity. In the accretion model, this phenomenon is natural, since the persistent radiation traces the accretion rate, while the burst radiation traces the consumption rate of the accreted matter. Assuming an equilibrium condition during an observation, a positive correlation between the accretion rate and the consumption rate is expected, which results in the positive relationship between the persistent and burst luminosities. In the magnetar model, this phenomenon is also natural. Both the persistent emission and the bursts are powered by magnetic energy. During an outburst, seismic activity may trigger magnetic reconnections or crystal fractures, which are responsible for the short bursts \citep{Thompson1996}. During this process, the magnetosphere will become more twisted.
The corresponding persistent flux will also increase \citep{Beloborodov2009}. We also introduced the energy ratio ($L_{\rm p}/L_{\rm aver,b}$) from Type \uppercase\expandafter{\romannumeral1} X-ray bursts to assess the energy release in AXPs/SGRs. In Type \uppercase\expandafter{\romannumeral1} X-ray bursts, this ratio covers the range from several tens to $\sim 1000$ and does not vary with the persistent luminosity \citep{Galloway2008}. This characteristic is regarded as strong evidence for the nuclear burning model. We show that the energy ratios in AXPs/SGRs have statistical properties similar to those in Type \uppercase\expandafter{\romannumeral1} X-ray bursts. Because the burst fluxes we obtained are lower limits, our energy ratios are upper limits on the true values. The energy ratios in our sample cover the range from $\sim 10$ to $\sim 2000$, which is comparable with that in Type \uppercase\expandafter{\romannumeral1} X-ray bursts. However, the nuclear burning model is not well suited to AXPs/SGRs, for two reasons. First, AXPs/SGRs are isolated stars without apparent accretion. Second, the time scale of the short bursts is much shorter than the prediction of the nuclear burning model. In Type \uppercase\expandafter{\romannumeral1} X-ray bursts, the energy ratio can be calculated for each burst, while in AXPs/SGRs this ratio can only be evaluated over a long period containing many bursts. Although the ratios in Type \uppercase\expandafter{\romannumeral1} X-ray bursts and AXPs/SGRs are obtained differently, they both show that there should be a connection between the energy origin of the persistent radiation and the source of the bursts. Nevertheless, we note that there is no relevant prediction for this ratio in AXP/SGR models yet. We suggest that this ratio ($L_{\rm p}/L_{\rm aver,b}$) may reflect essential physics, as it does in Type \uppercase\expandafter{\romannumeral1} X-ray bursts, and should be taken into account by a successful model. We would like to thank the pulsar group of PKU for helpful discussions and comments, and Hao Tong from XAO of CAS for supplementary comments on the magnetar model. We also thank Kathryn Plant and Laura Lopez for revising the whole manuscript. This research is based on data and software provided by the ESA \textit{XMM-Newton} Science Archive (XSA) and the NASA/GSFC High Energy Astrophysics Science Archive Research Center (HEASARC). This work is supported by the 973 program (No. 2012CB821801), the National Natural Science Foundation of China (Grant Nos. 11225314, 11133002, NSFC-11103020 and NSFC-11473027), the National Fund for Fostering Talents of Basic Science (No. J0630311), and XTP project XDA04060604 and XDB09000000 (supported by the Strategic Priority Research Program ``The Emergence of Cosmological Structures" of the Chinese Academy of Sciences, Grant No. XDB09000000). Z.S. Li is supported by the China Postdoctoral Science Foundation (2014M560844). | 14 | 3 | 1403.6244
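
Since the paper leans on the Bayesian blocks algorithm to locate bursts, a compact illustration may help. The Python sketch below uses the implementation in astropy (one concrete realization of the Scargle et al. algorithm) on synthetic photon arrival times; the event list, the false-alarm parameter p0, and the burst parameters are invented for the demo and do not reproduce the paper's XMM-Newton analysis or its choice of priors.

import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(1)
background = rng.uniform(0.0, 100.0, size=200)   # ~2 counts/s quiescent rate
burst = rng.uniform(50.0, 50.3, size=40)         # a short, bright burst near t = 50 s
t = np.sort(np.concatenate([background, burst]))

# Edges of constant-rate blocks; a burst shows up as a short high-rate block.
edges = bayesian_blocks(t, fitness="events", p0=0.01)
rates = np.histogram(t, bins=edges)[0] / np.diff(edges)
print("block edges (s):", np.round(edges, 2))
print("block rates (counts/s):", np.round(rates, 1))
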
1403 | 1403.6591_arXiv.txt | We study weak lensing properties of filaments that connect clusters of galaxies through large cosmological $N$-body simulations. We select 4639 halo pairs with masses higher than $10^{14}h^{-1}\mathrm{M}_\odot$ from the simulations and investigate the dark matter distributions between the two haloes with ray-tracing simulations. In order to classify filament candidates, we estimate convergence profiles and perform profile fitting. We find that matter distributions between haloes can be classified in a plane of fitting parameters, which allows us to select straight filaments from the ray-tracing simulations. We also investigate statistical properties of these filaments, finding them to be consistent with previous studies. We find that $35\%$ of halo pairs possess straight filaments, $4\%$ of which can be directly detected at $S/N\geq2$ with weak lensing. Furthermore, we study statistical properties of haloes at the edges of filaments. We find that haloes are preferentially elongated along filamentary structures and are less massive with increasing filament masses. However, the dependence of these halo properties on the masses of straight filaments is very weak. | The standard model of structure formation, which assumes the existence of cold dark matter, predicts hierarchical structure formation (e.g., \citealt{1980lssu.book.....P, 2005Natur.435..629S}). As a result, the large scale structure of the Universe shows a complex pattern, the so-called cosmic web, which is seen in both $N$-body simulations and observations such as the Sloan Digital Sky Survey (SDSS; \citealp{2000AJ....120.1579Y}). The two main components that constitute the cosmic web are voids and filaments. Voids are nearly empty, i.e., underdense, regions. On the other hand, filaments are slightly overdense regions which have not collapsed into haloes. Massive haloes such as clusters of galaxies form at the intersections of filaments. \citet{1996Natur.380..603B} theoretically investigated the formation of filaments as a result of non-linear evolution. Properties of filaments have also been studied through $N$-body simulations \citep{2005MNRAS.359..272C, 2010MNRAS.401.2257M}. Several methods have been developed for characterizing and describing the filamentary structure, including second moment and minimal spanning tree methods \citep{2002SPIE.4847...86M, 2010MNRAS.401.2257M}. In addition, properties of haloes that exist near filaments were studied through $N$-body simulations (e.g., \citealt{2012MNRAS.427.3320C, 2012MNRAS.421L.137L}). These studies showed that the formation processes and properties of haloes differ between low mass and high mass haloes. The blind detection of filaments is not easy (e.g., \citealt{1998astro.ph..9268K, 2008MNRAS.383.1655S, 2008MNRAS.385.1431H}). One way to locate filaments is to look for intercluster filaments, i.e., filaments connecting clusters of galaxies. Attempts to detect intercluster filaments in X-rays \citep{1995A&A...302L...9B, 2000ApJ...528L..73S, 2003A&A...403L..29D, 2008PhDT.......221W} have not been very convincing because it is difficult to distinguish whether those X-ray signals come from filaments or haloes. An alternative way of detecting the distribution of matter in filaments is provided by weak gravitational lensing. Weak lensing is useful because it does not depend on the dynamical state or on the type of the matter.
Indeed there have been several claims of weak lensing detections of filaments between clusters of galaxies \citep{1998astro.ph..9268K, 2002ApJ...568..141G, 2004ogci.conf...34D, 2012Natur.487..202D, 2012MNRAS.426.3369J, 2013ApJ...777...43M}. Given the development of detection methods and the several claimed observations of filaments, a more detailed understanding of the properties of filaments in simulations is needed. In this paper, we classify dark matter distributions between halo pairs using weak lensing mass maps, and study statistical properties of filaments as well as of haloes at the edges of the filaments. This paper is organized as follows. In Section~\ref{sec.analysis}, we describe our analysing techniques, focusing on the basics of weak gravitational lensing and the characterization of haloes. In Section~\ref{sec.simulation}, we describe our simulations, the selection method of filament candidates and the detection method used for searching for filaments. In Section~\ref{result}, we describe the results of our detection method and the properties of haloes. We summarize our results in Section~\ref{conclusion}. | In this paper, we classified dark matter distributions between halo pairs with weak gravitational lensing, and investigated statistical properties of filaments and haloes. We selected filament candidates from the halo catalogue generated from a large set of $N$-body simulations. We classified these candidates into four regions by fitting background-subtracted convergence profiles. We have shown that straight filaments on convergence maps can be classified in the $\kappa_0-\theta_c$ plane, where $\kappa_0$ and $\theta_c$ are two fitting parameters characterizing the convergence profile. Specifically, filament candidates were divided into four regions in the $\kappa_0-\theta_c$ plane, and filaments in region~4 were found to be straight filaments. We also found that this classification does not depend on the separation between the two haloes. We have studied the statistical properties of filaments. The number of filaments classified into region~2 and region~4 decreases as a function of the separation between the two haloes. On the other hand, the number of filaments in region~1 and region~3 shows increasing trends as the separation increases. These trends can be explained by the decrease of correlations with surrounding matter and by the projection effect. The trends derived in this paper are broadly consistent with a previous study \citep{2005MNRAS.359..272C}. While only $\sim 4$\% of straight filaments in region 4 can be detected individually with weak lensing, stacked weak lensing allows us to detect the mean mass distribution of filaments easily. In the HSC survey, we can achieve significant detection of filamentary structures at $S/N\geq5$ with stacked lensing. The matter density observed with $Planck$ is larger than that used in this paper. This difference would make it easier to form filaments and could alter the statistical properties presented in this paper. Therefore, the statistical properties of filaments could serve as a new tool for constraining cosmology. We have also studied statistical properties of haloes at the edges of filaments. We found that haloes are less massive with increasing filament masses and are elongated along the halo-halo axis due to interaction with filaments. This can be explained by the fact that haloes grow by accretion of matter in filaments.
Massive haloes have already experienced accretion from filaments, which results in smaller filament masses and stronger elongation along the halo-halo axis. On the other hand, the dependences of these halo properties on filament masses are very weak, suggesting the necessity of large-scale surveys to observationally confirm these statistical properties. | 14 | 3 | 1403.6591
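
To make the two-parameter classification above more tangible in code: the Python sketch below fits a background-subtracted convergence profile for an amplitude kappa_0 and a width theta_c. The specific functional form used here is an assumption made purely for illustration (the paper defines its own fitting function); only the two-parameter (kappa_0, theta_c) structure is taken from the text, and the data are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def profile(theta, kappa0, theta_c):
    """Assumed two-parameter convergence profile: amplitude kappa0, width theta_c."""
    return kappa0 / (1.0 + (theta / theta_c) ** 2)

theta = np.linspace(0.5, 20.0, 40)  # arcmin, synthetic grid
rng = np.random.default_rng(2)
kappa = profile(theta, 0.02, 5.0) + 0.002 * rng.standard_normal(theta.size)

(kappa0, theta_c), _ = curve_fit(profile, theta, kappa, p0=(0.01, 3.0))
print(f"kappa_0 = {kappa0:.4f}, theta_c = {theta_c:.2f} arcmin")
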
1403 | 1403.6628_arXiv.txt | We investigate the neutrino $\leftrightharpoons$ cosmic ray connection for sources in the Galaxy in terms of two observables: the shape of the energy spectrum and the distribution of arrival directions. We also study the associated gamma ray emission from these sources. | | | 14 | 3 | 1403.6628
1403 | 1403.1008_arXiv.txt | Observations have shown that the spatial distribution of satellite galaxies is not random, but aligned with the major axes of central galaxies. This alignment depends on galaxy properties, such that red satellites are more strongly aligned than blue satellites. Theoretical work conducted to interpret this phenomenon has found that it is due to the non-spherical nature of dark matter halos. However, most studies over-predict the alignment signal under the assumption that the central galaxy shape follows the shape of the host halo. It is also not clear whether the color dependence of alignment is due to an assembly bias or an evolution effect. In this paper we study these problems using a cosmological $N$-body simulation. Subhalos are used to trace the positions of satellite galaxies. It is found that the shapes of dark matter halos are mis-aligned at different radii. If the central galaxy shares the same shape as the inner host halo, then the alignment effect is weaker and agrees with observational data. However, it predicts almost no dependence of alignment on the color of satellite galaxies, though the late accreted subhalos show stronger alignment with the outer layer of the host halo than their early accreted counterparts. We find that this is due to a limitation of pure $N$-body simulations: satellite galaxies without associated subhalos (`orphan galaxies') are not resolved. These orphan (mostly red) satellites often reside in the inner region of host halos and should follow the shape of the host halo in the inner region. | In the currently favored cold dark matter cosmology, cosmic structures are built up of dark matter halos. The formation of halos is hierarchical, in that small halos form first and subsequently merge to form bigger ones. After mergers, smaller halos become the subhalos of the more massive host halo. Galaxies are thought to form in the centers of halos \citep{White1978}, and most of them become satellites when their host halos merge with more massive ones. After mergers, the motions of subhalos/satellites are mainly governed by the gravitational potential of the hosts, and in principle they can be well traced using numerical simulations \cite[e.g.,][]{Springel2001} or analytical models \cite[e.g.,][]{WhiteFrenk91, Taylor2001, Gan2010}. It was found from $N$-body simulations \cite[e.g.,][]{Jing2002} that dark matter halos are not spherical, but rather tri-axial. The non-spherical shapes are related to the formation history of halos, which proceeds preferentially along filaments. As the assembly history of a halo should be imprinted in the phase-space distribution of its satellite galaxies, observational attempts have been made to infer halo shapes using satellite distributions. Though the task is challenging, progress has been made using the distribution of stellar velocities \citep{Olling2000}, satellite tidal streams \citep{Ibata2001, Lux2012, Vera2013}, and gravitational lensing \cite[e.g.,][]{Hoekstra2004,Er2011}. The measurements of halo shapes from satellite kinematics or weak lensing rely on an estimate of the host potential. In fact, useful insight can also be gained from the pure spatial distribution of satellites. Most studies have focused on how satellites are distributed with respect to the shape of the central galaxy, known as galaxy alignment.
The observational study of the alignment of galaxies has a long history \cite[e.g.,][]{1969ArA.....5..305H,2005ApJ...628L.101B, Yang2006,2007MNRAS.376L..43A, Libeskind2007, LiC2013}. Based on large galaxy surveys such as the 2dFGRS and the Sloan Digital Sky Survey, there is general agreement that the distribution of satellite galaxies is typically along the major axis of the central galaxy. Moreover, the alignment signal depends on galaxy color. \citet{Yang2006} found a stronger alignment signal for red central galaxies or red satellites. Such an effect is also seen at high redshift \citep{Wang2010}. A rough idea to explain the observed galaxy alignment is that if satellite galaxies follow the distribution of dark matter, and the central galaxy also shares a shape similar to the host halo, then the non-spherical nature of dark matter halos naturally produces an alignment effect. In fact, many theoretical works follow this idea \citep{2005ApJ...629...L5, 2005ApJ...629..219Z, AB2006, Kang2005, Kang2007, Libeskind2007, Fal2007, Bailin2008, Fal2008, Ang2009, Fal2009, AB2010, Deason2011, 2013SSRv..177..155L}. However, most studies over-predict the alignment signal when the central galaxy is assumed to follow the same shape as the whole host halo. Furthermore, most studies are unable to reproduce the alignment dependence on galaxy color unless a dependence of the central galaxy alignment on the host halo is assumed \citep{AB2010}. The main difficulty faced by theoretical studies of galaxy alignment is how to assign the shape of the central galaxy. The most natural way is to use hydro-simulations including the physics governing galaxy formation, such as gas cooling, star formation and feedback. Unfortunately, current simulations are typically unable to produce a galaxy population which matches observational data (see, however, Vogelsberger et al. 2013 and references therein). In this paper, we revisit the problem of galaxy alignment using an $N$-body simulation which allows good statistics with a large number of massive halos. Since the simulation does not include models for galaxy formation, we instead use the subhalos as tracers of satellite galaxies. For central galaxies, we follow previous studies and assume that the shape of the central galaxy follows the shape of its host halo. In our study, the halo shape is measured at different iso-density surfaces using a method different from previous studies. In an upcoming paper, we will present the results using smoothed particle hydrodynamics (SPH) simulations performed with Gadget-2 (paper II, in preparation). In addition to the overall alignment signal, we investigate the dependence of alignment on the accretion and formation history of subhalos. Since red satellites have stronger alignment with central galaxies \cite[e.g.,][]{Yang2006}, and in general red satellites are accreted at earlier times, it is natural to ask whether the stronger alignment of red satellites is already set before their accretion into the host halo, or whether it is an evolution effect, with red satellites following the shape of the dark matter halo more closely after accretion. To study this question, we examine the alignment of subhalos as a function of their formation and accretion time. To probe whether the color dependence is imprinted in the large-scale environment or is due to an evolution effect, we also study the alignment of neighboring halos which are within one to a few virial radii of the host halos. The paper is organized as follows.
In Section \ref{cha:axis_define}, we briefly describe the simulation and how we determine the shapes of dark matter halos. In Sections \ref{cha:angular_distri} and \ref{cha:mass_dependence}, we show the alignment of subhalos and its dependence on the subhalo mass. In Section \ref{cha:time_dependence} we investigate if the alignment signal depends on the accretion or formation time of the subhalo, and present the results on the alignment of neighboring halos. We summarize and briefly discuss our results in Section \ref{cha:concl}. | Recent observations have found that satellite galaxies are not randomly distributed, but rather align with the major axis of the central galaxy \cite[e.g.,][]{2005ApJ...628L.101B, Yang2006, 2007MNRAS.376L..43A}. This intriguing result has generated great interest, with many studies investigating the origin of this phenomenon. The common conclusion from previous studies \cite[e.g.,][]{AB2006, Kang2007} is that the alignment arises from the non-spherical nature of dark matter halos in the cold dark matter cosmology \cite[e.g.,][]{Jing2002}. However, there is no model which can accurately predict the observed alignment signal and its dependence on galaxy properties. The most difficult part of theoretical modeling is in assigning the shape of central galaxies, and most models do not agree on this aspect. In this paper, we re-visit the alignment problem using a cosmological $N$-body simulation. Compared to previous studies, we focus on the origin of the alignment and its dependence on the formation/accretion of subhalos. We investigate if this dependence arises from assembly bias or from an evolution effect. Our results are summarized as follows. \begin{itemize} \item We use a new method to characterize the tri-axial halo shape following \cite{Jing2002}. Unlike the most widely used inertia tensor method, which depends on the mass distribution within a given radius and is often contaminated by subhalos, the new method is able to determine the tri-axes of the halo at a given local mass over-density, and excludes the effects of subhalos on the shape determination. We find that the measured halo shapes at different radii are well aligned. The mean alignment angle between the inner and outer parts of the halo is about $26.0^{\circ}, 38.9^{\circ}, 48.6 ^{\circ} $ for host halos with $\Mvir \geq 10^{14} \Msun$, $10^{14} \Msun > \Mvir \geq 10^{13} \Msun$ and $ 10^{13} \Msun > \Mvir \geq 10^{12} \Msun$ respectively. The alignment between the inner and outer shapes increases with halo mass. \item We study the alignment of both subhalos and neighboring halos around selected host halos. Both subhalos and neighboring halos are found to align preferentially with the outer axes of host halos. Consistent with previous results \cite[e.g.,][]{Kang2007}, we find that the alignment of subhalos is stronger than observed if the outer axis of the host halo is used for the shape of the central galaxy. Better agreement with the data is achieved if the central galaxy follows the shape of the host halo determined in the inner region. We also find that if the alignment between the central galaxy and the outer axis follows a Gaussian distribution with a mean of $0^{\circ}$ and a deviation of $25^{\circ}$, the predicted alignment also agrees with the data. \item The alignments of subhalos and neighboring halos depend on the mass of the host halos, such that more massive host halos have stronger alignment.
This is because more massive halos are more flattened (being embedded in filaments) and have better alignment between their inner and outer axes. This is consistent with the observations that satellites of massive red central galaxies are more strongly aligned. \AD{In Fig~\ref{ca}, we show the dependence of halo flattening on mass. It is seen that higher mass halos are more flattened (with lower $c/a$). We also find that the inner region of a halo is more flattened than the outer part. These results are consistent with previous studies \cite[e.g.,][]{Jing2002,2005ApJ...629..781K, Allgood2006, Maccio2008}. We also note that resolution (lower mass halos have fewer particles) has no effect on these results, as the distribution shows a consistent trend down to the low-mass end in either case. In Fig~\ref{ca}, we include halos whose axes are poorly determined. If these halos are excluded, the inner-axis line declines at the low-mass end.} There is weak (if any) dependence of alignment on the mass of the subhalos or neighboring halos themselves. \item We study the alignment of subhalos and its dependence on their formation time. It is found that there is no dependence along the inner major axes of host halos, and a strong dependence along the outer axes of the host halos, such that the early accreted subhalos have lower alignment than the recently accreted ones. This is not consistent with the results of \cite{AB2010}, and is also inconsistent with the observational evidence that red satellite galaxies (accreted earlier) have stronger alignment with the central galaxy than blue satellites. \end{itemize} \begin{figure} \centerline{\psfig{figure=ellip_ca.eps,width=0.5\textwidth}} \caption{Average axis ratio as a function of halo mass. Every point represents the average $c/a$ for a given mass bin. The error bars stand for the $1\,\sigma$ standard deviation. All 2000 halos in our sample are included in this plot.} \label{ca} \end{figure} The main contribution of this paper is the finding that the mis-alignment between the inner and outer axes of dark matter halos can account for the observed alignment of satellite galaxies, including its mass dependence. We confirm the result from most studies that the shape of the central galaxy cannot follow the shape of the whole dark matter halo, otherwise the predicted alignment signal is too strong. However, better agreement with observations can be obtained if the central galaxy follows the shape of the dark matter halo defined in the inner region (measured at an overdensity of $100\times 5^4 \rho_{\rm crit}$). In this case, the dependence of alignment on halo mass is also reproduced. Finally, we discuss whether the stronger alignment of red satellites is due to assembly bias or to an evolution effect after accretion. As our simulation does not include models for galaxy formation, we use the subhalo population to address this question. Our conclusion is that the main contribution to the strong alignment of red satellites is from an evolution effect. If red satellites reside in subhalos that formed at earlier times or had higher mass at accretion, Figures~\ref{ang_self} and \ref{formation} show that those subhalos do not have stronger alignment. Likewise, the alignments of neighboring halos with higher mass or higher formation redshift are indistinguishable. These results indicate that the stronger alignment of red satellites is not set at the time of accretion.
On the contrary, their strong alignment should come from evolution effects after accretion. Indeed, Figure~\ref{ang_comp} further shows that the alignment of neighboring halos is lower than that of subhalos, implying that subhalos acquire stronger alignment after accretion. However, our results from Figure~\ref{accretion} seem not to support the evolution scenario. That figure shows that early accreted subhalos have lower alignment, indicating a negative evolution effect. It is also inconsistent with the results of \cite{AB2010}, who found that early accreted satellites have stronger alignment. We note that there are differences between our analysis and theirs. In our work, we do not have galaxies, but only subhalos. Also, we do not include the disrupted subhalos (`orphan' galaxies in the model of \citealt{AB2010}). Thus, in our work, subhalos accreted at earlier times may have been disrupted, and this effect is more efficient for subhalos on more `radial' orbits, as they will come close to the host center and suffer strong tidal disruption. In addition, caution should be taken here, as the alignment with the `inner' axes is determined less accurately because of limited resolution, especially for low-mass halos, and also for iso-density surfaces with small ellipticity (the latter is also the case when determining the major axes observationally). Further investigation using simulations with higher resolution should be helpful. We simply conclude that to accurately predict the alignment of satellite galaxies found in observations, and to study its dependence on galaxy properties, we should use hydrodynamical simulations with gas physics and star formation, which can self-consistently predict the shape of central galaxies and the distribution of satellite galaxies around the central galaxy. Although current simulations with star formation still have difficulty achieving better agreement with the data, we will show in an upcoming paper that the alignment of satellites and its color dependence can be better reproduced. | 14 | 3 | 1403.1008
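
As a concrete companion to the shape measurements discussed above, the Python sketch below measures the misalignment angle between a halo's inner and outer major axes. For simplicity it uses the conventional inertia-tensor shape estimate on radius-selected particles, i.e. the standard method the paper contrasts with its iso-density-surface technique, and it runs on mock particle positions rather than simulation data.

import numpy as np

def major_axis(pos):
    """Unit eigenvector of the shape tensor with the largest eigenvalue."""
    tensor = pos.T @ pos / len(pos)
    vals, vecs = np.linalg.eigh(tensor)
    return vecs[:, np.argmax(vals)]

def misalignment_deg(pos, r_split):
    """Angle between major axes measured inside and outside radius r_split."""
    r = np.linalg.norm(pos, axis=1)
    a_in, a_out = major_axis(pos[r < r_split]), major_axis(pos[r >= r_split])
    return np.degrees(np.arccos(np.clip(abs(a_in @ a_out), 0.0, 1.0)))

rng = np.random.default_rng(3)
pos = rng.standard_normal((20000, 3)) * np.array([1.0, 0.6, 0.4])  # mock triaxial halo
print(f"inner/outer misalignment: {misalignment_deg(pos, r_split=1.0):.1f} deg")
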
1403 | 1403.1839_arXiv.txt | Using simultaneous X-ray and radio observations from solar flares, we investigate the link between the type III radio burst starting frequency and the hard X-ray spectral index. For a proportion of events, the relation derived between the starting height (frequency) of type III radio bursts and the electron beam velocity spectral index (deduced from X-rays) is used to infer the spatial properties (height and size) of the electron beam acceleration region. Both quantities can be related to the distance travelled before an electron beam becomes unstable to Langmuir waves. To obtain a list of suitable events we considered the RHESSI catalogue of X-ray flares and the Phoenix 2 catalogue of type III radio bursts. From the 200 events that showed both type III and X-ray signatures, we selected 30 events which had simultaneous emission in both wavelengths, good signal to noise in the X-ray domain and $>$ 20 seconds duration. We find that $>50~\%$ of the selected events show a good correlation between the starting frequencies of the groups of type III bursts and the hard X-ray spectral indices. A low-high-low trend for the starting frequency of type III bursts is frequently observed. Assuming a background electron density model and the thick target approximation for X-ray observations, this leads to a correlation between starting heights of the type III emission and the beam electron spectral index. Using this correlation we infer the altitude and vertical extents of the flare acceleration regions. We find heights from 183 Mm down to 25 Mm while the sizes range from 13 Mm to 2 Mm. These values agree with previous work that places an extended flare acceleration region high in the corona. We also analyse the assumptions that are required to obtain our estimates and explore possible extensions to our assumed model. We discuss these results with respect to the acceleration heights and sizes derived from X-ray observations alone. | Electromagnetic signatures during flares allow us to diagnose remotely what is occurring in the solar atmosphere. We can detect the presence of non-thermally distributed electrons via an enhanced signal at radio and X-ray wavelengths. Through a series of assumptions we can deduce properties of these electrons and of the ambient environment that caused them to emit photons. Unfortunately the spatial characteristics of the source regions for the electron energisation out of a thermal distribution remain largely unknown. The enhanced X-ray emission in the tens of keV range and above is believed to come from non-thermal electrons in the low atmosphere during solar flares (see \citet{Fletcher_etal2011} for an observational review). When the electrons enter the high density plasma of the solar chromosphere they lose all their energy and thermalise via electron-ion Coulomb collisions. Bremsstrahlung X-rays are emitted but only contain some $10^{-5}$ of the incident non-thermal electron energy \citep[see][as a recent review]{Holman_etal2011}. Spacecraft detect the X-ray source both directly and through X-rays reflected from the solar surface \citep[e.g.][]{KontarJeffrey2010}. We can use the X-ray signature to deduce the temporal, spatial, and energetic profile of the energised electrons \citep{Kontar_etal2011}. A common soft-hard-soft trend \citep[e.g.][]{ParksWinckler1969,Benz1977} has been observed where the X-ray spectral index starts high, becomes low during the most intense part of the flare, and then finishes high.
Type III radio emission at frequencies $\leq 4$~GHz is believed to be caused by high energy electrons streaming through the corona and interplanetary space (see \citet{Nindos_etal2008} for a recent review). The bump-in-tail instability causes the electrons to induce high levels of Langmuir waves in the background plasma \citep{GinzburgZhelezniakov1958}. Non-linear wave-wave interactions then convert a small fraction of the energy contained in the Langmuir waves into electromagnetic emission near the local plasma frequency or at its harmonics \citep[e.g.][]{Melrose1980}. Consequently, we detect radio emission drifting from high to low frequencies as the high energy electrons propagate through the corona and out into interplanetary space. Ground based telescopes are used to detect the radio waves $\geq 10$ MHz that are able to penetrate the Earth's atmosphere. Hard X-ray and type III radio observations are known to be statistically correlated in time during solar flares \citep{Kane1981,Raoult_etal1985,Hamilton_etal1990,Aschwanden_etal1995,ArznerBenz2005}. Indeed, recent statistical studies \citep{Saint-HilaireBenz2003,Benz_etal2005,Benz_etal2007,Michalek_etal2009} that include all types of coherent radio emission leave little doubt as to the connection between coherent radio and hard X-ray emission. There have also been numerous studies which deal with individual events \citep[e.g.][]{KaneRaoult1981,Kane_etal1982,Benz_etal1983,Dennis_etal1984,Vilmer_etal2002,Christe_etal2008} in more intricate detail. Such studies are helpful to delve deeper into this complicated phenomenon. One such study, of the locations of the HXR and radio sources observed for the 20 February 2002 flare \citep{Vilmer_etal2002}, shows a close correspondence between the change of the HXR configuration in the 25-40 keV range and the radio source at the highest frequency imaged with the Nan\c{c}ay Radioheliograph (410 MHz - 73 cm). This event strongly supports the idea of a common acceleration (injection) site for the HXR and radio emitting electrons, located in the current sheet formed above the loop, in close agreement with the simple cartoon derived previously by \citet{Aschwanden_etal1995,AschwandenBenz1997}. Figure \ref{fig:overview} shows the simple cartoon illustrating the location of the electron beam acceleration site in the corona with respect to the locations of the radio and HXR emitting sites. \begin{figure} \includegraphics[width=0.99\columnwidth]{flare_cartoon6.eps} \caption{Cartoon showing the relation between the presumed electron beam acceleration site in the corona and the radio and HXR emitting sites.} \label{fig:overview} \end{figure} It is well known, however, that the link between HXR and type III radio sources is not always as simple as this scenario would suggest \citep[see e.g.][for reviews]{PickVilmer2008,Vilmer2012}.
It is common for coronal X-ray sources to have a higher spectral index (softer spectrum) than X-ray footpoints \citep{Emslie_etal2003, BattagliaBenz2007,KruckerLin2008}. With current X-ray instrumentation it can be hard to observe faint sources in the presence of very strong footpoint sources, and so the high densities of coronal sources could be a selection effect. On the other hand, in the case of the simple scenario shown in Figure \ref{fig:overview}, it has been demonstrated in \citet{Reid_etal2011} how an anti-correlation between type III starting frequencies and HXR spectral index can be used to deduce the acceleration height and size of the electron beam. Indeed, the observed anti-correlation between the starting frequency of type III bursts and the hard X-ray spectral index is used together with numerical / analytical models of electron transport in the corona to derive these quantities. Such an anti-correlation between type III starting frequencies and HXR spectral indices has also been reported previously in the literature, since it was noticed that the X-ray / type III correlation increases systematically with the peak spectral hardness of HXR emission and the type III burst starting frequency \citep{Kane1981,Hamilton_etal1990}. A simple physical explanation for this observed property is that, through propagation effects, an electron beam with a hard spectrum excites Langmuir waves sooner, and therefore closer to the beam injection site, than a beam with a softer spectrum. This faster instability onset is the explanation for the correlation between the HXR spectrum and the type III starting frequency, in the case when the two electron populations have a common origin and similar spectral properties. In this work, we re-examine on a statistical basis the link between HXR spectra and type III starting frequencies. The aim is twofold: \begin{itemize} \item To estimate the proportion of events for which a simple link (such as the one inferred from Figure \ref{fig:overview}) is found. \item To deduce, for those events in which an anti-correlation is found between type III starting frequencies and HXR spectral indices, the characteristics (sizes and heights) of the electron acceleration regions. \end{itemize} We start by defining our selection criteria for events in Section \ref{selection} and analysing the two observables: the starting frequency of radio type III bursts and the HXR spectral indices. In Section \ref{electron_beam} we derive from these observables the starting heights of the radio emission and the electron beam spectral indices, and investigate how they are related. In Section \ref{properties} we recall the model of electron beam propagation which is used to derive from the previous quantities the values of the acceleration height and size of the energetic electron beam for each of the studied events. We discuss in Section \ref{discussion} the results as well as the assumptions of the model, and examine the flare morphology of the different events as revealed by combined X-ray and radio images. We finally draw our conclusions in Section \ref{conclusion}. | \label{conclusion} We have looked at a series of type III bursts and hard X-ray flares from a period of 6 years. When the two emissions occurred at the same time, the X-ray flux above 25 keV was high enough for detection, and the event duration was 20 seconds or longer, we found a correlation between the starting height of the groups of type III radio bursts and the spectral index of the energetic electrons in approximately 50$\%$ of events.
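The way such a correlation is turned into acceleration-region parameters can be illustrated with a short sketch. It assumes an approximately linear dependence of the starting height on the beam spectral index, $h \simeq h_{\rm acc} + \alpha d$, in the spirit of the instability-distance argument above; the data values are invented for illustration, and the exact functional form is the one recalled in Section \ref{properties}.
\begin{verbatim}
import numpy as np

# Hypothetical per-burst measurements for one event: electron beam
# velocity spectral indices (from HXR) and type III starting
# heights [Mm] (from starting frequencies plus a density model).
alpha   = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
h_start = np.array([61., 71., 77., 87., 94.])

# Least-squares fit of h_start ~ h_acc + alpha * d: the intercept
# estimates the acceleration height, the slope its vertical extent.
d, h_acc = np.polyfit(alpha, h_start, 1)
print("height ~ %.0f Mm, vertical extent ~ %.0f Mm" % (h_acc, d))
\end{verbatim}
With these illustrative numbers the fit returns a height of a few tens of Mm and an extent of order 10 Mm, i.e. the same order as the values reported below.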
Moreover, some events that did not show a significant correlation for the entire flare duration showed trends in the starting frequency of the groups of type IIIs that were mirrored in the X-ray spectral indices over shorter times. This correlation is expected to be harder to detect with higher energy flares on account of more complicated radio signatures in frequency space. We observed a tendency for the starting frequency of the type III bursts to display a low-high-low trend. Starting at a low frequency, the starting frequency became larger during the impulsive peak of the flare and then decreased during the decay of the flare. This is analogous to the soft-hard-soft nature of the X-ray spectral index. We used a model for the electron transport to predict the heights and vertical extents of the flare acceleration regions. With a couple of assumptions we deduce parameters of the electron acceleration sites from the observable radio and X-ray emissions. Deduced altitudes ranged from 183 Mm to 25 Mm with a mean of 100 Mm. The vertical extent of the acceleration regions ranged from 16 Mm to 2 Mm with a mean vertical extent of 8 Mm. The result was found for flares with a GOES class of M, C or B that caused only type III radio emission at decimetric wavelengths. We finally note that the acceleration heights are on average larger than those deduced from X-ray analysis alone \citep{Aschwanden_etal1998}. It must be noted, however, that the flares analysed here are not confined flares, which potentially explains the larger altitudes. Our study is effective at inferring acceleration region characteristics that are largely unavailable but very important to our understanding of flare physics. What is particularly key for future work is the inclusion of imaging at many more radio frequencies than are currently available. With imaging in a wide frequency range we would be able to detect, in some events, the upward and downward propagating electron beams responsible for type IIIs and reverse type IIIs. Detection of lower intensity X-rays would also shed more light on the spatial characteristics of flare acceleration regions, as we could obtain the electron beam spectral index from the weak coronal emission produced by upward travelling electron beams. | 14 | 3 | 1403.1839
1403 | 1403.7132_arXiv.txt | We introduce a sub-grid model for the non-equilibrium abundance of molecular hydrogen in cosmological simulations of galaxy formation. We improve upon previous work by accounting for the unresolved structure of molecular clouds in a phenomenological way which combines both observational and numerical results on the properties of the turbulent interstellar medium. We apply the model to a cosmological simulation of the formation of a Milky Way-sized galaxy at $z=2$, and compare the results to those obtained using other popular prescriptions that compute the equilibrium abundance of H$_2$. In these runs we introduce an explicit link between star formation and the local H$_2$ abundance, and perform an additional simulation in which star formation is linked directly to the density of cold gas. In better agreement with observations, we find that the simulated galaxy produces fewer stars and harbours a larger gas reservoir when star formation is regulated by molecular hydrogen. In this case, the galaxy is composed of a younger stellar population as early star formation is inhibited in small, metal-poor dark-matter haloes which cannot efficiently produce H$_2$. The number of luminous satellites orbiting within the virial radius of the galaxy at $z=2$ is reduced by 10-30 per cent in models with H$_2$-regulated star formation. | The process of galaxy formation involves the interplay of many non-linear phenomena that span a wide range of length and time-scales. A galaxy like our Milky Way, for example, forms from a region that initially extends to roughly one comoving Mpc, yet its angular momentum is determined by the mass distribution within tens of comoving Mpc. Star formation (SF), on the other hand, takes place in the densest cores of giant molecular clouds (GMCs), on scales of the order of 0.1 pc. The challenge in simulations of galaxy formation is to capture this vast dynamic range, while simultaneously accounting for the different physical processes that intervene on relevant scales. This is usually achieved with ad hoc sub-grid models that attempt to emulate the most important small-scale phenomena. In particular, one of the biggest uncertainties in simulations of galaxy formation is the means by which gas is converted into stars \citep[see][for a recent review]{Dobbs+13}. The standard approach to this problem, motivated by observations, is to adopt a Schmidt-like law \citep{Schmidt+59}, often coupled to conditions on the local gas properties. However, there are several issues with this method. First, its parameters are poorly constrained and are usually fine-tuned to match the observed Kennicutt-Schmidt (KS) relation \citep{Kennicutt+89, Kennicutt+98}. Secondly, there is a growing body of evidence that the local star formation rate (SFR) correlates more tightly with the observed density of molecular hydrogen than with that of the total gas density \citep[e.g.][]{Kennicutt+07,Bigiel+08,Leroy+08}, though there is as yet no consensus as to whether this reflects a causal relation. In particular, numerical simulations of isolated molecular clouds suggest that the presence of molecules does not boost the ability of the gas to cool and form stars \citep{Glover+12b}. The tight spatial correlation between H$_2$ and young stars may then be due to the fact that they are both formed in high-density regions where gas is effectively shielded from the interstellar radiation field.
Despite the ongoing debate, there are strong motivations for including a treatment of molecular hydrogen in cosmological simulations of galaxy formation. Observations of H$_2$ proxies (such as CO luminosity), for example, have progressed tremendously over the past decade \citep[see][for a recent review]{Carilli+13}, underlining the need for robust theoretical templates to aid in the design of observational campaigns and the interpretation of their results. Furthermore, numerical simulations constitute a unique tool to test the impact of H$_2$-regulated SF on the global structure of galaxies, provided their H$_2$ content can be reliably determined. Tracking H$_2$, however, requires solving a challenging network of rate equations which are coupled to a radiative-transfer computation for H$_2$-dissociating photons. Given that the spatial resolution of current simulations is comparable in size to GMCs, these calculations must be done at the sub-grid level and include a description of gas structure on the unresolved scales (e.g. a clumping factor for the gas density). Recently, several authors have incorporated simple algorithms to track molecular hydrogen in hydrodynamical simulations of galaxy formation. For instance, \citet{Pelupessy+06} monitored the H$_2$ distribution in dwarf-sized galaxies within a fixed dark matter (DM) potential and showed that the resulting molecular mass depends strongly on the metallicity of the interstellar medium (ISM). Similar conclusions were drawn by \citet[][see also Feldmann et al. (2011)]{Gnedin+09} \nocite{Feldmann+2011}, who followed the evolution of the H$_2$ content for 100 Myr in a cosmologically simulated galaxy at $z=4$. These authors showed that it is only possible to form fully shielded molecular clouds when the gas metallicity is high (i.e. $Z\sim 10^{-2}-10^{-1}$ Z$_\odot$), and argued that H$_2$-regulated SF can act as an effective feedback mechanism, delaying SF in the low-metallicity progenitors of a galaxy. The implications of these results for galaxy formation in low-mass haloes were studied further by \citet{Kuhlen+12,Kuhlen+13}, who suggested the possible existence of a large population of low-mass, gas-rich galaxies that never reached the critical column density required for the H$_2$/\HI\, transition and are thus devoid of stars. Their work, however, was based on an analytic model for H$_2$ formation that assumes chemical equilibrium between its formation and destruction rates \citep{Krumholz+08,Krumholz+09b,McKee+10}. None the less, \citet{Krumholz+11} showed that this model agrees well with a time-dependent solution to the chemical network provided the local metallicity of the gas is above $10^{-2}$ Z$_\odot$, lending support to these conclusions. \citet{Christensen+12} modelled the non-equilibrium abundance of H$_2$ in a dwarf galaxy that was simulated down to redshift $z=0$, connecting SF explicitly to the local H$_2$ content of the gas. These authors showed that, compared to simulations rooted on the Schmidt law, molecule-based SF produces a galaxy which is more gas rich, has bluer stellar populations and a clumpier ISM. On the other hand, strong stellar feedback, when included, tends to mitigate these differences by regulating the formation and destruction rates of GMCs \citep{Hopkins+12}. In this paper, we introduce a new time-dependent sub-grid model for tracking the non-equilibrium abundance of H$_2$ in cosmological simulations of galaxy assembly. 
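As a baseline for comparison, the equilibrium model referred to above has a compact closed form. The sketch below follows the fitting formulae as summarized by \citet{Krumholz+11}; the numerical coefficients are quoted from that work rather than from this paper, and should be treated as approximate.
\begin{verbatim}
import numpy as np

def f_h2_equilibrium(sigma_gas, z_prime):
    """Equilibrium H2 fraction of a cloud complex with surface
    density sigma_gas [M_sun/pc^2] and metallicity z_prime [Z_sun],
    following the Krumholz & Gnedin (2011) fitting formulae."""
    chi = 3.1 * (1.0 + 3.1 * z_prime**0.365) / 4.1
    tau_c = 0.066 * sigma_gas * z_prime        # dust optical depth
    s = np.log(1.0 + 0.6 * chi + 0.01 * chi**2) / (0.6 * tau_c)
    return np.where(s < 2.0, 1.0 - 0.75 * s / (1.0 + 0.25 * s), 0.0)

# HI/H2 transition: mostly molecular at high surface density and
# solar metallicity, fully atomic at 1 per cent solar.
print(f_h2_equilibrium(100.0, 1.0))    # ~0.8
print(f_h2_equilibrium(100.0, 0.01))   # 0.0
\end{verbatim}
The strong metallicity dependence visible here is the same effect that motivates a non-equilibrium treatment at low $Z$.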
Our approach builds upon the work of \citet{Gnedin+09} and \citet{Christensen+12} by including additional information on the unresolved distribution of gas temperatures and densities. In particular, our model: (i) explicitly accounts for the distribution of sub-grid densities, as determined by observations and numerical simulations of turbulent GMCs; (ii) invokes a gas temperature-density relation that was determined from detailed numerical studies of the ISM \citep{Glover+07a}; and (iii) consistently takes into account that denser, unresolved clumps have larger optical depths. As an application, we employ the model in a high-resolution simulation that follows the formation of a Milky Way-sized galaxy down to $z=2$. In order to explore the interplay between SF, H$_2$ abundance and galactic structure, we re-simulate the same volume using different algorithms for computing the density of molecular hydrogen and the local SFR. This paper is organized as follows. In Section \ref{sec:mol_hydrogen_formation}, we introduce our model for tracking the non-equilibrium H$_2$ abundance and compare it with other commonly adopted prescriptions that have been discussed in the literature. In Sections \ref{sec:code} and \ref{sec:analysis}, we describe our suite of simulations and present our main results. Finally, we summarize our main conclusions and then critically discuss some of our assumptions in Section \ref{sec:discussion}. | \label{sec:discussion} We have presented a sub-grid model for tracking the non-equilibrium abundance of molecular hydrogen in cosmological simulations of galaxy formation. The novelty of the model is that it phenomenologically accounts for the distribution of unresolved sub-grid densities determined from observations and simulations of the turbulent ISM. In this sense, it improves upon previous time-dependent schemes \citep[e.g.][]{Gnedin+09,Christensen+12} in which the H$_2$ formation rate on dust grains is amplified by a fixed clumping factor. We have implemented our model in the {\sevensize{RAMSES}} code in order to run simulations that track the evolution of the H$_2$ content of a massive galaxy at $z=2$, and to study the imprint of H$_2$-regulated SF. The resulting H$_2$ fractions in different environments spanning a vast range of gas metallicities and interstellar UV fields are consistent with observations of the Milky Way Galaxy, the LMC and the SMC. In order to better understand what determines the properties of a galaxy, we ran a suite of simulations of the same DM halo, each with a different prescription for computing the H$_2$ distribution. In the runs where H$_2$ is calculated explicitly, SF was regulated by the local H$_2$ abundance, while, for another (STD), we adopted the traditional Schmidt law based on the total gas density. Our main findings can be summarized as follows: \begin{enumerate} \item All simulations produce a galaxy which is in broad agreement with several high-redshift observational data sets. However, the different models for star and H$_2$ formation result in important differences in the stellar and gas masses of the galaxies (see Table \ref{table:par_r200}). 
\item In particular, if SF is molecule-regulated and the H$_2$ abundances are computed with a detailed treatment of photo-dissociation including a simplified radiative-transfer scheme: \begin{enumerate} \item The galaxy produces fewer stars and is in better agreement with the observed stellar-to-halo mass relation with respect to the STD model (which, however, cannot be ruled out as we did not study the effect of increasing the spatial resolution of the simulations). \item The galaxy harbours a larger gas reservoir and its gas fraction better matches observations of high redshift galaxies. \item Early SF is inhibited in metal-poor haloes with mass $M\lsim10^{10}$ M$_\odot$ in which gas and dust densities are too low to trigger efficient conversion of \HI~into H$_2$. As a result, the main galaxy assembles by accreting many gas-rich substructures and, consequently, hosts a younger stellar population and harbours a larger cold gas reservoir than in the STD case. Also, the number of satellites of the main galaxy at $z=2$ is reduced by 30 per cent at all stellar masses compared with the STD simulation. \end{enumerate} \item Regardless of the H$_2$ model: \begin{enumerate} \item The main galaxy in our simulations (with the exception of the KMT-EQ model) has similar spatial distribution and total mass of molecular hydrogen at $z=2$. This is mainly due to the fact that the average metallicity of the gas in its dominant progenitors is already $0.1$ Z$_\odot$ at $z=9$. As a result, H$_2$ formation is rapid, mitigating any subtle differences independently of whether stars form from atomic or molecular gas. \item The molecular mass in a cell scales linearly with that of the metals (above a model-dependent threshold density). This is a consequence of assuming that dust traces the metals in the simulations. \end{enumerate} \item If H$_2$ destruction by SNe is substantial, SF is suppressed by nearly 20 per cent at all times relative to an identical model in which this destruction channel is neglected. \item Contrary to the assumption that gas is fully molecular in high-redshift galaxies (commonly used to interpret CO observations, e.g. \citealt{Genzel+10,Tacconi+10,Magnelli+13}), the atomic gas fraction in our simulated galaxy is comparable to the molecular contribution, independent of the H$_2$ formation model. \item Using the STD model to form stars, but `painting on' H$_2$ in post-processing using the KMT-UV prescription, gives a reasonable estimate of the total H$_2$ mass of the galaxy. However, as already mentioned above, the resulting galaxy contains more stars (in particular old stars) than in all of the molecule-regulated schemes. \end{enumerate} Although our non-equilibrium H$_2$ model represents a significant improvement over previous methods, many challenges remain. For instance, we have assumed that the sub-grid density PDF of GMCs can be accurately described by a lognormal distribution. This is based on several observations of molecular clouds which, in some cases, show high-density tails in star forming regions \citep[e.g.][]{Kainulainen+09,Schneider+13}. Several complex physical phenomena, such as energy injection, turbulence, gravity and external compression, influence the density structure of molecular clouds. Yet, numerical studies of the ISM have shown that the lognormal model is a good approximation when an isothermal gas flow is supersonically turbulent \citep[e.g.][]{Vazquez-Semadeni+94,Glover+07a,Glover+07b,Federrath+09,Federrath+13}.
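The connection invoked here between turbulence statistics and sub-grid clumping can be made explicit with two standard lognormal relations, $\sigma_s^2=\ln(1+b^2\mathcal{M}^2)$ and $C_\rho=\exp(\sigma_s^2)$; in the minimal sketch below the forcing parameter $b$ is an assumption, while the values $\sigma_s\simeq1.5$, $C_\rho=10$ and $\mathcal{M}\sim5.5$ discussed in this work are mutually consistent under these relations.
\begin{verbatim}
import numpy as np

def sigma_s(mach, b=0.5):
    """Dispersion of s = ln(rho/<rho>) for supersonic, isothermal
    turbulence; b ~ 1/3 (solenoidal) to ~1 (compressive forcing)."""
    return np.sqrt(np.log(1.0 + (b * mach)**2))

def clumping_factor(sig):
    """C_rho = <rho^2>/<rho>^2 for a lognormal density PDF."""
    return np.exp(sig**2)

# A 3D rms Mach number of ~5.5 with mixed forcing (b ~ 0.5) gives
# sigma_s ~ 1.5, i.e. C_rho ~ 9, close to the constant clumping
# factor C_rho = 10 adopted in the simulations (exp(1.5^2) ~ 9.5).
print(sigma_s(5.5), clumping_factor(sigma_s(5.5)))
\end{verbatim}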
Power-law tails in the high-density regime form under the presence of self-gravity (which generates dense cores and super-critical filaments). Non-isothermal turbulence can also increase the occurrence of dense clumps. None the less, these uncertainties are likely sub-dominant to those associated with modelling the effects of SNa feedback on GMCs, which must be tackled with small-scale simulations of the ISM. In addition, we have set the dispersion of the lognormal density distribution to be $\sigma\simeq 1.5$, consistent with a constant clumping factor $C_\rho=10$. This choice was motivated by theoretical work that relates local density enhancements to the three-dimensional rms Mach number, $\mathcal{M}$, with values of $\mathcal{M}\sim 5.5$ \citep[e.g.][]{Padoan+97,Ostriker+01,Price+11}. Moreover, the same value for the clumping factor has been adopted in the literature to best match the H$_2$ content in observations and simulations \citep[e.g.][]{Gnedin+09,Christensen+12}. However, observations of GMCs have revealed substantial variations in the Mach number \citep{Schneider+13}. In future implementations, the realism of our model can be improved by adjusting the clumping factor (as well as the PDF of the sub-grid density) in cells with different mean densities and temperatures. From the technical point of view, this is straightforward to do: the difficulty lies in linking the mean properties of a cell to the sub-grid parameters that regulate the density PDF. One intriguing possibility could be to implement a simplified description of supersonic turbulence along the lines of that proposed by \citet{Teyssier+13}. We plan to return to these issues in future work. | 14 | 3 | 1403.7132 |
1403 | 1403.7418_arXiv.txt | The areas of sunspots are the most prominent feature of the development of sunspot groups. Since the sizes of sunspot areas depend on the strength of the magnetic field, accurate measurements of these areas are important. In this study, a method which allows the true areas of sunspots to be measured is introduced. A Stonyhurst disk is created by using a computer program and is aligned with solar images. By doing this, an accurate heliographic coordinate system is formed. Then, the true area of the whole sunspot group is calculated in square degrees with the aid of the heliographic coordinates of each picture element forming the image of the sunspot group. The use of this technique is not limited to sunspot areas only. The areas of flares and filaments observed on the chromospheric disk can also be calculated with the same method. In addition to this, it is possible to calculate the area of any occurrence on the solar disk, whether or not it is related to activity. | \label{S-Introduction} Sunspots are the most obvious feature of solar magnetic activity. To understand the development of solar activity, the morphological and kinematic behavior of sunspots on the solar surface must be revealed. Therefore, analyzing the emergence patterns, development and decay of sunspots on the solar surface is one of the most important steps in constructing sunspot group models. The evolution of the groups on the surface is traced by the evolution of both umbral and penumbral areas \cite{Gafeira2012}, \cite{Hathaway2008}. McIntosh \cite{McIntosh1990} described the classification of the sunspot groups depending on the appearance and the area covered on the surface. Here, the areas of the sunspots are an important criterion and they enable the groups to be distinguished from each other. The size and distribution of the sunspots are an indication of the complexity of the activity region in which the group is located \cite{Zirin1988}. Hence, the positions of the sunspots in the group also reflect the magnetic field distribution. Because the sizes of both the umbral and penumbral areas of sunspots are proportional to the magnetic field strength \cite{Schlichenmaier2010}, sunspots that reach a certain size, disintegrate, or tend to merge with other sunspots indicate different phases of the group development \cite{McIntosh1990}. An accurate measurement of the sunspot areas, therefore, could provide important information. Areas and heliographic positions of the sunspot groups were regularly calculated and archived at the Royal Greenwich Observatory from 1874 until 1976. The results were published in Greenwich Photoheliographic Results. After 1976, Debrecen Heliophysical Observatory (DHO) took over this mission. Now, the daily data about areas and positions of the sunspot groups are published in Debrecen Photoheliographic Data. At DHO, video images of the sunspot groups are used for area measurement, and an isodensity line is fitted to the edge of each spot. The sunspot group areas are calculated as follows: the area is divided into small squares with a grid system, the number of squares within the area is counted and added up, and the total area is then transformed into an area on the solar disk \cite{Gyori1998}, \cite{Sarychev2006}. Many researchers use the circular marking method, in which the area of a sunspot is taken to be the area of a circle superimposed on it.
All of the sunspot areas in the group are individually calculated and their sum gives the total area of the sunspot group \cite{Meadows2002}, \cite{Arlt2013}. The area of the sunspot group on the solar disk, $\it{A_M}$, is calculated by \begin{eqnarray} A_M &=& \frac{2 A_S 10^6}{\pi D^2 \cos(\rho)} \nonumber \end{eqnarray} \noindent where $\it{A_S}$ is the measured size of a sunspot group in the image, $D$ is the diameter of the image, and $\rho$ is the angular distance of the sunspot group's center from the center of the disk. In these studies, the area of the sunspot group is given in millionths of the apparent solar disk and includes the correction for foreshortening. In the method explained here, by contrast, the area of the group is calculated in square degrees. Nowadays, automated sunspot recognition techniques are being developed, and research is concentrated mostly in this field rather than on improving the older studies based on manual measurements. A software package called SAM is used for automated recognition at DHO. At the University of Bradford, an automated program is used to produce the Solar Feature Catalogue. Another automated program called StarTool is applied to digital images \cite{Gyori2005}. The articles written by Gy\H{o}ri \cite{Gyori1998} and Fonte \& Fernandes \cite{Fonte2009} give detailed descriptions of automated recognition for determining the edges of sunspots with image processing techniques. However, this approach introduces errors at the boundaries of the sunspots due to the blurring and smoothing processes, and wipes out some parts of the group, especially small sunspots close to big penumbral structures and umbral spots close to each other. Therefore, it can be said that these techniques provide only rough estimates and cannot give the true areas of the sunspot groups. Semi-automated approaches may overcome these problems by adjusting the threshold values visually. | % \label{S-Discussion} In this method, the principle of the area calculation is based on the pixels of digital images. When the size of every pixel is known in heliographic coordinates, the total calculated area of the sunspot group will be close to the actual value. Using high-resolution images will increase the calculation's accuracy: the pixel area will be small in square degrees and the number of pixels in the area will increase. Therefore, high-resolution 4096${\times}$4096 pixel images from SDO were used. The diameter of the Sun's disk is 3746 pixels in these images. This means that 1 pixel corresponds to 1.8 arcminutes of heliographic angle at the center of the disk and 5.5 arcminutes around 70$^{\circ}$ longitude on the equator of the Sun. Edge detection of the sunspot groups is an important stage of the method. Since the Contour Trace algorithm is used, the selection of the threshold intensity value is one of the most critical points. Also, since the border of the area is identified visually, the threshold value must be selected appropriately. The intensity values of the pixels are in the 0-255 range; experimental measurements showed that a noticeable change occurs when the threshold is 5 units above or below the proper value. When the threshold value is 5 units below, the sunspot areas become smaller by 2-3\%. This corresponds to an area difference of 0.5-0.8 square degrees for an area of 25 square degrees, because the sunspot group becomes more fragmented and most of the penumbral region shrinks.
On the other hand, when the threshold value is 5 units above, the borders of some sunspots in the group merge with each other and the penumbral regions widen, so the sunspot areas become larger by 4-6\%; this corresponds to an area difference of 1-1.5 square degrees for an area of 25 square degrees. These are average values, and the percentage of the area variation strongly depends on the location, size and fragmentation of the sunspot group on the solar disk. In particular, if the sunspot group is more fragmented, the percentage change will be greater. The manual re-arrangement of the group area is another critical point. Looking at Fig. 1, it may seem difficult to judge which of the bordered areas belong to the sunspot group. In practice it is not hard to decide, but it should be done properly; if not, the area of the sunspot group will be calculated incorrectly. Locating the graticule on the solar disk appropriately is the easiest part of the method. Nevertheless, the adjustment should be made carefully. The size of the graticule must be equal to the size of the solar disk, and the central point of the graticule must match exactly the central point of the solar disk image. A common error here is to shift the graticule by a few pixels, typically of the order of 2-3 pixels, in some direction. If the graticule is shifted by 3 pixels, the sunspot area will change by approximately 0.1\% at the center and by 1-1.5\% close to the edge of the disk. When the graticule is shifted in one direction (for example, towards the east), the sunspot groups on that side will be shifted towards the center, while those on the opposite side will move closer to the edge. When a pixel shifts towards the center, its area in square degrees decreases; in the opposite case, it increases. Therefore, as the sunspot group approaches the center, the measured area decreases, and as it approaches the edge, the area increases. All of the errors mentioned above are the critical points of this method. If the selections are made appropriately, the areas of the groups will be calculated correctly. More importantly, in the other methods the areas covered by the pixels or small squares are assumed to be equal to each other on the solar disk, whereas they actually differ in size. In the method explained here, the areas of the pixels change in square degrees depending on their latitude and longitude. Therefore, this is an effective method to calculate the areas of sunspots from digital images of the solar disk. Finally, with this method, not only the areas of sunspots, but also the areas of flares and plages in chromospheric solar disk images and the areas of the network in Ca II images can be calculated. \begin{appendix} \renewcommand{\thesubsection}{\Alph{subsection}} |
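To make the per-pixel summation of the preceding method concrete, here is a minimal sketch of the area calculation. It assumes the solar $B_0$ and $P$ angles are zero and a disk centred in the image; the function name and arguments are hypothetical, not code from the paper.
\begin{verbatim}
import numpy as np

def feature_area_sq_deg(mask, xc, yc, r_pix):
    """True area [square degrees] of the pixels flagged in a boolean
    mask, for a solar disk of radius r_pix centred at (xc, yc).
    With B0 = P = 0, each pixel contributes
    (180/pi)^2 / (r_pix^2 * mu), where mu = cos(B)cos(L) is the
    foreshortening factor at that pixel."""
    ys, xs = np.nonzero(mask)
    x = (xs - xc) / r_pix
    y = (ys - yc) / r_pix
    mu = np.sqrt(np.clip(1.0 - x**2 - y**2, 1e-6, 1.0))
    return np.sum((180.0 / np.pi)**2 / (r_pix**2 * mu))

# Sanity check: a mask covering the whole disk must return the area
# of a hemisphere, 2*pi*(180/pi)^2 ~ 20626 square degrees.
n = 2048
yy, xx = np.mgrid[0:n, 0:n]
disk = (xx - n / 2)**2 + (yy - n / 2)**2 < (0.45 * n)**2
print(feature_area_sq_deg(disk, n / 2, n / 2, 0.45 * n))  # ~20600
\end{verbatim}
Because the per-pixel weight grows towards the limb, this bookkeeping reproduces the latitude and longitude dependence of the pixel areas described above.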
1403 | 1403.5137_arXiv.txt | We simulate the evolution of dense-cool clumps embedded in the intra-cluster medium (ICM) of cooling flow clusters of galaxies in response to multiple jet-activity cycles, and find that the main heating process of the clumps is mixing with the hot shocked jets' gas, the bubbles, while shocks have a limited role. We use the PLUTO hydrodynamical code, in two dimensions with imposed axisymmetry, to follow the thermal evolution of the clumps. We find that the inflation process of hot bubbles, which appear as X-ray deficient cavities in observations, is accompanied by complicated induced vortices inside and around the bubbles. The vorticity induces efficient mixing of the hot bubbles' gas with the ICM and cool clumps, resulting in a substantial increase of the temperature and entropy of the clumps. For the parameters used by us, heating by shocks barely competes with radiative cooling, even after 25 consecutive shocks excited during 0.5~Gyr of simulation. Some clumps are shaped into filamentary structures that can turn into observed optical filaments. We find that not all clumps are heated. Those that cool to very low temperatures will fall in and feed the central supermassive black hole (SMBH), hence closing the feedback cycle in what is termed the cold feedback mechanism. | A negative feedback mechanism determines the thermal evolution of the intra-cluster medium (ICM) in the inner regions of cooling flow (CF) clusters and groups of galaxies (e.g., \citealt{Binney1995, Farage2012, Pfrommer2013}). This feedback mechanism is driven by active galactic nucleus (AGN) jets that inflate X-ray deficient cavities (bubbles; e.g., \citealt{Dong2010, OSullivan2011, Gaspari2012a, Gaspari2012b, Birzan2011, Gitti2012, GilkisSoker2012, RefaelovichSoker2012}). Examples of bubbles in cooling flows include Abell 2052 \citep{Blanton2011}, NGC 6338 \citep{Pandge2012}, NGC 5044 \citep{David2009}, HCG 62 \citep{Gitti2010}, Hydra A \citep{Wise2007}, NGC 5846 \citep{Machacek2011}, NGC 5813 \citep{Randall2011}, A 2597 \citep{McNamara2001}, Abell 4059 \citep{Heinz2002}, NGC 4636 \citep{Baldi2009}, NGC 5044 \citep{Gastaldello2009, David2011}, and RBS 797 \citep{Schindler2001, Cavagnolo2011, Doria2012}. A relevant feature is that in most cases the two opposite bubbles of a bubble pair depart from exact axisymmetrical morphology. This implies a relative motion of the ICM and the source of the jets, and in some cases a change in direction of the jets' axis. We will not be able to simulate these flow patterns with our numerical grid on which we impose axial symmetry. The process of bubble inflation is at the heart of the feedback mechanism, as it is related to other processes such as vortex shedding in the ICM (e.g., \citealt{RefaelovichSoker2012, Walgetal2013}), sound wave excitation \citep{Sternberg2009, GilkisSoker2012}, and mixing of the ICM with hot shocked jets' material \citep{GilkisSoker2012, Sokeretal2013}. The bubbles seem to be a key ingredient in the feedback mechanism not only in cooling flows, but also in other astrophysical objects \citep{Sokeretal2013}, such as core collapse supernovae \citep{PapishSoker2014}. Vortices inside the bubbles and in their surroundings play major roles in the formation of bubbles, their evolution, and their interaction with the ICM (e.g. \citealt{Heinz2005, Sternberg2008b, Sternberg2009, RefaelovichSoker2012}).
\cite{Omma2004} find that a turbulent vortex trails each cavity, and that this vortex contains a significant quantity of entrained and uplifted material (also \citealt{Roediger2007}), and \cite{GilkisSoker2012} find that vigorous mixing caused by vortices implies that the region within a few$\times 10 \kpc$ is multi-phase. These processes lead to the formation of small cool regions that, if not heated by another jet-activity episode, cool further and flow inward to feed the AGN. The process of feeding the AGN with cold clumps in the feedback mechanism cycle is termed the \emph{cold feedback mechanism}, and was suggested by \cite{Pizzolato2005}. The cold feedback mechanism has since been strengthened by observations of cold gas and by more detailed studies (e.g., \citealt{Revaz2008, Pope2009, Wilman2009, Pizzolato2010, Wilman2011, Nesvadba2011, Cavagnolo2011, Gaspari2012a, Gaspari2012b, McCourt2012, Sharma2012, Farage2012, GilkisSoker2012, Waghetal2013, BanerjeeSharma2014, McNamaraetal2014, Li2014, VoitDonahue2014, Voitetal2014}). To inflate the wide bubbles very close to the origin of the jets, termed `fat bubbles,' either slow (sub-relativistic) massive wide (SMW) jets (bipolar outflows) \citep{Sternberg2007}, precessing jets \citep{Sternberg2008a, Falceta-Goncalves2010}, or a relative motion of the jets to the medium \citep{Bruggen2007, Soker2009, Morsony2010, Mendygral2012} is required. No fat bubbles are formed when the jets penetrate to too large a distance, while in intermediate cases elongated bubbles and/or bubbles detached from the center are formed (e.g., \citealt{Basson2003, Omma2004, Heinz2006, VernaleoReynolds2006, AlouaniBibi2007, Sternberg2007, ONeill2010, Mendygral2011, Mendygral2012}). In the present study we inflate bubbles with SMW jets, but our results hold for bubbles inflated by precessing jets or a relative motion of the ICM as well. Our demonstration that bubbles in cooling flow clusters are inflated by SMW outflows \citep{Sternberg2007}, and our suggestion that such SMW outflows could also have been active during galaxy formation \citep{Sokeretal2009}, require many AGN to form SMW bipolar outflows. Such common SMW bipolar outflows are supported by recent observations (e.g., \citealt{Moe2009, Dunn2010, Tombesi2012, Aravetal2013, Harrisonetal2014}). In our setting, two opposite jets are launched along a common axis. The heating of the gas perpendicular to the jets' axis need not be $100\%$ efficient, as observations show that heating does not completely offset cooling (e.g., \citealp{Wis04, McN04, Cla04, Hic05, Bre06, Sal08, Wilman2009}), and a \emph{moderate CF} exists \citep{Soker2001}. \textit{Moderate} implies here that the mass cooling rate to low temperatures is much lower than the cooling rate expected without heating, but much larger than the accretion rate onto the supermassive black hole (SMBH) at the center of the cluster. The cooling gas either forms stars (e.g., \citealp{Odea08, Raf08}), forms cold clouds (e.g., \citealt{Edge2010}), is accreted by the SMBH to maintain the cold feedback mechanism \citep{Pizzolato2010}, or is expelled back to the ICM and heated when it is shocked or mixed with the hot jets' material. The mixing of cold ICM clumps with the hot shocked jets' material is the focus of our present study. In section \ref{s-numerical-setup} we describe the numerical code and setup.
In section \ref{s-global_flow_structure} we describe the global flow structure, and in section \ref{s-clump_bubble_interaction} we turn to study the interaction of jet-inflated bubbles with the ICM. Our study of multiple jet-launching episodes, up to 25 episodes, is described in section \ref{multiepisodes} where we follow the entropy of the cold clumps. In section \ref{s-summary} we summarize our main findings and their implications. | \label{s-summary} We used the PLUTO hydrodynamic code \citep{Mignone2007} to study the heating of dense clumps embedded in the intra-cluster medium (ICM) of cooling flow clusters of galaxies. We conducted 2D axisymmetric hydrodynamic simulations, i.e., the flow is 3D but with an imposed azimuthal symmetry around the $z$ axis, to study the influence of multiple jet-activity cycles on the thermal evolution of the dense clumps. The initial cross section of each clump is a circle in the meridional plane of the 2D numerical grid, which implies a torus in 3D. Only one side of the equatorial plane was simulated. In some cases only one jet-launching episode was simulated, and in others we ran 25 activity episodes, with an off period of $10 \Myr$ between $10 \Myr$ long active phases. We reproduced (Fig. \ref{figure: 3clumps_r45_rhofactor1.3_t50}) the formation of a fat bubble by a slow massive wide (SMW) jet \citep{Sternberg2007, GilkisSoker2012}, and the formation of multiple sound waves with a single jet-activity episode \citep{Sternberg2009, GilkisSoker2012}. We strengthened the finding of \cite{GilkisSoker2012} and \cite{Sokeretal2013} that vorticity plays major roles in the structure and evolution of bubbles and their interaction with the ICM. Our addition here is the study of dense clumps and the simulation of many jet-activity episodes. We considered two main heating mechanisms in AGN feedback: heating by shock waves initiated by jets, and mixing of cold gas with shocked hot jet material. The thermal evolution of dense clumps is summarized in Figs.~\ref{figure: clump temperature and entropy} and \ref{figure: temperature and entropy history, 1.3, periodic jet}. For the parameters used in our study (see section \ref{s-numerical-setup}) we found that heating by shock waves cannot compete with radiative cooling over a long time (for an opposite view see \citealt{Randall2011}). Shocks increase the temperature of the clumps and compress the gas, but after the clumps re-expand the temperature drops back to almost its initial value. Shocks also increase the clumps' entropy, but the compression shortens the radiative cooling time of the gas. Even in simulations with multiple frequent jet episodes, shock waves did not come close to offsetting radiative cooling. The inefficiency of shock heating was derived analytically in a previous paper \citep{Sokeretal2013}, where it was shown to be much less efficient than mixing. On the other hand, we found, like \cite{GilkisSoker2012}, that once mixing with the jets' shocked material (the hot bubbles) begins, it is very efficient in heating the cold clump's material and increasing its entropy. The mixing process studied by \cite{GilkisSoker2012} and explored here can go well beyond the direct mixing of shocked jets' material with the ICM and cool clumps, and continue with turbulence in a larger volume of the ICM in cooling flows \citep{BanerjeeSharma2014}. Based on a 2D hydrodynamical study, \cite{Peruchoetal2014} argued recently that heating by shocks is the main heating process.
However, they inflate bubbles on scales of $>500 \kpc$, larger by an order of magnitude than typical bubbles in cooling flow clusters. Their bubbles occupy a huge fraction of the ICM volume, hundreds of times the typical volume in observed cases. We attribute their conclusion about shock heating to their unrealistically large bubbles. In any case, they also note the importance of mixing. Our 2D numerical code is constrained to launch jets along a constant direction, and the mixing is not efficient in directions at large angles to this direction. Observations, however, show that bubbles of different episodes are not exactly aligned with each other, and even two opposite bubbles inflated together lose alignment over time. These misalignments result from a relative motion of the central AGN and the ICM, and from jets' precession. Thus, mixing is expected to be efficient in all directions, and so to be the major heating mechanism in cooling flows in galaxies and clusters of galaxies, as well as in the process of galaxy formation during which cooling flow could have taken place \citep{Soker2010a}. The numerical constraint of a constant jet axis has another effect. In the simulations we conducted we did not get clear fat bubbles at late times of multiple-episode simulations. Rather, an elongated bubble was formed, a shape which is mostly inconsistent with the observations. The elongated shape was formed since the jets we simulated were always along the same axis, the rotational-symmetry axis of our 2D simulations. In reality, different jet episodes are often directed in different directions, and in such cases two opposite bubbles are formed \citep{Sternberg2007, GilkisSoker2012}. The complicated flow structure induced by bubbles' inflation (Figs. \ref{figure: 3clumps_r45_rhofactor1.3_t50}, \ref{figure: rhofactor2_zoomed}, \ref{figure: rhofactor3}) has some further implications for the thermal evolution and feedback mechanism. (1) Vortices on all scales entangle magnetic field lines in the ICM. This suppresses any global heat conduction in the ICM near the center. (2) The same entanglement process mixes the magnetic fields of the ICM and the shocked jets' material. This leads to reconnection of the magnetic field lines, hence allowing for local heat conduction between the mixed ICM and jets' gas. We emphasize the efficiency of local heat conduction (scales of $\la 0.1 \kpc$) as opposed to the inefficiency of global (scales of $\ga 1 \kpc$) heat conduction (see review by \citealt{Soker2010b}). The typical grid size at $10 \kpc$ from the center of our numerical code is $0.06 \kpc$. Therefore, with our resolution and if the claim of \cite{Soker2010b} holds, there is no need to include heat conduction in the inner region of $r \la 50 \kpc$. The outer regions are of less interest to us here. (3) Heating by mixing, while being very efficient, is not $100 \%$ efficient. Our results show that some cold clumps do indeed cool to low temperatures. These will form very dense clumps that, if not heated by another jet within a short time, fall inward and feed the AGN. Our results support the cold feedback mechanism as suggested by \cite{Pizzolato2005}, which has gained considerable support from recent observations of cold gas and from more detailed studies (see section \ref{s-intro}). We thank an anonymous referee for very helpful and detailed comments. | 14 | 3 | 1403.5137
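The disparity between the two heating channels discussed in this summary can be illustrated with a back-of-the-envelope comparison: the entropy jump of a clump across a weak shock versus the temperature reached when a small mass fraction of hot bubble gas is mixed in. All numbers below are illustrative and are not taken from the simulations.
\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0

def shock_entropy_ratio(mach):
    """K2/K1 with K = P/rho^gamma across a hydrodynamic shock
    (Rankine-Hugoniot relations, ideal monatomic gas)."""
    m2 = mach**2
    p_ratio = (2.0 * GAMMA * m2 - (GAMMA - 1.0)) / (GAMMA + 1.0)
    rho_ratio = (GAMMA + 1.0) * m2 / ((GAMMA - 1.0) * m2 + 2.0)
    return p_ratio / rho_ratio**GAMMA

def mixed_temperature(t_cold, t_hot, f_hot):
    """Mass-weighted temperature after mixing a hot mass fraction
    f_hot into cold gas (same mean molecular weight)."""
    return f_hot * t_hot + (1.0 - f_hot) * t_cold

# A Mach-1.5 shock raises the clump entropy by only ~4 per cent...
print(shock_entropy_ratio(1.5))            # ~1.04
# ...while mixing 5 per cent by mass of ~50 keV shocked jet gas
# into a ~1 keV clump more than triples its temperature.
print(mixed_temperature(1.0, 50.0, 0.05))  # ~3.5 keV
\end{verbatim}
The modest entropy gain per weak shock, compared with the large temperature boost from even a small admixture of bubble gas, encapsulates why mixing dominates in these simulations.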
1403 | 1403.5301_arXiv.txt | We present a novel method for revealing the equation of state of high-density neutron star matter through gravitational waves emitted during the postmerger phase of a binary neutron star system. The method relies on a small number of detections of the peak frequency in the postmerger phase for binaries of different (relatively low) masses, in the most likely range of expected detections. From such observations, one can construct the derivative of the peak frequency versus the binary mass, in this mass range. Through a detailed study of binary neutron star mergers for a large sample of equations of state, we show that one can extrapolate the above information to the highest possible mass (the threshold mass for black hole formation in a binary neutron star merger). In turn, this allows for an empirical determination of the maximum mass of cold, nonrotating neutron stars to within~$0.1 M_{\odot }$, while the corresponding radius is determined to within a few percent. Combining this with the determination of the radius of cold, nonrotating neutron stars of $1.6~M_{\odot}$ (to within a few percent, as was demonstrated in Bauswein et al., PRD, 86, 063001, 2012), allows for a clear distinction of a particular candidate equation of state among a large set of other candidates. Our method is particularly appealing because it reveals simultaneously the moderate and very high-density parts of the equation of state, enabling the distinction of mass-radius relations even if they are similar at typical neutron star masses. Furthermore, our method also allows us to deduce the maximum central energy density and maximum central rest-mass density of cold, nonrotating neutron stars with an accuracy of a few per cent. | The Advanced LIGO~\cite{2010CQGra..27h4006H} and Advanced Virgo~\cite{2006CQGra..23S.635A} gravitational-wave detectors are expected to observe between 0.4 and 400 mergers of binary neutron stars (NSs) per year, when they start operating at their design sensitivity \cite{2010CQGra..27q3001A}.\footnote{Similar rates are estimated for the upcoming KAGRA instrument~\cite{2010CQGra..27h4004K}.} The Einstein Telescope design \cite{2010CQGra..27a5003H} promises roughly \(10^{3}\) times higher detection rates. The merger of NSs is a consequence of gravitational wave (GW) emission, which extracts energy and angular momentum from the binary and thus forces the binary components onto inspiraling trajectories. Events within a few tens of Mpc are particularly interesting, because they bear the potential to constrain the (still largely unknown) equation of state (EoS) of neutron-star matter (see~\cite{2010CQGra..27k4002D,2011GReGr..43..409A,BaumgarteShapiro,2012LRR....15....8F,Rezzolla} for reviews and e.g.~\cite{2007PhR...442..109L,2012ARNPS..62..485L} for a discussion of the current EoS and NS constraints). The properties of cold, high-density matter are encoded in the stellar properties of nonrotating NSs, since the EoS uniquely defines the stellar structure via the Tolman-Oppenheimer-Volkoff equations~\cite{1939PhRv...55..364T,1939PhRv...55..374O}.
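As a concrete illustration of this EoS-to-structure mapping, the sketch below integrates the TOV equations for a simple $\Gamma=2$ polytrope in geometrized units ($G=c=M_\odot=1$, so one length unit is $GM_\odot/c^2\approx 1.477$~km). The polytropic constant and central pressure are illustrative stand-ins for a realistic tabulated EoS.
\begin{verbatim}
import numpy as np

K, GAMMA_P = 100.0, 2.0   # illustrative polytrope p = K * rho^Gamma

def eos_eps(p):
    """Energy density: eps = rho + p/(Gamma - 1) for the polytrope."""
    rho = (p / K)**(1.0 / GAMMA_P)
    return rho + p / (GAMMA_P - 1.0)

def tov_rhs(r, m, p):
    eps = eos_eps(p)
    dmdr = 4.0 * np.pi * r**2 * eps
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) \
           / (r * (r - 2.0 * m))
    return dmdr, dpdr

def tov_star(p_c, dr=1e-3):
    """Integrate outward from the centre until the pressure
    vanishes; returns gravitational mass and areal radius
    (geometrized units)."""
    r, m, p = dr, 0.0, p_c
    while p > 1e-12 * p_c:
        dm, dp = tov_rhs(r, m, p)
        m, p, r = m + dm * dr, p + dp * dr, r + dr
    return m, r

print(tov_star(1e-3))   # one point on the mass-radius curve
\end{verbatim}
Scanning the central pressure traces out the full mass-radius relation, whose turning point defines the maximum mass referred to throughout this paper.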
Since the dynamics of a merger is crucially affected by the properties of NSs, the GW signal carries information on the binary parameters and the EoS (e.g.~\cite{1994PhRvD..50.6247Z,1996A&A...311..532R,2005PhRvL..94t1101S,2005PhRvD..71h4021S,2007A&A...467..395O,2007PhRvL..99l1102O,2008PhRvD..77b4006A,2008PhRvD..78b4012L,2008PhRvD..78h4033B,2009PhRvD..80f4037K,2011MNRAS.418..427S,2011PhRvD..83d4014G,2011PhRvD..83l4008H,2011PhRvL.107e1102S,2012PhRvL.108a1101B,2012PhRvD..86f3001B,2013MNRAS.430.2585R,2013PhRvL.111m1101B,2013PhRvD..88d4026H,2013arXiv1311.4443B,2014MNRAS.437L..46R,2014arXiv1402.6244B,2014arXiv1403.5672T}). For sufficiently nearby events, the chirp-like inspiral GW signal reveals the total binary mass and the mass ratio of the merging NSs (e.g.~\cite{1993PhRvD..47.2198F,1994PhRvD..49.2658C,2005PhRvD..71h4008A,2008CQGra..25r4011V,2013ApJ...766L..14H,2013arXiv1304.1775T}). During the late inspiral phase, deviations from the point-particle behavior may be used to determine stellar properties of the inspiraling NSs (NS radii or the NS moment of inertia) with some accuracy (e.g.~\cite{2002PhRvL..89w1102F,2008PhRvD..77b1502F,2009PhRvD..79l4032R,2010PhRvD..81l3016H,2012PhRvD..85l3007D,2013PhRvL.111g1101D,2013PhRvD..88b3009Y,2013PhRvD..88j4040M,2013PhRvD..88d4042R,2013arXiv1310.8288F,2013arXiv1310.8358Y,2014arXiv1402.5156W}). As an additional method, one may detect the dominant oscillations of the postmerger remnant, which (unless there is prompt collapse to a black hole (BH)) is a hot, massive, differentially rotating NS (which is observationally the most likely case)~\cite{1994PhRvD..50.6247Z,1996A&A...311..532R,2000ApJ...528L..29B,2003ApJ...583..410L,2005PhRvL..94t1101S,2005PhRvD..71h4021S,2007A&A...467..395O,2007PhRvL..99l1102O,2008PhRvD..77b4006A,2008PhRvD..78b4012L,2008PhRvD..78h4033B,2009PhRvD..80f4037K,2011MNRAS.418..427S,2011PhRvD..83d4014G,2011PhRvD..83l4008H,2011PhRvL.107e1102S,2012PhRvL.108a1101B,2012PhRvD..86f3001B,2012PhRvD..86f4032P,2013MNRAS.430.2585R,2013PhRvL.111m1101B,2013PhRvD..88f4009G,2013PhRvD..88d4026H,2013arXiv1306.4034K,2013arXiv1311.4443B,2014MNRAS.437L..46R,2014arXiv1403.3680N,2014arXiv1403.5672T}. The dominant feature in the gravitational wave spectrum of the postmerger phase originates from a fundamental quadrupolar (\(m=2\)) fluid oscillation mode (see \cite{2011MNRAS.418..427S} for an extraction of the mode pattern, which confirms this description), which appears as a pronounced peak in the GW spectrum in the range between \(2-3.5\)~kHz. Recently, it was found that for binaries with a total mass of about $2.7~M_{\odot}$ the frequency of this peak determines the radius of a cold, nonrotating NS with a mass of $1.6~M_{\odot}$ to within a few percent \cite{2012PhRvL.108a1101B,2012PhRvD..86f3001B}~\footnote{Note that the radii of NSs with masses somewhat different than 1.6~$M_{\odot}$ are also obtained with good accuracy.}, which was confirmed in~\cite{2013PhRvD..88d4026H}. Even a single such detection would thus tightly constrain the EoS in the density range characteristic of $1.6~M_{\odot}$ NSs. Observations of more massive binaries would provide estimates for the radii of more massive nonrotating NSs, since they probe a higher density regime~\cite{2012PhRvD..86f3001B}.
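The practical inversion implied by this result can be written in two lines; the linear calibration coefficients below are invented placeholders, standing in for the actual fit calibrated on the simulation sample of Bauswein et al.
\begin{verbatim}
# Placeholder calibration R_1.6 [km] ~ a + b * f_peak [kHz]; the
# coefficients are illustrative only.
A_KM, B_KM_PER_KHZ = 19.5, -2.5

def radius_16_from_fpeak(f_peak_khz):
    """Radius of a cold, nonrotating 1.6 M_sun NS inferred from the
    postmerger peak frequency of a ~2.7 M_sun binary."""
    return A_KM + B_KM_PER_KHZ * f_peak_khz

print(radius_16_from_fpeak(2.8))   # ~12.5 km for a 2.8 kHz peak
\end{verbatim}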
The detection of binary NS mergers with masses larger than $2.7~M_{\odot}$ is particularly interesting, because the determination of the threshold binary mass to BH collapse sets a tight constraint on the maximum mass of cold, nonrotating NSs, as was shown recently in~\cite{2013PhRvL.111m1101B} (note that current pulsar observations provide a lower limit to the maximum mass of about 2~$M_{\odot}$~\cite{2010Natur.467.1081D,Antoniadis26042013}). For a given EoS the threshold binary mass to BH collapse depends in a clear way on the maximum mass of cold, nonrotating NSs and on the radius of a star with 1.6~$M_{\odot}$~\cite{2013PhRvL.111m1101B}. Thus, given an estimate for $R_{1.6}$ (e.g. from the inspiral GW signal or from the postmerger GW peak frequency), the determination of the threshold mass to BH collapse yields a constraint on the maximum mass of cold, nonrotating NSs. For most EoSs the threshold mass to BH collapse is in the range of $3-4~M_{\odot}$~\cite{2013PhRvL.111m1101B}. This poses a serious obstacle to \textit{directly} determining the threshold mass, since NS mergers take place more frequently with a lower total binary mass of about $2.7 M_{\odot}$ (as is suggested by the mass distribution of observed NS binaries, see \cite{2012ARNPS..62..485L} for a compilation, and by theoretical population synthesis studies, see e.g.~\cite{2012ApJ...759...52D}). Moreover, for binary masses very near to the threshold mass for prompt collapse to a BH, the duration of the postmerger signal becomes shorter, further decreasing the expected detection rate. In this work we show that the detection of the postmerger GW emission of two low-mass NS binary mergers with slightly different masses can be employed to estimate the threshold mass. Thus, the binary systems that are most likely to be detected may reveal the threshold mass to BH collapse and, in turn, the maximum mass of cold, nonrotating NSs (to within \(0.1~M_{\odot}\)). The corresponding radius is determined to within a few percent. Combining this with the determination of the radius of cold, nonrotating neutron stars of~$1.6~M_{\odot}$~\cite{2012PhRvD..86f3001B} allows for a clear distinction of a particular candidate equation of state among a large set of other candidates. In this paper NS masses refer to the \textit{gravitational mass in isolation}, and \textit{binary masses are reported as the sum of the gravitational masses in isolation} of the individual binary components. We use the term ``low-mass binaries'' for systems with binary masses of about 2.7~$M_{\odot}$ to distinguish them from ``high-mass binaries'' with binary masses closer to or above the threshold mass. The paper is organized as follows: In Sect.~\ref{sec:sim} we briefly review the simulations investigated in this study. Sect.~\ref{sec:idea} outlines the main idea. The method and its results are described in Sect.~\ref{sec:extra}. We close with a summary and conclusions. |
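A schematic of the extrapolation step itself is given below; the masses and frequencies are invented for illustration, and in the full method the measured slope is mapped onto the threshold mass via relations calibrated on a large set of EoSs (Sect.~\ref{sec:extra}).
\begin{verbatim}
import numpy as np

# Hypothetical postmerger peak-frequency detections for two
# low-mass binaries (total masses in M_sun, frequencies in kHz).
m_tot  = np.array([2.65, 2.75])
f_peak = np.array([2.60, 2.82])

slope = np.diff(f_peak)[0] / np.diff(m_tot)[0]   # d f_peak / d M

def f_peak_linear(m):
    """Linear extrapolation of the peak frequency towards the
    threshold mass for prompt black hole collapse."""
    return f_peak[0] + slope * (m - m_tot[0])

print(slope, f_peak_linear(3.2))   # ~2.2 kHz/M_sun; ~3.8 kHz
\end{verbatim}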
|
1403 | 1403.7597_arXiv.txt | Observations of a strong flux of low-energy neutrons were made with $^{3}\mathrm{He}$ counters during thunderstorms [Gurevich et al (Phys. Rev. Lett. 108, 125001, 2012)]. How the unprecedented enhancements were produced remains elusive. To better elucidate the mechanism, a simulation study of the impact of surrounding materials on measurements by $^{3}\mathrm{He}$ counters was performed with GEANT4. It was found that, contrary to what was previously thought, a $^3\mathrm{He}$ counter has a small sensitivity to high-energy gamma rays because of inelastic interactions with its cathode-tube materials (Al or stainless steel). A $^{3}\mathrm{He}$ counter with this intrinsic small sensitivity, if surrounded by thick materials, would largely detect thunderstorm-related gamma rays rather than those neutrons produced via photonuclear reaction in the atmosphere. On the other hand, the counter, if surrounded by thin materials and located away from a gamma-ray source, would observe neutron signals with little gamma-ray contamination. Compared with the Gurevich measurement, the present work allows us to deduce that the enhancements are attributable to gamma rays, if their observatory was very close to or inside a gamma-ray emitting region in thunderclouds. | \label{sec:intro} Like the Sun and supernova remnants, thunderclouds as well as lightning are powerful particle accelerators in which electrons are accelerated by electric fields to a few tens of MeV or higher energies. They in turn produce high-energy gamma rays extending from a few hundred keV to a few tens of MeV, or 100 MeV on rare occasions. In addition to gamma rays and electrons, some observations~\cite{Shah_1985,armenia_2010, armenia_2012,Tibet_2012} showed that neutrons were probably produced in association with lightning and thunderclouds. To explain such neutron generation, two mechanisms have been investigated theoretically and experimentally since the first positive neutron detection~\cite{Shah_1985}. One is the fusion mechanism via $\mathrm{^2H}+\mathrm{^2H} \rightarrow \mathrm{n} + \mathrm{^{3}He}$, and the other is the photonuclear reaction or Giant Dipole Resonance (GDR), mainly via $^{14}\mathrm{N} + \gamma (>10.6\,\, \mathrm{MeV}) \rightarrow \mathrm{n} + ^{13}\mathrm{N}$ in the atmosphere. Conducting numerical calculations, \citet{BR_2007_Npro} showed that only the latter was feasible in a typical thunderstorm environment. However, a recent calculation considering ion runaway in a lightning discharge suggested the possibility of neutron production via the former~\cite{IonRunaway}. Thus, the neutron generation process in thunderstorms remains elusive. Experimentally, $\mathrm{BF_{3}}$ and $^3\mathrm{He}$ counters have frequently been employed to detect neutrons associated with thunderstorms. As is well known, the two detectors have high sensitivity to neutrons thanks to their high total cross-sections in the thermal to epithermal energy region: 3840 b for $^{10}\mathrm{B}$ and 5330 b for $^3\mathrm{He}$ at 0.025 eV~\cite{Knoll}. In particular, a set of neutron monitors (NMs), installed at high mountains with altitudes of $>$3000 m, detected remarkable count increases during thunderstorms~\cite{armenia_2010,Tibet_2012}. Generally, a NM consists of a $\mathrm{BF_{3}}$ counter surrounded by thick shields of lead and polyethylene [$\mathrm{(C_2H_4)_{n}}$]~\cite{NM64_1,NM64_2}. Thus, it was naturally considered that the detected count increases by NMs were attributable to neutrons, not gamma rays.
However, \citet{Tibet_2012}, using GEANT4 simulations~\cite{GEANT4}, demonstrated that such an NM has a low but non-negligible sensitivity to gamma rays with energies higher than 7 MeV, because they can produce neutrons in the surrounding lead blocks via photonuclear reactions. Consequently, they pointed out that count enhancements of NMs associated with thunderstorms were dominated by gamma rays rather than neutrons. This claim was favored shortly afterward by \citet{armenia_2012}. As shown in Figure~\ref{fig:ex_event}, \citet{Gurevich_Obs2012} recently reported detections of a strong flux of low-energy ($<$ a few keV) neutrons during thunderstorms. They observed the enhancements with several independent detectors for 1 minute or longer, in coincidence with high electric field changes ($<\pm 30\, \mathrm{kV/m}$). Such a long duration, together with the simultaneous detections, may exclude the possibility that the increases are due to electrical noise, and is similar to the prolonged enhancements observed by other groups~\cite{armenia_2010,Tibet_2012}. Unlike the other observations, the Gurevich events were recorded with a set of $\mathrm{^{3}He}$ counters installed on a high mountain at an altitude of 3340 m. They argued that the detected neutron flux of 0.03$-$0.05 $\mathrm{cm^{-2}s^{-1}}$ could not be explained by the photonuclear reaction, as this would require a gamma-ray flux at least three orders of magnitude higher than previously measured. However, such an increase obtained by $\mathrm{^{3}He}$ counters may originate from gamma rays, not neutrons, if we consider inelastic interactions between high-energy gamma rays and the cathode wall made of aluminum or stainless steel. For example, the threshold energies of $^{27}\mathrm{Al}(\gamma, n)^{26}\mathrm{Al}$ and $^{27}\mathrm{Al}(\gamma, p)^{26}\mathrm{Mg}$ are 13.1 MeV and 8.3 MeV, respectively~\cite{IAEA_2000}. Indeed, gamma rays at energies of 10 MeV or higher have been measured by sea-level experiments~\cite{Dwyer_10MeV,TERA_2012,growth_2007,growth_2011,growth_2013}, high-mountain ones~\cite{norikura_2009,Fuji_2009,armenia_2010, armenia_2011,Tibet_2012}, and space missions~\cite{RHESSI_2009,FERMI_2010,AGILE_100MeV}. In addition, it is well known that neutron measurement by a $^3\mathrm{He}$ counter is disturbed by gamma rays in a mixed field of gamma rays and neutrons~\citep{He3_gamma1,He3_gamma2}. Such a mixed field is similar to the conditions under which gamma rays and neutrons are observed during thunderstorms. In this paper, we investigate how materials surrounding $^{3}\mathrm{He}$ counters affect their measurements during thunderstorms. For this aim, in Section 2 we derive, with GEANT4, the detection efficiency of a $^{3}\mathrm{He}$ counter for $>$10 MeV gamma rays as well as for neutrons in the 0.01 eV$-$20 MeV energy range. Some authors~\citep{armenia_2012,BabichCal_2013} argued against the interpretation given by \citet{Gurevich_Obs2012}, but did not clearly give the detection efficiency of a $^{3}\mathrm{He}$ counter for gamma rays. Then, to examine how neutrons and gamma rays contribute to the signal of a $^{3}\mathrm{He}$ counter surrounded by thick or thin material, in Section 3 we utilize two roof configurations following \citet{Gurevich_Obs2012}. Considering the derived efficiencies and the roof effects on neutron detection during thunderstorms, we then discuss the Gurevich observations.
\section {Detection efficiency of a $\mathrm{^{3}He}$ counter} As described in \citep{He3_gamma2}, the reason why a $\mathrm{^{3}He}$ counter has a sensitivity to gamma rays is believed to be that they occasionally supply either neutrons or protons to the counter via inelastic interactions with the cathode wall. As a consequence, such a gamma-ray-induced nucleon produces a large energy deposit in the counter. Table~\ref{tab:PNreaction} lists properties of several photonuclear reactions considered in this paper. From this table, gamma rays at energies of $>$10 MeV are expected to contribute to the signal of a $^{3}\mathrm{He}$ counter during thunderstorms, because its cathode usually consists of either Al or stainless steel. For the purpose of calculating detection efficiencies of $^{3}\mathrm{He}$ counters for neutrons and gamma rays in the relevant energy range, we adopted in the GEANT4 simulation the QGSP\verb+_+BERT\verb+_+HP hadronic model and the GEANT4 standard electromagnetic physics package to simulate neutron reactions and electromagnetic interactions including the GDR, respectively. Then, we constructed a set of three $\mathrm{^3He}$ counters confined in an Al box with an area of $1.2 \times 0.84\,\,\mathrm{m^2}$, based on the ``Experimental setup'' of \citep{Gurevich_Obs2012} and a reference given by the Gurevich group~\citep{TienShanDet}. The setup is shown in Figure~\ref{fig:Gurevich_counter}. Each counter has a diameter of 3 cm and a length of 100 cm, containing 100\% $\mathrm{^3He}$ gas at a pressure of 2 atm. Because the cathode thickness and material were not given in \citep{Gurevich_Obs2012,TienShanDet}, we employed in our GEANT4 simulation 2-mm-thick stainless steel (74\%Fe + 8\%Ni + 18\%Cr), as is generally used in commercial $\mathrm{^3He}$ counters. Then, $10^{6}$ monoenergetic neutrons or $10^{7}$ monoenergetic gamma rays were illuminated on the same area of a set of six $\mathrm{^{3}He}$ counters, injected isotropically at angles from the vertical up to 60 degrees. According to \citet{Gurevich_Obs2012}, the efficiency of their $\mathrm{^3He}$ counters for neutrons in the low-energy range is about 60\%, and the efficiency at $\sim$10 keV becomes three orders of magnitude lower. As shown in Figure~\ref{fig:He3_det_ng}, this trend is found to be consistent with that of the neutron detection efficiency derived here. In addition, it is found that the whole structure of the neutron detection efficiency over the wide energy range of 0.01 eV$-$20 MeV closely follows the total cross-section of $^{3}\mathrm{He}$\footnote{The total cross-section can be seen at e.g. \url{http://wwwndc.jaea.go.jp/j40fig/jpeg/he003_f1.jpg}}; it is dominated by the neutron capture reaction $^{3}\mathrm{He(n, p)T}$ at energies below 0.1 MeV and by elastic scattering above 0.1 MeV. These consistencies validate the simulation. Due to the smaller cross-section, gamma rays are detected with a relatively low sensitivity of at most $(1.47\pm0.12)\times10^{-3}\%$ at 20 MeV (the quoted error is statistical only). This is consistent with the fact that the peak energies of the photonuclear reactions on $^{52}\mathrm{Cr}$ and $^{56}\mathrm{Fe}$ are around 20 MeV (Table~\ref{tab:PNreaction}). From this simulation, it was found that gamma-ray-induced protons or neutrons (alpha particles on rare occasions) have a typical kinetic energy of nearly 10 MeV.
Then, such a proton (or alpha particle) deposits, via ionization loss, an energy of a hundred keV or more in a $^{3}\mathrm{He}$ counter, while a gamma-ray-induced neutron mainly undergoes elastic scattering with a $^{3}\mathrm{He}$ nucleus to produce a large energy deposit of $>$1 MeV. Changing the cathode material from stainless steel to Al, we found that the gamma-ray detection efficiency for Al agrees with the derived values (Fig.~\ref{fig:He3_det_ng}) within statistical uncertainty. | The present simulation clearly showed that a $^{3}\mathrm{He}$ counter has a small sensitivity to $>$10 MeV gamma rays. It was found that this sensitivity enables $^{3}\mathrm{He}$ counters to detect thundercloud-related gamma rays rather than neutrons when surrounded by thick materials. Thus, it would be rather difficult to conclude that a $^{3}\mathrm{He}$-counter signal detected during thunderstorms is entirely attributable to neutrons, as previously thought~\citep{Gurevich_Obs2012}. To obtain a conclusive answer as to whether detected counts are dominated by neutrons or gamma rays, we must consider the source height as well as the impact of surrounding materials on measurements by $^{3}\mathrm{He}$ counters. Given the present results, we may conclude that the large count enhancements obtained by \citet{Gurevich_Obs2012} resulted from $>$10 MeV gamma rays radiated from a very nearby source in thunderclouds. To clarify the present finding, we will need to install $^{3}\mathrm{He}$ counters at other high mountains and conduct further experiments with $^{3}\mathrm{He}$ counters and gamma-ray detectors. In addition, like a recent measurement done by \citet{LabNeutron}, a laboratory experiment using a high-voltage generator and various detectors to catch neutrons and gamma rays would be promising. In this paper, we did not consider other neutron generation processes such as the fusion mechanism $\mathrm{^2H}+\mathrm{^2H} \rightarrow \mathrm{n} + \mathrm{^{3}He}$. Therefore, we are unable to rule out the possibility that such a mechanism contributes to the large count enhancements. | 14 | 3 | 1403.7597 |
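To illustrate why even the tiny gamma-ray efficiency derived above can matter in practice, the following Python sketch compares the gamma-ray and neutron contributions to a counter's count rate. The efficiencies follow the values quoted in the text; the gamma-ray flux is a hypothetical number chosen only for illustration.
\begin{verbatim}
# Rough estimate (illustration only) of the relative gamma-ray and
# neutron contributions to a 3He-counter count rate. Efficiencies are
# the values quoted in the text; the gamma-ray flux is hypothetical.
eff_n = 0.60        # efficiency for low-energy neutrons
eff_g = 1.47e-5     # efficiency for 20 MeV gamma rays, i.e. (1.47e-3)%

flux_n = 0.04       # neutron flux [cm^-2 s^-1] reported by Gurevich et al.
flux_g = 4.0e3      # assumed >10 MeV gamma-ray flux [cm^-2 s^-1]

rate_n = eff_n * flux_n
rate_g = eff_g * flux_g
print("neutron-induced rate: %.3e per cm^2 per s" % rate_n)
print("gamma-induced rate  : %.3e per cm^2 per s" % rate_g)
print("gamma/neutron ratio : %.2f" % (rate_g / rate_n))
\end{verbatim}
With these inputs the gamma-ray contribution already exceeds the neutron one, which is the crux of the argument above: a gamma-ray flux several orders of magnitude above the neutron flux can mimic a neutron signal.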
1403 | 1403.0654_arXiv.txt | Using the $\delta N$ formalism, in the context of a generic multi-field inflation driven on a non-flat field space background, we revisit the analytic expressions of the various cosmological observables such as scalar/tensor power spectra, scalar/tensor spectral tilts, non-Gaussianity parameters, tensor-to-scalar ratio, and the various runnings of these observables. In our backward formalism approach, the subsequent expressions of observables automatically include the terms beyond the leading order slow-roll expansion, correcting many of the expressions at subleading order. To connect our analysis properly with the earlier results, we rederive the (well) known (single field) expressions in the limiting cases of our generic formulae. Further, in the light of PLANCK results, we examine the compatibility of the consistency relations within the slow-roll regime of a two-field roulette poly-instanton inflation realized in the context of large volume scenarios. | The inflationary paradigm has proven to be quite fascinating for understanding various challenging issues (such as the horizon problem, the flatness problem, etc.) in early universe cosmology \cite{Guth:1980zm,Linde:1981mu}. Moreover, it provides an elegant way for studying the inhomogeneities and anisotropies of the universe, which could be responsible for generating the correct amount of primordial density perturbations initiating the structure formation of the universe and the cosmic microwave background (CMB) anisotropies \cite{Planck:2013kta}. The simplest (single-field) inflationary process can be understood via a (single) scalar field slowly rolling towards its minimum in a nearly flat potential. There has been an enormous amount of progress towards constructing inflationary models, and this has resulted in a plethora of models which fit well with the observational constraints from WMAP \cite{Larson:2010gs,Komatsu:2010fb} as well as the most recent data from PLANCK \cite{Planck:2013kta, Ade:2013zuv, Ade:2013ydc, Ade:2013uln}, and so far the experimental ingredients are not sufficient to discriminate among the various known models compatible with the experiments. In general, if the perturbations are purely Gaussian, the statistical properties of the perturbations are entirely described by the two-point correlators of the curvature perturbations, namely the power spectrum. The observables which encode the non-Gaussian signatures are defined through the so-called non-linearity parameters $f_{NL}$, $\tau_{NL}$ and $g_{NL}$, which are related to the bispectrum (via the three-point correlators) and the trispectrum (via the four-point correlators) of the curvature perturbations. Although the recent Planck data \cite{Ade:2013ydc} have not been very conclusive so far, it is still widely accepted that the signature of non-Gaussianity could be a crucial discriminator for the various known consistent inflationary models. For this purpose, multi-field inflationary scenarios have been more promising because of their relatively rich structure and the geometries involved \cite{Vernizzi:2006ve,Battefeld:2006sz,Choi:2007su,Rigopoulos:2005us,Seery:2006js,Byrnes:2009qy,Battefeld:2009ym} (see also \cite{Byrnes:2010em,Suyama:2010uj} for recent reviews). Meanwhile, a concise analytic formula for computing the non-linearity parameters for a given {\it generic } multi-field potential has been proposed in \cite{Yokoyama:2007dw,Yokoyama:2008by}, which is valid in the beyond-slow-roll regime as well.
Recently, some examples with (non-)separable multi-field potentials have been studied in \cite{Mazumdar:2012jj}, which can produce large detectable values for the non-linearity parameters $f_{NL}$ and $\tau_{NL}$. However, most of these works were carried out on a flat field-space background. One of the main purposes of this work is to provide general formulae for these cosmological observables on a non-flat background in multi-field inflationary models. To illustrate the validity of these formulae in a concrete model, we will utilize a so-called poly-instanton inflationary model which comes from the setup of string cosmology in Type IIB string compactification. A significant amount of progress has been made in building up inflationary models in type IIB orientifold setups with the inflaton field identified as an open string modulus \cite{Kachru:2003sx,Dasgupta:2004dw,Avgoustidis:2006zp,Baumann:2009qx}, a closed string modulus \cite{Conlon:2005jm,Conlon:2008cj,Blumenhagen:2012ue} and involutively even/odd axions \cite{BlancoPillado:2004ns,Dimopoulos:2005ac,BlancoPillado:2006he,Kallosh:2007cc,Grimm:2007hs,Misra:2007cq,McAllister:2008hb}. Along the lines of moduli getting lifted by sub-dominant contributions, so-called poly-instanton corrections have recently become of interest. These are sub-leading non-perturbative contributions which can be briefly described as instanton corrections to instanton actions. The mathematical structure of poly-instantons is studied in \cite{Blumenhagen:2012kz}, and the consequent moduli stabilization and inflation have been studied in a series of papers \cite{Blumenhagen:2012ue,Cicoli:2011ct,Blumenhagen:2008kq,Lust:2013kt,Gao:2013hn}. In the framework of type IIB orientifolds, several single/multi-field models have been studied for aspects of non-Gaussianities \cite{Kallosh:2004rs,Burgess:2010bz,Misra:2008tx,Berglund:2010xr,Cicoli:2012cy, Gao:2013hn}. The computation of non-Gaussianities in racetrack models has been made in \cite{Sun:2006xv} and, in the context of large volume scenarios, by the so-called roulette inflationary models \cite{Bond:2006nc,BlancoPillado:2009nw}. Despite being a good and simple example of multi-field inflation with a non-flat background, this class of models allows for several inflationary trajectories with a sufficient number ($\ge 50$) of e-foldings and significant curving, and a subsequent investigation of non-Gaussianities in such a setup has resulted in small values of the non-linearity parameters in slow roll \cite{Vincent:2008ds} and large detectable values of those in the beyond-slow-roll regime \cite{Gao:2013hn}. In this article, our main aim is to revisit the analytic expressions of various cosmological observables, including scalar/tensor power spectra, scalar/tensor spectral tilts, non-Gaussianity parameters, tensor-to-scalar ratio and their runnings, for a generic multi-field inflationary model driven on a non-flat background. The idea is to represent the various observables in terms of field variations of the number of e-foldings $N$, along with the inclusion of curvature corrections coming from the non-flat field space metric. Some crucial developments along these lines have been made in recent works \cite{Yokoyama:2007dw, Byrnes:2012sc,Gong:2011uw, Elliston:2012ab, White:2013ufa, White:2012ya}. These generic expressions, which automatically include terms beyond the leading-order slow-roll expansion, recover all the respective well-known single-field expressions in the limiting case.
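As a schematic illustration of this representation (a toy sketch, not the computation performed in this article), the curvature power spectrum can be estimated from finite differences of the e-folding number with respect to the initial field values. The two-field quadratic potential, the slow-roll formula for $N$, the flat field-space metric and all numbers below are simplifying assumptions made only for the example; the point of this article is precisely the generalization to a non-flat metric.
\begin{verbatim}
import numpy as np

# Toy delta-N estimate of P_zeta = G^{ab} N_,a N_,b (H/2pi)^2 by
# finite differences. Assumptions for illustration: quadratic
# potential, slow-roll N = (phi1^2 + phi2^2)/4 in M_pl = 1 units,
# and a flat field-space metric G^{ab} = delta^{ab}.
def efolds(phi):
    return 0.25 * (phi[0]**2 + phi[1]**2)

phi0 = np.array([12.0, 8.0])   # example initial field values
H = 1.0e-5                     # example Hubble rate at horizon exit
eps = 1.0e-6                   # finite-difference step

dN = np.array([(efolds(phi0 + eps * e) - efolds(phi0 - eps * e)) / (2 * eps)
               for e in np.eye(2)])
P_zeta = (H / (2 * np.pi))**2 * np.dot(dN, dN)
print("N_,a =", dN, " P_zeta = %.3e" % P_zeta)
\end{verbatim}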
Moreover, we utilize these expressions for checking the various consistency relations in a string-inspired two-field `roulette' inflationary model \cite{Gao:2013hn} based on poly-instanton effects. The strategy for computing the field variations of the number of e-foldings $N$ is a numerical approach following the so-called `backward formalism' \cite{Yokoyama:2007dw}, with the solutions then used for the computation of various cosmological observables. From the recent Planck data \cite{Planck:2013kta, Ade:2013zuv, Ade:2013ydc, Ade:2013uln}, the experimental bounds for the various cosmological observables under consideration are: \bea \label{obs} & & {\rm Scalar \, \, Power \, \, Spectrum: } \, \, \, 2.092\times10^{-9} < {\cal P}_S < 2.297\times10^{-9} \nonumber\\ & & {\rm Spectral \, \, index:} \, \, \, 0.958 < n_S < 0.963 \nonumber\\ & & {\rm Running \, \, of \, \, spectral \, \, index:} \, \, \, -0.0098< \alpha_{n_S} < 0.0003\\ & & {\rm Tensor \, \, to \, \, scalar \, \, ratio:} \, \, \, r < 0.11 \nonumber\\ & & {\rm Non\,\, Gaussianity \, \,parameters:} \, \, \, -9.8< f_{NL} < 14.3 , \, \, \tau_{NL} < 2800 \nonumber \eea while some other cosmological observables (like the running of the non-Gaussianity parameters) relevant for the study made in this article could be important targets for future observations. The article is organized as follows: In section \ref{sec_Setup}, we will provide relevant pieces of information regarding type IIB orientifold compactification along with the ingredients of the ``roulette-inflationary setup'' developed with the inclusion of poly-instanton corrections \cite{Blumenhagen:2012ue, Gao:2013hn}. Section \ref{sec:ODEevolutionNabc} will be devoted to setting out the strategy for computing the field derivatives of the number of e-foldings $N$, which are heavily utilized in the upcoming sections. In section \ref{sec:cosmo-I}, we present the analytic expressions of various cosmological parameters such as scalar/tensor power spectra (${\cal P}_S, {\cal P}_T$), spectral index and tilt ($n_S, n_T$), and tensor-to-scalar ratio ($r$), as well as their numerical details applied to the model under consideration. Section \ref{sec:cosmo-II} deals with a detailed analytical and numerical analysis of the non-linearity parameters ($f_{NL}, \tau_{NL}$ and $g_{NL}$) and their scale dependence encoded in the parameters $n_{f_{NL}}, n_{\tau_{NL}}$ and $n_{g_{NL}}$. Finally, an overall conclusion will be presented in section \ref{sec_Conclusions and Discussions}, followed by an appendix \ref{expressions} for intermediate computations. | \label{sec_Conclusions and Discussions} In this article, we presented generalized analytic expressions for various cosmological observables in the context of multi-field inflation driven on a non-flat field space. A closer investigation has been made regarding the new/generalized contributions to the various cosmological observables coming from the non-trivial field space metric, which appears in the standard kinetic term of the scalar field Lagrangian. Subsequently, in order to connect our findings with the known results, we recovered the standard results as limiting cases from the analytic expressions we derived. The basic idea has been to rewrite all the cosmological variables in terms of field derivatives of the number of e-foldings $N$ and thereafter to solve the differential equations governing the evolution by utilizing the so-called `backward formalism'.
For this purpose, we translated the whole problem into solving for the evolution of the field derivatives of $N$ in the form of a set of coupled first-order differential equations for the vector $N_{\cal A}$, 2-tensor $N_{\cal A \cal B}$ and 3-tensor $N_{\cal A \cal B \cal C}$ quantities. Following the strategy of Yokoyama et al.\ \cite{Yokoyama:2008by}, each index ${\cal A}$ runs over $2 \, n$ values, where $n$ is the number of scalar fields taking part in the inflationary process. This happens because the second-order differential equations for the $n$ inflatons have been equivalently written as first-order differential equations (\ref{EOM}) for $2 \, n$ fields. This implies that the evolution equations for $N_{\cal A}$ result in $2 \, n$ differential equations, while those of $N_{\cal A \cal B}$ and $N_{\cal A \cal B \cal C}$ result in ${4\, n^2}$ and $8 \, n^3$ first-order differential equations, respectively. It is obvious that the numerical analysis gets difficult for a large number of scalar fields; however, we exemplified the analytic results for a two-field inflationary model, and hence the analysis still remains well under control as well as efficient, requiring the solution of 84 first-order (but coupled) differential equations. The analytic expressions of the various cosmological observables have been utilized for a detailed numerical analysis in a two-field inflationary model realized in the context of large volume scenarios. In this model, the inflationary process is driven by a so-called Wilson divisor volume modulus and its respective $C_4$ axion appearing in the chiral coordinate. The same results in a `roulette' type inflation in which, depending on the initial conditions, various inflationary trajectories can generate a sufficient number of e-foldings as well as significant curving during the inflationary dynamics. Apart from a consistent realization of the CMB results, we have also studied the scale dependence of the non-Gaussianity observables, which could be interesting from the point of view of upcoming experiments. The analytic expressions for the various cosmological observables derived in this article involve the intermediate ingredients in the form ${\cal O}^{\cal A} \equiv \{ {\cal O}^a_1, {\cal O}^a_2\}$. Unlike the usual approach, they include not only the derivatives with respect to the fields (${\cal O}^a_1$) but also the derivatives with respect to the time derivatives of the fields (${\cal O}^a_2$). This method subsequently induces new terms which generalize the previously known expressions of the respective observables with subleading higher-order slow-roll corrections. Moreover, the expressions are derived for any generic multi-field inflationary potential with a non-flat background and thus should be applicable and useful for generic models. | 14 | 3 | 1403.0654 |
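As a consistency check of the counting quoted above, and to sketch how such a system can be handled numerically, the snippet below verifies the 84-equation count for $n=2$ and shows a schematic integration call; the right-hand side is a placeholder standing in for the actual evolution equations of $N_{\cal A}$, $N_{\cal A \cal B}$ and $N_{\cal A \cal B \cal C}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Count of coupled first-order equations in the backward formalism:
# 2n (N_A) + 4n^2 (N_AB) + 8n^3 (N_ABC); for n = 2 this gives the 84
# equations quoted above.
def system_size(n):
    return 2 * n + 4 * n**2 + 8 * n**3

assert system_size(2) == 84

def rhs(N, y):            # N = e-folding number; y packs N_A, N_AB, N_ABC
    return -0.01 * y      # placeholder dynamics, not the real equations

y0 = np.ones(system_size(2))            # illustrative data at the end point
sol = solve_ivp(rhs, (0.0, 60.0), y0)   # schematic integration in N
print(sol.y.shape)                      # -> (84, number_of_steps)
\end{verbatim}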
1403 | 1403.1072_arXiv.txt | Some severe constraints on asymmetric dark matter are based on the scenario that certain types of WIMPs can form mini-black holes inside neutron stars that can lead to their destruction. A crucial element for the realization of this scenario is that the black hole grows after its formation (and eventually destroys the star) instead of evaporating. The fate of the black hole is dictated by two opposing mechanisms, i.e., accretion of nuclear matter from the center of the star and Hawking radiation, which tends to decrease the mass of the black hole. We study how the assumptions for the accretion rate can in fact affect the critical mass beyond which a black hole always grows. We also study to what extent degenerate nuclear matter can impede Hawking radiation due to the fact that emitted particles can be Pauli blocked at the core of the star. \\[.1cm] {\footnotesize \it Preprint: CP$^3$-Origins-2014-006 DNRF90 \& DIAS-2014-6.} | Observations of old compact stars have been used in order to impose constraints on specific types of (a)symmetric dark matter~\cite{Goldman:1989nd,Kouvaris:2007ay,Bertone:2007ae,Kouvaris:2010vv,deLavallaz:2010wp,Kouvaris:2011fi,McDermott:2011jp,Guver:2012ba,Bell:2013xk,Bramante:2013hn,Bramante:2013nma, Kouvaris:2011gb,Capela:2012jz,Capela:2013yf,Kouvaris:2010jy,Fan:2012qy} or to predict new effects~\cite{PerezGarcia:2010ap,PerezGarcia:2011hh}. A set of the derived constraints is based on the fact that asymmetric dark matter can be trapped inside compact stars and, under certain conditions, the WIMP population might collapse, forming a black hole at the center of the star which can eventually consume it. The observation of old neutron stars (as is well established) can therefore eliminate specific dark matter candidates, because their existence would have implied the destruction of these stars by WIMP-generated black holes inside the stars. However, in order to consider these constraints seriously, one has to ensure that all the stages that lead to the destruction of the star take place and that the destruction does not happen on time scales larger than billions of years. The latter is important because in principle black holes might exist inside stars but, due to small accretion rates, they could potentially not have a visible effect yet. At this point one should recall that in the case of repulsively interacting bosons, the mass that leads to black hole formation is~\cite{Kouvaris:2011fi} \begin{equation} M_c=\frac{2}{\pi}\frac{M_{\rm pl}^2}{m} \sqrt{1+\frac{M_{\rm pl}^2}{4\sqrt{\pi}m}\sigma^{1/2}}, \label{chandra} \end{equation} where $M_{\rm pl}$ and $m$ are the Planck mass and the WIMP mass, respectively, and $\sigma$ is a repulsive WIMP-WIMP interaction cross section modeled via a $\phi^4$ type of interaction. In the absence of self-interactions ($\sigma=0$) it is easy to deduce the above formula (up to a numerical factor of order one) by demanding that the self-gravitational potential energy of the WIMP population be larger than the relativistic kinetic energy coming from the uncertainty principle. The kinetic energy is $ \hbar /r$ and therefore the criterion for collapse becomes \begin{equation} \frac{\hbar}{r}<\frac{GMm}{r}, \end{equation} which leads to Eq.~(\ref{chandra}).
However, after the formation, the expansion of the black hole depends on two competing mechanisms: accretion of the surrounding matter at the core of the star, which obviously tends to increase the mass of the black hole, and Hawking radiation, which tends to reduce it. The rate of change of the black hole mass is \begin{equation} \frac{dM}{dt}=CM^2-\frac{f}{G^2M^2}. \label{evol} \end{equation} In the case of spherical accretion in the hydrodynamic limit (Bondi accretion) $C=4\pi \lambda_s \rho_c G^2/c_s^3$, where $\lambda_s$ is a coefficient of order one, $\rho_c$ is the matter density at the core of the star, $M$ is the mass of the black hole, and $c_s$ is the speed of sound for the accreting matter. $f$ is a dimensionless number giving the power radiated away via Hawking radiation, and it depends on the number of modes participating (how many different species of particles are emitted) and on how fast the black hole rotates. Since accretion scales as $M^2$ and Hawking radiation as $M^{-2}$, there is a critical value $M_{\rm crit}=(f/(G^2C))^{1/4}=M_{\rm pl}(f/C)^{1/4}$ above which the black hole grows and below which it is doomed to evaporate. Therefore if $M_c<M_{\rm crit}$, the black hole evaporates, while in the opposite case it grows. Due to the dependence of $M_c$ on $m$ in Eq.~(\ref{chandra}), the existence of $M_{\rm crit}$ sets an upper bound $m_{\rm upp}$ on the WIMP mass for which the constraints can potentially be applied. For WIMPs with $m>m_{\rm upp}$, the mass of the formed black hole $M_c$ is smaller than $M_{\rm crit}$ and therefore the black hole eventually evaporates, so no constraints can be drawn. The process of the potential destruction of the star has the following stages: accretion of WIMPs onto the star, thermalization with the surrounding nuclear matter and concentration at the core of the star, Bose-Einstein Condensate formation, self-gravitation, loss of energy of the WIMP sphere, formation of the black hole, and expansion of the black hole. If any of these stages does not take place, no destruction of the star happens and the constraints are invalid. Issues regarding the thermalization time scale of the WIMPs inside the star have been addressed in~\cite{Bertoni:2013bsa}. The effect of rotation of the neutron star on the rate of expansion of the black hole was addressed in~\cite{Kouvaris:2013kra}. One should mention that in the case of non-interacting bosonic WIMPs with masses above $\sim 10$ TeV, the time order of BEC formation and self-gravitation is reversed. However, as was pointed out in~\cite{Kouvaris:2012dz}, although the self-gravitating WIMP population might have a mass larger than $M_{\rm crit}$, the population does not collapse altogether; rather, black holes of mass $M_c$ are formed one after the other, each evaporating before the next one forms. This means that no constraints can be applied in this case. The $m_{\rm upp}$ in the absence of self-interactions was estimated in~\cite{Kouvaris:2011fi} to be around 16 GeV, assuming that only photons are emitted via Hawking radiation. In this paper we estimate more precisely the critical black hole mass $M_{\rm crit}$ and the upper bound on the WIMP mass for which the constraints can be applied, taking into consideration two things. The first has to do with the power of Hawking radiation. As is known, a black hole of temperature $T$ should roughly emit all elementary particles with mass lower than $T$.
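The competition in Eq.~(\ref{evol}) is easy to explore numerically. The following sketch integrates the mass evolution for initial masses on either side of $M_{\rm crit}$; the values of $C$ and $f$ are invented for illustration and define dimensionless toy units.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of Eq. (evol): dM/dt = C*M^2 - f/(G^2 M^2), with G = 1 and
# invented values of C and f (chosen so that M_crit = 1 in toy units).
C, f, G = 1.0e-4, 1.0e-4, 1.0
M_crit = (f / (G**2 * C))**0.25

def dMdt(t, M):
    return C * M**2 - f / (G**2 * M**2)

for M0 in (0.9 * M_crit, 1.1 * M_crit):
    sol = solve_ivp(dMdt, (0.0, 2.0e3), [M0])
    print("M0/M_crit = %.1f -> M(t_end)/M_crit = %.3f"
          % (M0 / M_crit, sol.y[0, -1] / M_crit))
# the sub-critical hole shrinks toward evaporation, the super-critical
# one grows; Eq. (chandra) then translates M_crit into the bound m_upp
\end{verbatim}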
However, since the newly formed black hole is immersed in degenerate matter, some of the modes can be (partially) blocked. The temperature of a newly formed black hole is \begin{equation} T=\frac{1}{8\pi GM_c}=\frac{m}{16}, \label{temp} \end{equation} where we used Eq.~(\ref{chandra}) (in the case where self-interactions are absent) in the final step. For a WIMP mass of a few GeV, which is roughly the upper bound on $m$ deduced previously, the temperature is just below a GeV. Given that at the core of a neutron star the baryon chemical potential is of the order of a GeV, some of the modes might be partially or fully blocked due to degeneracy. This can potentially reduce Hawking radiation significantly, thus increasing the upper bound for $m$ (where constraints can be applied). On the other hand, the newly formed black hole can have a Schwarzschild radius that can be small compared to nucleon sizes. Using $r_s=2GM$ and Eq.~(\ref{chandra}), for example in the absence of self-interactions, one gets \begin{equation} r_s=2GM_c=\frac{4}{\pi m}\simeq 2.5 \times 10^{-14} \left ( \frac{{\rm GeV}}{m} \right ) {\rm cm}, \label{r_s} \end{equation} which is smaller than the proton size already for $m=1$ GeV. Therefore, the size of the black hole can be smaller than the size of the particles it accretes. From this point of view, it is not clear at all if the conditions for Bondi accretion are fulfilled. The Bondi accretion solution is derived upon assuming that the accreted matter behaves as a smooth fluid. However, it is not clear if the hydrodynamic limit is satisfied. We are going to check how the upper limit for the WIMP mass for which the existing constraints can still apply is modified if, instead of Bondi accretion, we simply assume a geometric cross section for the black hole-matter collision. | 14 | 3 | 1403.1072 |
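A quick numerical check of Eq.~(\ref{r_s}) against the nucleon size makes the problem with the hydrodynamic picture explicit (a sketch; the proton charge radius value is the standard $\sim 0.87\times 10^{-13}$ cm, and the WIMP masses are example values).
\begin{verbatim}
import math

# Check of Eq. (r_s): Schwarzschild radius of the newly formed black
# hole versus the proton charge radius, to see at which WIMP masses
# the hydrodynamic (Bondi) picture becomes questionable.
hbar_c = 1.97e-14              # hbar*c in GeV cm
r_proton = 0.87e-13            # proton charge radius [cm]

for m in (1.0, 5.0, 20.0):     # example WIMP masses [GeV]
    r_s = (4.0 / math.pi) * hbar_c / m
    rel = "<" if r_s < r_proton else ">"
    print("m = %5.1f GeV: r_s = %.2e cm (%s proton size)" % (m, r_s, rel))
\end{verbatim}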
|
1403 | 1403.1843_arXiv.txt | Accurately modeling an extreme-mass-ratio inspiral requires knowledge of the second-order gravitational self-force on the inspiraling small object. Recently, numerical puncture schemes have been formulated to calculate this force, and their essential analytical ingredients have been derived from first principles. However, the \emph{puncture}, a local representation of the small object's self-field, in each of these schemes has been presented only in a local coordinate system centered on the small object, while a numerical implementation will require the puncture in coordinates covering the entire numerical domain. In this paper we provide an explicit covariant self-field as a local expansion in terms of Synge's world function. The self-field is written in the Lorenz gauge, in an arbitrary vacuum background, and in forms suitable for both self-consistent and Gralla-Wald-type representations of the object's trajectory. We illustrate the local expansion's utility by sketching the procedure of constructing from it a numerically practical puncture in any chosen coordinate system. | Observation of extreme-mass-ratio inspirals (EMRIs) is a central plank in plans for a space-based gravitational-wave detector~\cite{eLISA:13}. EMRIs, in which a compact object of mass $m$ orbits about and eventually falls into a massive black hole of mass $M\gg m$, will offer a unique probe of strong-field dynamics and a detailed map of the spacetime geometry near a black hole. However, an inspiral occurs on the very long dynamical timescale $M^2/m$, and to extract information about an inspiral from an observed waveform, one will require a model that accurately relates the waveform to the motion over that long time. For a physically relevant mass ratio $m/M=10^{-6}$, this translates to requiring an accurate model covering $\sim\!\!10^6$ wavecycles. Because of the drastically dissimilar lengthscales in these systems, numerical relativity cannot adequately model them even on short timescales. And because of the strong fields and large velocities in play, post-Newtonian theory is inapplicable. Instead, the most prominent method of tackling the problem has been to apply the gravitational self-force formalism~\cite{Barack:09,Poisson-Pound-Vega:11}, in which the small object is treated as the source of a perturbation $h_{\mu\nu}\sim m$ on the background spacetime $g_{\mu\nu}$ of the large black hole, and $h_{\mu\nu}$ exerts a force back on the small object, accelerating it away from test-particle, geodesic motion in $g_{\mu\nu}$. It has long been known~\cite{Rosenthal:06a} that within this formalism, accurately modeling an inspiral on the long timescale $\sim M^2/m$ requires knowledge of the smaller object's acceleration to second order in $m$, meaning garden-variety linear perturbation theory is insufficient. The veracity of this claim can be seen from a simple scaling argument: if the small object's acceleration contains an error of order $\delta a\sim m^2/M^3$, then after a time $M^2/m$ the error in its position is $\delta z\sim t^2\delta a\sim M$ (setting $c=G=1$, as we do throughout this paper). Therefore, to ensure that the errors remain small (i.e., $\delta z\ll M$), we must allow no error in the acceleration at order $m^2$.
In other words, we must account for the second-order self-force.\footnote{A subtler scaling argument~\cite{Hinderer-Flanagan:08} shows that only a specific piece of the second-order force is needed: the orbit-averaged dissipative piece, which causes the largest long-term changes in the orbit.} In addition to its applications in the EMRI problem, the second-order self-force promises to be a useful tool in modeling other binary systems. At first order, numerical self-force data has been fruitfully used to fix high-order terms and otherwise-free parameters in post-Newtonian~\cite{Blanchet-etal:10b,Favata:11,LeTiec-etal:11,LeTiec-etal:12b} and effective-one-body~\cite{Damour:09,Barack-Damour-Sago:10, Barausse-etal:11,Akcay-etal:12} models, and the same strategy could be employed at second order. Perhaps more strikingly, at first order there is compelling evidence that the self-force formalism can be made accurate well outside the extreme-mass-ratio regime~\cite{LeTiec-etal:11,LeTiec-etal:13}, which suggests that at second order the self-force could be used to directly model intermediate-mass-ratio and potentially even comparable-mass binaries with reasonable accuracy. After several exploratory studies of the second-order problem~\cite{Rosenthal:06a,Rosenthal:06b,Pound:10a,Detweiler:11}, these prospects have recently been brought substantially closer to realization, and the essential analytical ingredients necessary for concrete calculations of the second-order self-force are now available~\cite{Pound:12a,Gralla:12,Pound:12b,Pound:14a}. These ingredients are \begin{itemize} \item a local expression for the small object's \emph{self-field} $h^\S_{\mu\nu}$, \item an equation of motion for the small object's center of mass in terms of a certain \emph{effective field} $h^\R_{\mu\nu}$. \end{itemize} Both results were derived from the Einstein equations via rigorous methods of matched asymptotic expansions developed in Refs.~\cite{Gralla-Wald:08, Pound:10a}; for an overview, see the review~\cite{Poisson-Pound-Vega:11} or the forthcoming exegesis~\cite{Pound:14b}. Together, the above two ingredients make up all the requisite input for a numerical puncture scheme (also known as an effective-source scheme)~\cite{Barack-Golbourn:07,Vega-Detweiler:07,Wardell-etal:11}. In the context of matched asymptotic expansions, such a scheme originates from a split of the full perturbation $h_{\mu\nu}$ into two pieces: \begin{equation} h_{\mu\nu}=h^\S_{\mu\nu}+h^\R_{\mu\nu}, \end{equation} where the self-field $h^\S_{\alpha\beta}$ encapsulates local information about the object's multipole structure, and the effective field $h^\R_{\mu\nu}$ is a vacuum perturbation that is determined by the global boundary conditions imposed on $h_{\mu\nu}$. $h^\S_{\mu\nu}$ and $h^\R_{\mu\nu}$ are defined locally in a neighbourhood \emph{outside} the object. A puncture scheme (and from this perspective, \emph{every} numerical scheme that has been implemented in calculations of the gravitational self-force) proceeds by analytically continuing these fields into the region where the object would lie in the full, physical spacetime. The analytically continued self-field $h^\S_{\mu\nu}$ diverges at a worldline $\gamma$ that represents the object's mean motion in the background spacetime, and the self-field is hence renamed the \emph{singular field}. Conversely, the analytically continued field $h^\R_{\mu\nu}$ is smooth at $\gamma$, earning it the sobriquet \emph{regular field}. 
In this paper we will use the choice of $h^\R_{\mu\nu}$ and $h^\S_{\mu\nu}$ defined by Pound~\cite{Pound:12a},\footnote{This definition is closely related to but slightly different from that of Ref.~\cite{Pound:12b}; see Sec.~\ref{singular-regular} and Appendix~\ref{trace-reverse}.} described again below in Sec.~\ref{Fermi-field}. With that choice, the effective metric $g_{\mu\nu}+h^\R_{\mu\nu}$ is a $C^\infty$ solution to the vacuum Einstein equation, and $\gamma$ is a geodesic in that vacuum metric (through order $m^2$, for any object with sufficient sphericity and slow spin). Alternative choices of $h^\R_{\mu\nu}$ and $h^\S_{\mu\nu}$, with different properties, have also been made at second order~\cite{Harte:12,Gralla:12}, and they could equally well be used within a puncture scheme. Once the choice of singular and regular fields has been made, a puncture scheme begins with the construction of a \emph{puncture} $h^\P_{\mu\nu}$, defined by truncating a local expansion of the singular field, in powers of spatial distance $\lambda$ from $\gamma$, at a specified order. One then defines the residual field \begin{equation} h^\res_{\mu\nu} \equiv h_{\mu\nu}-h^\P_{\mu\nu} \end{equation} and in a region covering the object, writes a field equation for $h^\res_{\mu\nu}$, rather than one for (the analytically continued) physical field $h_{\mu\nu}$. Since $h^\P_{\mu\nu}\approx h^\S_{\mu\nu}$, so too $h^\res_{\mu\nu}\approx h^\R_{\mu\nu}$. The better $h^\P_{\mu\nu}$ represents $h^\S_{\mu\nu}$, the better $h^\res_{\mu\nu}$ represents $h^R_{\mu\nu}$. For example, if $\lim_{x\to\gamma}[h^\P_{\mu\nu}(x)-h^\S_{\mu\nu}(x)]=0$, then $\lim_{x\to\gamma}h^\res_{\mu\nu}(x)=\lim_{x\to\gamma}h^\R_{\mu\nu}(x)$; that is, the residual field agrees with the regular field on the worldline. If $h^\P_{\mu\nu}$ is one order more accurate, meaning $h^\P_{\mu\nu}-h^\S_{\mu\nu}=o(\lambda)$, then $\lim_{x\to\gamma}\nabla_{\!\rho} h^\res_{\mu\nu}=\lim_{x\to\gamma}\nabla_{\!\rho} h^\R_{\mu\nu}$; since the self-force is constructed from first derivatives of $h^\R_{\mu\nu}$, this condition guarantees that the force can be calculated from $h^\res_{\mu\nu}$, as in Eq.~\eqref{motion_SC} below.\footnote{The reader should note that up to numerical error, this procedure yields the force {\em exactly}. No approximations are made by replacing the regular field with the residual field in the equation of motion.} This type of scheme removes the physical system in the interior of the object, with all its matter fields, curvature singularities (in the case of a black hole), or even wormholes, and replaces it with an \emph{effective} system. Put more simplistically, the puncture replaces the object. The precise form that a puncture scheme takes, and the interpretation of the puncture's `position', will depend on the type of perturbative expansion one begins from: a self-consistent expansion~\cite{Pound:10a,Pound:12a,Pound:12b,Pound:13a,Pound:14b}; or what we will call a Gralla-Wald-type expansion, exemplified by Refs.~\cite{Gralla-Wald:08,Gralla:12}. The core difference between these two methods of expansion is their representation of the object's mean motion, but that difference influences the overarching treatment of the field equations. To set the stage for our calculations and establish a unified framework for the discussion, we will briefly describe the second-order puncture scheme that arises from each of the methods. 
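The logic of a puncture scheme is easily illustrated in a simpler setting before turning to the relativistic versions below. The Python sketch solves a toy Yukawa-type problem $(\nabla^2-k^2)h=-4\pi q\,\delta^3(x)$ by subtracting the puncture $h^\P=q/r$ (here the exact singular piece) and finite-differencing the resulting equation for the smooth residual field; it is a cartoon of the method, not the scheme of this paper, and all parameter values are arbitrary.
\begin{verbatim}
import numpy as np

# Toy 'effective source' scheme: (Lap - k^2) h = -4 pi q delta^3(x).
# With h = h_P + h_res and puncture h_P = q/r (which carries the
# delta), the variable u = r*h_res obeys u'' - k^2 u = k^2 q with
# u(0) = 0. We solve that by finite differences and compare with the
# exact residual h_res = q (exp(-k r) - 1)/r, regular at r = 0.
q, k, R, n = 1.0, 2.0, 5.0, 500
r = np.linspace(0.0, R, n + 1)
dr = r[1] - r[0]

uR = q * (np.exp(-k * R) - 1.0)        # outer boundary value for u
main = -2.0 / dr**2 - k**2
off = 1.0 / dr**2
rhs = np.full(n - 1, k**2 * q)
rhs[-1] -= off * uR

A = (np.diag(np.full(n - 1, main))
     + np.diag(np.full(n - 2, off), 1)
     + np.diag(np.full(n - 2, off), -1))
u = np.linalg.solve(A, rhs)

h_res = u / r[1:n]                     # smooth residual field
h_exact = q * (np.exp(-k * r[1:n]) - 1.0) / r[1:n]
print("max error in h_res: %.2e" % np.max(np.abs(h_res - h_exact)))
\end{verbatim}
The key features of the relativistic schemes appear already here: the delta-function source never enters the numerical problem, and the evolved variable is everywhere smooth.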
\subsection{Self-consistent puncture scheme} In a self-consistent expansion of the field equations, one seeks an equation of motion for a self-accelerated worldline $\gamma$ that well represents the object's bulk motion, and one expands the metric perturbation in terms of functionals of that worldline: \begin{equation}\label{h_SC_expansion} h_{\mu\nu} = \e h^1_{\mu\nu}[\gamma]+\e^2 h^2_{\mu\nu}[\gamma]+O(\e^3), \end{equation} where $\e\equiv1$ is used to count powers of the object's mass $m$. Here each $h^n_{\mu\nu}$ is allowed a functional dependence on the accelerated (and therefore $\e$-dependent) worldline $\gamma$. After imposing the Lorenz gauge on the full perturbation, \begin{equation} \nabla^\nu\bar h_{\mu\nu} = 0,\label{gauge} \end{equation} where $\bar h_{\mu\nu} \equiv h_{\mu\nu}-\frac{1}{2}g_{\mu\nu}g^{\alpha\beta}h_{\alpha\beta}$, the vacuum Einstein equation outside the object, $R_{\mu\nu}[g+h]$, is split into a sequence of wave equations, the first two of which read \begin{align} E_{\mu\nu}[h^{1}] &= 0,\label{h1_eq}\\ E_{\mu\nu}[h^2] &= 2\delta^2R_{\mu\nu}[h^1,h^1],\label{h2_eq} \end{align} where \begin{equation} E_{\mu\nu}[h] \equiv \Box h_{\mu\nu} +2R_\mu{}^\alpha{}_\nu{}^\beta h_{\alpha\beta} \end{equation} is the usual tensorial wave operator and \begin{align} \delta^2R_{\alpha\beta}[h,h] &= -\tfrac{1}{2}\bar h^{\mu\nu}{}_{;\nu}\left(2h_{\mu(\alpha;\beta)} -h_{\alpha\beta;\mu}\right) \nonumber\\ &\quad +\tfrac{1}{4}h^{\mu\nu}{}_{;\alpha}h_{\mu\nu;\beta} +\tfrac{1}{2}h^{\mu}{}_{\beta}{}^{;\nu}\left(h_{\mu\alpha;\nu} -h_{\nu\alpha;\mu}\right)\nonumber\\ &\quad-\tfrac{1}{2}h^{\mu\nu}\left(2h_{\mu(\alpha;\beta)\nu}-h_{\alpha\beta;\mu\nu}-h_{\mu\nu;\alpha\beta}\right)\!\!\label{second-order_Ricci} \end{align} is the second-order Ricci tensor (the first term of which, involving $\bar h^{\mu\nu}{}_{;\nu}$, vanishes with our choice of gauge). Both a semicolon and $\nabla$ denote the covariant derivative compatible with the background metric $g_{\mu\nu}$. After solving Eqs.~\eqref{h1_eq} and \eqref{h2_eq} in a region around the small object, each $h^n_{\mu\nu}$ can be decomposed into singular and regular pieces, or into a puncture and residual field: \begin{align} h^1_{\mu\nu} &= h^{\S1}_{\mu\nu}[\gamma] + h^{\R1}_{\mu\nu}[\gamma]= h^{\P1}_{\mu\nu}[\gamma] + h^{\res1}_{\mu\nu}[\gamma],\\ h^2_{\mu\nu} &= h^{\S2}_{\mu\nu}[\gamma] + h^{\R2}_{\mu\nu}[\gamma]= h^{\P2}_{\mu\nu}[\gamma] + h^{\res2}_{\mu\nu}[\gamma]. \end{align} For a sufficiently slowly spinning object, the first- and second-order singular fields (and punctures) near $\gamma$ have the schematic forms \begin{equation}\label{hS1_SC_schematic} h^{\S1}_{\mu\nu}\sim \frac{m}{|x^i-z^i|} + O(|x^i-z^i|^0) \end{equation} and \begin{equation}\label{hS2_SC_schematic} h^{\S2}_{\mu\nu}\sim \frac{m^2}{|x^i-z^i|^2} + \frac{\delta m_{\mu\nu}+mh^{\R1}_{\mu\nu}}{|x^i-z^i|} + O(\ln|x^i-z^i|), \end{equation} where $z^i$ are spatial coordinates on $\gamma$ and $|x^i-z^i|$ represents distance from $\gamma$. The explicit expressions for the first few terms in these expansion, derived in Refs.~\cite{Pound:10a,Pound:12a}, are given in Eqs.~\eqref{hS1_SC_Fermi} and \eqref{hS2_SC_Fermi}--\eqref{hdm_SC_Fermi} in a local coordinate system $(t,x^i)$ centered on $\gamma$ (such that $z^i\equiv0$ in the schematic expressions above). At first order, the puncture is given roughly by a Coulomb potential sourced by the mass $m$. 
At second order, there are naturally quadratic combinations of this potential, signified by the $m^2$ term, but there are also quadratic combinations of the mass and the first-order regular field, as well as a gravitationally induced correction to the body's monopole moment, denoted by $\delta m_{\mu\nu}$ and given explicitly in Eq.~\eqref{dm_SC_Fermi}. There are several schemes that can be developed from the starting point of the puncture. Here we describe a worldtube scheme in the tradition of Refs.~\cite{Barack-Golbourn:07,Dolan-Barack:11}. In this type of scheme one uses the field variables $h^{\res n}_{\mu\nu}$ inside a worldtube $\Gamma$ surrounding $\gamma$, the field variables $h^n_{\mu\nu}$ outside that worldtube, and the change of variables $h^n_{\mu\nu}=h^{\res n}_{\mu\nu}+h^{\P n}_{\mu\nu}$ when moving between the two regions.\footnote{One does not solve the problem in each domain separately, since the separate problems would be ill-posed. Instead, when calculating $h^n_{\mu\nu}$ at a point just outside $\Gamma$ that depends on points on past time slices inside $\Gamma$, one makes use of the values of $h^{\res n}_{\mu\nu}$ already calculated at those earlier points, and vice versa; see Sec.~VB of Ref.~\cite{Barack-Golbourn:07}.} The second-order puncture scheme is then summarized by the coupled system of equations \begin{subequations}\label{h1_SC}% \begin{align} E_{\mu\nu}[h^{\res1}] &= -16\pi \bar T^1_{\mu\nu}[\gamma]-E_{\mu\nu}[h^{\P1}] & \text{inside }\Gamma,\\ E_{\mu\nu}[h^{1}] &= 0 & \text{outside }\Gamma, \end{align} \vspace{-1.5\baselineskip} \end{subequations} \begin{subequations}\label{h2_SC}% \begin{align} E_{\mu\nu}[h^{\res2}] &= 2\delta^2R_{\mu\nu}[h^1,h^1]-16\pi \bar T^2_{\mu\nu}[\gamma]\hspace{-15pt} & \nonumber\\ &\quad - E_{\mu\nu}[h^{\P2}] & \text{inside }\Gamma,\\ E_{\mu\nu}[h^2] &= 2\delta^2R_{\mu\nu}[h^1,h^1] & \text{outside }\Gamma, \end{align} \end{subequations} \vspace{-1.5\baselineskip} \begin{align} \frac{D^2 z^\mu}{d\tau^2} &= -\frac{1}{2}P^{\mu\nu}\left(g_\nu{}^\gamma-h^\res_\nu{}^\gamma\right) \left(2h^\res_{\gamma\alpha;\beta}-h^\res_{\alpha\beta;\gamma}\right)u^\alpha u^\beta,\label{motion_SC} \end{align} where the puncture diverges on the worldline $z^\mu$ determined by Eq.~\eqref{motion_SC}. That divergence is quite strong, with the terms $2\delta^2R_{\mu\nu}[h^1,h^1]$ and $E_{\mu\nu}[h^{\P2}]$ in Eq.~\eqref{h2_SC} each blowing up as $1/\lambda^4$ near the worldline, but by construction, the divergences necessarily cancel each other. Here the quantities \begin{align} \bar T^1_{\mu\nu}[\gamma] &= \int_\gamma m(\tfrac{1}{2}g_{\mu\nu}+u_\mu u_\nu)\delta^4(x,z)d\tau,\label{T1_SC}\\ \bar T^2_{\mu\nu}[\gamma] &= \int_\gamma \tfrac{1}{4}\delta m_{\mu\nu}\delta^4(x,z)d\tau,\label{T2_SC} \end{align} with $\delta^4(x,z)\equiv\delta^4(x^\alpha-z^\alpha)/\sqrt{-g}$, are effective (trace-reversed) point-particle stress-energy tensors sourcing the Coulomb-like fields $m/|x^i-z^i|$ and $\delta m_{\mu\nu}/|x^i-z^i|$ in Eqs.~\eqref{hS1_SC_schematic} and \eqref{hS2_SC_schematic}. Their origin is described in Sec.~\ref{Fermi-field} below. 
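Schematically, the bookkeeping of this worldtube scheme amounts to a change of variables whenever a finite-difference stencil crosses $\Gamma$. A minimal sketch of that conversion logic follows; the one-dimensional grid, array names, and functions are illustrative, not taken from any existing code.
\begin{verbatim}
# Worldtube bookkeeping (schematic): inside Gamma the evolved variable
# is h_res, outside it is the full h; points on the other side of
# Gamma are converted with h = h_res + h_P. All names are illustrative.
def stencil_value(i, want_inside_var, inside, h_res, h_full, h_P):
    """Value at grid point i of the variable evolved on the requested
    side of Gamma, converting across the worldtube if necessary."""
    if want_inside_var:
        return h_res[i] if inside[i] else h_full[i] - h_P[i]
    return h_full[i] if not inside[i] else h_res[i] + h_P[i]

# example: second derivative of the inside variable at point i, valid
# even when a neighbouring point lies outside the worldtube
def d2_inside(i, dx, inside, h_res, h_full, h_P):
    vals = [stencil_value(j, True, inside, h_res, h_full, h_P)
            for j in (i - 1, i, i + 1)]
    return (vals[0] - 2.0 * vals[1] + vals[2]) / dx**2
\end{verbatim}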
In the equation of motion~\eqref{motion_SC}, \begin{equation} h^\res_{\mu\nu} = \e h^{\res1}_{\mu\nu}[\gamma]+\e^2 h^{\res2}_{\mu\nu}[\gamma] \end{equation} is the total residual field through second order, $\tau$ is proper time (measured in $g_{\mu\nu}$) on $\gamma$, $u^\mu\equiv \frac{dz^\mu}{d\tau}$ is the four-velocity on $\gamma$, $\frac{D}{d\tau}\equiv u^\mu\nabla_{\!\mu}$ is a covariant derivative along $u^\mu$, and \begin{equation} P^{\mu\nu}\equiv g^{\mu\nu}+u^\mu u^\nu \end{equation} projects orthogonally to $u^\mu$. In this scheme, Eqs.~\eqref{h1_SC}--\eqref{motion_SC} must be solved together, as a coupled system for the variables $z^\mu$, $h^{\res1}_{\mu\nu}\slash h^1_{\mu\nu}$ (inside/outside $\Gamma$), and $h^{\res2}_{\mu\nu}\slash h^2_{\mu\nu}$. Unlike in many approaches to the gravitational self-force, there is nowhere any reference to a background geodesic. Instead, the residual fields govern the position of the puncture, and the position of the puncture effectively sources the residual fields. To ensure that the metric perturbation is a solution to the Einstein equation, and not just the wave equations~\eqref{h1_eq}--\eqref{h2_eq}, we must ensure it satisfies the gauge condition~\eqref{gauge}. However, each $h^n_{\mu\nu}$ cannot satisfy a separate gauge condition of the form $\nabla^\nu\bar h^n_{\mu\nu}=0$, since such a condition is inconsistent with an accelerated worldline as a source. Instead, the perturbations together must satisfy $\e\nabla^\nu\bar h^1_{\mu\nu}+\e^2\nabla^\nu\bar h^2_{\mu\nu}=o(\e^2)$, with $\e\nabla^\nu\bar h^1_{\mu\nu}$ being on its own of order $\sim \e a^\mu$.\footnote{The precise condition on $h^1_{\mu\nu}$ can be written down explicitly. It is known~\cite{Pound:10a,Pound:12b} that the correct solution to Eq.~\eqref{h1_eq} is identical to that sourced by the point-mass stress-energy of Eq.~\eqref{T1_SC}, meaning $\bar h_1^{\mu\nu}(x)=4m\int_\gamma G^{\mu\nu}{}_{\mu'\nu'}(x,z(\tau))u^{\mu'}u^{\nu'}d\tau$, where a primed index indicates a tensor evaluated at $x'=z(\tau)$, and $G^{\mu\nu}{}_{\mu'\nu'}$ is a Green's function for the operator $E_{\mu\nu}$. Using the identity $G^{\mu\nu}{}_{\mu'\nu';\nu}=-G^\mu_{(\mu';\nu')}$, where $G^\mu{}_{\mu'}$ is a Green's function for the operator $\Box$ as it acts on a vector field, one finds the exact gauge condition to be $\nabla_{\!\nu}\bar h^{\mu\nu}=4m\int_\gamma G^\mu{}_{\mu'}\frac{D^2 z^{\mu'}}{d\tau^2}d\tau$.} In principle, these conditions should be satisfied automatically if the initial data satisfies them; one can verify this by taking the divergence of the first- and second-order wave equations and making use of the second-order Bianchi identity. In practice, however, gauge violations will be introduced numerically. Eliminating those violations should be possible with the introduction of constraint-damping terms~\cite{Barack-Lousto:05,Dolan-Barack:13}; for example, constraints of the form $0=Z^n_\mu\equiv \e^n\nabla^\nu\bar h^n_{\mu\nu} - \e^3 f^n_\mu$ might be used, where $n=1,2$ and $f^n_\mu$ are chosen vector fields that are uniformly of order 1. Constraints of this form do not affect the fields at orders $\e$ and $\e^2$, allowing them to maintain their correct relationship with the acceleration and thereby ensuring that the Einstein equation is satisfied through order $\e^2$.
\subsection{Gralla-Wald-type puncture scheme} In a Gralla-Wald-type expansion of the field equations, rather than seeking an equation of motion for a self-accelerated worldline $\gamma$, one expands that worldline in a power series around a zeroth-order reference geodesic $\gamma_0$: given a coordinate description $z^\mu(s,\e)$ of $\gamma$, the expansion reads \begin{equation}\label{z_expansion} z^\mu(s,\e) = z_0^\mu(s)+\e z_1^\mu(s)+\e^2 z_2^\mu(s)+O(\e^3), \end{equation} where $s$ is a monotonic parameter along both $\gamma$ and $\gamma_0$. The leading-order term, $z_0^\mu(s)$, is the coordinate description of a geodesic of the background metric $g_{\mu\nu}$. The first-order term, $z_1^\mu\equiv \frac{\partial z^\mu}{\partial\e}|_{\e=0}$, is a vector on $\gamma_0$, describing the leading-order deviation of $\gamma$ from $\gamma_0$. The second-order term, if defined as $z_2^\mu\equiv \frac{1}{2}\frac{\partial^2z^\mu}{\partial\e^2}|_{\e=0}$, is simply a set of four scalars that depend on the choice of coordinates; because it is a second derivative (along a curve of increasing $\e$ and constant $s$), it does not transform as a vector. A puncture scheme for this type of expansion can be derived from scratch in any gauge of choice, such as in Gralla's `P-smooth' gauges~\cite{Gralla:12}. Alternatively, a puncture scheme in the Lorenz gauge can be deduced simply by substituting the expansion~\eqref{z_expansion} into the metric perturbation~\eqref{h_SC_expansion}, the field equations~\eqref{h1_SC}--\eqref{h2_SC}, and the equation of motion~\eqref{motion_SC}, and then reorganizing terms according to explicit powers of $\e$. The metric perturbation is then given by the expansion \begin{equation}\label{h_GW_expansion} h_{\mu\nu} = \e h^1_{\mu\nu}[\gamma_0] + \e^2 h^2_{\mu\nu}[\gamma_0,z_1] + o(\epsilon^2). \end{equation} Here $h^1_{\mu\nu}$ is the same functional as in Eq.~\eqref{h_SC_expansion}, but $\gamma_0$ has replaced $\gamma$ in its argument. On the other hand, $h^2_{\mu\nu}$ is now a different functional, which depends on $z_1$. Analogously, the decomposition into singular and regular fields in this expansion reads \begin{align} h^1_{\mu\nu} &= h^{\S1}_{\mu\nu}[\gamma_0] + h^{\R1}_{\mu\nu}[\gamma_0]= h^{\P1}_{\mu\nu}[\gamma_0] + h^{\res1}_{\mu\nu}[\gamma_0],\\ h^2_{\mu\nu} &= h^{\S2}_{\mu\nu}[\gamma_0,z_1] + h^{\R2}_{\mu\nu}[\gamma_0,z_1]= h^{\P2}_{\mu\nu}[\gamma_0,z_1] + h^{\res2}_{\mu\nu}[\gamma_0,z_1]. \end{align} Near the object, the singular field takes the form \begin{equation}\label{hS1_GW_schematic} h^{\S1}_{\mu\nu}\sim \frac{m}{|x^i-z^i_0|} + O(|x^i-z^i_0|^0), \end{equation} \begin{equation}\label{hS2_GW_schematic} h^{\S2}_{\mu\nu}\sim \frac{m^2+mz^\mu_{1\perp}}{|x^i-z^i_0|^2} + \frac{\delta m_{\mu\nu}+mh^{\res1}_{\mu\nu}}{|x^i-z^i_0|} + O(|x^i-z^i_0|^0). \end{equation} This form is identical to Eqs.~\eqref{hS1_SC_schematic}--\eqref{hS2_SC_schematic} but for two alterations: \begin{itemize} \item The divergent terms diverge on $\gamma_0$, not on $\gamma$. \item The second-order singular field depends on the correction $z^\mu_1$ to the position. \end{itemize} The explicit expressions for the first few terms in these expansions, derived in Ref.~\cite{Pound:10a}, are given in Eqs.~\eqref{hS1_GW_Fermi} and \eqref{hS2_GW_Fermi}--\eqref{dm_GW_Fermi} in a local coordinate system $(t,x^i)$ centered on $\gamma_0$ (such that $z_0^i\equiv0$ in the schematic expressions above).
Because the point at which the puncture diverges is independent of the field values in this expansion, the puncture scheme becomes a sequence of equations, rather than a coupled system: first, the zeroth-order worldline is prescribed as a solution to the background geodesic equation, \begin{equation} \frac{D^2z^\mu_0}{d\tau_0^2} = 0, \end{equation} then the first-order field is found from \begin{subequations}\label{h1_GW} \begin{align} E_{\mu\nu}[h^{\res1}] &= -16\pi \bar T^1_{\mu\nu}[\gamma_0]-E_{\mu\nu}[h^{\P1}] \hspace{-5pt}& \text{inside }\Gamma_0,\\ E_{\mu\nu}[h^{1}] &= 0 & \text{outside } \Gamma_0, \end{align} \end{subequations} then that field is used to find the first-order correction to the position by solving the Gralla-Wald equation~\cite{Gralla-Wald:08} \begin{align}\label{motion_GW} \frac{D^2z_{1\perp}^\mu}{d\tau_0^2} &= R^\mu{}_{\alpha\beta\gamma}u_0^\alpha u_0^\beta z_{1\perp}^\gamma \nonumber\\ &\quad -\frac{1}{2}P_0^{\mu\gamma}\left(2h^{\res1}_{\gamma\alpha;\beta}-h^{\res1}_{\alpha\beta;\gamma}\right)u^\alpha_0 u^\beta_0, \end{align} and finally the second-order field is found from \begin{subequations}\label{h2_GW} \begin{align} E_{\mu\nu}[h^{\res2}] &=2\delta^2R_{\mu\nu}[h^1,h^1]-16\pi \bar T^2_{\mu\nu}[\gamma_0,z_1]\hspace{-40pt}&\nonumber\\ &\quad - E_{\mu\nu}[h^{\P2}] & \text{inside }\Gamma_0,\\ E_{\mu\nu}[h^2] &= 2\delta^2R_{\mu\nu}[h^1,h^1] & \text{outside }\Gamma_0. \end{align} \end{subequations} Here \begin{align} \bar T^1_{\mu\nu}[\gamma_0] &= \int_{\gamma_0} m(\tfrac{1}{2}g_{\mu\nu}+u_{0\mu} u_{0\nu}) \delta^4(x,z_0)d\tau_0,\label{T1_GW}\\ \bar T^2_{\mu\nu}[\gamma_0,z_1] &= \int_{\gamma_0} m(\tfrac{1}{2}g_{\mu\nu}+u_{0\mu} u_{0\nu}) z^{\gamma}_{1\perp}\frac{\partial}{\partial z_0^{\gamma}}\delta^4(x,z_0)d\tau_0\nonumber\\ &\quad +\frac{1}{4}\int_{\gamma_0} \delta m_{\mu\nu}\delta^4(x,z_0)d\tau_0\label{T2_GW} \end{align} act as effective stress-energies sourcing the $m$, $\delta m_{\mu\nu}$, and $mz^a_1$ terms in Eqs.~\eqref{hS1_GW_schematic} and \eqref{hS2_GW_schematic}. $\tau_0$ is the proper time (measured in $g_{\mu\nu}$) on $\gamma_0$, $u_0^\mu\equiv \frac{dz_0^\mu}{d\tau_0}$ is the four-velocity on $\gamma_0$, $\frac{D}{d\tau_0}\equiv u_0^\mu\nabla_{\!\mu}$ is a covariant derivative along $u_0^\mu$, \begin{equation} P_0^{\mu\nu}\equiv g^{\mu\nu}+u_0^\mu u_0^\nu \end{equation} projects orthogonally to $u_0^\mu$, and \begin{equation} z^\mu_{1\perp}\equiv P_{0\nu}^{\mu} z_1^\nu \end{equation} is the piece of $z_1^\mu$ perpendicular to $u_0^\mu$.\footnote{Appendix D of Ref.~\cite{Pound:14a} describes why only the perpendicular piece of $z^\mu_{1}$ is needed as input for the second-order field. See also Sec.~\ref{GW_Fermi} below.} We have renamed the worldtube $\Gamma_0$ to indicate that $\gamma_0$ is always in its interior but $\gamma$ need not be. If one wishes, as a final step in this procedure one can use the second-order field obtained from Eq.~\eqref{h2_GW} to find the second-order correction to the position; because that correction is not vectorial, we omit it, but we refer the reader to Refs.~\cite{Gralla:12,Pound:14a} for the differential equations governing $z_2^\mu$ (defined in particular local coordinate systems). In a scheme of this type, the correction to the motion is never incorporated into the position of the puncture, which diverges on the geodesic $z^\mu_0$ at all orders. This points to the fact that a Gralla-Wald-type expansion is valid only on timescales of order $\epsilon^0$, which are much, much shorter than an inspiral time.
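The sequential structure of this scheme lends itself to a simple numerical driver. As a sketch, once the tidal (Riemann) coupling and the self-force term built from $h^{\res1}_{\mu\nu}$ are supplied along $\gamma_0$, Eq.~\eqref{motion_GW} is an ordinary forced ODE for $z^\mu_{1\perp}$; both inputs below are placeholder functions with invented values, standing in for quantities a real code would evaluate on the background geodesic.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of integrating the Gralla-Wald equation (motion_GW):
# schematically z1'' = (tidal term) z1 + (self-force term). Both the
# tidal coupling and the forcing from h_res1 are placeholders.
def tidal(tau):
    return -0.04 * np.eye(3)             # placeholder Riemann coupling

def force(tau):
    return np.array([1.0e-3, 0.0, 0.0])  # placeholder self-force term

def rhs(tau, y):
    z1, v1 = y[:3], y[3:]
    return np.concatenate([v1, tidal(tau) @ z1 + force(tau)])

sol = solve_ivp(rhs, (0.0, 100.0), np.zeros(6), max_step=0.1)
print("z1 at final time:", sol.y[:3, -1])
\end{verbatim}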
After sufficient time, the correction $z_1^\mu$ becomes large---as happens, for example, when the small object falls into the large black hole in an EMRI---at which point the series expansion of $z^\mu$ is no longer valid and the entire approximation scheme fails. Nevertheless, a puncture scheme of this type can be useful for extracting short-term information about an inspiral, such as the conservative effects of the self-force at a given time~\cite{Pound:14c}. Because it is much easier to implement than a self-consistent scheme, it will likely be the preferred method for such calculations. Unlike in the self-consistent expansion, here each of the perturbations must independently satisfy the Lorenz gauge condition: \begin{align} \nabla^\nu\bar h^1_{\mu\nu} =0=\nabla^\nu\bar h^2_{\mu\nu}. \end{align} \subsection{Building a practical puncture} Several versions of the second-order self-field (and therefore several punctures) are now available. In Refs.~\cite{Pound:10a,Pound:12a,Pound:12b} one of us derived expressions for the self-field in the Lorenz gauge in both self-consistent and Gralla-Wald form in an arbitrary vacuum background and for an arbitrarily structured (sufficiently compact) small object. The last of these, Ref.~\cite{Pound:12b}, showed how the same can be done at arbitrary order in $\e$ in a broad class of wave gauges. In Ref.~\cite{Gralla:12}, Gralla presented a puncture scheme within a Gralla-Wald-type expansion in a broad class of P-smooth gauges in an arbitrary vacuum background for a nearly spherical and non-spinning object. However, all of these results were derived in local coordinate systems centered on the object's worldline (either $\gamma$ or $\gamma_0$). As yet, no punctures have been presented in coordinate systems useful for numerical implementations of a puncture scheme. The main purpose of this paper is to fill that gap in the literature by deriving a covariant expansion of the second-order singular/self-field. From that covariant expansion, a puncture can be found in any desired coordinate system. We work in the Lorenz gauge, use the singular-regular split defined in Refs.~\cite{Pound:10a,Pound:12a}, and set the object's leading-order spin to zero. Our results are valid in any vacuum background. We begin in Sec.~\ref{Fermi-field} with the field in Fermi-Walker coordinates $(t,x^i)$ centered on either $\gamma$ or $\gamma_0$. In these coordinates, the components of the singular field, through the orders that have been calculated, take the form \begin{align} h^{\S 1}_{\mu\nu}&=\sum_{p=-1}^2\sum_{\substack{\ell=0\\\ell\neq p}}^{p+1}r^p h^{(1p0\ell)}_{\mu\nu L}(t)\nhat^L+O(r^3),\\ h^{\S 2}_{\mu\nu}&=\sum_{p=-2}^1\sum_{\substack{\ell=0\\\ell\neq p}}^{p+4}r^p h^{(2p0\ell)}_{\mu\nu L}(t)\nhat^L\nonumber\\ &\quad +(\ln r) \sum_{p=0,1}\sum_{\ell=p} r^p h^{(2p1\ell)}_{\mu\nu L}(t)\nhat^L\nonumber\\ &\quad +O(r^2\ln r),\label{hS2_form_Fermi} \end{align} where $r\equiv\sqrt{\delta_{ij}x^ix^j}$ is the geodesic distance from the worldline. The first few of these terms are given explicitly in Eqs.~\eqref{hS1_SC_Fermi} and \eqref{hS2_SC_Fermi}--\eqref{hdm_SC_Fermi} in the self-consistent case and in Eqs.~\eqref{hS1_GW_Fermi} and \eqref{hS2_GW_Fermi}--\eqref{dm_GW_Fermi} in the Gralla-Wald case. All the terms displayed in Eq.~\eqref{hS2_form_Fermi} (i.e., terms through order $r$) were previously made available online~\cite{results}.
In these expressions $n^i\equiv x^i/r$ is a unit vector pointing radially outward from the worldline, $L\equiv i_1\cdots i_\ell$ is a multi-index, and $\hat n^L\equiv n^{\langle i_1}\cdots n^{i_\ell\rangle}$ is the symmetric-trace-free (STF) product of $\ell$ unit vectors, with the trace defined with respect to $\delta_{ij}$. In Sec.~\ref{field-cov}, we put this in covariant form using the tools of near-coincidence expansions. The expansion parameter, in place of $r$, becomes $\sigma^{\mu'}\equiv \nabla^{\mu'}\sigma(x,x')$, where Synge's world function $\sigma(x,x')$ is equal to one-half the squared geodesic distance from $x'$ to $x$, the latter being the point off the worldline where the field is evaluated, and the former being an arbitrarily chosen nearby point on the worldline. We refer the reader to Ref.~\cite{Poisson-Pound-Vega:11} for a pedagogical introduction to covariant near-coincidence expansions and to Ref.~\cite{Heffernan-etal:12} for a recent example of their usage. The transformation from Fermi-Walker coordinates to covariant form is aided by the coordinates' convenient definition in terms of Synge's world function: $x^i\equiv-e^i_{\balpha}\sigma^{\balpha}$, where $e^i_\alpha$ is a triad leg on the worldline and a barred index signifies evaluation at a point $\bar x$ connected to $x$ by a geodesic intersecting the worldline orthogonally. Making use of this and similar definitions leads to, through the orders we have calculated, a covariant expansion of the form \begin{align} h^{\S 1}_{\mu\nu}&=g^{\mu'}_{\mu}g^{\nu'}_\nu \sum_{p=-1}^2\sum_{\ell=0}^{p+1}\sum_{i,j}\lambda^p\s^i\r^j \tilde h^{1p0ij\ell}_{\mu'\nu'\Lambda'}(x')\sigma^{\Lambda'}+O(\lambda^3),\\ h^{\S 2}_{\mu\nu}&=g^{\mu'}_{\mu}g^{\nu'}_\nu \Bigg[\sum_{p=-2}^1\sum_{\ell=0}^{p+4}\sum_{i,j} \lambda^p\s^i\r^j \tilde h^{2p0ij\ell}_{\mu'\nu'\Lambda'}(x')\sigma^{\Lambda'}\nonumber\\ &\quad +\ln(\lambda\s)\sum_{p=0}^1\sum_{\ell=0}^p\sum_{j=p-\ell} \lambda^p\r^j \tilde h^{2p10j\ell}_{\mu'\nu'\Lambda'}(x')\sigma^{\Lambda'}\Bigg]\nonumber\\ &\quad +O(\lambda^2\ln\lambda),\label{hS2_form_cov} \end{align} where $\lambda$, introduced previously as a measure of spatial distance from the worldline, is now set equal to unity and used simply to count powers of that distance. The first few of these terms are given explicitly in Eqs.~\eqref{hS1_SC_cov}--\eqref{hdm_SC_final} in the self-consistent case and in Eqs.~\eqref{hS1_GW_cov}--\eqref{F1_GW} in the Gralla-Wald case. All the terms displayed in Eq.~\eqref{hS2_form_cov} (i.e., terms through order $\lambda$) are now available online~\cite{results}. In these expressions $g^{\mu'}_{\mu}$ is a parallel propagator from $x'$ to $x$, $\Lambda'\equiv\alpha_1'\cdots\alpha_\ell'$ is a multi-index, $\sigma^{\Lambda'}\equiv\sigma^{\alpha'_1}\cdots\sigma^{\alpha'_\ell}$, $\r$ and $\s$ are certain small distances defined in Eqs.~\eqref{r} and \eqref{s}, and the sums over $i$ and $j$ are such that $i+j+\ell=p$. The covariant expansion of $h^{\S2}_{\mu\nu}$ represented by Eq.~\eqref{hS2_form_cov} is the central result of this paper. With that covariant expansion in hand, a puncture in any particular coordinate system can be easily found by expanding the covariant quantities $g^{\mu'}_\mu$ and $\sigma^{\mu'}$ in terms of coordinate distances $\Delta x^{\mu'}=x^\mu-x^{\mu'}$, where $x^{\alpha'}$ are the coordinate values at $x'$; see, e.g., Ref.~\cite{Heffernan-etal:12}.
The result will be a puncture of the form \begin{align} h^{\P 1}_{\mu\nu}&=\sum_{p=-1}^2\delta^i_{2p+3}\delta^\ell_{3p+3}\frac{\lambda^p}{\rho^i}\mathcal{H}^{1p0i\ell}_{\mu'\nu'\Lambda'}(x')\Delta x^{\Lambda'},\\ h^{\P 2}_{\mu\nu}&=\sum_{p=-2}^1\ \sum_{\substack{i=2p+3\\i>0}}^{2p+8}\delta^\ell_{p+i} \frac{\lambda^p}{\rho^i}\mathcal{H}^{2p0i\ell}_{\mu'\nu'\Lambda'}(x')\Delta x^{\Lambda'}\nonumber\\ &\quad +\ln(\lambda\rho) \sum_{p=0}^1\delta^\ell_p\lambda^p\mathcal{H}^{2p10\ell}_{\mu'\nu'\Lambda'}(x')\Delta x^{\Lambda'},\label{hP2_coords} \end{align} where $\rho\equiv\sqrt{P_{\mu'\nu'}\Delta x^{\mu'}\Delta x^{\nu'}}$. A Gralla-Wald-type puncture $h^{\P 2}_{\mu\nu}$ of this form has already been calculated to order $\lambda\ln\lambda$ in the special case of circular orbits in Schwarzschild coordinates~\cite{Pound:13b}. We leave the presentation of those and more general results to a future paper. Within the body of the current paper, we display results of sufficiently high order to calculate the second-order regular field on the worldline. The results we present online~\cite{results} are of sufficiently high order to calculate both the second-order regular field on the worldline and the second-order force. We describe the precise order required of the puncture in our concluding discussion; readers uninterested in the technical details of our calculations may skip directly to that discussion. Although we have only derived results for the singular field of Refs.~\cite{Pound:10a,Pound:12a}, the same method could be used to generate a covariant expansion of Gralla's singular field~\cite{Gralla:12}, after first transforming from his choice of local coordinates to Fermi-Walker coordinates. | We have derived covariant expansions of the second-order singular field in an arbitrary vacuum background, in both a self-consistent formalism and a Gralla-Wald-type formalism. Our final results in the self-consistent case are Eqs.~\eqref{hS1_SC_cov}--\eqref{hdm_SC_final}; in the Gralla-Wald case, they are Eqs.~\eqref{hS1_GW_cov}--\eqref{F1_GW}. We have also made higher-order terms in the expansions available online~\cite{results}. To make use of these results in practice, a few steps must be taken. \subsection{Puncture as a coordinate expansion} For a practical numerical implementation of a puncture scheme, the puncture must be written in a specified coordinate system. This might mean that the expansion of the singular field must be written directly in the coordinates one will use in one's numerical evolution. Or if one wishes to decompose the puncture into a useful basis of functions, such as tensor harmonics in Schwarzschild, it might mean that the expansion should be written in some coordinate system convenient for the calculation of that decomposition, as was done in the frequency-domain puncture scheme of Warburton and Wardell~\cite{Warburton-Wardell:14}. Expressing the expansion in coordinate form is, in principle, straightforward. The covariant expansion of the singular field can be recast as an expansion in coordinate differences $\Delta x^{\alpha'}=x^\alpha-x^{\alpha'}$, where $x^{\alpha'}$ are the coordinate values at the point $x'$ on the worldline (by which we mean $\gamma$ in the self-consistent case, $\gamma_0$ in the Gralla-Wald case). All that is required is the expansion of the covariant quantities $\sigma^{\alpha'}$ and $g^{\alpha'}_\beta$ in powers of $\Delta x^{\alpha'}$.
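As a minimal illustration of how such expansions are fixed in practice, the following Python/SymPy sketch (a one-dimensional toy check of ours, not a result of this paper) verifies the defining identity $\sigma^{\alpha'}\sigma_{\alpha'}=2\sigma$ and the leading expansion coefficients for the metric $g(x)=e^{2x}$, for which the world function is known in closed form; the general recursion is spelled out below.

\begin{verbatim}
import sympy as sp

# 1D toy: with g(x) = exp(2x) the geodesic distance is s = exp(x')(exp(e)-1)
# for e = x - x', so sigma = s**2/2 is exact; the identity and the cubic
# coefficient A = g'(x')/4 of sigma = (1/2) g e**2 + A e**3 + ... follow.
e, xp = sp.symbols('e xp')
g = lambda x: sp.exp(2*x)
s = sp.exp(xp)*(sp.exp(e) - 1)
sigma = s**2/2

sig_x = sp.diff(sigma, e)                 # d/dx = d/de at fixed x'
assert sp.simplify(sig_x**2/g(xp + e) - 2*sigma) == 0

ser = sp.series(sigma, e, 0, 4).removeO()
assert sp.simplify(ser.coeff(e, 2) - g(xp)/2) == 0
assert sp.simplify(ser.coeff(e, 3) - sp.diff(g(xp), xp)/4) == 0
print('world-function expansion checks pass')
\end{verbatim}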
Following Ref.~\cite{Heffernan-etal:12}, the expansion of $\sigma_{\alpha'}$ can be found by writing \begin{align} \sigma(x,x') &= \frac{1}{2}g_{\alpha'\beta'}\Delta x^{\alpha'}\Delta x^{\beta'} + A_{\alpha'\beta'\gamma'}\Delta x^{\alpha'}\Delta x^{\beta'}\Delta x^{\gamma'} \nonumber\\ &\quad + B_{\alpha'\beta'\gamma'\delta'}\Delta x^{\alpha'}\Delta x^{\beta'}\Delta x^{\gamma'}\Delta x^{\delta'} +\ldots, \end{align} then acting with a partial derivative on the equation, and finally determining the coefficients in the expansions recursively by using the identity $\sigma^{\alpha'}\sigma_{\alpha'}=2\sigma(x,x')$. Similarly, the expansion of $g^{\alpha'}_{\beta}$ can be found by writing \begin{align} g^{\alpha'}_\beta &= \delta^{\alpha'}_{\beta'}+G^{\alpha'}{}_{\beta'\gamma'}\Delta x^{\gamma'} +G^{\alpha'}{}_{\beta'\gamma'\delta'}\Delta x^{\gamma'}\Delta x^{\delta'}+\ldots, \end{align} acting with a partial derivative, and then determining the coefficients using the identity $g^{\alpha'}_{\beta;\gamma'}\sigma^{\gamma'}=g^{\alpha'}_{\beta,\gamma'}\sigma^{\gamma'}+\Gamma^{\alpha'}_{\gamma'\delta'}g^{\delta'}_\beta\sigma^{\gamma'}=0$. The end result will be an expansion of the form~\eqref{hP2_coords}~\cite{Pound:13b}. To aid the discussion in the following section, we rewrite that result more transparently as \begin{subequations}\label{coord_expansions} \begin{align} h^{\S\S}_{\mu\nu} & \sim \frac{(\Delta x)^2}{\lambda^{2}\rho^4}+\frac{(\Delta x)^5}{\lambda\rho^6} +\lambda^{0}\frac{(\Delta x)^8}{\rho^8}+\lambda\frac{(\Delta x)^{11}}{\rho^{10}}\nonumber\\ &\quad +\left[\lambda^0(\Delta x)^0+\lambda(\Delta x)^1\right]\ln(\lambda\rho)\nonumber\\ &\quad +O(\lambda^2\ln\lambda),\\ h^{\delta z}_{\mu\nu}& \sim \frac{(\Delta x)^1}{\lambda^{2}\rho^3}+\frac{(\Delta x)^4}{\lambda\rho^5} +\lambda^0\frac{(\Delta x)^{7}}{\rho^{7}}+\lambda\frac{(\Delta x)^{10}}{\rho^{9}}\nonumber\\ &\quad+O(\lambda^2),\\ h^{\S\R}_{\mu\nu} & \sim \frac{(\Delta x)^2}{\lambda\rho^3}+\lambda^{0}\frac{(\Delta x)^5}{\rho^5} +\lambda\frac{(\Delta x)^{8}}{\rho^{7}}+O(\lambda^2),\\ h^{\delta m}_{\mu\nu}& \sim \frac{(\Delta x)^0}{\lambda\rho}+\lambda^{0}\frac{(\Delta x)^3}{\rho^3} +\lambda\frac{(\Delta x)^{6}}{\rho^{5}}+O(\lambda^2), \end{align} \end{subequations} where `$(\Delta x)^n$' indicates a polynomial in $\Delta x^{\mu'}$ of homogeneous order $n$. Each polynomial is of the form $P_{\mu'\nu'\alpha'_1\cdots\alpha'_n}(x')\Delta x^{\alpha'_1}\cdots\Delta x^{\alpha'_n}$ with some coefficient $P_{\mu'\nu'\alpha'_1\cdots\alpha'_n}(x')$ that depends only on $x'$. One can easily derive the general structure of the expansion~\eqref{coord_expansions} by substituting generic power expansions $\sigma_{\mu'}\sim \sum_{n\geq0} \lambda^n(\Delta x)^n$ and $g^{\mu'}_\mu\sim\sum_{n\geq0} \lambda^n(\Delta x)^n$ into the covariant expansions of $h^{\S2}_{\mu\nu}$. We have simplified the results by obtaining a common denominator at each order in $\lambda$, using the fact that $\rho^2\sim (\Delta x)^2$. \subsection{Required order of the puncture}\label{required_order} Before implementation, one must also decide how many orders in distance should be included in the puncture for one's particular purposes. We cursorily described the requisite orders in Sec.~\ref{strategy}; we explain them more thoroughly here. To calculate the second-order force, one requires $\partial h^{\res 2}_{\mu\nu}=\partial h^{\R 2}_{\mu\nu}$ on the worldline.
This means we must have $\lim_{x\to\gamma}\left(\partial h^{\P 2}_{\mu\nu}-\partial h^{\S 2}_{\mu\nu}\right)=0$, or in other words, $\partial h^{\P 2}_{\mu\nu}-\partial h^{\S 2}_{\mu\nu}=o(\lambda^0)$. From this one might infer that for the purpose of calculating the second-order force, $h^{\P 2}_{\mu\nu}$ must include all terms in $h^{\S 2}_{\mu\nu}$ through order $\lambda$. If one were to implement a puncture scheme in 3+1D, that would be true. However, analysis has shown~\cite{Barack-Golbourn-Sago:07} that in a puncture scheme that decomposes the field into azimuthal $m$-modes $e^{im \phi}$, one can sometimes lower the required order of the puncture by one power. (Of course, the same statements also hold true if one performs a complete tensor-harmonic decomposition rather than an $m$-mode decomposition alone.) Specifically, one can neglect an order-$\lambda^0$ term, even though it is finite in the limit to the worldline, if it has odd parity about the worldline. By odd parity we mean a change of sign under the parity transformation $\Delta x^{\mu'}\to-\Delta x^{\mu'}$. So, for example, a term like $\Delta x^{\mu'}/\sqrt{P_{\alpha'\beta'}\Delta x^{\alpha'}\Delta x^{\beta'}}$ can be dropped from one's puncture. The reason this is allowed can be understood intuitively from the fact that the $m$-mode decomposition of a function converges to the function's average across the point of discontinuity; therefore in the limit to the worldline, these odd-parity terms contribute nothing to the decomposed puncture. One can show that the regular field at a point on the worldline can then be calculated as the sum over modes of the residual field at that point. Unlike in the first-order case, where all terms in the singular field at a given order in $\lambda$ share the same parity, in the second-order case different pieces of the field have different parities: the order-$\lambda^0$ terms \begin{itemize} \item in $h^{\S\S}_{\mu\nu}$ have even parity, \item in $h^{\S\R}_{\mu\nu}$, $h^{\delta z}_{\mu\nu}$, and $h^{\delta m}_{\mu\nu}$ have odd parity, \end{itemize} and at successive orders in $\lambda$ the parity alternates. These properties are made obvious in Eq.~\eqref{coord_expansions}. (They can also be inferred from the effect of the parity transformation $n^i\to-n^i$ on Eq.~\eqref{hS2_form_Fermi}, or from that of $\sigma^{\alpha'}\to-\sigma^{\alpha'}$ on Eq.~\eqref{hS2_form_cov}; the parity in all three cases will be the same~\cite{Pound-Merlin-Barack:14}.) Therefore, assuming at least an $m$-mode decomposition, to calculate $h^{\R2}_{\mu\nu}$ on the worldline one must include in one's puncture the order-$\lambda^0$ terms from $h^{\S\S}_{\mu\nu}$, but one need not include any of the order-$\lambda^0$ terms from $h^{\S\R}_{\mu\nu}$, $h^{\delta z}_{\mu\nu}$, or $h^{\delta m}_{\mu\nu}$. Similarly, since differentiation both reduces the order of a term and reverses its parity, to calculate the second-order force one must include the order-$\lambda$ terms from $h^{\S\S}_{\mu\nu}$, but one need not include those from $h^{\S\R}_{\mu\nu}$, $h^{\delta z}_{\mu\nu}$, or $h^{\delta m}_{\mu\nu}$. In the body of this paper we have presented results for $h^{\S\S}_{\mu\nu}$ through order $\lambda^0$ and $h^{\S\R}_{\mu\nu}$, $h^{\delta z}_{\mu\nu}$, and $h^{\delta m}_{\mu\nu}$ through order $1/\lambda$. Due to the savings in a mode decomposition, these results are of sufficiently high order to calculate the second-order regular field on the worldline.
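The averaging property invoked above is easy to exhibit numerically. In the Python sketch below (ours, using a model discontinuity rather than an actual perturbation field), the $m$-mode reconstruction of an odd-parity jump, evaluated at the point of discontinuity, returns the two-sided average, i.e., zero:

\begin{verbatim}
import numpy as np

# Model odd-parity discontinuity f(phi) = sign(phi) on (-pi, pi): its
# Fourier (m-mode) sum converges to the average across the jump, so the
# value assigned at the "worldline" point phi = 0 is zero.
phi = np.linspace(-np.pi, np.pi, 4001)
f = np.sign(phi)
ms = range(-50, 51)
coeffs = [np.trapz(f*np.exp(-1j*m*phi), phi)/(2*np.pi) for m in ms]
recon = sum(c*np.exp(1j*m*phi) for c, m in zip(coeffs, ms)).real
print(recon[np.argmin(np.abs(phi))])  # ~0: average of the one-sided limits
\end{verbatim}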
In the self-consistent case, implementing a scheme using this puncture would consist of solving the wave equations~\eqref{h1_SC} and \eqref{h2_SC} with the puncture following a trajectory $\gamma$ governed by the first-order equation of motion \begin{align} \frac{D^2 z^\mu}{d\tau^2} &= -\frac{1}{2}P^{\mu\gamma} \left(2h^{\res1}_{\gamma\alpha;\beta}-h^{\res1}_{\alpha\beta;\gamma}\right)u^\alpha u^\beta, \end{align} and the second-order regular field would be calculated as $h^{\R2}_{\mu\nu}=h^{\res2}_{\mu\nu}$ on $\gamma$ (with $h^{\res2}_{\mu\nu}$ defined as the sum over its modes). In the Gralla-Wald case, the scheme would consist of solving the sequence of equations \eqref{h1_GW}, \eqref{motion_GW}, and \eqref{h2_GW}, and the second-order regular field would be calculated as $h^{\R2}_{\mu\nu}=h^{\res2}_{\mu\nu}$ on the reference geodesic $\gamma_0$. The results we have made available online are of sufficient order to calculate the second-order force even in 3+1D, and we have stringently tested their correctness through at least the order required to do the same in an $m$-mode scheme: order $\lambda$ for $h^{\S\S}_{\mu\nu}$ and order $\lambda^0$ for $h^{\S\R}_{\mu\nu}$, $h^{\delta z}_{\mu\nu}$, and $h^{\delta m}_{\mu\nu}$. With a puncture of that order in the self-consistent case, the wave equations~\eqref{h1_SC} and \eqref{h2_SC} can be solved with the puncture moving according to the second-order equation of motion~\eqref{motion_SC}. This scheme should maintain second-order accuracy on a timescale $\sim1/\e$, whereas the scheme using the first-order equation of motion can be expected to be uniform only on the shorter timescale $\sim1/\sqrt{\e}$, based on the error estimate $\delta z\sim \delta a\, t^2$, where $\delta z$ is the error in position and $\delta a$ the error in acceleration. In the Gralla-Wald case, calculating the second-order force would allow one to calculate the second-order correction to the position, $z_2^\alpha$. \subsection{Order-reduction of the self-consistent system}\label{order-reduction} One major remaining point to consider pertains to the handling of acceleration terms in the self-consistent formalism. It is well known that self-consistent derivations of equations of motion often lead to ill-behaved third-order-in-time differential equations. The most famous example of this is the Abraham-Lorentz-Dirac equation for a charged particle. In the case of a small mass, if we write the first-order-accurate equation of motion in the form \begin{equation}\label{a1-second-order-in-time} a^\mu=-\frac{1}{2}P^{\mu\nu}\left(2h^{\R1}_{\nu\lambda;\rho}[\gamma]-h^{\R1}_{\lambda\rho;\nu}[\gamma]\right)u^\lambda u^\rho, \end{equation} and we write the field $h^1_{\mu\nu}[\gamma]$ in terms of a Green's function $G_{\mu\nu\mu'\nu'}$ as \begin{equation} h^1_{\mu\nu} = 2m\int_\gamma G_{\mu\nu\mu'\nu'}(g^{\mu'\nu'}+2u^{\mu'}u^{\nu'})d\tau, \end{equation} and we then expand $h^1_{\mu\nu}$ near $\gamma$ and identify the contributions to $h^{\R1}_{\mu\nu}[\gamma]$, then we find \begin{equation}\label{a1-third-order-in-time} a^\mu = e^{a\mu}\left(-h^{\rm tail}_{0a0}+\tfrac{1}{2}h^{\rm tail}_{00a}-\tfrac{11}{3}m\dot a_a\right), \end{equation} where the tail terms at time $\tau$ are defined as \begin{equation} h^{\rm tail}_{0a0}\equiv u^\mu e^\nu_a u^\gamma\, 2m \int_{-\infty}^{\tau-0^+} G_{\mu\nu\mu'\nu';\gamma}(g^{\mu'\nu'}+2u^{\mu'}u^{\nu'})d\tau \end{equation} and analogously for $h^{\rm tail}_{00a}$.
These results can be easily derived from the explicit results for $h^{\R1}_{\mu\nu}(r=0)$ and $\partial_\rho h^{\R1}_{\mu\nu}(r=0)$ in Table I of Ref.~\cite{Pound:10b}.\footnote{However, note that Table I in Ref.~\cite{Pound:10b} is missing a factor of 4 from the $ma_a$ term in the quantity $\hat C_a^{(1,0)}=h^{\R1}_{ta}(r=0)$. The missing 4 appears in its correct location in Eq.~(E.9) of that reference.} The $\dot a_i$ term in Eq.~\eqref{a1-third-order-in-time} is the gravitational antidamping term discovered by Havas~\cite{Havas:57} (as corrected by Havas and Goldberg~\cite{Havas-Goldberg:62}). Its presence has made the apparently second-order-in-time differential equation~\eqref{a1-second-order-in-time} into an apparently third-order-in-time integro-differential equation. An important question is whether this feature manifests in the coupled system we would hope to solve numerically, made up of Eqs.~\eqref{h1_SC}--\eqref{motion_SC}. The answer would seem to be that the problem has been shifted elsewhere: the acceleration and its time derivatives now appear in the source term $E_{\mu\nu}[h^{\P}]$ in the field equation. If we imagine solving the coupled system at a given time step, we can see that we would need to know the acceleration at that time step in order to calculate the field, but we would need to know the value of that same field before we could calculate the acceleration. One possibility might be to solve this problem iteratively at each time step. But a much simpler alternative would be to effectively perform a reduction-of-order procedure on the wave equations. Noting that $a^\mu\sim\e$, we can see that we would still preserve second-order accuracy by moving the acceleration-dependent terms from the first-order puncture into the second-order one; indeed, we have already done an analogous thing by neglecting explicit acceleration terms in our second-order puncture. Furthermore, we can then see that replacing $a^\mu$ with $F_1^\mu[\gamma]$ would also preserve the desired accuracy. Therefore, we can shift the term $h^{\S1\bf a}_{\mu\nu}$ from Eq.~\eqref{hS1_SC_cov} into Eq.~\eqref{hS2_SC_cov}, such that \begin{equation} h^{\P1}_{\mu\nu} = h^{\S1\not a}_{\mu\nu} \end{equation} (with an implied truncation of the right-hand side at a specified order in $\lambda$) and \begin{equation} h^{\P2}_{\mu\nu} = h^{\S\S}_{\mu\nu}+h^{\S\R}_{\mu\nu}+h^{\delta m}_{\mu\nu}+h^{\S1\bf a}_{\mu\nu}. \end{equation} In $h^{\S1\bf a}_{\mu\nu}$ one can then make the replacement $a^\mu\to F_1^\mu[\gamma]$, with $F_1^\mu[\gamma]$ given by Eq.~\eqref{F1_SC}. This alteration offers a substantial simplification of the coupled field-motion system, with the acceleration and its derivatives appearing nowhere except on the left-hand side of Eq.~\eqref{motion_SC}. \subsection{Prospectus} In the future we expect both our self-consistent and Gralla-Wald-type results to be of use in practice, but for differing purposes. While the self-consistent scheme offers the prospect of long-term accuracy, it has the drawback of requiring an evolution in the time domain: since the trajectory sourcing the field responds dynamically to that field, there is no clear way to avoid solving the coupled field-motion equations time-step by time-step. Therefore, in order to take advantage of the long-term accuracy provided by the self-consistent approximation, one must also achieve the feat of maintaining numerical accuracy on those long timescales of $\sim 10^6$ orbits. 
For that reason, the Gralla-Wald-type scheme, despite its obvious drawback of being valid only on timescales much shorter than an inspiral, will be preferable for many purposes, such as calculating short-term conservative effects and fixing parameters in effective-one-body theory. Furthermore, a Gralla-Wald-type scheme has the distinct advantage of being amenable to treatment in the frequency domain, at least for certain calculations~\cite{Pound:14c}. Warburton and Wardell have recently devised a frequency-domain puncture scheme that should be generalizable to this case~\cite{Warburton-Wardell:14}. However, in extreme cases, such as zoom-whirl orbits in Schwarzschild, which lie near the separatrix between bound and unbound orbits, a self-consistent calculation may be unavoidable, because in such cases the reference geodesic may diverge very rapidly from the accelerated orbit, and the correction $z_1^\mu$ to the position may grow exponentially. The self-consistent formalism also allows one to derive alternative, more easily implemented approximation schemes that preserve long-term accuracy; for example, by starting from the self-consistent equations, one can readily derive a two-timescale expansion of the coupled field-motion problem~\cite{Hinderer-Flanagan:08,Pound:14d}, which should be accurate over a complete inspiral without requiring long-term evolutions in the time-domain. | 14 | 3 | 1403.1843 |
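A closing back-of-envelope for the accuracy timescales quoted above (numbers of our choosing, in units $G=c=M=1$):

\begin{verbatim}
# From dz ~ da t^2: neglecting the second-order force gives da ~ eps^2, so
# dz reaches the size of the first-order correction (~eps) after
# t ~ eps**-0.5, while the second-order equation of motion (da ~ eps^3)
# remains accurate for t ~ 1/eps, the inspiral timescale.
eps = 1e-5                    # assumed EMRI mass ratio
print(f"first-order EOM:  t ~ {eps**-0.5:.1e} M")
print(f"second-order EOM: t ~ {1/eps:.1e} M")
\end{verbatim}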
1403 | 1403.5717_arXiv.txt | We propose a possible explanation for the recent claim of an excess at 3.5 keV in the X-ray spectrum within a minimal extension of the standard model that explains the dark matter and baryon abundance of the universe. The dark matter mass in this model is ${\cal O}({\rm GeV})$ and its relic density has a non-thermal origin. The model includes two colored scalars of ${\cal O}({\rm TeV})$ mass ($X_{1,2}$), and two singlet fermions that are almost degenerate in mass with the proton ($N_{1,2}$). The heavier fermion $N_2$ undergoes radiative decay to the lighter one $N_1$ that is absolutely stable. Radiative decay with a lifetime $\sim 10^{23}$ seconds can account for the claimed 3.5 keV line, which requires couplings $\sim 10^{-3}-10^{-1}$ between $X_{1,2}, ~ N_{1,2}$ and the up-type quarks. The model also gives rise to potentially detectable monojet, dijet, and monotop signals at the LHC. | Recently the XMM-Newton observatory has found an excess at 3.5 keV in the X-ray spectrum of 73 galaxy clusters~\cite{bib:KeVExcess, Boyarsky:2014jta}. If this excess persists, such a photon emission can be the result of the late decay and/or annihilation of multi-keV mass dark matter, or decay of a metastable particle to daughter(s) with a keV mass splitting. A dark-matter-induced 3.5 keV photon excess has recently been studied in scenarios with an extended neutrino sector~\cite{bib:KeVExcess,bib:OtherSterile}, the axion~\cite{pheno:axion} or its supersymmetric partner axino~\cite{pheno:axino}, string moduli~\cite{pheno:moduli}, and annihilation or decay via low-energy effective operators~\cite{pheno:effective}. In this paper, we investigate the possibility of the decay of light nonthermal dark matter, as proposed in previous work~\cite{Bhaskar1}, which connects the DM relic density to baryogenesis. | In conclusion, we have shown that it is possible to explain the recent claim of an excess at 3.5 keV in the X-ray spectrum within a minimal extension of the SM that explains the DM and baryon abundance of the universe from a non-thermal origin. The minimum field content that is required includes two colored scalars $X_{1,2}$ and two singlet fermions $N_{1,2}$. The $N_{1,2}$ fermions are almost degenerate in mass with the proton and are coupled to the up-type quarks through interaction terms $\lambda X^* N u^c$. The lighter singlet $N_1$ is absolutely stable, while the heavier one $N_2$ undergoes radiative decay $N_2 \rightarrow N_1 + \gamma$ with a long lifetime $\sim 10^{23}$ seconds. This decay produces the claimed 3.5 keV photon line for $m_X \sim {\cal O}({\rm TeV})$ and $\lambda \sim 10^{-3}$-$10^{-1}$. The model can also be probed through monojet, dijet, and monotop signals at the LHC. | 14 | 3 | 1403.5717 |
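A quick kinematic cross-check of these numbers (our arithmetic; the keV-scale splitting is an assumption matched to the line energy): for the two-body decay $N_2 \rightarrow N_1 + \gamma$, the photon energy is $E_\gamma=(m_2^2-m_1^2)/2m_2\approx\Delta m$, and a lifetime of $10^{23}$ s means only a tiny fraction of $N_2$ has decayed by today.

\begin{verbatim}
m2 = 0.938                   # GeV, close to the proton mass
dm = 3.5e-6                  # GeV; assumed N2-N1 splitting (3.5 keV)
m1 = m2 - dm
E_gamma = (m2**2 - m1**2)/(2*m2)        # ~ dm for dm << m2
tau, t_universe = 1e23, 4.35e17         # seconds
print(f"E_gamma = {E_gamma*1e6:.3f} keV, "
      f"decayed fraction ~ {t_universe/tau:.1e}")
\end{verbatim}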
1403 | 1403.0074_arXiv.txt | The early 21st century witnesses a dramatic rise in the study of thermal radiation of neutron stars. Modern space telescopes have provided a wealth of valuable information which, when properly interpreted, can elucidate the physics of superdense matter in the interior of these stars. This interpretation is necessarily based on the theory of formation of neutron star thermal spectra, which, in turn, is based on plasma physics and on the understanding of radiative processes in stellar photospheres. In this paper, the current status of the theory is reviewed with particular emphasis on neutron stars with strong magnetic fields. In addition to the conventional deep (semi-infinite) atmospheres, radiative condensed surfaces of neutron stars and "thin" (finite) atmospheres are considered.\\ \noindent{PACS numbers: 97.60.Jd, 97.10.Ex, 97.10.Ld} | Neutron stars are the most compact of all stars ever observed: with a typical mass $M\sim (1$\,--\,$2)\, M_\odot$, where $M_\odot=2\times10^{33}$~g is the solar mass, their radius is $R\approx10$\,--\,13 km. The mean density of such a star is $\sim10^{15}$ \gcc, i.e., a few times the typical density of a heavy atomic nucleus $\rho_0=2.8\times10^{14}$~\gcc. The density at the neutron-star center can exceed $\rho_0$ by an order of magnitude. Such matter cannot be obtained in a laboratory, and its properties still remain to be clarified. Even its composition is not completely known, because neutron stars, despite their name, consist not only of neutrons. There are a variety of theoretical models to describe neutron-star matter (see~\cite{NSB1} and references therein), and a choice in favor of one of them requires an analysis and interpretation of relevant observational data. Therefore, observational manifestations of neutron stars can be used for verification of theoretical models of matter in extreme conditions \cite{Fortov}. Conversely, the progress in studying the extreme conditions of matter provides prerequisites for construction of neutron-star models and adequate interpretation of their observations. A more general review of these problems is given in \cite{P10ufn}. In this paper, I will consider more closely one of them, namely the formation of thermal electromagnetic radiation of neutron stars. Neutron stars are divided into accreting and isolated ones. The former accrete matter from outside, while accretion onto the latter is negligible. There are also transiently accreting neutron stars (X-ray transients), whose active periods (with accretion) alternate with quiescent periods, during which the accretion almost stops. The bulk of radiation from the accreting neutron stars is due to the matter being accreted, which forms a circumstellar disk, accretion flows, and a hot boundary layer at the surface. In contrast, a significant part of radiation from isolated neutron stars, as well as from the transients in quiescence, appears to originate at the surface or in the atmosphere. To interpret this radiation, it is important to know the properties of the envelopes that contribute to the spectrum formation. On the other hand, comparison of theoretical predictions with observations may be used to deduce these properties and to verify theoretical models of the dense magnetized plasmas that constitute the envelopes. We will consider the outermost envelopes of neutron stars -- their atmospheres.
A stellar atmosphere is the plasma layer in which the electromagnetic spectrum is formed and from which the radiation escapes into space without significant losses. The spectrum contains valuable information on the chemical composition and temperature of the surface, the intensity and geometry of the magnetic field, as well as on the stellar mass and radius. In most cases, the density in the atmosphere grows gradually with increasing depth, without a jump, but stars with a very low temperature or a superstrong magnetic field can have a solid or liquid surface. Formation of the spectrum in the presence of such a surface will also be considered in this paper. | \label{sect:concl} We have considered the main features of neutron-star atmospheres and radiating surfaces and outlined the current state of the theory of the formation of their spectra. The observations of bursters and neutron stars in low-mass X-ray binaries are well described by the nonmagnetic atmosphere models and yield ever-improving information on the key parameters such as the neutron-star masses, radii, and temperatures. The interpretation of observations enters a qualitatively new phase, unbound from the blackbody spectrum or the ``canonical model'' of neutron stars. Absorption lines have been discovered in thermal spectra of strongly magnetized neutron stars. On the agenda is their detailed theoretical description, which provides information on the surface composition, temperature and magnetic field distributions. Indirectly it yields information on heat transport and electrical conductivity in the crust, neutrino emission, nucleon superfluidity, and proton superconductivity in the core. To extract this information, a number of problems related to the theory of magnetic atmospheres and radiating surfaces remain to be solved. Let us mention just a few of them. First, the calculations of the quantum-mechanical properties of atoms and molecules in strong magnetic fields beyond the adiabatic approximation have so far been performed only for atoms with $\Znuc\lesssim10$ and for one- and two-electron molecules and molecular ions. The effect of thermal motion on these properties has been rigorously treated only for the hydrogen atom and helium ion, and approximately for the heavier atoms. It is urgent to treat the finite nuclear mass effects for heavier atoms, molecules, and their ions, including not only binding energies and characteristic sizes, but also cross sections of interaction with radiation. This should underlie computations of photospheric ionization equilibrium and opacities, following the technique that is already established for the hydrogen photospheres. In the magnetar photospheres, one can anticipate the presence of a substantial fraction of exotic molecules, including polymer chains. The properties of such molecules and their ions are poorly known. In particular, nearly unknown are their radiative cross sections that are needed for the photosphere modeling. Second, the emissivities of condensed magnetized surfaces have been calculated within the two extreme models of free and fixed ions. It will be useful to do similar calculations using a more realistic description of ionic bonding in magnetized condensed matter. This should be particularly important in the frequency range $\omega\lesssim\omci$, which is observable for the thermal spectrum in superstrong magnetic fields.
Third, the radiative transfer theory, currently used for neutron-star photospheres, assumes that the electron plasma frequency is much smaller than the photon frequencies. In superstrong magnetic fields, this condition is violated in a substantial frequency range. Thus the theory of magnetar spectra requires a more general treatment of radiative transfer in a magnetic field. In conclusion, I would like to thank my colleagues, with whom I had the pleasure of working on some of the problems described in this review: V.G.~Bezchastnov, G.~Chabrier, W.C.G.~Ho, D.~Lai, Z.~Medin, G.G.~Pavlov, Yu.A.~Shibanov, V.F.~Suleimanov, M.~van Adelsberg, J.~Ventura, K.~Werner. My special thanks are to Vasily Beskin, Wynn Ho, Alexander Kaminker, Igor Malov, Dmitry Nagirner, Yuri Shibanov, and Valery Suleimanov for useful remarks on preliminary versions of this article. This work is partially supported by the Russian Ministry of Education and Science (Agreement 8409, 2012), Russian Foundation for Basic Research (Grant 11-02-00253), Programme for Support of the Leading Scientific Schools of the Russian Federation (Grant NSh--294.2014.2), and PNPS (CNRS/INSU, France). | 14 | 3 | 1403.0074 |
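To put rough numbers on the plasma-frequency condition raised in the third problem above (illustrative values of our choosing, not results of the review), one can use the standard formulae $\hbar\omega_{\rm pe}\approx 3.7\times10^{-11}\sqrt{n_e\,[{\rm cm}^{-3}]}$ eV and $\hbar\omega_{\rm ci}\approx 6.3\,(Z/A)\,B_{12}$ eV:

\begin{verbatim}
import math

n_e = 1e26          # cm^-3, assumed dense photosphere layer (~1e2 g/cc, H)
B12 = 1e3           # field in units of 1e12 G, i.e., B = 1e15 G (magnetar)
hw_pe = 3.71e-11*math.sqrt(n_e)   # eV, electron plasma energy
hw_ci = 6.3*B12                   # eV, proton cyclotron energy (Z = A = 1)
print(f"hbar*w_pe ~ {hw_pe:.0f} eV, hbar*w_ci ~ {hw_ci/1e3:.1f} keV")
# both reach the soft-X-ray band, so hbar*w_pe << hbar*w indeed fails there
\end{verbatim}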
1403 | 1403.6786_arXiv.txt | \noindent We consider the effect of a period of inflation with a high energy density upon the stability of the Higgs potential in the early universe. The recent measurement of a large tensor-to-scalar ratio, $r_T \sim 0.16$, by the BICEP-2 experiment possibly implies that the energy density during inflation was very high, comparable with the GUT scale. Given that the standard model Higgs potential is known to develop an instability at $\Lambda \sim 10^{10}$ GeV, the resulting large quantum fluctuations of the Higgs field could destabilize the vacuum during inflation, even if the Higgs field starts at zero expectation value. We estimate the probability of such a catastrophic destabilization in this inflationary scenario and calculate that, for a Higgs mass of $m_h=125.5$ GeV, the top mass must be less than $m_t\sim 172$ GeV. We present two possible cures: a direct coupling between the Higgs and the inflaton and a non-zero temperature from dissipation during inflation. | 14 | 3 | 1403.6786 |
||
1403 | 1403.3395_arXiv.txt | We study the interplay between turbulent heating, mixing, and radiative cooling in an idealized model of cool cluster cores. Active galactic nuclei (AGN) jets are expected to drive turbulence and heat cluster cores. Cooling of the intracluster medium (ICM) and stirring by AGN jets are tightly coupled in a feedback loop. We impose the feedback loop by balancing radiative cooling with turbulent heating. In addition to heating the plasma, turbulence also mixes it, suppressing the formation of cold gas at small scales. In this regard, the effect of turbulence is analogous to thermal conduction. For uniform plasma in thermal balance (turbulent heating balancing radiative cooling), cold gas condenses only if the cooling time is shorter than the mixing time. This condition requires the turbulent kinetic energy to be $\gtrsim$ the plasma internal energy; such high velocities in cool cores are ruled out by observations. The results with realistic magnetic fields and thermal conduction are qualitatively similar to the hydrodynamic simulations. Simulations where the runaway cooling of the cool core is prevented due to {\em mixing} with the hot ICM show cold gas even with subsonic turbulence, consistent with observations. Thus, turbulent mixing is the likely mechanism via which AGN jets heat cluster cores. The thermal instability growth rates observed in simulations with turbulence are consistent with the local thermal instability interpretation of cold gas in cluster cores. | Galaxy clusters are the largest virialized structures ($\sim 10^{14}-10^{15} \msun$) in the universe, consisting of hundreds of galaxies bound by the gravitational pull of dark matter. The intracluster medium (ICM) consists of plasma at the virial temperature, $10^7-10^8$ K. Out of the total mass in galaxy clusters, only $\sim 15\%$ is baryonic matter, the majority ($\gtrsim 80 \%$) of which is in the ICM and only a small fraction ($\lesssim 20 \%$) is in stars (e.g., \citealt{gon07}). The dark matter is responsible for setting up a quasi-static gravitational potential (except during major mergers) in the ICM, which along with cooling and heating decides the thermal and dynamic properties of the ICM (e.g., \citealt{piz05,mcn07,sha12,gas12}). The central number density in a typical ICM ranges from $0.1$ cm\textsuperscript{-3} in peaked clusters to $0.001$ cm\textsuperscript{-3} in non-peaked ones. The plasma in the ICM cools radiatively, so it is a strong source of X-rays with a luminosity of about $10^{43}-10^{46}$ erg s\textsuperscript{-1}. The cooling time in dense central cores of some clusters is a few $100$ Myr, much shorter than the cluster age ($\sim$ Hubble time). However, spectroscopic signatures of cooling (e.g., \citealt{tam01,pet03,pfab06}) and the expected cold gas and young stars are missing (e.g., \citealt{edg01,odea08}). As a result, theoretical models and numerical simulations without additional heating predict excessive cooling and star formation (e.g. \citealt{saro06,li12}). This is the well-known cooling flow problem. The simplest resolution of the lack of cooling is that the ICM is heated. This heating does not significantly increase the temperature of the ICM but instead roughly balances cooling losses in the core. Hence, the ICM is in rough {\em global} thermal equilibrium.
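For orientation, the quoted core cooling times follow from $t_{\rm cool}=1.5(n_e+n_i)k_BT/(n_en_i\Lambda)$ with representative numbers (our choices, not results of this paper):

\begin{verbatim}
kB = 1.38e-16                 # erg/K
ne = ni = 0.1                 # cm^-3, peaked-cluster core (assumed)
T, Lam = 3e7, 1e-23           # K and erg cm^3 s^-1 (assumed)
t_cool = 1.5*(ne + ni)*kB*T/(ne*ni*Lam)
print(f"t_cool ~ {t_cool/3.15e13:.0f} Myr")   # a few hundred Myr
\end{verbatim}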
Possible mechanisms for heating include mechanical energy injection from AGN jets and bubbles (see \citealt{mcn07} for a review), turbulence in the ICM caused by galactic wakes (e.g., \citealt{kim05}), cosmic ray convection (e.g., \citealt{cha07}), and thermal conduction (e.g., \citealt{zak03}, and references therein). While non-feedback processes, e.g., conduction, can contribute to heating, we expect AGN feedback to become dominant in the cluster core (\citealt{guo08}). Moreover, non-feedback heating is globally unstable because of enhanced cooling at lower temperatures (e.g., \citealt{sok03}). The feedback heating mechanism may be outlined as follows. Since cooling occurs in rough equilibrium with heating, only a fraction of the core cools (\citealt{pet03}) via the formation of multiphase gas if the cooling time of the hot gas is sufficiently short (e.g., \citealt{piz05,cav09,sha12}). The cold multiphase gas increases the accretion rate onto the central black hole, which powers a radio jet (e.g., \citealt{cav08}). Close to the black hole the jet is relativistic (e.g., \citealt{bir95,tom10}) but slows down as the low-inertia jet ploughs through the dense ICM core (e.g., \citealt{chu01,guo11}). The irregular jets generate weak shocks and sound waves (e.g., see \citealt{san07} for observations; \citealt{ste09} for simulations), thus heating the ICM. In addition, the buoyant bubbles can drive turbulence in the core (e.g., \citealt{cha07,sha09}), which can heat the cooling ICM by turbulent forcing and by mixing the outer hot gas with the inner cool core. While numerical simulations of feedback jets have recently been successful in demonstrating thermal balance in cool cluster cores for cosmological timescales (e.g., \citealt{dub11,gas12}), there are several puzzles remaining to be answered. The biggest is: what mechanism heats the cool core? Is it turbulent heating, mixing of hotter ICM with the cooler core, or jet-driven weak shocks? This is a challenging question, so we focus on idealized models for the jet and the ICM in this paper. We posit that the anisotropic injection of mechanical energy by AGN jets is effectively (of course with an efficiency factor) converted into isotropic, small-scale turbulence due to a dynamic ICM (e.g., \citealt{hei06,gas12}). Large vorticity and turbulence can be generated when jet-driven shocks interact with preexisting bubbles/cavities (\citealt{fri12}). Thus, in our models energy is deposited via homogeneous, isotropic turbulence, with mechanical energy input. To prevent catastrophic cooling of the core, we balance radiative cooling with turbulent heating. However, local thermal instabilities can still result in the formation of localized multiphase gas, much like what has been observed (e.g. \citealt{fab08}). We investigate two classes of models: first, with uniform initial conditions similar to cool cores; second, with two regions with different densities/temperatures in pressure balance. The first model investigates whether turbulent heating can balance radiative losses in the core and the second one focuses on turbulent mixing of gases at dissimilar temperatures (\citealt{kim03,voi04,den05} have considered phenomenological models of turbulent mixing in the past). We require almost sonic velocities for turbulent heating to balance radiative cooling, but our velocities are much smaller and consistent with observations if turbulent mixing is the primary heating mechanism of the cool core.
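The near-sonic requirement can be previewed with a rough estimate (representative numbers of our choosing): imposing $\rho v_L^3/L=n_en_i\Lambda$ for core-scale driving fixes the balance velocity, and the ratio of cooling to mixing times then shows that cooling blobs are stirred away well before they condense.

\begin{verbatim}
import numpy as np

kB, mp = 1.38e-16, 1.67e-24      # cgs
ne = ni = 0.1                    # cm^-3 (assumed core values)
T, Lam = 3e7, 1e-23              # K, erg cm^3 s^-1
L = 20*3.086e21                  # 20 kpc driving scale, cm
rho = 1.17*ne*mp                 # mass density, fully ionized plasma
vL = (ne*ni*Lam*L/rho)**(1/3)    # velocity required by thermal balance
cs = np.sqrt(5/3*kB*T/(0.6*mp))  # adiabatic sound speed
U = 1.5*(ne + ni)*kB*T           # internal energy density
K = 0.5*rho*vL**2                # kinetic energy density
print(f"Mach = {vL/cs:.2f}, t_cool/t_mix = U/(2K) = {U/(2*K):.0f}")
# t_cool/t_mix > 1: blobs mix before they cool, so multiphase gas in the
# uniform model requires transonic driving.
\end{verbatim}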
While clearly very idealized, our models have minimal adjustable free parameters and are (astro)physically well-motivated. Although we focus on the application to cluster cores, there are obvious connections of our work to the interstellar medium (ISM), where turbulence provides significant pressure support (e.g., \citealt{bou90}). Most studies of the phases of the ISM focus on heating by the photoelectric effect due to stellar and extragalactic UV photons (e.g., \citealt{wol03}). However, turbulent heating and mixing are expected to be important in the ISM (especially in regions shielded from photons); indeed recent numerical simulations try to incorporate the effects of turbulence on the phase structure of the warm/cold ISM (e.g., \citealt{vaz00,aud05}). Our paper follows an approach very similar to \citet{sha10}, but with an important difference. While in \citet{sha10} the feedback energy was directly added to internal energy, here we add the energy in the momentum equation; i.e., work done by turbulent forcing goes into building up kinetic energy at all scales via a turbulent cascade, and is only eventually converted into irreversible heating at small viscous scales. Such mechanical stirring more closely mimics the kinetic feedback of AGN jets in clusters. Very importantly, turbulent stirring also affects the nature of multiphase cooling; only length scales over which the turbulent mixing time is longer than the cooling time are able to condense out of the hot gas. Earlier papers, \citet{mcc12,sha12}, studied models including gravity, with idealized thermal heating balancing cooling at every height. Turbulence generated in these simulations was solely due to buoyancy and cooling; there was no mechanical energy input mimicking jets/bubbles. It was found that mixing generated due to Kelvin-Helmholtz instabilities at the interface of an overdense blob and the ICM prevents runaway cooling of the blob if the ratio of the local thermal instability timescale and the free-fall time ($t_{\rm TI}/t_{\rm ff}$) is smaller than a critical value. This conclusion was confirmed with realistic AGN jet simulations in \citet{gas12}. Our present simulations allow us to take a closer look at multiphase cooling and stirring of cool cluster cores by AGN jets. While it is possible to measure the radiative cooling time ($t_{\rm cool}$) from observations, we cannot measure the thermal instability timescale ($t_{\rm TI}$) directly from observations because it depends on the density dependence of the {\em microscopic} heating rate. Cluster observations show that extended cold gas is seen in clusters with $t_{\rm cool}/t_{\rm ff} \lesssim 10$ (see Fig. 11 in \citealt{mcc12}), which in the context of the above models suggests that $t_{\rm TI} \approx t_{\rm cool}$; this is expected if the effective microscopic heating rate per unit volume is independent of density. We note that a simple density dependence of the microscopic heating rate does not capture the complexity of turbulent heating/mixing, but our numerical experiments are consistent with $t_{\rm TI} \simeq t_{\rm cool}$. Our paper is organized as follows. In section \ref{sec:numerics} we discuss our numerical setup and the associated equations. In section \ref{sec:results} we discuss the results from our simulations, and finally in section \ref{sec:conc} we discuss the astrophysical implications of our results.
| \label{sec:conc} In this paper we focus on turbulent heating/mixing as a mechanism via which the mechanical energy is thermalized, using an idealized well-posed setup adhering to the phenomenological model where cooling in the core is roughly balanced by average energy injected through turbulence. The model assumes a uniform, isotropic distribution of turbulence, and global thermal equilibrium in the ICM core. While non-turbulent mechanisms, such as thermal conduction (thermal conduction is expected to be suppressed substantially because of the wrapping of magnetic fields perpendicular to the radial direction; e.g., \citealt{par09,wag13}) and cosmic ray streaming (e.g., \citealt{guo08a}), can heat the cluster core, AGN-jet-driven turbulence is expected to be the dominant heating mechanism. In reality, the interaction of the AGN jet with the ICM is expected to be rather complicated, but small-scale heating should be qualitatively similar to our idealized model. At large scales buoyancy forces, which are independent of the scale, are important, but as we go to small scales turbulent forcing becomes more important; the effect of global stable stratification is even more easily overcome for a thermally conducting plasma such as the ICM (e.g., Fig. 11 in \citealt{sha09}). In previous idealized models (\citealt{sha10,sha12}) we added heating as a term in the thermal energy equation. This is very idealized because in reality heating involves turbulent motions in a fundamental way. Turbulence can stir up the ICM and suppress the formation of cold gas, especially at small scales (e.g., \citealt{gas13}). In idealized setups turbulence is weaker and is {\em generated by} thermal instability in the presence of gravity, but in reality there is turbulence stirred by AGN jets, which heats and mixes the ICM. \subsection{Cold gas condensation with turbulent heating \& mixing} The formation of cold gas in uniform gas is determined by the ratio of the cooling time $t_{\rm cool}~(\equiv 1.5nk_BT/n_en_i\Lambda$; assuming that the thermal instability timescale $t_{\rm TI} \approx t_{\rm cool}$) and the mixing time $t_{\rm{mix}}$. The ratio $t_{\rm cool}/t_{\rm mix}$ is a scale-dependent quantity, which increases with a decreasing length scale because $t_{\rm mix} \equiv l/v_l$ is shorter at smaller scales ($l$ is the length scale and $v_l$ is the velocity at this scale). If turbulent heating balances cooling, then \be \label{eq:scaling_MP} \dot{E}_{\rm turb} \approx \rho v_L^3/L = \rho v_L^2/ t_{{\rm mix},L} \approx n_i n_e \Lambda = U/t_{\rm cool}, \ee where $L$ is the driving scale and $U=P/(\gamma -1)$ is the internal energy density. This energy balance equation implies that \be \label{eq:cond_MP} t_{\rm cool}/t_{{\rm mix},L} \approx U/2K, \ee where $K \equiv \rho v_L^2/2$ is the kinetic energy density at the driving scale. According to Kolmogorov scaling in subsonic turbulence, for scales smaller than the driving scale, $v_l \propto l^{1/3}$ and $K \propto l^{2/3}$; thus $t_{\rm cool}/t_{\rm mix}$ decreases with an increasing length scale. The scales larger than the driving scale ($l>L$) have negligible velocities; transport at these scales happens due to eddies of size $L$.\footnote{We are grateful to the anonymous referee for drawing our attention to scales larger than $L$, and their importance for generating multiphase gas.
Thus the mixing time at $l \gtrsim L$ is given by $t_{{\rm mix}, l>L} \approx l^2/(L v_L) = (l/L)^2 t_{{\rm mix},L}$, which can be significantly longer than the mixing time at the driving scale.} Multiphase gas can condense out only at scales where $t_{\rm cool}/t_{\rm mix} \lesssim 1$ (otherwise cooling blobs are mixed before they can cool to the stable temperature). Cold gas can condense out most easily at scales larger than the driving scale for $t_{\rm cool} \lesssim t_{{\rm mix}, l>L}$, or equivalently, for $2K/U \gtrsim (L/l)^2$. If the driving scale ($L$) is comparable to the size of the cool core, cold gas condenses only if $2K/U \approx {\cal M}^2 \gtrsim 1$; i.e., if the driving velocity is $\gtrsim$ the sound speed. The size of AGN jets is typically $\gtrsim$ the cluster core, and therefore ${\cal M}^2 \gtrsim 1$ is required for cold gas condensation in a uniformly stirred core. The Mach number pdf at late times in Figure \ref{fig:pdf_fid} indeed shows a broad peak at a Mach number of unity. Because of large turbulent motions, the cold gas in uniform runs comprises large clouds, and not slender filaments as observed in cluster cores. Observations show that the Mach number in cool core clusters (at temperatures traced by diagnostic lines) is $\lesssim 0.4$ (\citealt{wer09,san10}). Therefore, heating of the cool core due to turbulent dissipation (with driving at the scale length of cluster cores) is ruled out. However, mixing of hotter and cooler gas, driven by AGN jet turbulence and thermal conduction, can still heat the cool core without large turbulent velocities, as we discuss later. For lower-mass halos, such as groups and individual galaxies, the ``core'' size is much bigger (e.g., \citealt{sha12b}) than the stirring scale (due to supernovae and AGN) and cold gas can condense out for ${\cal M} < 1$. Turbulent mixing, like thermal conduction, suppresses thermal instability at small scales. Consider a uniform medium where turbulent heating (assuming stirring at the largest scales) balances cooling globally. With conduction, the Field length is the length at which the thermal diffusion timescale equals the thermal instability timescale ($t_{\rm TI}$; Eq. 8 in \citealt{sha10}). The turbulent Field length should be estimated by equating the turbulent mixing time $l/v_l = l^{2/3}L^{1/3}/v_L $ (here we have used Kolmogorov scaling; $v_l^3/l =$ constant, irrespective of scale) and $t_{\rm TI}$; i.e., $l_{F, {\rm turb}} = L^{-1/2} (v_L t_{\rm TI})^{3/2} \approx c_s t_{\rm cool}$ (assuming global thermal balance). Thus, only scales with ${\cal M} \gtrsim 1$ are thermally unstable. If stirring is at scales smaller than the box-size, the mixing time is longer by a factor $(l/L)^2$ for $l>L$, and $l_{F, {\rm turb}} \approx L (t_{\rm cool}/t_{{\rm mix},L})^{1/2} \approx L (U/2K)^{1/2} > L$. \begin{figure*} \centering \includegraphics[scale=0.6]{Mach_pdf.eps} \caption{Probability distribution functions of mass ($\frac{\rm{dM}}{\rm{d} \log _{10} {\cal M}}$) and volume ($\frac{\rm{dV}}{\rm{d} \log _{10} {\cal M}}$) with respect to the Mach number ${\cal M}$ at early and late times for the uniform hydro run (H), the uniform MHD runs with anisotropic conduction (MA), and the mixing MHD run with anisotropic conduction (MAm). The Mach number distribution is bimodal after a thermal instability timescale; the bimodality is sharper for MHD runs with conduction.
\label{fig:pdf_mix}} \end{figure*} To understand the mixing runs, consider a setup with two zones in pressure equilibrium with temperatures $T_0$ (density $n_0$) and $fT_0$ ($n_0/f$); the cooler zone occupies a volume fraction $f_v$. For our mixing runs $f=3$ and $f_v=1/8$. Now we will estimate the turbulent velocities required to balance cooling in the cooler zone. The turbulent energy injection rate per unit volume $\rho v_L^3/L$ (which is equal in hotter and cooler regions) for global thermal balance is $g n_0^2 \Lambda_0$, where $g=\{ f_v + (1-f_v)f^{-3/2}\} $ and we have assumed $\Lambda \propto T^{1/2}$. The net cooling rate of the cooler zone is, therefore, $n_0^2 \Lambda_0[1- g ]$. Now we want to estimate the rate at which turbulent mixing can bring heat from the hotter to the cooler regions. The turbulent velocity in the hot zone is obtained by noting that ${\rho_0 v_{L,{\rm hot}}^3}/(f L) = g n_0^2 \Lambda_0 $. This gives the turbulent velocity on the driving scale in the hot zone, $v_{L,{\rm hot}} \approx (fg)^{1/3} c_{s0}^{2/3} (L/t_{\rm cool,0})^{1/3}$, where $c_{s0}$ is the sound speed in the cooler zone and $t_{\rm cool,0}$ its cooling time. Since the hot zone is overheated and the cooler zone is cooling on average, there is a flow of energy from the hotter to the cooler zone and a flow of mass in the opposite direction. The total energy equation, Eq. \ref{eq:total_energy}, in the absence of magnetic fields and thermal conduction, can be simplified to \be \label{eq:e_simple} \frac{\partial E}{\partial t}+ \vec{\grad} \cdot \{ (E + P) \vec{v} \} = \vec{F}\cdot\vec{v} - \mathcal{L}, \ee where the first term on the right-hand side represents turbulent heating (work done by turbulent force that is dissipated as heat in steady state) and the second term on the left-hand side represents heating due to turbulent mixing. Integrating Eq. \ref{eq:e_simple} over the cooler region and assuming steady state gives $$ \int_0 (E+P) \vec{v} \cdot \vec{dS} = -(1-g)n_0^2 \Lambda_0 V_0, $$ where $V_0$ is the volume of the cooler region. The integral on the left-hand side can be estimated to be $\gamma p (v_{L,{\rm hot}} - v_{L,{\rm cool}}) S_0 /(\gamma-1) \sim (1-f^{-1/3}) (L/L_0) U/t_{\rm mix, hot}$ (the $f^{-1/3}$ factor appears because the velocity in the cooler region is smaller by this factor), where $S_0$ is the surface area of the cooler region and $L_0 \sim V_0/S_0$ is the length-scale of the cooler region, and $t_{\rm mix, hot}= L/v_{L,{\rm hot}}$. The $(L/L_0)$ factor should be replaced by 1 if the driving scale is larger than $L_0$. Assuming the driving scale to be similar to the core size ($L\approx L_0$), the heating rate due to turbulent mixing is $\sim n_0^2 \Lambda_0 (1-f^{-1/3}) (fg)^{1/3} (c_{s0}t_{{\rm cool},0}/L)^{2/3} \sim n_0^2 \Lambda_0 (1-f^{-1/3}) (c_{s0}t_{{\rm cool},0}/L)^{2/3}$, which can be comparable to the cooling rate for subsonic cooling ($c_{s0}t_{{\rm cool},0} \gtrsim L$) in the cooler region (mimicking the core). Here we have assumed that $(fg)^{1/3} \approx 1$; this holds not only for our choice of $f$ and $f_v$ but also for a wide range of reasonable values. Thermal conduction will also transport heat from hotter to cooler regions without turbulence. The Mach numbers in the hotter and cooler regions are $\sim (fg)^{1/3}f^{-1/2} (L/c_{s0} t_{{\rm cool},0})^{1/3}$ and $\sim g^{1/3} (L/c_{s0}t_{{\rm cool},0})^{1/3}$, respectively. For subsonic cooling both zones can have Mach number ${\cal M} \lesssim 1$.
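Plugging the mixing-run parameters into these estimates gives concrete numbers (our evaluation, taking $c_{s0}t_{{\rm cool},0}=10L$ as a representative subsonic-cooling case):

\begin{verbatim}
f, fv = 3.0, 1.0/8.0
g = fv + (1 - fv)*f**-1.5        # heating share for Lambda ~ T^(1/2)
x = 0.1                          # L/(c_s0 t_cool0), assumed
M_hot = (f*g)**(1/3)*f**-0.5*x**(1/3)
M_cool = g**(1/3)*x**(1/3)
heat_over_cool = (1 - f**(-1/3))*(f*g)**(1/3)*x**(-2/3)
print(f"g = {g:.2f}, Mach(hot) = {M_hot:.2f}, Mach(cool) = {M_cool:.2f}, "
      f"mixing heating / cooling ~ {heat_over_cool:.1f}")
# both zones remain comfortably subsonic while mixing supplies a heating
# rate comparable to the cooling rate of the cooler zone
\end{verbatim}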
The cooler zone is thermally unstable (and cooling on average at the beginning) and part of it cools to the thermally stable temperature and the rest is mixed into the hot phase. This is clearly seen by comparing early and late time pdfs in Figure \ref{fig:pdf}. Most importantly, the Mach number in the hot phase for the turbulent mixing runs is small and consistent with the observational limits. Figure \ref{fig:pdf_mix} shows the volume and mass pdfs as a function of the Mach number for the hydro, MHD and the MHD mixing runs at early and later times. At late times the Mach number pdf for the mixing run is peaked at significantly smaller Mach numbers as compared to the hydro runs. The lower Mach number peak, corresponding to the hot phase, is roughly consistent with the velocity constraints in cool core clusters. The higher Mach number peak corresponds to the gas at the thermally stable temperature, and cold filaments can indeed have slightly supersonic velocities. Thus, turbulent mixing of the hot and cooler ICM via AGN jets is a viable source for heating cool cluster cores. \subsection{Density dependence of microscopic heating} Some of the previous work (\citealt{sha10,mcc12,sha12}) added heat in cool core clusters as thermal energy. However, observations of jets expanding in the ICM suggest that heating should be via injection of kinetic energy due to shocks and turbulence. In this paper we have explored the implications of turbulent heating of the ICM. The thermal instability timescale ($t_{\rm TI}$) is not directly measurable from observations (although $t_{\rm cool}$ is) because it depends on the density dependence of the unknown heating function. The internal energy equation is $$ \rho T \frac{ds}{dt} = -n_en_i \Lambda(T) + q^+(n,\vec{r},t), $$ where $s \equiv k_B \ln(P/\rho^\gamma) /[(\gamma-1)\mu m_p]$ is the specific entropy. The isobaric thermal instability timescale for the above form of the heating function is related to the cooling time via \be \label{eq:TI} t_{\rm TI} = \frac{\gamma t_{\rm{cool}}}{2-\frac{\rm{d} \ln \Lambda}{\rm{d} \ln \rm{T}}-\alpha}, \ee where $q^+ \propto n^\alpha$. Thus, $t_{\rm TI} \approx (10/9) t_{\rm cool}$ for $\alpha=0$ and $t_{\rm TI} \approx (10/3) t_{\rm cool}$ for $\alpha=1$ in the free-free regime ($\Lambda \propto T^{1/2}$; see Eq. 19 in \citealt{mcc12} for details). We can thus measure the density dependence of the heating rate by measuring the thermal instability growth rate from numerical simulations. Simulations of cool core clusters in thermal balance (\citealt{sha12}) show that cold gas can condense out of the hot phase only if $t_{\rm TI}/t_{\rm ff} \lesssim 10$. Moreover, observations (see Fig. 11 in \citealt{mcc12}) show that clusters with $t_{\rm cool}/t_{\rm ff} \lesssim 10$ show evidence for extended cold gas filaments.\footnote{The observed critical value of $t_{\rm cool}/t_{\rm ff}$ may actually be close to 20 rather than 10 because \citet{mcc12} interpreted electron pressure as the total pressure in fitting ICM profiles (private communication with M. McCourt). The existence of a critical value is more important than its precise value.} Thus, if thermal instability is responsible for the observed cold gas filaments in clusters, then a comparison of observations and AGN jet simulations with cooling can constrain the microscopic heating mechanism. In particular, if $t_{\rm TI} \approx t_{\rm cool}$ then $\alpha \approx 0$ and the heating rate per unit volume of the core is roughly constant.
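The isobaric relation above is simple to evaluate; the minimal sketch below (ours, for illustration) reproduces the quoted free-free values.
\begin{verbatim}
# Sketch: t_TI / t_cool for a heating rate q+ ~ n^alpha, in the free-free
# regime (d ln Lambda / d ln T = 1/2) with gamma = 5/3.
gamma, slope = 5.0 / 3.0, 0.5

def t_TI_over_t_cool(alpha):
    return gamma / (2.0 - slope - alpha)

for alpha in (0.0, 0.5, 1.0):
    print("alpha =", alpha, " t_TI/t_cool =", round(t_TI_over_t_cool(alpha), 3))
# alpha = 0 -> 10/9 ~ 1.111 and alpha = 1 -> 10/3 ~ 3.333, as quoted in the text.
\end{verbatim}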
We have calculated the thermal instability timescale from our simulations by measuring the growth rate of the rms density perturbations in the linear thermal instability phase (Fig. \ref{fig:rho}). The measured thermal instability timescales and the corresponding $\alpha$ (cf. Eq. \ref{eq:TI}) are listed in Table \ref{tab:tab1}. The measured growth rates for most of our runs are consistent with $\alpha\approx 0$ and a constant heating rate per unit volume. However, there is some variation around this value. We can make a naive estimate of the density dependence of the heating per unit volume. If turbulent mixing in a medium with a background temperature gradient behaves like thermal conduction, then we do not expect growth for modes at scales smaller than the turbulent Field length. However, we do not expect turbulent mixing to affect the thermal instability growth rate at larger scales. Thus, turbulent mixing is expected to have a similar dependence of the heating rate on density as heating due to thermal conduction (see Eq. 8 in \citealt{sha10}); namely, $\alpha \approx 0$. As already mentioned, this is a crude estimate as the process of turbulent mixing/heating is highly nonlinear, and this quantity should be calculated from numerical simulations. Table \ref{tab:tab1} shows that $\alpha \approx 0$ for most of our runs (irrespective of magnetic fields and conduction), a value supported by comparing idealized simulations and cluster observations. \subsection{How do filaments form?} All our simulations, whenever they show cold gas, show it in the form of clouds and not filaments (see Figs. \ref{fig:snap}, \ref{fig:snap_MHD}), but observations of cold gas in cluster cores show filamentary gas (e.g., \citealt{mcd10}). The question is what our simulations are missing that would produce cold filaments. We can think of two effects: first, our simulations do not include gravity, which makes extended cold gas short-lived (being heavier than its surroundings, cold gas falls toward the center on a free-fall timescale) and filamentary because of the ram pressure faced by cold gas falling through the hot ICM (e.g., see the right panel of Fig. 1 in \citealt{mcc12}); second, non-thermal components such as small-scale magnetic fields and (adiabatic) cosmic rays may be required to prevent the collapse of cold gas along the magnetic field lines (this is investigated in more detail in \citealt{sha10}). Also, unlike in our setup with uniform turbulence, cold gas in reality may be condensing out of relatively undisturbed gas. One may naively think that anisotropic thermal conduction can lead to long-lived cold filaments elongated along the local magnetic field direction. In the linear regime anisotropic thermal conduction suppresses the growth of modes along field lines for scales smaller than the Field length, but nonlinearly the cold blobs collapse because radiative cooling overwhelms conductive heating. The non-thermal pressure of cosmic rays (or tangled magnetic fields) can prevent the collapse of cold gas along field lines, provided the cosmic ray diffusion coefficient is $\lesssim 10^{29}$ cm$^2$s$^{-1}$ (\citealt{sha10}). Cosmic rays compressed in the cold, dense gas are required in the hadronic scenario for the gamma rays emitted by the Fermi bubble in the Galactic center (\citealt{cro13} and references therein). Numerical simulations are still far from the stage where they can reproduce the observed morphology of cold filaments in the ICM.
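The growth-rate measurement described above is straightforward to emulate; the sketch below (with a synthetic time series standing in for the simulation data) fits the e-folding time of the rms density perturbations and inverts the isobaric relation for $\alpha$.
\begin{verbatim}
import numpy as np

# Synthetic stand-in for the linear phase: delta_rho_rms ~ exp(t / t_TI).
t_cool = 1.0
t = np.linspace(0.0, 2.0, 50)                        # time in units of t_cool
drho = 1e-3 * np.exp(t / ((10.0 / 9.0) * t_cool))    # pretend alpha = 0 run

# Fit the e-folding time, then invert t_TI = gamma*t_cool / (2 - 1/2 - alpha).
slope = np.polyfit(t, np.log(drho), 1)[0]
t_TI = 1.0 / slope
alpha = 2.0 - 0.5 - (5.0 / 3.0) * t_cool / t_TI
print("t_TI/t_cool =", round(t_TI, 3), " inferred alpha =", round(alpha, 3))
\end{verbatim}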
In conclusion, our numerical simulations show that the scenario in which turbulent heating balances radiative cooling in cluster cores requires, in order to have condensation of cold filaments, a Mach number of order unity (cf. Eq. \ref{eq:cond_MP}). This is clearly ruled out by observations. The scenario in which cool cores are predominantly heated by the mixing of hotter gas with the cooler core due to AGN jets gives reasonable velocities in the hot gas and is consistent with observations (see Fig. \ref{fig:pdf_mix}). This has been pointed out in the past by analytic calculations (e.g., \citealt{den05}). Now that AGN jet simulations have become mature enough to achieve thermal balance in cluster cores, the focus should shift to identifying the mechanism via which AGN jets are able to heat up cluster cores. Our paper is a small step in this direction. | 14 | 3 | 1403.3395 |
1403 | 1403.3676_arXiv.txt | We report on spectra of two positions in the XA region of the Cygnus Loop supernova remnant obtained with the InfraRed Spectrograph on the Spitzer Space Telescope. The spectra span the 10--35\,\mum\ wavelength range, which contains a number of collisionally excited forbidden lines. These data are supplemented by optical spectra obtained at the Whipple Observatory and an archival UV spectrum from the International Ultraviolet Explorer. Coverage from the UV through the IR provides tests of shock wave models and tight constraints on model parameters. Only lines from high ionization species are detected in the spectrum of a filament on the edge of the remnant. The filament traces a 180 \kms\ shock that has just begun to cool, and the oxygen to neon abundance ratio lies in the normal range found for Galactic H~II regions. Lines from both high and low ionization species are detected in the spectrum of the cusp of a shock-cloud interaction, which lies within the remnant boundary. The spectrum of the cusp region is matched by a shock of about 150 \kms\ that has cooled and begun to recombine. The post-shock region has a swept-up column density of about $1.3\times 10^{18}$~cm$^{-2}$, and the gas has reached a temperature of 7000 to 8000~K\@. The spectrum of the Cusp indicates that roughly half of the refractory silicon and iron atoms have been liberated from the grains. Dust emission is not detected at either position. | Supernova remnants (SNRs) play an important role in the life-cycle of dust in the interstellar medium (ISM). As SNR shock waves sweep up interstellar material they heat the gas and dust, and they destroy a significant fraction of the grains, whereby refractory elements are released back into the gas phase. The shock-heated dust emits strongly at infrared (IR) wavelengths and is the major contributor to the total IR flux from remnants \citep{arendt89, saken92}. The IR wavelength regime also contains a number of collisionally excited lines that are emitted by radiative shocks. These lines provide diagnostics for the gas-phase elemental abundances, and a comparison of refractory and non-refractory species can yield a measurement of the efficiency of grain destruction. Thus, IR observations of SNR shocks are crucial for studying the dust destruction process in shocks, and useful for studying the shock properties. Most SNRs in the Galaxy are highly extincted and they cannot be detected at ultraviolet or even optical wavelengths, and in those cases the IR emission offers the only way to study the radiative shocks. The Cygnus Loop, a middle-aged remnant, is an ideal object for the study of SNR shocks. It is bright and it is nearby, so the emitting regions can be studied at high spatial resolution. It is located away from the Galactic mid-plane and the foreground extinction is low, and is therefore observable in the ultraviolet and far-ultraviolet wavelength regimes. The Cygnus Loop exhibits a classical shell morphology at all wavelengths. In the IR, this is clearly seen in images obtained by the \textit{Infrared Astronomical Satellite (IRAS)} \citep{braun86}. \citet{arendt92} carried out an analysis of the \iras\ data, where they decomposed the IR emission into two components, one associated with the X-ray limb of the remnant and the other with the bright optical regions. The two components correspond to non-radiative and radiative shocks, respectively. 
\citet{arendt92} concluded that the component associated with the X-ray limb was due to emission from thermal dust. For the component associated with the optical regions, they estimated that between 10\% and 100\% of the emission in the broad band \iras\ images could be due to IR line emission, as opposed to dust continuum emission. These bright optical regions in the Cygnus Loop provide the opportunity to study the IR emission lines of a radiative shock running into atomic gas. Most IR spectra of SNRs to date pertain to dust emission \citep{sankrit10, winkler13}, shocks in dense molecular clouds \citep{oliva99a, oliva99b, neufeld07, hewitt09}, or shocks in SN ejecta \citep{ghavamian09, rho09, temim06, temim10}. By observing the interaction regions in the Cygnus Loop, we can study the speeds, compositions and swept-up column densities of moderate speed (100--200 \kms\ ) shock waves in interstellar regions. In this paper, we focus on spectra of the well-studied ``XA'' region, obtained with the \textit{Spitzer Space Telescope (Spitzer)}. The XA region is an indentation in the X-ray shell along the southeast perimeter of the Cygnus Loop. It was so named by \citet{hester86} who showed that it was an interaction region between the blast wave and a large cloud. \citet{szentgyorgyi00} obtained narrowband \nev\,$\lambda$3426 images of the Cygnus Loop and in the XA region they identified a ``boundary shock'' - a long \nev\ filament with very little associated H$\alpha$ emission. \citet{danforth01} analyzed optical and ultraviolet data of the XA region and suggested that it was a protrusion on the surface of a much larger cloud. In their picture, the boundary shock is the one traveling through the cavity wall, while a slower shock is being driven into the tip of the finger, and results in bright optical and X-ray emission. Based on spectra obtained with the \textit{Far Ultraviolet Spectroscopic Explorer (FUSE)}, \citet{sankrit07} showed that shocks with velocities spanning the range 120--200\,\kms\ are present in the XA region and that they are effective at liberating silicon from grains. They also showed that the boundary shock has a velocity of $\sim 180$\,\kms. We present \spitzer\ observations and supplementary ground-based optical spectroscopy obtained at the Whipple Observatory in \S2. The IR results are presented in \S3, and the optical results in \S4. The analysis and discussion are presented in \S5, and our conclusions are given in \S6. | The Cygnus Loop XA region provides a rich set of shock excited emission lines across a broad wavelength range. The analysis of the morphologically simple Edge shock can be accomplished with one-dimensional shock models. The emission line spectrum of the more complex interior Cusp shock can be reproduced approximately by a similar simple shock model. However, the limitations of the models imply a corresponding limitation in our interpretation of the shock interaction. The proximity of the Cygnus Loop allows us to study the IR diagnostics of a radiative shock in detail. Our analysis of the XA emission lines will help in establishing robust diagnostics that can be applied to other SNRs, where, presumably, the IR spectra are obtained from regions even more heterogeneous and complex than the Cusp region. Our data shows the lack of dust emission in the mid-IR wavelength range. Even the slower shocks are presumably effective at destroying the smaller grains likely to contribute at these wavelengths. 
To probe the dust and possible molecular content of the XA region, we will need observations at longer wavelengths. The complex morphology of the cloud-shock interaction will be accompanied by comparably complicated kinematics. The identification and use of a kinematic tracer to map the velocity field in the XA region may be useful in disentangling the various shock components contributing to the emission. By combining the IR spectrum of the Cusp with optical and UV spectra, we have obtained tight constraints on the shock speed, pre-shock density, elemental abundances and the column density cut-off, which corresponds to the age of the shock. We find that a speed of about 150 \kms\ is needed to match the high ionization lines, that rather efficient destruction of grains is required to match the abundances of refractory elements, and that an age of about 1000 years matches the column density cut-off and the separation between the Cusp and Edge regions. | 14 | 3 | 1403.3676 |
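As a closing arithmetic check on these Cygnus Loop conclusions, the swept-up column, shock speed, and age are mutually consistent for a plausible pre-shock density; the density value in the sketch below is our assumption, not a number quoted above.
\begin{verbatim}
# Shock age from the swept-up column: t ~ N / (n0 * v_s).
YEAR = 3.156e7            # seconds per year
N = 1.3e18                # swept-up column density [cm^-2]
v_s = 150.0e5             # shock speed, 150 km/s in [cm/s]
n0 = 2.5                  # assumed pre-shock density [cm^-3]

age = N / (n0 * v_s) / YEAR
print("shock age ~", int(age), "yr")   # ~1100 yr, of order the quoted ~1000 yr
\end{verbatim}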
1403 | 1403.6115_arXiv.txt | We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant $\mathcal{O}(v/c)$ terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate $\mathcal{O}(v/c)$ terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 milliseconds after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying ``ray-by-ray'' approach employed by all other groups may be compromising their results. We show that ``ray-by-ray'' calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion. | The mechanism underlying the explosive deaths of massive stars remains poorly understood. \citet{Colg66} first suggested that neutrino energy deposition plays a central role in powering core-collapse supernovae (CCSNe). Ever since, much of the effort in CCSN theory has focused on building increasingly sophisticated neutrino radiation hydrodynamical models, with the hopes of reproducing the properties of CCSNe, including the kinetic energies, debris morphologies, nucleosynthetic yields, and the remnant mass, spin, and velocity distributions. Despite this effort, the best models still fall short of accounting for any of these properties, much less all of them simultaneously. Perhaps even more alarming, the various groups involved in CCSN modeling often reach qualitatively different conclusions with calculations that are ostensibly quite similar, vis-\'{a}-vis whether an explosion even occurs. In \citet{Mull12,Mull12_sasi}, results are reported from 2D axisymmetric modeling with the \textsc{Vertex-CoCoNuT} code, which uses a conformally-flat spacetime approximation of general relativity \citep{Mull10}. They find explosions for $8.1$-$\msun$, $11.2$-$\msun$, $15$-$\msun$, and $27$-$\msun$ progenitors, but, when reported, the explosion energies are $\sim$10 times smaller than the canonical $10^{51}\, {\rm erg}$ energy of typical CCSNe. Similar findings were presented for a variety of other progenitors (Janka et al., Nuclear Astrophysics Workshop, Ringberg Castle, 2014). Recently, \citet{Hank13} reported results from a three-dimensional simulation with the \textsc{Prometheus-Vertex} code of the same $27$-$\msun$ progenitor considered in \citet{Mull12_sasi} and found no explosion.
This negative result has been recapitulated for all other 3D simulations performed recently by this group \citep{Tamb14}, despite their having seen explosions in the corresponding 2D simulations. \citet{Suwa10} reports an explosion of a $13$-$\msun$ progenitor in a 2D simulation and \citet{Taki12} finds explosions of an $11.2$-$\msun$ progenitor in both 2D and 3D. These models neglected the heavy lepton neutrinos, which were recently incorporated with an approximate leakage scheme \citep{Taki13}. In all cases, the $\nu_e$ and $\bar{\nu}_e$ transport was computed using the isotropic diffusion source approximation (IDSA) \citep{Lieb09}, a crude approximation meant to enable multi-D simulations at minimal cost. While interesting, their results are difficult to interpret in the context of the viability of the neutrino mechanism, as the authors acknowledge. Meanwhile, \citet{Brue13} report results of 2D axisymmetric modeling with their \textsc{Chimera} code. They consider $12$-$\msun$, $15$-$\msun$, $20$-$\msun$, and $25$-$\msun$ progenitors from \citet{Woos07} and find explosions in all cases, curiously at almost the same post-bounce time. They also report energies that are somewhat larger than those reported in \citet{Mull12}, but that still fall short of the $10^{51}\, {\rm erg}$ mark. Janka et al. (Nuclear Astrophysics Workshop, Ringberg Castle, 2014) recently reported 2D models of the same four progenitors, and found significantly different results, with, for example, the $12$-$\msun$ model not yet exploding more than $700\ms$ after bounce. Importantly, all of the studies discussed above relied on the so-called ``ray-by-ray-plus'' approximation of neutrino transport, which replaces the real transport problem with a series of independent spherically-symmetric transport solves. This is a crude approximation that introduces large variations in angle and time in the neutrino fluxes and the associated neutrino energy deposition so crucial for the neutrino-driven mechanism. This simplification has yet to be clearly justified, and may be producing qualitatively incorrect results, particularly in 2D. The only calculations ever performed which allow for multidimensional transport were the VULCAN/2D results reported in \citet{Burr06}, \citet{Burr07}, \citet{Ott08}, and \citet{Bran11}, and none of these calculations showed a revival of the stalled shock in 2D by the delayed-neutrino mechanism. The calculations of \citet{Ott08} and \citet{Bran11} were multi-angle as well. However, these calculations were performed without $\mathcal{O}(v/c)$ transport effects \citep{Hube07}. We are, therefore, motivated in this paper to perform new 2D multi-group radiation hydrodynamics calculations with a new code with both multi-D transport (avoiding the simplifications of the ray-by-ray approach) and the velocity-dependent terms to determine whether these earlier results were artifacts of the neglect of $\mathcal{O}(v/c)$ terms, and for comparison with the 2D results of other groups. To accomplish this, we have developed the CASTRO radiation hydrodynamics code. CASTRO contains a multi-group flux-limited neutrino transport solver, is time-dependent and multidimensional, and treats three neutrino species ($\nu_e$, $\bar{\nu}_e$, $\nu_x$, where the $\nu_x$ species includes the $\mu$ and $\tau$ neutrinos and their antiparticles), including all relevant $\mathcal{O}(v/c)$ terms. We find that none of our new 2D calculations, employing the same progenitors as \citet{Brue13}, explode by the delayed-neutrino mechanism.
With this paper, we describe our results and speculate on the reasons for the different outcomes we find. Since all other groups are using the ray-by-ray approach, we suggest that one reason for the different outcomes may be in the handling of multi-D transport. In 2D, the axial sloshing motions, often not seen in 3D \citep{Burr12}, may be reinforcing the errors in the ray-by-ray approach and leading to a qualitatively incorrect outcome. In 3D, these axial sloshing effects are often absent, and the ray-by-ray approach may be less anomalous (due to the greater sphericity of the hydrodynamics), so the lack of explosions seen by the Garching group in 3D, when they observe explosions for the same progenitors in 2D, remains puzzling. | \label{sum} Using our new multi-group, multi-dimensional radiation hydrodynamics code CASTRO, which incorporates all terms to $\mathcal{O}(v/c)$ in the transport and does not make the ray-by-ray approximation employed by all other groups now modeling core-collapse supernovae, we have simulated in two spatial dimensions the dynamics of four progenitor massive star models. One goal was to determine, using a different code, whether the outcome of our previous simulations using the VULCAN/2D methodology \citep{Burr06,Burr07,Ott08} depended upon the absence of the $\mathcal{O}(v/c)$ terms in VULCAN/2D. We have determined that the results are qualitatively the same and, as when employing VULCAN/2D, we do not see explosions by the neutrino heating mechanism up to $\sim$600 milliseconds after bounce. Both codes perform two-dimensional transport, though using a multi-group flux-limited (MGFLD) formulation. This conclusion concerning the overall outcome of these models (i.e., explosion in 2D, driven by neutrino heating) is in contrast with the results of \citet{Brue13} and Janka et al. (Nuclear Astrophysics Workshop, Ringberg Castle, 2014), who also do not agree with each other, but who do obtain neutrino-driven explosions in some or all of their 2D simulations. One is left to ponder the reasons for these remaining differences in the community of researchers engaged in detailed simulations of the core-collapse supernova phenomenon. We have demonstrated that the ray-by-ray approach does not reproduce the correct angular and temporal neutrino field variations, though no one has yet performed the head-to-head ray-by-ray versus correct transport comparisons needed to definitively clarify the impact of the ray-by-ray approximation. We speculate, however, that the combination of the ray-by-ray approach with the artificiality of the axial sloshing effects manifest in 2D simulations may be the reason the groups using ray-by-ray obtain explosions in 2D (when they do). While the ray-by-ray approximation is clearly suspect, there are other differences that may prove to play an important role in producing the range of findings in the community. One might suspect that differences in the neutrino interaction physics may play an important role, but our experimentation indicates that the numerous hydrodynamic, thermal, and radiative feedbacks in the core-collapse problem mute the effects of even large changes in the neutrino-matter cross sections and associated emissivities on the dynamic evolution after collapse. In 1D test calculations in which the $\nu_e$--neutron absorption cross section was changed by a factor of two (both increased and decreased), the resulting stalled shock radii were the same to within a few percent.
Some recent calculations suggest there may be some sensitivity to the choice of equation of state (EOS), with calculations using the Lattimer and Swesty EOS tending to explode more easily than those using the Shen EOS \citep{Janka12,Suwa13,Couch13}. Since both the present study and the VULCAN/2D studies used the Shen EOS and failed to explode, it may prove illuminating to repeat some of these calculations with the Lattimer and Swesty EOS. The effects of general relativity (GR) and the differing fidelity with which they are included in calculations may also contribute \citep[e.g.][]{Mull12}, but note that GR seems not to be generally requisite for explosions, as demonstrated by the 2D $27$-$\msun$ models reported in \citet{Mull12_sasi} that included GR and in \citet{Hank13} that used a monopolar gravity approximation with mock GR corrections, nevertheless transitioning to explosion at nearly the same post-bounce time. The marked difference between 1D and 2D in the early evolution of the shock radius in our models, which we attribute to a vigorous burst of prompt convection seeded by perturbations from our aspherical grid, may also be a concern, but we would expect the memory of this defect to be lost within a few dynamical times ($<100\ms$) as the system dynamically relaxes to a quasi-steady configuration. Differences in the transport algorithms (apart from the ray-by-ray versus multi-D transport issue) could be to blame, and code-to-code comparisons are called for. This was one early motivation for embarking upon this study with CASTRO---to see whether the outcomes were different from those we obtained using VULCAN/2D. But more inter-group comparisons, not just intra-group comparisons, are needed. The fact that the 3D simulations of the Garching group are not exploding when they were in 2D \citep{Hank13} should be a wake-up call to the community to determine the origins of these differences between the various simulation groups and between 2D and 3D. As we have suggested, the use of the ray-by-ray approach is dubious, and since its artificial character is more manifest in 2D we suspect that it is part of the problem. However, this does not explain the current conundrum in 3D---something else may be amiss. It could be that the progenitor models are to blame and a new generation of such models, performed in 3D at the terminal stages of a massive star's life, is needed \citep{Meak11}. It could be that rotation, even the modest rotation expected from the pulsar injection constraint \citep{Emme89}, might, by the resultant centrifugal support and the consequent expected boost in the stalled shock radius, convert duds into explosions. This is the simplest solution, and one is reminded that the exploding model of \citet{Mare09} was rotating. Both large-scale and turbulent magnetic fields could play a role, through the associated stress, but also due to enhanced angular momentum transport from the core to the mantle \citep[e.g.][]{Sawa14}. However, without very rapid rotation, which might be associated with the rare hypernovae \citep{Burr07mhd} that serve as a bridge to the collapsar model of long-soft gamma-ray bursts, there would not seem to be enough extra free energy to power explosions generically. Perturbations of the progenitor cores that collapse have never been properly included in supernova theory, and might be a fruitful line of investigation \citep{Couc13}.
Such perturbations seed the instabilities long identified with more robust dynamics and the viability of the delayed neutrino mechanism. Whatever the solution to this recalcitrant problem, advances in the numerical arts seem destined to play a central role. Approximations have been made by all groups to accommodate the limitations of the available computer resources, leaving one to wonder whether such compromises have corrupted the results. One would hope that simple, compelling reasoning and physical insight could in the end lead to a solution. This has happened before in astrophysics. However, the complexity of the dynamics, the fact that the explosion energy is a small fraction of the available energy, and the circumstance that the central ``engine'' is shrouded in mystery by the profound opacity of the stellar envelope, and, hence, is itself (almost) inaccessible to direct observation or measurement, may militate against a breakthrough unaided by computation. | 14 | 3 | 1403.6115 |
1403 | 1403.1029_arXiv.txt | Testing whether close-in massive exoplanets (hot Jupiters) can enhance the stellar activity in their host primary is crucial for the models of stellar and planetary evolution. Among systems with hot Jupiters, \hd\ is one of the best studied because of its proximity, {strong activity and the presence of a transiting planet, which allows transmission spectroscopy, a measure of the planetary radius and its density.} Here we report on the X-ray activity of the primary star, \hd~A, using a new \xmm\ observation and a comparison with the previous X-ray observations. The spectrum in the quiescent intervals is described by two temperatures at 0.2 keV and 0.7 keV, while during the flares a third component at 0.9 keV is detected. With the analysis of the summed RGS spectra, we obtain estimates of the electron density in the range $n_e = 1.6 - 13 \times 10^{10}$ cm$^{-3}$ and thus the corona of \hd~A appears denser than the solar one. {For the third time, we observe a large flare that occurred just after the eclipse of the planet. Together with the flares observed in 2009 and 2011, the events are restricted to a small planetary phase range of $\phi = 0.55-0.65$. Although we do not find conclusive evidence of a significant excess of flares after the secondary transits, we suggest that the planet might trigger such flares when it passes close to regions of locally high magnetic field of the underlying star at particular combinations of stellar rotational phases and orbital planetary phases. For the most recent flares, a wavelet analysis of the light curve suggests a loop length of four stellar radii at the location of the bright flare, and a local magnetic field of order 40-100 G, in agreement with the global field measured in other studies. The loop size suggests an interaction of magnetic nature between planet and star, separated by only $\sim8 R_*$. The X-ray variability of \hd~A is larger than the variability of field stars and young Pleiades of similar spectral type and X-ray luminosity. We also detect the stellar companion (\hd~B, $\sim12\arcsec$ from the primary star) in this \xmm\ observation. Its very low X-ray luminosity ($L_X = 3.4\times 10^{26}$ erg s$^{-1}$) confirms the old age of this star and of the binary system. The high activity of the primary star is best explained by a transfer of angular momentum from the planet to the star.} | The significant fraction of massive exoplanets (hot Jupiters) that orbit at a few stellar radii from the host primary is a challenge for the models of evolution of such systems. Evidence of star-planet interaction (SPI) is, however, still a matter of debate. To first order, hot Jupiters should affect their host stars through both tidal and magneto-hydrodynamical effects (cf. \citealp{Cuntz2000}; \citealp{Ip2004}). {Both effects should scale with the separation ($d$) between the two bodies as $d^{-3}$ \citep{Saar2004}.} The interaction between the respective magnetic fields of the hot Jupiter and the star may be a source of enhanced activity that could manifest in X-rays. Transfer of angular momentum from the planet to the star during the inward migration and circularization of the orbit might also affect the stellar dynamo efficiency and thus the intensity of the coronal emission in X-rays.
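For orientation, the $\sim8 R_*$ separation quoted in the abstract and the $d^{-3}$ scaling can be put together in a few lines; the stellar radius used below is an assumed value for a K1.5V dwarf, not a number taken from this paper.
\begin{verbatim}
# Sketch: star-planet separation in stellar radii and relative SPI strength.
AU, R_SUN = 1.496e13, 6.96e10      # cm
a = 0.031 * AU                     # orbital separation of HD 189733 b
R_star = 0.76 * R_SUN              # assumed K1.5V stellar radius

print("d / R_* =", round(a / R_star, 1))            # ~8.8 stellar radii

# SPI strength relative to an otherwise identical system at 0.05 AU (d^-3)
print("relative SPI strength:", round((0.05 / 0.031)**3, 1), "x")
\end{verbatim}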
{Evidence of chromospheric activity induced by hot Jupiters in individual cases has been reported first by \citet{Shkolnik03}, with other cases investigated by \citet{Shkolnik05}, \citet{Catala2007}, \citet{Fares2010}, \citet{Fares2012}, \citet{Lanza09}, \citet{Lanza2010}, \citet{Lanza2011}, \citet{Gurdemir2012} and \citet{Shkolnik2013}.} For a large sample, enhanced chromospheric activity in stars with hot Jupiters has been reported by \citet{Krejcova2012}, based on the analysis of Ca H\&K lines. \citet{Kashyap08} showed that stars with hot Jupiters are statistically brighter in X-rays than stars without hot Jupiters. On average \citet{Kashyap08} observed an excess of X-ray emission by a factor of 4 in the hot Jupiter sample. A similar result has been reported by \citet{Scharf2010}, who shows a positive correlation between the stellar X-ray luminosity and the mass of their hot Jupiters. However, the analyses by \citet{Poppenhager2010} and \citet{Poppenhager2011} warn against the biases that could affect the results of \citet{Kashyap08} and \citet{Scharf2010}, and suggest that SPI can take place only in specific systems under favorable conditions. \hd\ is composed of a K1.5V star at only 19.3 pc from the Sun, and an M4 companion at 3200 AU from the primary, orbiting on a plane perpendicular to the line of sight. The primary hosts a hot Jupiter class planet (HD~189733 b) at a distance of only 0.031 AU with an orbital period of $\sim 2.22$d \citep{Bouchy05}. The proximity of \hd\ and its transiting planet allow detailed observations from the IR to X-ray bands, making this one of the best characterized systems with a hot Jupiter. In X-rays \hd\ has been observed with \xmm, \chandra\ and {\em Swift}. In \citet{Pillitteri2010,Pillitteri2011} (hereafter Paper I and II, respectively) we investigated the X-ray emission of \hd\ and discovered signs of tidal/magnetic SPI after the planetary eclipse. In this paper we present the results of the third \xmm\ observation taken at the secondary transit of the hot Jupiter in the \hd\ system. The paper is organized as follows: Sect. 2 presents the target and summarizes our previous results, Sect. 3 describes the \xmm\ observation and the data analysis, Sect. 4 reports our new results, Sect. 5 discusses them, and in Sect. 6 we outline our conclusions. | In this paper we have reported on the third \xmm\ observation of \hd\ at the eclipse of the planet. The main findings of this study are: \begin{itemize} \item We observed a third strong flare after the secondary transit of the planet. \item A wavelet analysis of the light curve reveals that in this case the flaring structure may be as big as four stellar radii. The magnetic field in this loop is in the range 40-110 G, in agreement with the estimates of the global magnetic field of the star derived from spectropolarimetry \citep[see][]{Fares2010}. \item This large length suggests an origin due to magnetic interaction between the star and the close-in planet. Qualitatively speaking, a magnetic field associated with the planet can exert a force on the plasma and the coronal loop when the planet passes close to regions of the stellar surface with enhanced magnetic field. \item The detection of the M-type companion \hd~B at a level of $3.4\times10^{26}$ erg/s confirms the very old age of this star, at odds with the age of the primary estimated from gyrochronology. \item The discrepancy of age between \hd~A and \hd~B hints at a tidal interaction of the main star with its hot Jupiter and the transfer of angular momentum.
\item The updated analysis of RGS data confirms the dense corona of \hd, similar to those of more X-ray luminous and active stars. \end{itemize} In the introduction we have discussed that star-planet interaction could be either of gravitational/tidal or magnetic origin. The difference of age estimates from magnetic activity indicators between the primary and secondary components of the \hd\ system implies that a tidal transfer of angular momentum must be occurring. The additional suggestion here is that big flares are seen at specific planetary phases and stellar longitudes. More importantly, the discovery of a magnetic loop of the order of a few stellar radii, and thus a significant fraction of the star-planet separation, provides very strong, although not decisive, evidence of magnetic star-planet interaction as well. | 14 | 3 | 1403.1029 |
1403 | 1403.0675_arXiv.txt | We investigate the modified $F(R)$ gravity theory with the function $F(R) = (1-\sqrt{1-2\lambda R-\sigma (\lambda R)^2})/\lambda$. The action converts into the Einstein$-$Hilbert action at small values of $\lambda$ and $\sigma$. Local tests give a bound on the parameters, $\lambda(1+\sigma)\leq 2\times 10^{-6}$ cm$^2$. The Jordan and Einstein frames are considered, and the potential and the mass of the scalar field are obtained. The constant curvature solutions of the model are found. It is demonstrated that the de Sitter space is unstable but a solution with zero Ricci scalar is stable. The cosmological parameters of the model are evaluated. Critical points of the autonomous equations are obtained and described. | One way to explain inflation and the present-time acceleration of the Universe is to modify the Einstein$-$Hilbert (EH) action of general relativity (GR) theory. Here we consider the $F(R)$ gravity model, replacing the Ricci scalar $R$ in the EH action by the function $F(R)$. Such an $F(R)$ gravity model can be an alternative to the $\Lambda$-Cold Dark Matter ($\Lambda$CDM) model, where the cosmic acceleration appears due to modified gravity. Thus, instead of introducing the cosmological constant $\Lambda$ (which has the problem of explaining the smallness of $\Lambda$) to describe dark energy (DE), we consider new gravitational physics. The requirement of classical and quantum stability leads to the conditions $F'(R)>0$, $F''(R)>0$ (Appleby et al. 2010), where the primes mean the derivatives with respect to the argument. These conditions do not fix the function and, therefore, there are various suggestions for the form of the function $F(R)$ in the literature. It should be mentioned that the first successful models of $F(R)$ gravity were given in Hu (2007), Appleby and Battye (2007), and Starobinsky (2007). The modified gravitational theories $f(R,T)$ with non-minimal curvature-matter coupling, where the gravitational Lagrangian is given by an arbitrary function of the Ricci scalar $R$ and of the trace of the stress-energy tensor $T$, were considered by Harko (2011), Sharif (2014), Zubair (2015), Noureen (2015). The possibility of a transition from a decelerating to an accelerating phase was demonstrated in some $f(R,T)$ models. In this paper we investigate the Born$-$Infeld (BI) type Lagrangian with the particular function $F(R)= \left(1-\sqrt{1-2\lambda R-\sigma\left(\lambda R\right)^2}\right)/ \lambda$ introducing two scales. In BI electrodynamics there are no divergences connected with point-like charges and the self-energy is finite (Born and Infeld (1934a), Born and Infeld (1934b), Born and Infeld (1935), Plebanski (1970)). In addition, a BI type action appears naturally within string theory. Thus, the low energy D-brane dynamics is governed by a BI type action (Fradkin and Tseytlin (1985)). These two attractive aspects of BI type theories are the motivation to consider BI-like gravity. In Kruglov (2010) we considered modified BI electrodynamics with two constants. The model under consideration is the gravitational analog of generalized BI electrodynamics with two scales. It should also be mentioned that there are difficulties in quantizing $F(R)$ gravity because it is a higher derivative (HD) theory. In HD theories there are additional degrees of freedom and ghosts are present, so that the unitarity of the theory is questionable.
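The small-$\lambda$ limit stated in the abstract is easy to verify symbolically; the sketch below (ours, not from the paper) expands $F(R)$ and shows that the leading correction to the EH term is $\lambda(1+\sigma)R^2/2$, precisely the combination bounded by the local tests.
\begin{verbatim}
import sympy as sp

lam, sig, R = sp.symbols('lambda sigma R', real=True)
F = (1 - sp.sqrt(1 - 2*lam*R - sig*(lam*R)**2)) / lam

# Expand around lambda = 0: F(R) = R + lambda*(1 + sigma)*R**2/2 + O(lambda**2)
print(sp.series(F, lam, 0, 2).removeO().expand())
\end{verbatim}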
In addition, corrections due to one-loop divergences, introduced by renormalization, contain a scalar curvature squared ($R^2$) and the Ricci tensor squared ($R_{\mu\nu}R^{\mu\nu}$). As a result, $F(R)$ gravity theories are not renormalizable. At the same time, $F(R)$ gravity is the phenomenological model, and can give a description of the Universe evolution including the inflation and the late-time acceleration, modifies gravitational physics, and is an alternative to the $\Lambda$CDM model. The first model including $R^2$ term in the Lagrangian, and describing the self-consistent inflation, was given in Starobinsky (2007). The paper is organized as follows. In Sec. 2, we consider a model of $F(R)$ gravity with the BI-like Lagrangian density with two scales. A bound on the parameters $\lambda$ (with the dimension (length)$^2$) and $\sigma$ (the dimensionless parameter) is obtained. Constant curvature solutions corresponding the de Sitter space are obtained. In Sec. 3, the scalar-tensor form of the model is studied, the potential of the scalar degree of freedom and the mass are found, and the plots of the functions $\phi(\lambda R)$, $V(\lambda R)$, and $m^2_{\phi}(\lambda R)$ are given for $\sigma=-0.9$. We show that the de Sitter phase is unstable and the flat space (a solution with the zero curvature scalar) is stable. The slow-roll cosmological parameters of the model under consideration are evaluated and the plots of functions $\epsilon(\lambda R)$, $\eta(\lambda R)$ are given in Sec. 4. In Sec. 5 critical points of autonomous equations are investigated. The function m(r) characterizing the deviation from the $\Lambda$CDM model is evaluated and the plot is presented. A particular case of the model with the parameter $\sigma=0$ is studied in subsection 5.1 in details. The results obtained are discussed in Sec. 6. The Minkowski metric $\eta_{\mu\nu}$=diag(-1, 1, 1, 1) is used and we assume $c$=$\hbar$=1. | 14 | 3 | 1403.0675 |
|
1403 | 1403.5266_arXiv.txt | We derive new constraints on the mass, rotation, orbit structure and statistical parallax of the Galactic old nuclear star cluster and the mass of the supermassive black hole. We combine star counts and kinematic data from \citet{fc2014}, including 2'500 line-of-sight velocities and 10'000 proper motions obtained with VLT instruments. We show that the difference between the proper motion dispersions $\sigma_l$ and $\sigma_b$ cannot be explained by rotation, but is a consequence of the flattening of the nuclear cluster. We fit the surface density distribution of stars in the central $1000''$ by a superposition of a spheroidal cluster with scale $\sim 100''$ and a much larger nuclear disk component. We compute the self-consistent two-integral distribution function $f(E,L_z)$ for this density model, and add rotation self-consistently. We find that: (i) The orbit structure of the $f(E,L_z)$ gives an excellent match to the observed velocity dispersion profiles as well as the proper motion and line-of-sight velocity histograms, including the double-peak in the $v_l$-histograms. (ii) This requires an axial ratio near $q_1=0.7$ consistent with our determination from star counts, $q_1 = 0.73 \pm 0.04$ for $r<70''$. (iii) The nuclear star cluster is approximately described by an isotropic rotator model. (iv) Using the corresponding Jeans equations to fit the proper motion and line-of-sight velocity dispersions, we obtain best estimates for the nuclear star cluster mass, black hole mass, and distance ${M_*}(r\!<\!100'')\!=\!(8.94\!\pm\! 0.31{|_{\rm stat}} \!\pm\!0.9{|_{\rm syst}})\!\times\! {10^6}{M_\odot}$, ${M_\bullet } \!=\! (3.86\!\pm\!0.14{|_{\rm stat} \!\pm\! 0.4{|_{\rm syst}}}) \!\times\! {10^6}{M_\odot }$, and ${R_0} \!=\! 8.27 \!\pm\! 0.09{|_{\rm stat}}\!\pm\! 0.1{|_{\rm syst}}$ kpc, where the estimated systematic errors account for additional uncertainties in the dynamical modeling. (v) The combination of the cluster dynamics with the S-star orbits around Sgr A$^*$ strongly reduces the degeneracy between black hole mass and Galactic centre distance present in previous S-star studies. A joint statistical analysis with the results of \citet{ge2009} gives ${M_\bullet } \!=\! (4.23\!\pm\!0.14)\!\times\! {10^6}{M_\odot}$ and ${R_0} \!=\! 8.33 \!\pm\! 0.11$ kpc. | Nuclear star clusters (NSC) are located at the centers of most spiral galaxies \citep{carollo1997,boeker2002}. They are more luminous than globular clusters \citep{boeker2004}, have masses of order $\sim10^6-10^7 M_\odot$ \citep{walcher2005}, have complex star formation histories \citep{rossa2006,seth2006}, and obey scaling-relations with host galaxy properties as do central supermassive black holes \citep[SMBH;][]{ferrarese2006,wehner2006}; see \citet{boeker2010} for a review. Many host an AGN, i.e., a SMBH \citep{seth2008}, and the ratio of NSC to SMBH mass varies widely \citep{graham2009, kormendy2013}. The NSC of the Milky Way is of exceptional interest because of its proximity, about 8 kpc from Earth. It extends up to several hundred arcsecs from the center of the Milky Way (Sgr A*) and its mass within 1 pc is $\sim 10^6M_\odot$ with $\sim50\%$ uncertainty \citep{sm2009,geisen2010}. There is strong evidence that the center of the NSC hosts a SMBH of several million solar masses. Estimates from stellar orbits show that the SMBH mass is ${M_\bullet } = (4.31 \pm 0.36)\times{10^6}{M_\odot}$ \citep{schoe2002,ghez2008,ge2009}. 
Due to its proximity, individual stars can be resolved and number counts can be derived; however, due to the strong interstellar extinction the stars can only be observed in the infrared. A large number of proper motions and line-of-sight velocities have been measured, and analyzed with spherical models to attempt to constrain the NSC dynamics and mass \citep{hr1996,gt1996,genz2000,tg2008,sm2009,fc2014}. The relaxation time of the NSC within 1 pc is ${t_r} \sim{10^{10}}$ yr \citep{a2005, m2013}, indicating that the NSC is not fully relaxed and is likely to be evolving. One would expect from theoretical models that, if relaxed, the stellar density near the SMBH should be steeply-rising and form a \citet{bw1976} cusp. In contrast, observations by \citet{dg2009,b2009,b2010} show that the distribution of old stars near the SMBH appears to have a core. Understanding the nuclear star cluster dynamics may therefore give useful constraints on the mechanisms by which it formed and evolved \citep{m2010}. In this work we construct axisymmetric Jeans and two-integral distribution function models based on stellar number counts, proper motions, and line-of-sight velocities. We describe the data briefly in Section~\ref{sDataset}; for more detail the reader is referred to the companion paper of \citet{fc2014}. In Section~\ref{sSpherical} we carry out a preliminary study of the NSC dynamics using isotropic spherical models, in view of understanding the effect of rotation on the data. In Section \ref{sAxis} we describe our axisymmetric models and show that they describe the kinematic properties of the NSC exceptionally well. By applying a $\chi^2$ minimization algorithm, we estimate the mass of the cluster, the SMBH mass, and the NSC distance. We discuss our results and summarize our conclusions in Section~\ref{s_discussion}. The Appendix contains some details on our use of the \cite{qh1995} algorithm to calculate the two-integral distribution function for the fitted density model. \section[]{DATASET} \label{sDataset} We first give a brief description of the data set used for our dynamical analysis. These data are taken from \citet{fc2014} and are thoroughly examined in that paper, which should be consulted for more details. The coordinate system used is a shifted Galactic coordinate system ($l^*,b^*$) where Sgr A* is at the center and ($l^*,b^*$) are parallel to Galactic coordinates ($l,b$). In the following we always refer to the shifted coordinates but will omit the asterisks for simplicity. The dataset consists of stellar number densities, proper motions and line-of-sight velocities. We use the stellar number density map rather than the surface brightness map because it is less sensitive to individual bright stars and non-uniform extinction. The stellar number density distribution is constructed from NACO high-resolution images for $R_{\rm box}<20''$, in a similar way as in \citet{schoedel2010}, from HST WFC3/IR data for $20''<R_{\rm box}<66''$, and from VISTA-VVV data for $66''<R_{\rm box}<1000''$. The kinematic data include proper motions for $\sim$10'000 stars obtained from AO assisted images. The proper motion stars are binned into 58 cells \citep[Figure \ref{plot_5};][]{fc2014} according to distance from Sgr A* and the Galactic plane. This binning assumes that the NSC is symmetric with respect to the Galactic plane and with respect to the $b$-axis on the sky, consistent with axisymmetric dynamical modeling. 
The sizes of the bins are chosen such that all bins contain comparable numbers of stars, and the velocity dispersion gradients are resolved, i.e., vary by less than the error bars between adjacent bins. Relative to the large velocity dispersions at the Galactic center (100 km/s), measurement errors for individual stars are typically $\sim10\%$, much smaller than in typical globular cluster proper motion data where they can be $\sim50\%$ (e.g., in Omega Cen; \cite{v2006}). Therefore corrections for these measurement errors are very small. We also use $\sim$2'500 radial velocities obtained from SINFONI integral field spectroscopy. The binning of the radial velocities is shown in Fig.~\ref{plot_6}. There are 46 rectangular outer bins as shown in Fig.~\ref{plot_6} plus 6 small rectangular rings around the center \citep[not shown; see App.~E of][]{fc2014}. Again the outer bins are chosen such that they contain similar numbers of stars and the velocity dispersion gradients are resolved. The distribution of radial velocity stars on the sky is different from the distribution of proper motion stars, and it is not symmetric with respect to $l=0$. Because of this and the observed rotation, the binning is different, and extends to both positive and negative longitudes. Both the proper motion and radial velocity binning are also used in \citet{fc2014} and some tests are described in that paper. Finally, we compare our models with (but do not fit to) the kinematics derived from about 200 maser velocities at $r > 100''$ \citep[from][]{lindquist1992,deguchi2004}. As for the proper motion and radial velocity bins, we use the mean velocities and velocity dispersions as derived in \citet{fc2014}. The assumption that the NSC is symmetric with respect to the Galactic plane and the $b=0$ axis is supported by the recent Spitzer/IRAC photometry \citep{schodel2014} and by the distribution of proper motions \citep{fc2014}. The radial velocity data at intermediate radii instead show an apparent misalignment with respect to the Galactic plane, by $\sim 10^{\circ}$; see \citet{feldmeier2014} and \citet{fc2014}. We show in Section~\ref{s_distance} that, even if confirmed, such a misaligned structure would have minimal impact on the results obtained here with the symmetrised analysis. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_5.eps} \caption{Binning of the proper motion velocities. The stars are binned into cells according to their distance from Sgr A* and their smallest angle to the Galactic plane \citep{fc2014}.} \label{plot_5} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_6.eps} \caption{Binning of the line-of-sight velocities. The stars are binned into 46 rectangular outer cells plus 6 rectangular rings at the center. The latter are located within the white area around $l\!=b\!=0$ and are not shown in the plot; see App.~E of \citet{fc2014}.} \label{plot_6} \end{figure} \section[]{SPHERICAL MODELS OF THE NSC} \label{sSpherical} In this section we study the NSC using the preliminary assumption that the NSC can be described by an isotropic distribution function (DF) depending only on energy. We use the DF to predict the kinematical data of the cluster. Later we add rotation self-consistently to the model. The advantages of using a distribution function instead of common Jeans modeling are that (i) we can always check if a DF is positive and therefore if the model is physical, and (ii) the DF provides us with all the moments of the system. 
For the rest of the paper we use $(r,\theta ,\varphi )$ for spherical and $(R,\varphi ,z)$ for cylindrical coordinates, with $\theta=0$ corresponding to the z-axis normal to the equatorial plane of the NSC. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_1.eps} \caption{A combination of two $\gamma$-models gives an accurate approximation to the spherically averaged number density of late-type stars versus radius on the sky (points with error bars). Blue line: inner component, purple line: outer component, brown line: both components.} \label{plot_1} \end{figure} \subsection[]{Mass model for the NSC} \label{oneIntegralDF} The first step is to model the surface density. We use the well-known one-parameter family of spherical $\gamma$-models \citep{d1993}: \begin{align} \rho_{\gamma} (r) = \frac{{3 - \gamma }}{{4\pi }}\frac{{M\,a }} {{{r^\gamma }{{(r + a)}^{4 - \gamma }}}}\,,\,0 \le \gamma < 3 \end{align} where $a$ is the scaling radius and $M$ the total mass. The model behaves as $\rho \sim {r^{ - \gamma }}$ for ${r \to 0}$ and $\rho \sim {r^{ - 4}}$ for $r \to \infty$. Dehnen $\gamma$ models are equivalent to the $\eta$-models of \cite{tr1994} under the transformation $\gamma=3-\eta$. Special cases are the \cite{j1983} and \cite{h1990} models for $\gamma=2$ and $\gamma=1$, respectively. For $\gamma=3/2$ the model approximates the de Vaucouleurs $R^{1/4}$ law. In order to improve the fitting of the surface density we use a combination of two $\gamma$-models, i.e. \begin{align} \label{eqDenSph} \rho(r) = \sum\limits_{i = 1}^2 {\frac{{3 - {\gamma_i}}}{{4\pi }} \frac{{M_i\,a_i }}{{{r^{{\gamma_i}}}{{(r + a_i )}^{4 - {\gamma_i}}}}}}. \end{align} The use of a two-component model will prove convenient later when we move to the axisymmetric case. The projected density is \begin{align} \label{eqSDensity} \Sigma (R_s) = 2\int_{R_s}^\infty {{\rho}(r)r} /{({r^2} - {R_s^2})^{1/2}}dr \end{align} and can be expressed in terms of elementary functions for integer $\gamma$, or in terms of elliptic integrals for half-integer $\gamma$. For arbitrary $\gamma_1$ and $\gamma_2$ the surface density can only be calculated numerically using equation~(\ref{eqSDensity}). The surface density diverges for $\gamma>1$ but is finite for $\gamma<1$. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_2.eps} \caption{Isotropic DF for the two-component spherical model of the NSC in the joint gravitational potential including also a central black hole. Parameters for the NSC are as given in (\ref{sphParams}), and ${M_ \bullet }/(M_1 + M_2)=1.4\times10^{-3}$.} \label{plot_2} \end{figure} The projected number density profile of the NSC obtained from the data of \cite{fc2014} (see Section~\ref{sDataset}) is shown in Figure \ref{plot_1}. The inflection point at $R_s \sim 100''$ indicates that the NSC is embedded in a more extended, lower-density component. The surface density distribution can be approximated by a two-component model of the form of equation~(\ref{eqDenSph}), where the six parameters $({\gamma_1},{M_1},{a_1},\,{\gamma_2},{M_2},{a_2})$ are fitted to the data subject to the following constraints: The slope of the inner component should be ${\gamma_1}>0.5$ because isotropic models with a black hole and ${\gamma_1}<0.5$ are unphysical \citep{tr1994}, but it should be close to the limiting value of 0.5 to better approximate the observed core near the center \citep{b2009}. For the outer component $\gamma_2\ll 0.5$ so that it is negligible in the inner part of the density profile.
In addition $M_1<M_2$ and $a_1< a_2$. With these constraints we start from initial values for the parameters and iteratively minimize $\chi^2$. The reduced $\chi^2$ resulting from this procedure is $\chi^2/\nu=0.93$ for $\nu = 55$ d.o.f., and the corresponding best-fit parameter values are: \begin{align} \label{sphParams} \begin{array}{*{20}{c}} {{\gamma_1} = 0.51\,}&{a_1 = 99''}\\ {{\gamma_2} = 0.07}&{a_2 = 2376''} \end{array}\,\,\,\,\frac{{M_2}}{{M_1}} = 105.45. \end{align} Here we quote only the mass ratio rather than absolute values in model units, since the shape of the model depends only on this ratio. The surface density of the final model is overplotted on the data in Figure \ref{plot_1}. \subsection{Spherical model} With the assumption of a constant mass-to-light ratio and the addition of the black hole, the potential ($\Phi=-\Psi$) will be \citep{d1993} \begin{equation} \Psi(r) = \sum\limits_{i = 1}^2 \frac{G M_i}{a_i}\,\frac{1}{2 - \gamma_i} \left( 1 - \left( \frac{r}{r + a_i} \right)^{2 - \gamma_i} \right) + \frac{G M_\bullet}{r} \end{equation} where ${M_ \bullet }$ is the mass of the black hole. Since we now know the potential and the density, we can calculate the distribution function (DF) numerically using Eddington's formula, as a function of positive energy $E=\Psi- \frac{1}{2}{\upsilon^2}$, \begin{align} f(E) = \frac{1}{\sqrt{8}\,\pi^2}\left[ \int_0^E \frac{d\Psi}{\sqrt{E - \Psi}}\,\frac{d^2\rho}{d\Psi^2} + \frac{1}{\sqrt{E}}\left( \frac{d\rho}{d\Psi} \right)_{\Psi = 0} \right]. \end{align} The second term vanishes for reasonable behavior of the potential, and the second derivative inside the integral can be calculated easily by using the transformation \begin{align} \label{transformation} \frac{{{d^2}\rho }}{{d{\Psi^2}}} = \left[ { - {{\left( {\frac{{d\Psi }}{{dr}}} \right)}^{ - 3}}\frac{{{d^2}\Psi }}{{d{r^2}}}} \right] \frac{{d\rho }}{{dr}} + {\left( {\frac{{d\Psi }}{{dr}}} \right)^{ - 2}}\frac{{{d^2}\rho }}{{d{r^2}}}. \end{align} Figure \ref{plot_2} shows the DF of the two components in their joint potential, plus that of a black hole with mass ratio ${M_ \bullet }/(M_1 + M_2)=1.4\times 10^{-3}$. The DF is positive for all energies. We can test the accuracy of the DF by retrieving the density using \begin{align} \label{eqDFdenSph} \rho (r) = 4\pi \int\limits_0^\Psi {dE\, f(E )\sqrt {2(\Psi-E)}} \end{align} and comparing it with equation~(\ref{eqDenSph}). Both agree to within $0.1\%$. The DF has the typical shape of models with a shallow cusp ($\gamma<\frac{3}{2}$). It decreases as a function of energy in the neighborhood of the black hole, i.e., for large energies, and it has a maximum near the binding energy of the stellar potential well \citep{bd2005}. For a spherical isotropic model the velocity ellipsoid \citep{bt2008} is a sphere of radius $\sigma$. The intrinsic dispersion $\sigma$ can be calculated directly using \begin{align} \label{eqSDis} {\sigma^2}(r) = \frac{{4\pi }}{{3\rho (r)}}\int_0^{\sqrt{2\Psi}} {d\upsilon\, {\upsilon^4}f(\Psi - {\textstyle{1\over2}}{\upsilon^2})}. \end{align} The projected dispersion is then given by: \begin{align} \label{eqPDis} \Sigma ({R_s})\sigma_P^2({R_s}) = 2\int_{{R_s}}^\infty {{\sigma^2}(r)\frac{{\rho (r)\,r}}{{\sqrt {{r^2} - R_s^2} }}dr}.
\end{align} In Figure \ref{plot_3} we see how our two-component model compares with the kinematical data, using the values $R_0 = 8$ kpc for the distance to the Galactic centre, ${M_\bullet} = 4\times{10^6}{M_ \odot }$ for the black hole mass, and ${M_*}(r < 100'') = 5\times{10^6}{M_\odot}$ for the cluster mass inside $100''$. The good match to the data out to $80''$ suggests that the assumption of a constant mass-to-light ratio for the cluster is reasonable. Later on we will see that a flattened model gives a much better match also for the maser data. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_3.eps} \caption{Line-of-sight velocity dispersion $\sigma_{\rm los}$ of the two-component spherical model with black hole, compared to the observed line-of-sight dispersions (black) and the proper motion dispersions in $l$ (red) and $b$ (blue). The line-of-sight data include the outer maser data, and for the proper motions a canonical distance of $R_0=8$ kpc is assumed.} \label{plot_3} \end{figure} \subsection{Adding self-consistent rotation to the spherical model} We describe here the effects of adding self-consistent rotation to the spherical model, but much of this also applies to the axisymmetric case, which will be discussed in Section~\ref{sAxis}. We assume that the rotation axis of the NSC is aligned with the rotation axis of the Milky Way disk. We also use a Cartesian coordinate system $(x,y,z)$ where $z$ is parallel to the axis of rotation as before, $y$ is along the line of sight, and $x$ is along the direction of negative longitude, with the center of the NSC located at the origin. The proper motion data are given in Galactic longitude $l$ and Galactic latitude $b$ angles, but because of the large distance to the center, we can assume that $x\parallel l$ and $z\parallel b$. Whether a spherical system can rotate has been answered in \cite{l1960}; here we give a brief review. Rotation in a spherical or axisymmetric system can be added self-consistently by reversing the sense of rotation of some of its stars; in doing so, the system remains in equilibrium. This is equivalent to adding to the DF a part that is odd with respect to $L_z$. The addition of an odd part does not affect the density (or the mass), because the integral of the odd part over velocity space is zero. The most effective way to add rotation to a spherical system is by reversing the sense of rotation of all of its counter-rotating stars. This corresponds to adding $f_{-}(E,L^2,L_z) = {\rm sign}({L_z})f(E,L^2)$ \citep[Maxwell's daemon,][]{l1960} to the initially non-rotating DF, and generates a system with the maximum allowable rotation. The general case of adding rotation to a spherical system can be written $f'(E,L^2,L_z) = (1 + g({L_z}))f(E,L^2)$, where $g({L_z})$ is an odd function with $\max |g({L_z})| < 1$ to ensure positivity of the DF. We notice that the new distribution function is a three-integral DF. In this case the density of the system is still rotationally invariant, but the DF is no longer invariant under $L_z \to -L_z$. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{plots/plot_4.eps} \end{center} \caption{Mean line-of-sight velocity data compared to the prediction of the two-component spherical model with added rotation, for $F=-0.90$ and two $\kappa$ values for illustration. Each data point corresponds to a cell from Figure \ref{plot_6}. Velocities at negative $l$ have been folded over with their signs reversed and are shown in red. The plot also includes the maser data at $R_s > 100''$.
The model prediction is computed for $b=20''$. For comparison, cells with centers between $b=15''$ and $b=25''$ are highlighted with filled triangles.} \label{plot_4} \end{figure} In Figure \ref{plot_3} we notice that the projected velocity dispersion in the $l$ direction is larger than the dispersion in the $b$ direction, which was first found by \cite{tg2008}. This is particularly apparent at distances larger than $10''$. A heuristic attempt to explain this difference was made in \cite{tg2008}, where a rotation of the form ${\upsilon_\varphi }(r,\theta)$ was imposed along with their Jeans modeling, as a proxy for axisymmetric modeling. Here we show that for a self-consistent system the difference in the projected $l$ and $b$ dispersions cannot be explained by just adding rotation to the cluster. Specifically, we show that adding an odd part to the distribution function does not change the proper motion dispersion $\sigma_x$. The dispersion along the $x$ axis is $\sigma_x^2 = \overline{\upsilon_x^2} - {\overline\upsilon_x}^2$. Writing $\upsilon_x$ in spherical velocity components (see the beginning of this section for the notation), \begin{equation} \upsilon_x= \upsilon_R {x\over R} - \upsilon_\varphi {y\over R} = \upsilon_r\sin\theta {x\over R} + \upsilon_\theta\cos\theta {x\over R} - \upsilon_\varphi {y\over R} \end{equation} we see that \begin{align} \begin{array}{l} \overline{\upsilon_x^2} = \int {d{\upsilon_r}} \int {d{\upsilon_\theta}} \int {d{\upsilon_\varphi}}\upsilon_x^2\left( {1 + g({L_z})} \right){f_+ } = \\ = \int {d{\upsilon_r}} \int {d{\upsilon_\theta}} \int {d{\upsilon_\varphi}} \upsilon_x^2 {f_+ } + 0. \end{array} \end{align} The second term vanishes because $f_+(E,L^2)\, g(L_z)$ is even in $\upsilon_r$, $\upsilon_\theta$ and odd in $\upsilon_\varphi$, so that the integrand for all terms of $f_+ g\, \upsilon_x^2$ is odd in at least one velocity variable. We also have \begin{align} \begin{array}{l} \overline\upsilon_x= \int {d{\upsilon_r}} \int {d{\upsilon_\theta}} \int {d{\upsilon_\varphi}}{\upsilon_x} \left( {1 + g({L_z})} \right){f_ + } = \\ = 0 - \int {d{\upsilon_r}} \int {d{\upsilon_\theta}} \int {d{\upsilon_\varphi}} {\upsilon_\varphi} {y\over R} {f_+} g. \end{array} \end{align} The first part is zero because ${\upsilon_x}{f_ + }$ is odd. The second part is different from zero; however, when projecting $\overline\upsilon_\varphi$ along the line of sight this term also vanishes, because $f_+ g$ is an even function of $y$ and the integration is along a direction perpendicular to the angular momentum axis. Hence the projected mean velocity $\overline\upsilon_x$ is zero, and the velocity dispersion $\sigma_x^2=\overline{\upsilon^2_x}$ is unchanged. An alternative way to see this is by making a particle realization of the initial DF \citep[e.g.][]{ah1974}. We can then add rotation by reversing the sign of $L_z$ for a fraction of the particles, chosen with some probability function; this is equivalent to changing the signs of $\upsilon_x$ and $\upsilon_y$ of those particles. $\overline{\upsilon_x^2}$ is not affected by the sign change, and the mean $\overline{\upsilon_x}$ averaged over the line of sight will be zero, because for each particle at the front of the system rotating in a specific direction there will be another particle at the rear of the system rotating in the opposite direction. In this work we do not use particle models, to avoid fluctuations due to the limited number of particles near the center.
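The particle argument is easy to verify numerically. The following sketch (an arbitrary isotropic toy cluster, not a realization of our DF) applies the maximal `Maxwell daemon' flip and confirms that the projected mean $\upsilon_x$ vanishes while $\sigma_x$ is unchanged:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N = 200000
# Toy spherical cluster: Gaussian positions, isotropic unit dispersions;
# axes: x on the sky, y along the line of sight, z the rotation axis.
pos = rng.normal(size=(N, 3))
vel = rng.normal(size=(N, 3))
Lz = pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]

sigma_x_before = np.std(vel[:, 0])

# Reverse (vx, vy) of all counter-rotating particles (Lz < 0):
# this flips the sign of their Lz and adds maximal rotation.
flip = Lz < 0
vel[flip, 0] *= -1.0
vel[flip, 1] *= -1.0

print(np.mean(vel[:, 0]))                 # ~0: mean v_x still vanishes
print(np.std(vel[:, 0]), sigma_x_before)  # equal: sigma_x is unchanged
\end{verbatim}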
For the odd part of the DF we choose the two-parameter function from \citet{qh1995}. This is a modified version of the function of \citet{d1986}, which was based on maximum entropy arguments: \begin{align} \label{eqAlpha} g(L_z) = G(\eta ) = F\frac{{\tanh (\kappa \eta /2)}}{{\tanh (\kappa /2)}} \end{align} where $\eta = {L_z}/{L_m}(E)$, ${L_m}(E)$ is the maximum allowable value of $L_z$ at a given energy, and $-1<F<1$ and $\kappa>0$ are free parameters. The parameter $F$ acts as a global adjustment of the rotation, while the parameter $\kappa$ determines the contributions of stars with different $L_z$ ratios. Specifically, for small $\kappa$ only stars with high $|L_z|$ contribute significantly, while large $\kappa$ implies that all stars, irrespective of their $L_z$, contribute to the rotation. For $F=1$ and $\kappa \gg 1$, $g(L_z) = {\rm sign}({L_z})$, which corresponds to maximum rotation. From the resulting distribution function $f(E,L_z)$ we can calculate ${\overline\upsilon_\varphi}(R,z)$ in cylindrical coordinates using the equation \begin{align} {\overline\upsilon_\varphi}(R,z) = \frac{{4\pi }}{{\rho {R^2}}} \int\limits_0^\Psi{dE\int\limits_0^{R\sqrt {2(\Psi - E)} } {{f_ - }(E,{L_z}){L_z}d{L_z}} }. \label{eqUphi} \end{align} To find the mean line-of-sight velocity versus Galactic longitude $l$ we have to project equation~(\ref{eqUphi}) onto the sky plane, \begin{align} {\upsilon_{\rm los}}(x,z) = \frac{2}{\Sigma }\int_x^\infty {{\overline\upsilon_\varphi }(R,z)\frac{x}{R}\frac{{\rho (R,z)RdR}}{{\sqrt {{R^2} - {x^2}} }}}. \label{ulos} \end{align} Figure \ref{plot_4} shows the mean line-of-sight velocity data versus Galactic longitude $l$ for $F=-0.9$ and two values of $\kappa$ in equation~(\ref{eqAlpha}); later, in the axisymmetric section, we constrain these parameters by fitting. Each data point corresponds to a cell from Figure \ref{plot_6}. The maser data ($r>100''$) are also included. The signs of the velocities for negative $l$ are reversed because of the assumed symmetry. The lines show the predictions of the model computed with equation~(\ref{ulos}). Figure \ref{plot_6} shows that the line-of-sight velocity cells extend from $b=0$ up to $b=50''$, but most of them lie between $b=0$ and $b=20''$. For this reason we compute the model prediction at a representative value of $b=20''$. \section[]{AXISYMMETRIC MODELING OF THE NSC} \label{sAxis} \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_7.eps} \caption{Axisymmetric two-component model for the surface density of the nuclear cluster. The points with error bars show the number density of late-type stars along the $l$ and $b$ directions \citep{fc2014}, in red and blue respectively. The blue lines show the model that gives the best fit to the surface density data, with parameters as in equation~(\ref{bestSDvalues}).} \label{plot_7} \end{figure} We have seen that spherical models cannot explain the difference between the velocity dispersions along the $l$ and $b$ directions. The number counts also show that the cluster is flattened; see Figure \ref{plot_7} and \citet{fc2014}. Therefore we now continue with axisymmetric modeling of the nuclear cluster. The first step is to fit the surface density counts with an axisymmetric density model. The available surface density data extend up to $1000''$ in the $l$ and $b$ directions. For comparison, the proper motion data extend to $\sim 70''$ from the centre (Figure \ref{plot_5}).
We generalize our spherical two-component $\gamma$-model from equation~(\ref{eqDenSph}) to a spheroidal model given by \begin{align} \label{dAxis} \rho (R,z) = \sum\limits_{i = 1}^2 {\frac{{3 - {\gamma_i}}} {{4\pi {q_i}}}\frac{{{M_i}\,{a_i}}}{{{m_i^{{\gamma_i}}}{{(m_i + {a_i})}^{4 - {\gamma_i}}}}}} \end{align} where $m_i$ is the spheroidal radius, $m_i^2 = {R^2} + {z^2}/q_i^2$, and the two new parameters $q_{1,2}$ are the axial ratios (prolate $>1$, oblate $<1$) of the inner and outer component, respectively. Note that the method can be generalized to $N$ components. The mass of a single component is given by $4\pi q_i\int\limits_0^\infty {{m_i^2}\rho (m_i)dm_i} $. From Figure \ref{plot_7} we expect that the inner component will be more spherical than the outer component, although when the density profile becomes shallower near the center it is more difficult to determine the axial ratio. In Figure \ref{plot_7} one also sees that the stellar surface density along the $l$ direction is larger than along the $b$ direction; thus we assume that the NSC is an oblate system. To fit the model we first need to project the density and express it as a function of $l$ and $b$. The projected surface density as seen edge-on is \begin{align} \label{eqaxisSD} \Sigma (x,z) = 2\int\limits_x^\infty {\frac{{\rho (R,z)R}}{{\sqrt {{R^2} - {x^2}} }}dR}. \end{align} In general, to fit equation~(\ref{eqaxisSD}) to the data we would need to determine the eight parameters ${\gamma_{1,2}},\,{M_{1,2}},\,{a_{1,2}},\,{q_{1,2}}$. However, we decided to fix the value of $q_2$, because the second component is not well constrained in the 8-dimensional parameter space (i.e., there are several models, each with different $q_2$, that give similar ${\chi^2}$). We choose $q_2=0.28$, close to the value found in \cite{fc2014}. For similar reasons, we also fix the value of ${\gamma_2}$ to that used in the spherical case. The value of ${\gamma_1}$ for a semi-isotropic axisymmetric model with a black hole cannot be smaller than $0.5$ \citep{qh1995}, as in the spherical case. For our current modeling we treat ${\gamma_1}$ as a free parameter. Thus six free parameters remain. To fit these parameters to the data in Fig.~\ref{plot_7} we apply a Markov chain Monte Carlo (MCMC) algorithm. For comparing the model surface density (\ref{eqaxisSD}) to the star counts, we found it important to average over angle in the inner conical cells to prevent an underestimation of the $q_1$ parameter. The values obtained with the MCMC algorithm for the NSC parameters and their errors are: \begin{align} \begin{array}{*{20}{c}} {{\gamma_1} = 0.71 \pm 0.12}&{{a_1} = 147.6'' \pm 27''}&{{q_1} = 0.73 \pm 0.04}\\ {{\gamma_2} = 0.07}&{{a_2} = 4572'' \pm 360''}&{{q_2} = 0.28}\\ &{{{M_2}}}/{{{M_1}}} = 101.6 \pm 18 \end{array} \label{bestSDvalues} \end{align} The reduced ${\chi^2}$ that corresponds to these parameter values is ${\chi^2}/\nu_{\rm SD} = 0.99$ for $\nu_{\rm SD}=110$ d.o.f. Here we note that there is a strong correlation between the parameters $a_2$ and $M_2$. The flattening of the inner component is very similar to the recent determination from Spitzer/IRAC photometry \citep[$0.71\pm0.02$,][]{schodel2014}, but slightly flatter than the value $0.80\pm0.04$ given by \citet{fc2014}. The second component is about 100 times more massive than the first, but also extends more than one order of magnitude further.
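As a cross-check, equation~(\ref{eqaxisSD}) reduces to a single well-behaved quadrature after the substitution $R^2 = x^2 + u^2$. A minimal Python sketch (model units with $M_1=1$; parameter values from equation~\ref{bestSDvalues}) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

pars = [  # (gamma_i, M_i, a_i["], q_i), masses in units of M_1
    (0.71, 1.0,   147.6, 0.73),
    (0.07, 101.6, 4572., 0.28),
]

def rho(R, z):
    out = 0.0
    for g, M, a, q in pars:
        m = np.sqrt(R**2 + (z / q)**2)
        out += (3 - g) / (4 * np.pi * q) * M * a / (m**g * (m + a)**(4 - g))
    return out

def surface_density(x, z):
    # Edge-on projection; R = sqrt(x^2 + u^2) removes the
    # integrable singularity of the integrand at R = x.
    return 2 * quad(lambda u: rho(np.sqrt(x**2 + u**2), z), 0, np.inf)[0]

# Oblate model: higher surface density along l (x) than along b (z)
print(surface_density(10.0, 0.0), surface_density(0.0, 10.0))
\end{verbatim}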
Assuming a constant mass-to-light ratio for the star cluster, we determine its potential using the relation from \cite{qh1995}, which is compatible with their contour integral method (i.e., it can be used for complex $R^2$ and $z^2$). The potential for a single component $i$ is given by: \begin{equation} \begin{array}{l} \Psi_i(R,z) = {\Psi_{0i}} - \frac{{2\pi Gq_i}}{e_i}\int\limits_0^\infty {\rho_i \left( U \right)\left[ {\frac{{{R^2}}}{{{{(1 + u)}^2}}} + \frac{{{z^2}}}{{{{({q_i^2} + u)}^2}}}} \right]} \,\\ \hspace{1.3 cm} \times (\arcsin \,e_i - \arcsin \frac{e_i}{{\sqrt {1 + u} }})du \label{ax_pot} \end{array} \end{equation} with $e_i = \sqrt {1 - {q_i^2}}$, $U = \frac{{{R^2}}}{{1 + u}} + \frac{{{z^2}}}{{{q_i^2} + u}}$, and where $\Psi_{0i}$ is the central potential (for a review of the potential theory of ellipsoidal bodies see \citealt{s1969}). The total potential of the two-component model is \begin{align} \Psi (R,z) = \sum\limits_{i = 1}^2 {{\Psi_i}(R,z)} + \frac{{G{M_ \bullet }}}{{\sqrt {{R^2} + {z^2}} }}. \label{ax_tot_pot} \end{align} \subsection{Axisymmetric Jeans modeling} We first proceed with axisymmetric Jeans modeling. We will need a large number of models to determine the best values for the mass and distance of the NSC, and for the mass of the embedded black hole. We will use DFs for the detailed modeling in Section~\ref{s_2i}, but this is computationally expensive, and so a large parameter study with the DF approach is not currently feasible. In Section~\ref{s_2i} we will show that a two-integral (2I) distribution function of the form $f(E,L_z^2)$ gives a very good representation of the histograms of proper motions and line-of-sight velocities for the nuclear star cluster in all bins. Therefore we can assume for our Jeans models that the system is semi-isotropic, i.e., isotropic in the meridional plane, $\overline{\upsilon_z^2}=\overline{\upsilon_R^2} $. From the tensor virial theorem \citep{bt2008} we know that for 2I-models $\overline {\upsilon_\varphi^2}> \overline{\upsilon_R^2}$ is required in order to produce the flattening. In principle, for systems of the form $f(E,L_z)$ it is possible to find recursive expressions for any moment of the distribution function \citep{m1994} if we know the potential and the density of the system. However, here we confine ourselves to the second moments, since later we will recover the distribution function. By integrating the Jeans equations we obtain relations for the two independent second moments \citep{nm1976}: \begin{align} \begin{array}{l} {\overline {\upsilon_z^2} } (R,z) = {\overline {\upsilon_R^2} }(R,z) = - \frac{1}{{\rho (R,z)}}\int_z^\infty {dz'\rho (R,z')\frac{{\partial \Psi }}{{\partial z'}}} \\ {\overline {\upsilon_\varphi^2}} (R,z) = {\overline {\upsilon_R^2} } (R,z) + \frac{R}{{\rho (R,z)}}\frac{{\partial (\rho\overline{\upsilon_R^2} )}} {{\partial R}} - R\frac{{\partial \Psi }}{{\partial R}} \end{array} \label{nagai} \end{align} The potential and density are already known from the previous section. Once $\overline{\upsilon_z^2}$ is found it can be used to calculate $\overline{\upsilon_\varphi^2}$.
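Schematically, equation~(\ref{nagai}) amounts to one quadrature plus one radial derivative. A Python sketch (with user-supplied callables for the density and the potential gradients; not the optimized production code) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def v2_z(R, z, rho, dPsi_dz):
    # <v_z^2> = <v_R^2> from the vertical Jeans equation.
    # With Psi = -Phi, dPsi/dz < 0 for z > 0, so the result is positive.
    f = lambda zp: rho(R, zp) * dPsi_dz(R, zp)
    return -quad(f, z, np.inf)[0] / rho(R, z)

def v2_phi(R, z, rho, dPsi_dz, dPsi_dR, dR=1e-3):
    # <v_phi^2> from the radial Jeans equation; the derivative
    # d(rho <v_R^2>)/dR is taken by centred finite differences.
    p = lambda Rp: rho(Rp, z) * v2_z(Rp, z, rho, dPsi_dz)
    dpdR = (p(R + dR) - p(R - dR)) / (2 * dR)
    return v2_z(R, z, rho, dPsi_dz) + R * dpdR / rho(R, z) \
           - R * dPsi_dR(R, z)
\end{verbatim}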
The intrinsic dispersions in the $l$ and $b$ directions are given by the equations: \begin{align} \begin{array}{l} \label{sigma_proj0} \sigma_b^2 = \overline{\upsilon_z^2}\\ \sigma_l^2 = \overline{ {\upsilon_{x}^2}} = \overline{\upsilon_R^2}{\sin^2}\theta + \overline{\upsilon_\varphi^2}{\cos^2}\theta \\ \overline{ {\upsilon_{\rm los}^2}} = \overline{ {\upsilon_{y}^2}} = \overline{\upsilon_R^2}{\cos^2}\theta + \overline{\upsilon_\varphi^2}{\sin^2}\theta \end{array} \end{align} where $\theta$ here denotes the azimuthal angle between the cylindrical radius vector and the line of sight, so that ${\sin^2}\theta = x^2/R^2$ and ${\cos^2}\theta = 1 - x^2/R^2$. Projecting the previous equations along the line of sight we have: \begin{align} \begin{array}{l} \label{sigma_proj} \Sigma \sigma_l^2(x,z) =\\ 2\int_x^\infty {\left[ {\overline{\upsilon_R^2}\frac{{{x^2}}}{{{R^2}}} + \overline{\upsilon_\varphi^2}\left( {1 - \frac{{{x^2}}}{{{R^2}}}} \right)} \right] \frac{{\rho (R,z)\,R}}{{\sqrt {{R^2} - {x^2}} }}dR}, \\ \Sigma \sigma_b^2(x,z) =\\ 2\int_x^\infty {\overline{\upsilon_z^2}(R,z)\frac{{\rho (R,z)\,R}}{{\sqrt {{R^2} - {x^2}} }}dR}, \\ \Sigma{\overline{ {\upsilon_{\rm los}^2}}}(x,z)=\\ 2\int_x^\infty {\left[ {\overline{\upsilon_R^2}\left( {1 - \frac{{{x^2}}}{{{R^2}}}} \right) + \overline{\upsilon_\varphi^2} \frac{{{x^2}}}{{{R^2}}}} \right] \frac{{\rho (R,z)\,R}}{{\sqrt {{R^2} - {x^2}} }}dR}, \end{array} \end{align} where we note that the last quantity in (\ref{sigma_proj0}) and (\ref{sigma_proj}) is the second moment, not the line-of-sight velocity dispersion. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_9.eps} \caption{ Velocity dispersions $\sigma_l$ and $\sigma_b$ compared to axisymmetric, semi-isotropic Jeans models. The measured dispersions $\sigma_l$ (red points with error bars) and $\sigma_b$ (blue points) for all cells are plotted as a function of their two-dimensional radius on the sky, with the Galactic centre at the origin. The black lines show the best model; the model velocity dispersions are averaged over azimuth on the sky. The dashed black lines show the same quantities for a model which has lower flattening ($q_1=0.85$ vs $q_1=0.73$) and a shallower central density slope ($0.5$ vs $0.7$).} \label{plot_9} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_12.eps} \caption{Root mean square line-of-sight velocities compared with the best model, as a function of two-dimensional radius on the sky as in Fig.~\ref{plot_9}. In both plots the stellar mass of the NSC is $7.73\times10^6$ ${M_\odot}$ within $m < 100''$, the black hole mass is $3.86\times10^6$ ${M_\odot}$, and the distance is $8.3$ kpc (equation~\ref{bestModel}). All the maser data are included in the plot.} \label{plot_12} \end{figure} In order to define our model completely, we need to determine the distance $R_0$ and mass $M_*$ of the cluster and the black hole mass $M_{\bullet}$. To do this we apply a $\chi^2$ minimization technique matching all three velocity dispersions in both sets of cells, using the following procedure. First we note that the inclusion of self-consistent rotation in the model will not affect its mass. This means that for the fitting we can use ${\overline{ {\upsilon_{\rm los}^2} }^{1/2}}$ for each cell of Figure~\ref{plot_6}. Similarly, since our model is axisymmetric, we should match the ${ \overline{\upsilon^2_{l,b}}^{1/2}}$ for each proper motion cell; the ${{\overline\upsilon_{l,b}}}$ terms should be, and indeed are, negligible.
Another way to see this is that, since the system is axially symmetric, the integral of $ {{\overline\upsilon_{l,b}}} $ along the line of sight must be zero, because the contributions from positive and negative $y$ cancel. With this in mind we proceed as follows, using the cluster's density parameters\footnote{It is computationally too expensive to simultaneously also minimize $\chi^2$ over the density parameters.} as given in (\ref{bestSDvalues}). First we partition the 3D parameter space ($R_0$, $M_*$, $M_{\bullet}$) into a grid with resolution $20\times20\times20$. Then for each point of the grid we calculate the corresponding $\chi^2$ using the velocity dispersions from all cells in Figs.~\ref{plot_5} and \ref{plot_6}, excluding the two cells at the largest radii (see Fig.~\ref{plot_9}). We compare the measured dispersions with the model values obtained from equations~(\ref{sigma_proj}) for the centers of these cells. Then we interpolate between the $\chi^2$ values on the grid and find the minimum of the interpolated function, i.e., the best values for ($R_0$, $M_*$, $M_{\bullet}$). To determine statistical errors on these quantities, we first calculate the Hessian matrix from the curvature of the $\chi^2$ surface at the minimum, $\partial {\chi^2}/\partial {p_i}\partial {p_j}$. The statistical variances are then given by the diagonal elements of the inverse of this matrix. With this procedure we obtain a minimum reduced $\chi^2/\nu_{\rm Jeans}=1.07$ with $\nu_{\rm Jeans}=161$ degrees of freedom, for the values \begin{align} \begin{array}{l} {R_0} = 8.27 \, {\rm kpc}\\ {M_*}(m < 100'') = 7.73 \times {10^6}{M_ \odot }\\ {M_ \bullet } = 3.86 \times {10^6}{M_ \odot }, \label{bestModel} \end{array} \end{align} where \begin{equation} M_*(m)\equiv \int_0^m 4\pi m^2 \left[ q_1\rho_1(m)+q_2\rho_2(m)\right] dm, \label{Minsidem} \end{equation} and the value given for $M_*$ in (\ref{bestModel}) is not the total cluster mass but the stellar mass within spheroidal radius $m=100''$. In Section~\ref{s_distance} we will consider in more detail the determination of these parameters and their errors. The model with density parameters as in (\ref{bestSDvalues}) and dynamical parameters as in (\ref{bestModel}) will be our best model. In Section~\ref{s_2i} we will see that it also gives an excellent prediction for the velocity histograms. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_11.eps} \caption{All three projected velocity dispersions compared. Red: ${\sigma_l}$; blue: ${\sigma_b}$; brown: ${\sigma _{\rm los}}={\overline{ {\upsilon_{\rm los}^2} }^{1/2}}$. Note that ${\sigma_b}$ is slightly lower than ${\sigma_{\rm los}}$. The difference between ${\sigma_b}$ and ${\sigma_l}$ comes from the flattening of both the inner and outer components of the model.} \label{plot_11} \end{figure} We first compare this model with the velocity data. Figure~\ref{plot_9} shows how the azimuthally averaged dispersions $\sigma_l$ and $\sigma_b$ compare with the measured proper motion dispersions. Figure~\ref{plot_12} shows how this best model, similarly averaged, compares with the line-of-sight mean square velocity data; the maser data are also included in the plot. The model fits the data very well, in accordance with its reduced $\chi^2/\nu_{\rm Jeans}=1.07$. Figure~\ref{plot_11} shows how all three projected dispersions of the model compare; $\sigma_{\rm b}$ is slightly lower than $\sigma_{\rm los}$ due to projection effects.
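A schematic of the grid search and error estimate described above is given below. Here the full model-data comparison is replaced by a toy quadratic chi2_of so the example runs stand-alone, the grid limits are placeholders, and the covariance is taken as twice the inverse Hessian of $\chi^2$:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

def chi2_of(R0, Mstar, Mbh):
    # Placeholder for comparing the Jeans-model dispersions with
    # the binned data; a toy quadratic around the best-fit values.
    return ((R0 - 8.27) / 0.09)**2 + ((Mstar - 7.73) / 0.31)**2 \
           + ((Mbh - 3.86) / 0.14)**2

R0g = np.linspace(7.8, 8.8, 20)     # kpc
Msg = np.linspace(6.5, 9.0, 20)     # 10^6 Msun within 100"
Mbg = np.linspace(3.2, 4.6, 20)     # 10^6 Msun
chi2 = np.array([[[chi2_of(r, m, b) for b in Mbg]
                  for m in Msg] for r in R0g])

interp = RegularGridInterpolator((R0g, Msg, Mbg), chi2)
res = minimize(lambda p: interp(p).item(), x0=[8.3, 7.5, 4.0],
               bounds=[(7.8, 8.8), (6.5, 9.0), (3.2, 4.6)])

# 1-sigma errors: Cov = 2 H^{-1}, H_ij = d2(chi2)/dpi dpj at the minimum
hs = 1e-3 * res.x
H = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        ei = np.eye(3)[i] * hs[i]
        ej = np.eye(3)[j] * hs[j]
        H[i, j] = (chi2_of(*(res.x + ei + ej)) - chi2_of(*(res.x + ei - ej))
                   - chi2_of(*(res.x - ei + ej)) + chi2_of(*(res.x - ei - ej))
                   ) / (4 * hs[i] * hs[j])
print(res.x, np.sqrt(np.diag(2 * np.linalg.inv(H))))
\end{verbatim}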
The fact that all three velocity dispersion profiles in Figs.~\ref{plot_9} and \ref{plot_12} are fitted well by the model suggests that the assumed semi-isotropic dynamical structure is a reasonable approximation. The model prediction in Fig.~\ref{plot_9} is similar to Figure 11 of \cite{tg2008}, but the interpretation is different. As shown in the previous section, the difference in projected dispersions cannot be explained by imposing rotation on the model. Here we have demonstrated how the observational finding $\sigma_l>\sigma_b$ can be quantitatively reproduced by flattened axisymmetric models of the NSC and the surrounding nuclear disk. Most of our velocity data are in the range $7''$--$70''$, i.e., where the inner NSC component dominates the potential. In order to understand the dynamical implications of these data for the flattening of this component, we have also constructed several density models in which we fixed $q_1$ to values different from the $q_1=0.73$ obtained from star counts. In each case we repeated the fitting of the dynamical parameters as in (\ref{bestModel}). We found that models with $q_1$ in the range from $\sim0.69$ to $\sim0.74$ gave fits to the velocity dispersion data (in terms of $\chi^2/\nu$) comparable to our nominal best model, but that a model with $q_1=0.77$ was noticeably worse. We present an illustrative model with flattening about half-way between the measured $q_1=0.73$ and the spherical case, for which we set $q_1=0.85$. This is also close to the value given by \citet{fc2014}, $q_1=0.80\pm0.04$. We simultaneously explore a slightly different inner slope, $\gamma_1=0.5$. We then repeat the fitting of the star-count density profile in Fig.~\ref{plot_7} (model not shown), keeping also $\gamma_2$ and $q_2$ fixed to the previous values, and varying the remaining parameters. Our rounder comparison model then has the following density parameters: \begin{align} \begin{array}{*{20}{c}} {{\gamma_1} = 0.51}&{{a_1} = 102.6''\,}&{{q_1} = 0.85}\\ {{\gamma_2} = 0.07}&{{a_2} = 4086''}&{{q_2} = 0.28} \end{array}\,\,\,\frac{{{M_2}}}{{{M_1}}} = 109.1 \label{SDvalues1} \end{align} The best reduced $\chi^2$ that we obtain for the velocity dispersion profiles with these parameters is $\chi^2/\nu_{\rm Jeans}=1.16$ and corresponds to the values \begin{align} \begin{array}{l} {R_0} = 8.20 \, {\rm kpc}\\ {M_*}(m < 100'') = 8.31 \times {10^6}{M_ \odot }\\ {M_ \bullet } = 3.50 \times {10^6}{M_ \odot }. \label{lessFlattened} \end{array} \end{align} Compared to the best and more flattened model, the cluster mass has increased and the black hole mass has decreased. The sum of both masses has changed only by $2\%$ and the distance only by $1\%$. In Figures~\ref{plot_9} and \ref{plot_12} we see how the projected velocity dispersions of this model compare with our best model. The main difference, seen in $\sigma_l$, comes from the different flattening of the inner component, and the smaller slope of the dispersions near the center of the new model is due to its shallower central density slope. \begin{figure*} \centering \includegraphics[width=\linewidth]{plots/plot_18.eps} \caption{Contour plots for the marginalized $\chi^2$ in the three parameter planes $(R_0,M_\bullet)$, $(M_\bullet,M_*)$, $(R_0,M_*)$. Contours are plotted at confidence levels corresponding to $1\sigma$, $2\sigma$ and $3\sigma$ of the joint probability distribution.
The minimum corresponds to the values $R_0 = 8.27\,{\rm kpc}$, $M_*(m < 100'') = 7.73 \times{10^6}{M_\odot}$, $M_\bullet=3.86 \times {10^6}{M_\odot}$, with errors discussed in Section~\ref{s_distance}. } \label{plot_18} \end{figure*} \subsection{Distance to the Galactic Center, mass of the star cluster, and mass of the black hole} \label{s_distance} We now consider the determination of these parameters from the NSC data in more detail. Fig.~\ref{plot_18} shows the marginalized $\chi^2$ for the NSC model as given in equation~(\ref{bestSDvalues}), for the parameter pairs $(R_0,M_\bullet)$, $(M_\bullet,M_*)$ and $(R_0,M_*)$, as obtained from fitting the Jeans dynamical model to the velocity dispersion profiles. The contours correspond to the $1\sigma$, $2\sigma$ and $3\sigma$ levels of the two-dimensional joint distributions of the respective parameters. We notice that the distance $R_0$ has the smallest relative error. The best-fitting values for $(R_0,M_*,M_\bullet)$ are given in equation~(\ref{bestModel}); these values are our best estimates based on the NSC data alone. For the dynamical model with these parameters and the surface density parameters given in (\ref{bestSDvalues}), the flattening of the inner component inferred from the surface density data is consistent with the dynamical flattening, which is largely determined by the ratio $\sigma_l/\sigma_b$ and the tensor virial theorem. Statistical errors are determined from the Hessian matrix for this model. Systematic errors can arise from uncertainties in the NSC density structure, from deviations from the assumed axisymmetric two-integral dynamical structure, from dust extinction within the cluster (see Section~\ref{s_discussion}), and from other sources. We have already illustrated the effect of varying the cluster flattening on $(R_0,M_\bullet,M_*)$ with our second, rounder model. We have also tested how variations of the cluster density structure $(a_2,q_2,M_2)$ beyond $500''$ impact the best-fit parameters, and found that these effects are smaller than those due to flattening variations. We have additionally estimated the uncertainty introduced by the symmetrisation of the data if the misalignment found by \cite{feldmeier2014, fc2014} were intrinsic to the cluster, as follows. We took all radial velocity stars and rotated each star by 10$^{\circ}$ clockwise on the sky. Then we re-sorted the stars into our radial velocity grid (Fig.~\ref{plot_6}). Using the new values ${\overline{ {\upsilon_{\rm los}^2} }^{1/2}}$ obtained in the cells, we fitted Jeans models as before. The values we found for $R_0$, $M_*$, $M_\bullet$ with these tilted data differed from those in equation~(\ref{bestModel}) by $\Delta R_0=-0.02$ kpc, $\Delta M_*(m < 100'')=-0.15 \times 10^6 M_\odot$, and $\Delta M_\bullet=+0.02\times 10^6 M_\odot$, respectively, which are well within the statistical errors. Propagating the errors of the surface density parameters from the MCMC fit and taking into account the correlations of the parameters, we estimate the systematic uncertainties from the NSC density structure to be $\sim 0.1$ kpc in $R_0$, $\sim 6\%$ in $M_\bullet$, and $\sim 8\%$ in $M_*(m<100'')$.
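The tilt test described above is a simple rotation of the sky coordinates; schematically (the array names and values are illustrative only):
\begin{verbatim}
import numpy as np

def rotate_on_sky(l, b, angle_deg=-10.0):
    # Rotate positions about Sgr A* (the origin); a negative angle
    # corresponds to a clockwise rotation on the sky.
    a = np.deg2rad(angle_deg)
    return (np.cos(a) * l - np.sin(a) * b,
            np.sin(a) * l + np.cos(a) * b)

# l, b would be the measured star positions (arcsec); after rotating,
# the stars are re-sorted into the grid of Fig. 6 and the Jeans fit
# is repeated with the new binned mean square velocities.
l, b = np.array([30.0, -12.0]), np.array([5.0, 18.0])
print(rotate_on_sky(l, b))
\end{verbatim}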
We will see in Section~\ref{s_2i} below that the DF for our illustrative rounder NSC model gives a clearly inferior representation of the velocity histograms compared to our best kinematic model, and also that the systematic differences between the two models are comparable to the residual differences between our preferred model and the observed histograms. Therefore we take the differences between these models, $\sim 10\%$ in ${M_*}$, $\sim10\%$ in ${M_\bullet }$, and $\sim0.1\,{\rm kpc}$ in $R_0$, as a more conservative estimate of the dynamical modeling uncertainties, so that finally \begin{align} \begin{array}{l} {R_0} = 8.27 \pm 0.09{|_{\rm stat}}\pm 0.1{|_{\rm syst}} \, {\rm kpc}\\ {M_*}(m < 100'') = (7.73 \pm 0.31{|_{\rm stat}}\pm 0.8{|_{\rm syst}}) \times {10^6}{M_\odot }\\ {M_ \bullet } = (3.86 \pm 0.14{|_{\rm stat}\pm 0.4{|_{\rm syst}}}) \times {10^6}{M_\odot }. \label{bestflattenedWithErrors} \end{array} \end{align} We note several other systematic errors which are not easily quantifiable and are therefore not included in these estimates, such as inhomogeneous sampling of the proper motions or line-of-sight velocities, extinction within the NSC, and the presence of an additional component of dark stellar remnants. Based on our best model, the mass of the star cluster within $100''$ converted to spherical coordinates is ${M_*}(r < 100'') = (8.94 \pm 0.32{|_{\rm stat}} \pm 0.9{|_{\rm syst}}) \times {10^6}{M_\odot }$. The model's mass within the innermost pc ($25''$) is ${M_*}(m < 1{\rm pc}) = 0.729\times{10^6}{M_\odot }$ in spheroidal radius, or ${M_*}(r < 1{\rm pc}) = 0.89\times{10^6}{M_\odot }$ in spherical radius. The total mass of the inner NSC component is ${M_{1}} = 6.1 \times {10^7}{M_\odot }$. Because most of this mass is located beyond the radius where the inner component dominates the projected star counts, the precise division of the mass in the model between the NSC and the adjacent nuclear disk depends on the assumed slope of the outer density profile of the NSC, and is therefore uncertain. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_22.eps} \caption{Blue: $\chi^2$ contours in the $(R_0, M_{\bullet})$ plane from stellar orbits of S-stars, as in Figure 15 of \citet{ge2009}, at confidence levels corresponding to $1\sigma$, $2\sigma$, $3\sigma$ for the joint probability distribution. Brown: corresponding $\chi^2$ contours from this work. Black: combined contours after adding the $\chi^2$ values.} \label{plot_22} \end{figure} The distance and black hole mass we find differ by $0.7\%$ and $12\%$, respectively, from the values $R_0 = 8.33 \pm 0.17{|_{\rm stat}} \pm 0.31{|_{\rm syst}}$ kpc and ${M_{\bullet}} = (4.31 \pm 0.36)\times{10^6}{M_\odot }$ (for $R_0=8.33$ kpc), as determined by \cite{ge2009} from stellar orbits around Sgr A$^*$. Figure~\ref{plot_22} shows the $1\sigma$ to $3\sigma$ contours of the marginalized $\chi^2$ for $(R_0, M_\bullet)$ jointly from stellar orbits \citep{ge2009}, for the NSC model of this paper, and for the combined modeling of both data sets. The figure shows that both analyses are mutually consistent. When marginalized over $M_*$ and the respective other parameter, the combined modeling gives, for each parameter alone, $R_0=8.33\pm0.11$ kpc and $M_\bullet=(4.23\pm0.14) \times 10^6 M_\odot$. We note that these errors for $R_0$ and $M_\bullet$ are both dominated by the distance error from the NSC modeling.
Thus our estimated additional systematic error of $0.1$ kpc for $R_0$ in the NSC modeling translates to a similar additional error in the combined $R_0$ measurement and, through the SMBH mass--distance relation given in \citet{ge2009}, to an additional uncertainty of $\simeq0.1\times10^6 M_\odot$ in $M_\bullet$. We see that the combination of the NSC and S-star orbit data is a powerful means of decreasing the degeneracy between the SMBH mass and the Galactic center distance in the S-star analysis. \begin{figure*} \centering \includegraphics[width=\linewidth]{plots/plot_10.eps} \caption{We used the HQ algorithm to calculate the 2I-DF for our best Jeans model. The left plot shows the DF in $E$ and $\eta = {L_z}/{L_{z\max }}(E)$ space. The DF is an increasing function of $\eta$. The right plot shows the projection of the DF onto energy space for several values of $\eta$. The shape resembles that of the spherical case in Figure \ref{plot_2}. } \label{plot_10} \end{figure*} \subsection[]{Two-integral distribution function for the NSC} \label{s_2i} Now that we have seen the success of fitting the semi-isotropic Jeans models to all three velocity dispersion profiles of the NSC, and have determined its mass and distance parameters, we proceed to calculate two-integral (2I) distribution functions. We use the contour integral method of \citet[][HQ]{hq1993} and \citet{qh1995}. A 2I DF is the logical, next-simplest generalization of isotropic spherical models. Finding a positive DF will ensure that our model is physical. Other possible methods to determine $f(E,L_z)$ include reconstructing the DF from moments \citep{m1995}, using series expansions as in \cite{dg1994}, or grid-based quadratic programming as in \cite{k1995}. We find the HQ method the most suitable, since it is a straightforward generalization of Eddington's formula. The contour integral is given by: \begin{align} \begin{array}{l} {f_+}(E,{L_z}) =\\ \frac{1}{{4{\pi^2}i\sqrt 2 }}\oint {\frac{{d\xi }}{{{{(\xi - E)}^{1/2}}}}{\tilde\rho_{11}}\left( {\xi ,\frac{{L_z^2}}{{2(\xi - E)}}} \right)} \label{HQDF} \end{array} \end{align} where ${\tilde\rho_{11}}(\Psi ,R^2) = \frac{{{\partial^2}}}{{\partial {\Psi^2}}}\rho (\Psi ,R^2)$, with the density expressed as a function of $\Psi$ and $R^2$. Equation~(\ref{HQDF}) is remarkably similar to Eddington's formula, and as in the spherical case the DF is even in $L_z$. The integration for each $(E,{L_z})$ pair takes place in the complex plane of the potential $\xi$, following a closed path (e.g., an ellipse) around the special value ${\Psi_{\rm env}}$. For more information on the implementation, and for a minor improvement over the original method, see Appendix A. We find that a resolution of $(120\times60)$ logarithmically placed cells in $(E,{L_z})$ space is adequate to give relative errors of the order of $10^{-3}$ when comparing with the zeroth moment (the density, already known analytically) and with the second moments (the velocity dispersions from the Jeans modeling). The gravitational potential is already known from equations~(\ref{ax_pot}) and (\ref{ax_tot_pot}). For the parameters (cluster mass, black hole mass, distance) we use the values given in (\ref{bestModel}). Figure \ref{plot_10} shows the DF in $(E,{L_z})$ space. The shape resembles that of the spherical case (Fig.~\ref{plot_2}). The DF is a monotonically increasing function of $\eta = {L_z}/{L_{z\max }}(E)$ and declines for small and large energies.
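A minimal numerical sketch of equation~(\ref{HQDF}) is given below. Assumptions: rho11 is a user-supplied analytic continuation of $\partial^2\rho/\partial\Psi^2$ as a function of $(\xi, R^2)$, the contour is an ellipse around $\Psi_{\rm env}$, and $E$ lies to the left of the contour so that the principal branch of the square root is single-valued along it. The trapezoidal rule converges rapidly for periodic integrands on closed contours:
\begin{verbatim}
import numpy as np

def f_plus(E, Lz, rho11, psi_env, ax=0.3, ay=0.1, n=256):
    # Elliptical contour xi(t) around psi_env, traversed once.
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    xi = psi_env * (1.0 + ax * np.cos(t) + 1j * ay * np.sin(t))
    dxi = psi_env * (-ax * np.sin(t) + 1j * ay * np.cos(t)) \
          * (2 * np.pi / n)
    # Requires E < min Re(xi): principal sqrt branch is then safe.
    integrand = rho11(xi, Lz**2 / (2.0 * (xi - E))) / np.sqrt(xi - E)
    return (np.sum(integrand * dxi)
            / (4 * np.pi**2 * 1j * np.sqrt(2.0))).real
\end{verbatim}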
The DF contains information about all moments, and therefore we can calculate the projected velocity profiles (i.e., velocity distributions, hereafter abbreviated VPs) in all directions. The normalized VP in the line-of-sight (los) direction $y$ is \begin{align} VP({\upsilon_{\rm los}};x,z) = \frac{1}{\Sigma }\iiint\limits_{E>0} {f(E,{L_z})\,d{\upsilon_{x}}d{\upsilon_z}dy}. \label{VPlos} \end{align} Using polar coordinates in velocity space, $({\upsilon_x},{\upsilon_z}) \to ({\upsilon_ \bot },\varphi )$ where ${\upsilon_x} = {\upsilon_ \bot }\cos \varphi$ and ${\upsilon_z} = {\upsilon_ \bot }\sin \varphi$, we find \begin{align} VP({\upsilon_{\rm los}};x,z) = \frac{1}{{2\Sigma }}\int\limits_{{y_1}}^{{y_2}} {dy} \int\limits_0^{2\Psi-\upsilon_{\rm los}^2} {d\upsilon_ \bot^2} \int\limits_0^{2\pi } {\,d\varphi } f(E ,{L_z}) \label{VPlos1} \end{align} where \begin{align} \begin{array}{l} E=\Psi (x,y,z) - \frac{1}{2}(\upsilon_{\rm los}^2 + \upsilon_\bot^2),\\ {L_z} = x{\upsilon_{\rm los}} - y{\upsilon_ \bot }\cos \varphi, \end{array} \end{align} and $y_{1,2}$ are the solutions of $\Psi(x,y,z)-\upsilon^2_{\rm los}/2=0$. In a similar way we find the corresponding integrals for the VPs in the $l$ and $b$ directions. The typical shape of the VPs in the $l$ and $b$ directions within the area of interest $(r<100'')$ is shown in Figure \ref{plot_14}. We notice the characteristic two-peak shape of the VP along $l$, which is caused by the near-circular orbits of the flattened system. Because the front and the back of the axisymmetric cluster contribute equally, the two peaks are mirror-symmetric, and adding rotation would not change their shapes. The middle panels of Figure~\ref{plot_23} and Figures~\ref{plot_15} and \ref{plot_16} in Appendix B show how our best model (with parameters as given in~(\ref{bestSDvalues}) and (\ref{bestModel})) predicts the observed velocity histograms for various combinations of cells. The reduced $\chi^2$ for each histogram is also provided. The prediction is very good both for the VPs in $\upsilon_l$ and in $\upsilon_b$. Specifically, for the $l$ proper motions our flattened cluster model predicts the two-peak structure of the data pointed out by several authors \citep{tg2008,sm2009,fc2014}. To calculate the model VP for a group of cells, we averaged the VPs computed at the cell centers, weighted by the number of stars in each cell and normalized by the total number of stars in the combined cells. Figure \ref{plot_23} compares two selected $\upsilon_l$-VPs of our two main models with the data. The left column shows how the observed velocity histograms (VHs) for the corresponding cells compare to the model VPs for the less flattened model with parameters given in (\ref{SDvalues1}) and (\ref{lessFlattened}); the middle column compares with the same VPs from our best model with parameters given in (\ref{bestSDvalues}) and (\ref{bestModel}). Clearly, the more flattened model with $q_1=0.73$ fits the shape of the data much better than the more spherical model with $q_1=0.85$, justifying its use in Section~\ref{s_distance}. This model is based on a DF that is even in $L_z$ and therefore does not yet have rotation. To include rotation, we will (in Section 4.4) add an odd part to the DF, but this will not change the even parts of the model's VPs. Therefore, we can already see whether the model is also a good match to the observed los velocities by comparing it to the even parts of the observed los VHs.
This greatly simplifies the problem, since we can treat the rotation as independent and can therefore adjust it to the data as a final step. Figure \ref{plot_17} shows how the even parts of the VHs from the los data compare with the VPs of the 2I model. Based on the reduced $\chi^2$, the model provides a very good match; possible systematic deviations are within the errors. The los VHs are broader than those in the $l$ direction because the los data contain information about the rotation (the broader the even part of the symmetrized los VHs, the more rotation the system possesses, and in extreme cases they would show two peaks). \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_14.eps} \caption{Typical velocity distributions for $l$ and $b$-velocities within the area of interest $(r<100'')$. The red line shows the VPs in the $b$ direction, the blue line in the $l$ direction. The VPs along $l$ show the characteristic two-peak shape pointed out in the data by several authors \citep{se2007,tg2008,fc2014}.} \label{plot_14} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{plots/plot_23.eps} \caption{Predicted distributions of the $\upsilon_l$ velocity compared to the observed histograms. In each row, model VPs and observed VHs are shown averaged over the cells indicated in red in the right column, respectively. Left column: predictions for the less flattened model which we use as an illustration model, i.e., for parameters given in (\ref{SDvalues1}) and (\ref{lessFlattened}). Middle column: predicted VPs for our best model, with parameters given in (\ref{bestSDvalues}) and (\ref{bestModel}). This more flattened model with $q_1 = 0.73$ fits the data much better than the rounder cluster model with $q_1 = 0.85$.} \label{plot_23} \end{figure*} \subsection{Adding rotation to the axisymmetric model: is the NSC an isotropic rotator?} As in the spherical case, to model the rotation we add an odd part in $L_z$ to the initial even part of the distribution function, so that the final DF takes the form $f'(E,{L_z}) = (1 + g(L_z))f(E,{L_z})$. We again use equation~(\ref{eqAlpha}); this adds two additional parameters ($\kappa$, $F$) to the DF. Equation~(\ref{ulos}) gives the mean los velocity versus Galactic longitude. In order to constrain the parameters ($\kappa$, $F$) we fitted the mean los velocity from equation~(\ref{ulos}) to the los velocity data for all cells in Fig.~\ref{plot_6}. The best parameter values resulting from this two-parameter fit are $\kappa=2.8\pm1.7$ and $F=0.85\pm0.15$, with $\chi_{r}^2=1.25$. Figure \ref{plot_19} shows that the VPs of this rotating model compare well with the observed los VHs. An axisymmetric system with a DF of the form $f(E,{L_z})$ is an isotropic rotator when all three eigenvalues of the dispersion tensor are equal \citep{bt2008}, and therefore \begin{align} {\overline{\upsilon}_\varphi}^{\,2} =\overline{\upsilon_\varphi^2}-\overline{\upsilon_R^2}. \label{isotropic} \end{align} In order to calculate $\overline{\upsilon}_\varphi$ from equation~(\ref{isotropic}) it is not necessary for the DF to be known, since $\overline{\upsilon_\varphi^2}$ and $\overline{\upsilon_R^2}$ are already known from the Jeans equations~(\ref{nagai}). Figure \ref{plot_13} shows the fitted $\upsilon_{\rm los}$ velocity from the DF against the isotropic rotator case calculated from equation~(\ref{isotropic}), together with the mean los velocity data. The two curves agree well within $\sim30''$, and also out to $\sim 200''$ they differ only by $\sim 10$ km/s.
Therefore, according to our best model, the NSC is close to an isotropic rotator, with slightly lower rotation and some tangential anisotropy outside $30''$. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/plot_13.eps} \caption{Best-fitting model from the 2I DF compared to the isotropic rotator model. Each data point corresponds to a cell from Figure \ref{plot_6}. Velocities at negative $l$ have been folded over with their signs reversed and are shown in red. The plot also includes the maser data at $R_s > 100''$. The predictions of both models are computed for $b=20''$. For comparison, cells with centers between $b=15''$ and $b=25''$ are highlighted with filled triangles. } \label{plot_13} \end{figure} | Our results can be summarized as follows: \begin{itemize} \item The density distribution of old stars in the central $1000''$ of the Galactic center can be well approximated as the superposition of a spheroidal nuclear star cluster (NSC) with a scale length of $\sim 100''$ and a much larger nuclear disk (NSD) component. \item The difference between the proper motion dispersions $\sigma_l$ and $\sigma_b$ cannot be explained by rotation alone, but is a consequence of the flattening of the NSC. The dynamically inferred axial ratio of the inner component is consistent with the axial ratio inferred from the star counts, which for our two-component model is $q_1=0.73 \pm 0.04$. \item The orbit structure of an axisymmetric two-integral DF $f(E,L_z)$ gives an excellent match to the observed double-peaked $v_l$-proper motion velocity histograms, as well as to the shapes of the vertical $v_b$-proper motion histograms. Our model also compares well with the symmetrized (even) line-of-sight velocity histograms. \item The rotation seen in the line-of-sight velocities can be modelled by adding an odd part to the DF, and this shows that the dynamical structure of the NSC is close to an isotropic rotator model. \item Fitting the model to the proper motions and line-of-sight dispersions determines the NSC mass within $100''$, the mass of the SMBH, and the distance to the NSC. From the star cluster data alone, we find ${M_*}(r\!<\!100'')\!=\!(8.94\!\pm\! 0.31{|_{\rm stat}} \!\pm\!0.9{|_{\rm syst}})\!\times\! {10^6}{M_\odot}$, ${M_\bullet } \!=\! (3.86\!\pm\!0.14{|_{\rm stat} \!\pm\! 0.4{|_{\rm syst}}}) \!\times\! {10^6}{M_\odot }$, and ${R_0} \!=\! 8.27 \!\pm\! 0.09{|_{\rm stat}}\!\pm\! 0.1{|_{\rm syst}}$ kpc, where the estimated systematic errors account for additional uncertainties in the dynamical modeling. The fiducial mass of the NSC is larger than in previous spherical models. The total mass of the NSC is significantly more uncertain due to the surrounding nuclear disk; we estimate $M_{\rm NSC}\!=\!(2-6)\!\times\! 10^7 M_\odot$. The mass of the black hole determined with this approach is consistent with results from stellar orbits around Sgr A$^{*}$. The Galactic center distance agrees well with recent accurate determinations from RR Lyrae stars and masers in the Galactic disk, and has similarly small errors. \item Combining our modeling results with the stellar orbit analysis of \citet{ge2009}, we find ${M_\bullet } \!=\! (4.23\!\pm\!0.14)\!\times\! {10^6}{M_\odot}$ and ${R_0} \!=\! 8.33 \!\pm\! 0.11$ kpc. Because of the better constrained distance, the accuracy of the black hole mass is improved as well.
Combining these with the parameters of the cluster, the black hole radius of influence is $3.8$ pc ($=94''$), and the ratio of black hole to cluster mass is estimated to be $0.12\!\pm\!0.04$. \end{itemize} | 14 | 3 | 1403.5266
1403 | 1403.2994_arXiv.txt | We present a comparison of the physical properties of a rest-frame $250\mu$m selected sample of massive, dusty galaxies from $0<z<5.3$. Our sample comprises 29 high-redshift submillimetre galaxies (SMGs) from the literature, and 843 dusty galaxies at $z<0.5$ from the \emph{Herschel}-ATLAS, selected to have a similar stellar mass to the SMGs. The $z>1$ SMGs have an average SFR of $390^{+80}_{-70}\,$M$_\odot$yr$^{-1}$, which is 120 times that of the low-redshift sample matched in stellar mass to the SMGs (SFR$=3.3\pm{0.2}$\,M$_\odot$yr$^{-1}$). The SMGs harbour a substantial mass of dust ($1.2^{+0.3}_{-0.2}\times{10}^9\,$M$_\odot$), compared to $(1.6\pm0.1)\times{10}^8\,$M$_\odot$ for low-redshift dusty galaxies. At low redshifts the dust luminosity is dominated by the diffuse ISM, whereas a large fraction of the dust luminosity in SMGs originates from star-forming regions. At the same dust mass, SMGs are offset towards a higher SFR compared to the low-redshift H-ATLAS galaxies. This is not only due to the higher gas fraction in SMGs but also because they are undergoing a more efficient mode of star formation, which is consistent with their bursty star-formation histories. The offset in SFR between SMGs and low-redshift galaxies is similar to that found in CO studies, suggesting that dust mass is as good a tracer of molecular gas as CO. | \label{Intro} The first blind submillimetre surveys discovered a population of luminous ($L_\mathrm{IR}>10^{12}$\,L$_\odot$), highly star-forming ($100-1000\,$M$_\odot$yr$^{-1}$), dusty ($10^{8-9}$M$_\odot$) galaxies at high redshift \citep{Smail97, Hughes98, Barger98, Eales99}. These submillimetre galaxies (SMGs) are thought to be undergoing intense, obscured starbursts \citep{Greve05, Alexander05, Tacconi06, Pope08}, which may be driven by gas-rich major mergers \citep[e.g.][]{Tacconi08, Engel10, Wang11, Riechers11, Bothwell13}, or by streams of cold gas \citep{Dekel09, Dave10, vandeVoort11a}. Measurements of the stellar masses, star-formation histories (SFHs) and clustering properties of SMGs indicate that they may be the progenitors of massive elliptical galaxies observed in the local Universe \citep{Eales99, Blain02, Dunne03b, Chapman05, Swinbank06, Hainline11, Hickox12}. Due to their extreme far-infrared (FIR) luminosities, SMGs were proposed to be the high-redshift analogues of local ultra-luminous infrared galaxies (ULIRGs), which are undergoing major mergers. Recent observations \citep{Magnelli12, Targett13} and simulations \citep{Dave10, Hayward11} have suggested that the SMG population is a mix of starbursts and massive star-forming galaxies, with the most luminous SMGs ($L_\mathrm{IR}\sim10^{13}$\,L$_\odot$) being major mergers and lower luminosity SMGs being consistent with turbulent, star-forming disks. There are, however, still considerable uncertainties in the physical properties of SMGs \citep[e.g.][]{Hainline11, Michalowski12}, which affect our view of how SMGs fit into the general picture of galaxy evolution. SMGs are found to typically reside at $z\sim1-3$ \citep{Chapman05, Chapin09, Lapi11, Wardlow11, Yun11, Michalowski12b, Simpson14}, partly due to the effect of the negative $k$-correction, which allows galaxies that are bright at $>850\mu$m to be detected across a large range in redshift \citep{Blain02}.
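The strength of the negative $k$-correction is easy to verify with a toy calculation: for a modified blackbody SED, the observed $850\mu$m flux density of a fixed-luminosity source is nearly constant over a wide redshift range. A minimal Python sketch (the values $T=35$\,K and $\beta=1.5$ are illustrative assumptions, not fitted quantities from this work) is:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM
from astropy.constants import c, h, k_B

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)

def s_obs(z, lam_obs=850 * u.um, T=35 * u.K, beta=1.5):
    # Relative observed flux density (arbitrary normalization):
    # S_nu_obs ~ (1+z) * L_nu(nu_rest) / D_L^2, nu_rest = nu_obs*(1+z),
    # with L_nu ~ nu^beta B_nu(T) (modified blackbody).
    nu = ((c / lam_obs) * (1 + z)).to(u.Hz).value
    x = (h * nu * u.Hz / (k_B * T)).decompose().value
    dl = cosmo.luminosity_distance(z).to(u.Mpc).value
    return (1 + z) * nu**(3 + beta) / np.expm1(x) / dl**2

ref = s_obs(1.0)
for z in [0.5, 1.0, 2.0, 4.0, 6.0]:
    print(z, round(s_obs(z) / ref, 2))   # nearly constant for z >~ 1
\end{verbatim}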
Due to the long integration times required to survey a large area of sky at $850\mu$m, submillimetre survey volumes at low redshift have until recently been relatively small, leading to difficulties in obtaining a representative sample of dusty galaxies at low redshift. With the launch of the \emph{Herschel Space Observatory} \citep{Pilbratt10}, we can now get an unprecedented view of dust in local galaxies. \emph{Herschel} observed at FIR--submillimetre wavelengths across and beyond the peak of the dust emission, making it an unbiased tracer of the dust mass in galaxies. The \emph{Herschel} Astrophysical TeraHertz Large Area Survey (H-ATLAS, \citealt{Eales_ATLAS10}) is the largest-area extragalactic survey carried out with \emph{Herschel} and has allowed us to quantify the amount of dust in galaxies at low redshift. By studying galaxies selected at $250\mu$m, \citet{Smith12} found an average dust mass of $9.1\times10^{7}$\,M$_{\odot}$ in local ($z<0.35$) dusty galaxies. Furthermore, the dust mass in galaxies is found to increase by a factor of $3-4$ between $0<z<0.3$ \citep{Dunne11, Bourne12a}, which may be linked to higher gas fractions in galaxies at earlier epochs \citep{Geach11, Tacconi13, Combes13}. The question of how the modes of star formation in SMGs relate to those in local star-forming galaxies warrants a comparison between galaxy samples. Comparisons between SMGs and the low-redshift galaxy population have been carried out for small galaxy samples: e.g., \citet{Santini10} compared the properties of 21 SMGs to 26 local spirals from SINGS \citep{Kennicutt03} and 24 local ULIRGs from \citet{Clements10}, and found that SMGs have dust-to-stellar mass ratios 30 times larger than those of local spirals, and a factor of 6 larger than those of local ULIRGs. However, a comparison to large, representative samples of the general dusty galaxy population has not yet been carried out. In this paper we investigate the physical properties of dusty galaxies over a wide range in cosmic time, utilising carefully selected samples of high- and low-redshift galaxies which occupy comparable co-moving volumes of $\sim 10^{8}$\,Mpc$^{3}$. We describe our sample selection in \S\ref{sec:sample_selection} and our spectral energy distribution (SED) fitting method, used to explore the properties of SMGs, in \S\ref{sec:SED_fitting}. Our results are presented in \S\ref{sec:results} and our conclusions are in \S\ref{sec:conclusions}. We adopt a cosmology with $\Omega_m=0.27,\,\Omega_{\Lambda}=0.73$ and $H_0=71\, \rm{km\,s^{-1}\,Mpc^{-1}}$. | \label{sec:conclusions} We have presented the physical properties and SEDs of a rest-frame $250\mu$m selected sample of massive, dusty galaxies in the range $0<z<5.3$. The sample consists of a compilation of 29 high-redshift SMGs with photometry from \citet{Magnelli12}, and 843 dusty galaxies at $z<0.5$ from the \emph{Herschel}-ATLAS, selected to have a similar stellar mass to the SMGs. Both samples have panchromatic photometry from the rest-frame UV to the submillimetre, which allowed us to fit SEDs to derive statistical constraints on galaxy physical parameters using an energy balance technique. We compared the physical properties of the high- and low-redshift samples and found significant differences in the submillimetre-selected galaxy populations.
Our main results are as follows: \begin{itemize} \item The sample of $z>1$ SMGs has an average SFR of $390^{+80}_{-70}\,$M$_\odot$yr$^{-1}$ which is around 120 times that of the low redshift sample matched in stellar mass to the SMGs (SFR $=3.3\pm{0.2}$\,M$_\odot$yr$^{-1}$). This is consistent with the observed evolution in characteristic SFR of galaxies out to $z\sim2$. The SMGs harbour an order of magnitude more dust ($1.2^{+0.3}_{-0.2}\times{10}^9\,$M$_\odot$), compared to $(1.6\pm0.1)\times{10}^8\,$M$_\odot$ for low redshift dusty galaxies selected to have a similar stellar mass. \item From the SED analysis we find that a large fraction of the dust luminosity in SMGs originates from star-forming regions, whereas at lower redshifts the dust luminosity is dominated by the diffuse ISM. This means that for SMGs the SFR can be reliably predicted from the K98 calibration between far-infrared luminosity and SFR. Where the dust luminosity is produced mainly by the diffuse ISM component, the \citet{Kennicutt98} relation will over-estimate the SFR, which is the case for the majority of low redshift H-ATLAS galaxies. \item The median SED of the SMGs is more luminous, has a higher effective temperature and is more obscured, with stars in birth clouds experiencing a factor of $\sim2$ more obscuration compared to the median low redshift H-ATLAS SED. There is a sudden change in the optical--UV SED between the highest SSFR H-ATLAS galaxies and the SMGs, which cannot be due to a sharp change in the contribution to the total dust luminosity from birth clouds. Since the effective optical depth in SMGs is higher than in H-ATLAS galaxies the change in SED shape may be due to a physical difference in the structure of birth clouds in SMGs. \item We find that at the same dust mass the SMGs are offset by 0.9\,dex towards a higher SFR compared to the low redshift H-ATLAS galaxies. This is not only due to the higher gas fraction in SMGs but also because they are undergoing a more efficient mode of star formation. The offset cannot be explained by differences in the metallicities between the samples, or variations in the dust emissivity. \item The offset in SFR and dust mass between the SMGs and low redshift galaxies is similar to that found in CO studies. Due to the consistency between observations of gas and dust in individual SMGs and the gas-to-dust ratios implied by the ratio of FIR to CO luminosity we conclude that dust mass is as good a tracer of molecular gas as CO. \item At the same gas fraction, SMGs/ULIRGs have more star-formation activity than `normal' star-forming $250\mu$m selected sources. This is consistent with their best-fit SFHs which are bursty in nature. \end{itemize} | 14 | 3 | 1403.2994 |
1403 | 1403.0958_arXiv.txt | We investigate \ion{C}{IV} broad absorption line (BAL) variability within a sample of 46 radio-loud quasars (RLQs), selected from SDSS/FIRST data to include both core-dominated (39) and lobe-dominated (7) objects. The sample consists primarily of high-ionization BAL quasars, and a substantial fraction have large BAL velocities or equivalent widths; their radio luminosities and radio-loudness values span $\sim$2.5 orders of magnitude. We have obtained 34 new Hobby-Eberly Telescope (HET) spectra of 28 BAL RLQs to compare to earlier SDSS data, and we also incorporate archival coverage (primarily dual-epoch SDSS) for a total set of 78 pairs of equivalent width measurements for 46 BAL RLQs, probing rest-frame timescales of $\sim$80--6000~d (median 500~d). In general, only modest changes in the depths of segments of absorption troughs are observed, akin to those seen in prior studies of BAL RQQs. Also similar to previous findings for RQQs, the RLQs studied here are more likely to display BAL variability on longer rest-frame timescales. However, typical values of $|{\Delta}EW|$ and $|{\Delta}EW|/{\langle}EW{\rangle}$ are $\sim$40$\pm20$\% lower for BAL RLQs when compared with those of a timescale-matched sample of BAL RQQs. Optical continuum variability is of similar amplitude in BAL RLQs and BAL RQQs; for both RLQs and RQQs, continuum variability tends to be stronger on longer timescales. BAL variability in RLQs does not obviously depend upon their radio luminosities or radio-loudness values, but we do find tentative evidence for greater fractional BAL variability within lobe-dominated RLQs. Enhanced BAL variability within more edge-on (lobe-dominated) RLQs supports some geometrical dependence to the outflow structure. | Accretion in quasars appears to lead naturally to the formation of outflows that may regulate supermassive black hole growth and provide feedback to the host galaxy (e.g., Arav et al.~2013 and references therein), potentially helping to quench star formation. In radio-quiet quasars (RQQs) such outflows are most readily apparent as broad absorption lines (BALs; Weymann et al.~1981, 1991) found blueward\footnote{A rare handful of quasars display redshifted BALs, perhaps from an infall or a rotating outflow (Hall et al.~2013).} of UV emission lines; these features can occur at a wide range of velocities (to greater than 0.1$c$) and are observed in 10--20\% of optically selected quasars (e.g., Hewett \& Foltz 2003). The ``orientation'' model hypothesizes that BALs are common to RQQs but only apparent over a limited range of inclinations to the line of sight. (While successful at explaining many observed properties of BAL and non-BAL quasars, this simple model may not capture the full physical complexity of outflow generation and structure.) The prevalence of velocity structure within \ion{C}{IV} BALs that matches the Ly$\alpha$--N~V velocity offset (Weymann et al.~1991; Arav \& Begelman 1994) indicates that these BAL outflows are radiatively accelerated, as does the correlation between maximum outflow velocity and UV luminosity (Laor \& Brandt 2002; Ganguly et al.~2007). Simulations (e.g., Proga et al.~2000) demonstrate that winds can be driven off a classical accretion disk, with interior ``shielding gas'' (Murray et al.~1995) preventing overionization and likely accounting for \hbox{X-ray} absorption in BAL QSOs (e.g., Gallagher et al.~2006; see also Gibson et al.~2009a and Wu et al.~2010 for discussion of mini-BALs). 
Observational evidence favoring the disk-wind model includes the relatively high degree of polarization among BAL quasars in general and in BAL troughs in particular (e.g., Ogle et al.~1999; Young et al.~2007; DiPompeo et al.~2011) and the similarity of the spectral energy distributions of BAL and non-BAL quasars (e.g., Willott et al.~2003; Gallagher et al.~2007; but see also DiPompeo et al.~2013). On the other hand, BAL quasars have been argued to accrete at particularly high Eddington ratios (e.g., Ganguly et al.~2007), as inferred based on apparent [O~III] weakness (Yuan \& Wills 2003). Quasars also possessing low-ionization BALs (LoBALs, in contrast to the more common high-ionization only HiBALs) in particular tend toward weak [O~III] and seemingly lack [Ne~V] (e.g., Zhang et al.~2010). After accounting for intrinsic absorption, Luo et al.~(2013) estimate that 17--40\% of BAL quasars are still \hbox{X-ray} weak, and suggest that \hbox{X-ray} weak quasars may more easily launch outflows (due to reduced overionization) with potentially large covering factors. An initial lack of detected BALs among radio-loud quasars (RLQs) was interpreted to indicate that jets and BALs were mutually exclusive (e.g., Stocke et al.~1992). This paradigm was challenged by a series of discoveries of individual BAL RLQs (e.g., Becker et al.~1997; Brotherton et al.~1998; Wills et al.~1999; Gregg et al.~2000) and then undermined by the identification of a population of BAL RLQs (e.g., Becker et al.~2000, 2001; Menou et al.~2001; Shankar et al.~2008), mostly detected in the VLA 1.4~GHz FIRST survey (Becker et al.~1995) with systematic optical spectroscopic coverage obtained by the FIRST Bright Quasar Survey (FBQS; White et al.~2000) and the Sloan Digital Sky Survey (SDSS; York et al.~2000). Several BAL RLQs display radio spectral and/or morphological properties similar to those of compact steep spectrum (CSS) or GHz-peaked spectrum (GPS) radio sources, which are commonly presumed to be young (e.g., Stawarz et al.~2008), although in general BAL RLQ radio morphologies do not require youth (Bruni et al.~2013). Additionally, dust-reddened quasars (plausibly newly active; Urrutia et al.~2008; Glikman et al.~2012) appear more likely to host low-ionization BALs (Urrutia et al.~2009), suggesting that at least ``LoBALs'' may be linked to source age rather than inclination. These observations, in concert with the remaining scarcity of BALs within strongly radio-loud and lobe-dominated objects, have revived alternative ``evolutionary'' models (Gregg et al.~2006) that associate BALs with emerging young quasars clearing their kpc-scale environment through outflows spanning equatorial through polar (e.g., Zhou et al.~2006) latitudes (though a purely evolutionary model requires fine-tuning to match observations; Shankar et al.~2008). The reality may lie between a stark orientation/evolution dichotomy, with some types of quasars more able to host winds that themselves have a range of covering factors (Richards et al.~2011; DiPompeo et al.~2012, 2013). In any event, it is currently unclear whether BALs in RLQs have a similar physical origin to those in RQQs, or indeed whether BALs in RLQs are even a homogeneous class. Variability studies provide one method of assessing BAL structure, and they can potentially constrain the location and dynamics of the UV absorber. 
In principle, BAL variability could be induced through an alteration in the ionization parameter as a result of fluctuations in the incident flux (e.g., Trevese et al.~2013); the variability timescale then constrains the absorber density (Netzer et al.~2002) and/or distance (Narayanan et al.~2004). However, this is unlikely to be the dominant mechanism for cases in which the \ion{C}{IV} variability is confined to a restricted velocity segment within the full BAL absorption trough, which indeed constitute the majority of observed variability behavior (e.g., Gibson et al.~2008; Capellupo et al.~2013; see also discussion of saturated \ion{C}{IV} outflows in the latter). Alternatively, depth changes within BAL profiles may plausibly arise from dynamical restructuring of the absorber along the line of sight, with the variability timescale providing estimates of crossing speeds and location (Risaliti et al.~2002; Capellupo et al.~2013). Simulations suggest that although the shielding component of the wind can be dynamic (Sim et al.~2012; see also observational X-ray results from Saez et al.~2012), synthesized absorption profiles are relatively constant at lower velocities whereas variability becomes more pronounced at higher velocities, which correspond to well-shielded material streaming to larger distances (Proga et al.~2012). Such disk azimuthal asymmetries can potentially link variability, at differing amplitudes, across multiple velocity components (Filiz Ak et al.~2012). In general, dynamical wind outflow models can produce extremely complex behavior (e.g., Giustini \& Proga 2012). Broad absorption line variability within radio-quiet quasars has now been characterized through several studies, many of which make use of SDSS data for one or more epochs of coverage. BALs in RQQs often show minor depth changes within narrow portions of troughs (Barlow 1993; Lundgren et al.~2007; Gibson et al.~2008, 2010; Capellupo et al.~2011; hereafter B93, L07, G10, and C11, respectively). Variability is perhaps more common within shallower or higher-velocity BALs (L07; C11) which are occasionally even observed to disappear completely (Filiz Ak et al.~2012). BALs of greater equivalent width ($EW$) tend to show greater absolute changes in $EW$, and BALs spanning a wider velocity range tend to show variability within a larger absolute subset of velocity bins (Gibson et al.~2008, their Figure 9 and Equation 1, respectively). However, the absolute value of the fractional change in $EW$ is greater in BALs of lower $EW$ (L07), and similarly within a given velocity segment shallower absorption increases the likelihood of variability (C11). Acceleration of BALs is rarely observed (e.g., Hall et al.~2007; Gibson et al.~2008). Changes in velocity width can sometimes transition features out of (or into) formal BAL classification (e.g., Rodr{\'{\i}}guez Hidalgo et al.~2013; Misawa et al.~2005; see also discussion in Gibson et al.~2009a), indicative of a connection between narrow-absorption line or mini-BAL troughs and the more extreme BALs. While BAL variability is not necessarily monotonic, in general BALs in RQQs tend to vary more often and more strongly on longer timescales (Gibson et al.~2008; G10; C11), although variability on only rest-frame $\sim$8--10~d has been seen (Capellupo et al.~2013).
These results provide a baseline for RQQ BAL variability, but to date there has not been a systematic survey of BAL variability within RLQs\footnote{A preliminary sketch of some portions of this paper is given in Miller et al.~(2012). Filiz Ak et al.~(2013) study BAL quasars in SDSS and briefly compare RQQs to a small set of RLQs.} and so no statistical comparison has been possible. This work conducts such a study through measurement of \ion{C}{IV} absorption at multiple epochs in a large sample of RLQs. We have obtained 34 new spectra for 28 BAL RLQs, primarily selected as such from FIRST/SDSS data, with the Hobby-Eberly Telescope (HET; Ramsey et al.~1998) Low-Resolution Spectrograph (LRS; Hill et al.~1998). This sample was chosen to cover a wide range in radio-loudness and luminosity and also BAL velocity and equivalent width. BAL variability is assessed through a comparison of the HET/LRS spectra to the earlier SDSS spectra. We also incorporate BAL variability measurements obtained for 18 additional RLQs with two (or more) SDSS or archival spectra available. Together, the 46 RLQs have 78 pairs of BAL equivalent width measurements, probing rest-frame timescales of $\sim$80--6000~d (median 800~d). \begin{figure} \includegraphics[scale=0.42]{rstar_ew.ps} \caption{\small (a) Radio-loudness plotted versus \ion{C}{IV} BAL EW. The BAL RLQs studied here from HET/SDSS, SDSS/SDSS, and other archival pairs of spectra are plotted as diamonds, triangles, and squares, respectively. Lobe-dominated RLQs are marked with double symbols. Additional BAL RLQs, identified from the G09 BAL catalog matched to FIRST data, are also shown (crosses). The dashed/dotted lines show increasingly restrictive cuts in $R^{*}$. (b) Distribution of \ion{C}{IV} BAL EW for RQQs and groups of RLQs.} \end{figure} This paper is organized as follows: $\S$2 describes the selection and radio and BAL characteristics of the sample, $\S$3 quantifies BAL variability, $\S$4 compares to results for BAL RQQs and investigates dependencies upon continuum variability and radio properties, and $\S$5 summarizes and concludes. We use positive values of equivalent width (given in units of rest-frame~\AA) to quantify BAL absorption strength, and express changes in equivalent width such that a positive difference corresponds to the BAL deepening between observations. A standard cosmology with $H_{\rm 0}=70$~km~s$^{-1}$~Mpc$^{-1}$, ${\Omega}_{\rm M}=0.3$, and ${\Omega}_{\rm \Lambda}=0.7$ is assumed throughout. Monochromatic luminosities are given in units of erg~s$^{-1}$~Hz$^{-1}$ and expressed as logarithms, with ${\ell}_{\rm r}$ and ${\ell}_{\rm uv}$ determined at rest-frame 5~GHz and 2500~\AA, respectively. Unless otherwise noted, errors are quoted at 1$\sigma$. Object names are given as SDSS J2000 and taken from the DR7 Quasar Catalog of Schneider et al.~(2010; see also Schneider et al.~2005, 2007). \begin{figure} \includegraphics[scale=0.42]{lr_vmax.ps} \caption{\small (a) Radio luminosity plotted versus \ion{C}{IV} maximum outflow velocity, with symbols as in Figure~1. Note that the maximum outflow velocity is restricted to 25000~km~s$^{-1}$ in G09. The dashed/dotted lines show increasingly restrictive cuts in ${\ell}_{\rm r}$. (b) Distribution of $v_{\rm max}$ for RQQs and for RLQs with cuts as above. 
} \end{figure} | Below, we investigate the distributions of absolute change in equivalent width $|{\Delta}EW|$ and absolute fractional change in equivalent width $|{\Delta}EW|/{\langle}EW{\rangle}$ as a function of rest-frame interval between spectral observations (${\Delta}{\tau}$; Figure~10) and of average BAL equivalent width (${\langle}EW{\rangle}$; Figure~11). Variability is also assessed with respect to velocity width (Figure~12), and comparisons are made to BAL variability patterns within RQQs. Optical continuum variability is assessed and quantified for BAL RLQs (Figure~13) and BAL RQQs (Appendix~B, Figures 1 and 2) and compared across classes (Figures 14 and 15). For RLQs, the impact of radio loudness or luminosity ($R^{*}$ or ${\ell}_{\rm r}$) upon BAL variability is investigated (Figure~16). The longest-separation total BAL absorption properties for BAL RLQs and for the comparison sample of BAL RQQs are given in Tables~1 and 4, respectively. For a portion of the analysis it is convenient to distinguish between subsamples of BAL RLQs grouped by radio properties. RLQs are separated into core-dominated or lobe-dominated, low or high radio luminosity (at ${\ell}_{\rm r}=33$), and low or high radio loudness (at $R^{*}=2$). Note that while the archival coverage of PG~1004+130 (lobe-dominated, low radio luminosity, high radio loudness) extends to longer timescales than are typical within our sample, for this object the variability over $\sim$1000~d is actually larger than for the longest separation $\sim$6000~d measurement used. Median and mean properties, along with Kolmogorov-Smirnov (KS) test probabilities for selected comparisons, are provided in Table~5. Correlation likelihoods (non-parametric Kendall $\tau$ and Spearman $\rho$) and coefficients along with best-fit linear regression slopes and errors (calculated using the {\tt IDL} robust\_linefit routine) for each tested relationship are listed in Table~6. \begin{figure*} \includegraphics[scale=0.73]{aew_dew.ps} \caption{\small Change in BAL equivalent width versus average BAL equivalent width for RQQs (diamonds) and RLQs (circles). Symbols as in Figure~10. There is a tendency ($\simgt$95\% confidence) toward greater $|{\Delta}EW|$ within stronger BALs, for both RLQs and RQQs.} \end{figure*} \subsection{Comparison to BAL RQQs} To avoid potential biases arising from repeated sampling of particular objects (recall we have 78 pairs of spectra of 46 BAL RLQs), the longest-separation measurement of variability available for each of the 46 BAL RLQs is used for all statistical comparisons to RQQs. We constructed a comparison sample of BAL RQQs from previous studies of BAL variability (B93, L07, G09, G10, and C11), for verified radio-quiet\footnote{We checked radio-loudness against FIRST data where possible, otherwise against the NASA/IPAC Extragalactic Database (NED; {\tt http://ned.ipac.caltech.edu/}); note that the RQQs LBQS 0055+0025 and [HB89] 2225$-$055 are near unassociated radio sources.} quasars; recall from $\S$2.1 that the handful of RLQs covered in these studies are included in our radio-loud sample. In particular, we use 25 pairs of spectra from dual-epoch SDSS measurements from L07 and another 28 from G09 (requiring ${\Delta}\tau>150$~d, hence distinct from L07); 16 pairs of spectra from dual-epoch Lick measurements from B93 (some additional measurements from this reference are superseded by C11); 21 pairs of spectra from Lick/SDSS/HET spectra from G10; and 25 pairs of spectra from Lick/SDSS/MDM spectra from C11.
Together, these studies cover short (20--120~d; L07), short/intermediate (80--400~d; B93), intermediate (130--600~d; G09), intermediate/long (100--200~d and 1300--3000~d; C11), and long (1300--2500~d; G10) timescales. The combined sample of RQQs then includes 115 pairs of BAL absorption measurements. From these, we construct a longest-separation sample with a single measurement of BAL variability for each of the 94 unique BAL RQQs. Four RLQs and six RQQs have small absorption equivalent widths ${\langle}EW{\rangle}<3.5$~\AA~which are not typical of BALs; following Gibson et al.~(2008), we compare RQQs and RLQs after removal of these objects. The resulting filtered longest-separation samples of 42 RLQs and 88 RQQs with ${\langle}EW{\rangle}\ge3.5$~\AA~span similar ranges in redshift and luminosity, but have inconsistent distributions (KS test $p<0.03$ of being drawn from the same underlying population) of both ${\Delta}\tau$ and ${\langle}EW{\rangle}$, in the sense that these RQQs cover longer timescales and have larger BAL equivalent widths. We constructed a matched group of 42 RQQs by selecting objects with ${\Delta}\tau$ and ${\langle}EW{\rangle}$ values similar to those of the filtered RLQs, without consideration of any variability properties (the KS probabilities for the matched RQQ sample are now $p=0.26$ and $p=0.56$ for ${\Delta}\tau$ and ${\langle}EW{\rangle}$, respectively). The filtered samples of 42 RLQs and 88 RQQs were also divided into groups of short and long timescale (at ${\Delta}\tau=500$~d, which is approximately the median timescale for both samples) and moderate and large average equivalent width (at ${\langle}EW{\rangle}=20$~\AA, which is approximately the median equivalent width for the BAL RQQs). In general, BAL variability within RLQs appears similar to that within RQQs. Qualitatively, variability within RLQs, when observable, typically consists of a modest change in the absorption depth, often within a discrete section of the full trough (Figures 4, 7, 8, and Appendix~A). Velocity shifts in the structure of BALs appear to be rare (one candidate from our 46 BAL RLQs; Figure~9). These are similar to established tendencies within BAL RQQs ($\S$1). Quantitatively, prior to filtering or matching, the absolute change in $EW$ or fractional variability is lower within RLQs (e.g., the mean $|{\Delta}EW|/{\langle}EW{\rangle}$ is 0.12$\pm$0.02 for RLQs versus 0.24$\pm$0.03 for RQQs). After filtering out objects with ${\langle}EW{\rangle}<3.5$~\AA, the fractional variability in RLQs is still smaller (0.10$\pm$0.02 versus 0.19$\pm$0.02), and this difference persists in the matched RQQs (0.17$\pm$0.03; this is a $\sim$2$\sigma$ difference); the KS test probability of $p=0.01$ is likewise marginally inconsistent with RLQs and matched RQQs possessing similar BAL variability. The percentage of RLQs displaying significant BAL variability is 21\%$\pm$7\% (Poisson errors; Table~3 and Appendix~A), lower than is typical for BAL RQQs on similar timescales (e.g., C11). It is possible that this comparison could be influenced by systematic differences in how BAL variability is measured across different studies; for example, our approach of locking continuum and emission line fit parameters as well as BAL edges between epochs wherever possible may produce lower changes in $EW$ than would result from completely independent fitting and measurement at each epoch. An additional point of concern is that our identification of variability is sensitive to noise.
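For illustration, the two-sample comparison described above reduces to a standard KS test; the Python sketch below uses placeholder draws with the quoted sample sizes and means, not the measured values from Tables~1 and 4.

\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholder draws standing in for |dEW|/<EW> of the 42 filtered RLQs
# and the timescale/EW-matched RQQs (means ~0.10 and ~0.17, as quoted).
frac_rlq = rng.exponential(0.10, size=42)
frac_rqq = rng.exponential(0.17, size=42)

stat, p = ks_2samp(frac_rlq, frac_rqq)
print(f"KS statistic = {stat:.2f}, p = {p:.3f}")
# A p-value of ~0.01, as found for the real samples, would be marginally
# inconsistent with a common parent distribution of fractional variability.
\end{verbatim}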
There is an apparent anti-correlation between BAL variability and optical magnitude (at $\sim2\sigma$ significance) for the combined RLQ and RQQ sample. However, the optical magnitudes of the RLQs are similar to those of the matched RQQs (KS test probability $p=0.54$), with means of $18.46\pm0.14$ and $18.34\pm0.10$, respectively. We conservatively interpret our results to indicate that BAL RLQs show similar or perhaps decreased BAL variability as compared to BAL RQQs. This is consistent with the findings of Filiz Ak et al.~(2013) of no significant differences in variability, for a smaller sample of BAL RLQs. BALs in RLQs are more likely to vary and display a greater variability amplitude on longer timescales (Figure~10), similar to established trends for BAL RQQs ($\S$1). The mean $|{\Delta}EW|/{\langle}EW{\rangle}$ is $0.13\pm0.03$ ($0.06\pm0.02$) for ${\Delta}\tau\ge500$~d ($<500$~d). The corresponding mean values for a matched sample of BAL RQQs are somewhat greater ($0.23\pm0.04$ and $0.15\pm0.02$, respectively), although within the longer timescale bin the full distributions of $|{\Delta}EW|/{\langle}EW{\rangle}$ are not inconsistent. Kendall and Spearman tests also find a significant if only moderately strong correlation between ${\Delta}\tau$ and $|{\Delta}EW|$ (Table~6) for both RLQs and RQQs. The best-fit slope for $|{\Delta}EW|$ as a function of $\log{{\Delta}\tau}$ is greater for RQQs. Both BAL RLQs and BAL RQQs also tend to have larger absolute (but not fractional) changes in equivalent width within stronger BALs (Figure~11). The mean ${\Delta}EW$ for RLQs is 3.2$\pm$1.2~\AA~(1.1$\pm$0.2~\AA) for ${\langle}EW{\rangle}\ge20$~\AA~($<20$~\AA). Correlation tests again provide agreement in identifying a significant if moderate trend for both RLQs and RQQs, again with larger best-fit slopes for RQQs. Note that the mean and median timescales are similar in the two groupings of RLQs split by average BAL equivalent width, so we can be confident that this is a distinct trend from the correlation with timescale (the average equivalent widths are also similar between the RLQ groups split by timescale). This is not the case for the RQQs, so here the RLQs provide a cleaner demonstration of the trends (previously discovered in RQQs; see $\S$1) toward increasing BAL variability on longer timescales or (in an absolute but not fractional sense) within stronger BALs. \begin{figure} \includegraphics[scale=0.38]{vel_dv.ps} \caption{\small Illustration of velocity width over which BALs varied for RLQs (blue circles) and RQQs from C11 (red diamonds). The open symbols are the sub-sections of a given trough that varied. For the RQQs, 3 objects have variability within two distinct troughs (connected with lines). For RQQs we only plot the longest separation measurements for clarity. For the RLQs, the longest separation measurement is plotted as larger symbols, the shorter epoch(s) as smaller symbols.} \end{figure} \begin{figure*} \includegraphics[scale=0.75]{fig_rlq_mags.ps} \caption{\small Optical continuum magnitude for BAL RLQs (labeled by truncated RA), plotted versus rest-frame timescale where the zero point is at MJD 53000. The red line is a running mean within rest-frame 200~d bins after outlier rejection (green points omitted, black points retained). A measure of variability is printed at lower right for each object; see $\S$4.2 for details. 
Objects with mild and strong variability are labeled as v and Var, respectively.} \end{figure*} BALs in RLQs tend to vary within only a fraction of the full velocity width of the BAL trough, similar to RQQs (Gibson et al.~2008; G10; C11). We define $v_{\rm BAL}$ to be the velocity span calculated from the wavelength edges of the BAL, as defined in $\S$3.2 and listed in Table~3, and $v_{\rm var}$ to be the velocity span of the variable portion within the BAL. Figure~12 plots the velocity widths $v_{\rm var}$ and $v_{\rm var}/v_{\rm BAL}$ against $v_{\rm BAL}$, for those RLQs with significant BAL variability and for RQQs from C11. Here the open symbols show the segments of a given trough that varied (three RQQs with variability within two distinct troughs are connected with lines). For RLQs the longest separation measurement is plotted with larger symbol size, while for RQQs only the longest separation measurement is plotted for clarity. For both RLQs and RQQs the velocity width of the varying regions tends to be only a few thousand km~s$^{-1}$, as previously found for RQQs by Gibson et al.~(2008). These and the previous results are consistent with a simple scenario in which component segments within a given BAL have a uniform and independent probability to vary, as could arise from moving material at different radial velocities passing transversely through the line of sight. It would also be of interest to compare and contrast variability within the \ion{Si}{IV} absorption region to that discussed here for \ion{C}{IV} BALs. Unfortunately, our sample is selected in $z$ for \ion{C}{IV} coverage and several of these BAL RLQs do not have \ion{Si}{IV} coverage, which makes a robust statistical comparison difficult. In RQQs, \ion{Si}{IV} absorption may be more variable than \ion{C}{IV} (Capellupo et al.~2012; but see also G10) and segments at similar velocities may show coordinated variability (G10; Capellupo et al.~2012); a larger sample (restricted to higher redshifts) could test whether this also holds for RLQs. \subsection{Optical continuum variability} We next investigate continuum variability in BAL quasars. One motivation for considering optical continuum variability is that it could be indicative of direct incident flux altering the absorber ionization state (e.g., Trevese et al.~2013) or covering factor. However, such potential connections are better explored with EUV or X-ray observations (e.g., Gallagher et al.~2004; Saez et al.~2012; Hamann et al.~2013), which directly probe the high-energy radiation relevant to BAL shielding and driving. Of greater relevance, a mutual origin of BAL outflows and optical emission in the accretion disk might link continuum and absorption line variability; for example, a particularly inhomogeneous disk in some quasars (e.g., speculatively due to near-Eddington accretion) might facilitate the launching of absorbing clumps that are then observable as BALs. Optical magnitudes at multiple epochs were obtained from the Catalina Sky Survey{\footnote{\tt http://nessi.cacr.caltech.edu/DataRelease/}} (Drake et al.~2009), Data Release 2. These unfiltered CCD measurements of the quasar optical continua are plotted for BAL RLQs in Figure~13 and for BAL RQQs in Appendix~B. The magnitudes are screened for outliers using two passes of 3$\sigma$ rejection, after which the median magnitude is taken as the baseline brightness (dashed line in each frame). 
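A minimal Python sketch of this outlier screening, and of the binned running mean described in the next paragraph, is given below; the function names are ours and the actual reduction may differ in detail.

\begin{verbatim}
import numpy as np

def clipped_baseline(mag, n_pass=2, n_sigma=3.0):
    """Two passes of sigma rejection; the median of the surviving
    points defines the baseline brightness (dashed line, Figure 13)."""
    keep = np.ones(mag.size, dtype=bool)
    for _ in range(n_pass):
        mu, sd = mag[keep].mean(), mag[keep].std()
        keep &= np.abs(mag - mu) < n_sigma * sd
    return keep, np.median(mag[keep])

def running_mean(t_rest, mag, width=200.0, min_pts=4):
    """Mean magnitude in rest-frame bins of `width` days, keeping only
    bins with at least min_pts valid points (solid curve, Figure 13)."""
    edges = np.arange(t_rest.min(), t_rest.max() + width, width)
    t_out, m_out = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (t_rest >= lo) & (t_rest < hi)
        if sel.sum() >= min_pts:
            t_out.append(0.5 * (lo + hi))
            m_out.append(mag[sel].mean())
    return np.array(t_out), np.array(m_out)
\end{verbatim}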
Timescales are converted to rest-frame with MJD 53000 as the fixed zero point (for reference, the DR2 release date for SDSS is 53079). The plots show a running mean calculated within rest-frame timescale bins of 200~d for bins containing at least 4 valid measurements (plotted as a solid red line). We quantify optical continuum variability using a structure function, following the general approach of Rengstorf et al.~(2006; see also Vanden Berk et al.~2004; di Clemente et al.~1996). In addition to comparing optical continuum variability in BAL RLQs versus BAL RQQs, we wish to test whether optical continuum variability is linked to BAL variability. Given the high-cadence but irregular monitoring of the Catalina Sky Survey and the desired application of a uniform procedure for assessing variability within each BAL quasar, we choose to bin the $\sigma$-clipped magnitudes within intervals of 20 rest-frame days (here we are only interested in variability on longer timescales) prior to calculating the structure function. Errors within each bin are estimated including both the provided measurement uncertainties and the empirical scatter and then additionally enhanced by 30\%. This slight initial smoothing and conservative inflation of errors does not impact the relative ranking between individual objects or the BAL RLQs versus BAL RQQs comparison but may give somewhat lower absolute structure function values than other approaches. The structure function is then considered for time lags up to 1000 rest-frame days, binned by 100 days. For each individual quasar, the sum of these 10 measurements (or weighted for truncated coverage) is used to quantify optical continuum variability, and these values are listed in Figure~13 and Appendix~B. \begin{figure} \includegraphics[scale=0.45]{balq_sf.ps} \caption{\small Optical continuum variability as a function of time lag for BAL RLQs (blue stars) and BAL RQQs (red diamonds). The structure function is calculated as detailed in $\S$4.2. Both BAL RLQs and RQQs show greater variability on longer timescales, and tend toward similar variability for any given interval.} \end{figure} \begin{figure} \includegraphics[scale=0.45]{baloptvar.ps} \caption{\small Optical continuum variability versus BAL variability for BAL RLQs (blue stars) and BAL RQQs (red diamonds). There is no apparent correlation. The one-dimensional variability distributions (shown as peak-normalized histograms) are similar, with BAL RLQs perhaps slightly less variable by both metrics.} \end{figure} The structure functions for BAL RLQs and for BAL RQQs averaged across objects (rather than across raw magnitude measurements) are given in Figure~14. The previously known tendency for quasars to display greater variability on longer timescales (e.g., above references) is clearly also present for BAL quasars, both RLQs and RQQs. There does not appear to be a significant difference between BAL RLQs and BAL RQQs in optical continuum variability (see Figure~14; if anything, BAL RLQs may be less variable). The mean continuum variability for BAL RLQs (for consistency again filtering out objects with ${\langle}EW{\rangle}<3.5$~\AA) is $0.37\pm0.11$, similar to the value of $0.47\pm0.07$ for BAL RQQs. From 41 BAL RLQs, 7 (3) or 17\% (7\%) show mild (strong) continuum variability; in comparison, from 85 BAL RQQs, 22 (7) or 26\% (8\%) show mild (strong) continuum variability. We find no significant correlation between optical continuum and BAL variability (Table~6 and Figure~15). 
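The per-object variability metric described above can be sketched as follows; we adopt a common noise-corrected rms definition of the structure function here, whereas the text follows Rengstorf et al.~(2006), which differs in detail.

\begin{verbatim}
import numpy as np

def sf_metric(t_rest, mag, err, bin_days=20.0,
              lag_max=1000.0, lag_bin=100.0):
    """Sum of the structure function over 10 lag bins (0-1000 d,
    100-d bins) on a light curve pre-binned in 20-d intervals."""
    # Pre-bin the sigma-clipped magnitudes
    grid = np.floor(t_rest / bin_days)
    tb, mb, eb = [], [], []
    for g in np.unique(grid):
        s = grid == g
        w = 1.0 / err[s]**2
        tb.append(t_rest[s].mean())
        mb.append(np.average(mag[s], weights=w))
        eb.append(1.3 / np.sqrt(w.sum()))   # errors inflated by 30%
    tb, mb, eb = map(np.array, (tb, mb, eb))

    # All pairwise lags, with noise subtracted in quadrature
    i, j = np.triu_indices(tb.size, k=1)
    lag = np.abs(tb[i] - tb[j])
    dm2 = (mb[i] - mb[j])**2 - (eb[i]**2 + eb[j]**2)

    sf = np.zeros(int(lag_max / lag_bin))
    for k in range(sf.size):
        s = (lag >= k * lag_bin) & (lag < (k + 1) * lag_bin)
        if s.any():
            sf[k] = np.sqrt(max(dm2[s].mean(), 0.0))
    return sf.sum()
\end{verbatim}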
Several previous studies have identified a tendency for RLQs to be more variable than are RQQs (e.g., Vanden Berk et al.~2004; Garcia et al.~1999; and references therein). We speculate that those samples include a substantial number of RLQs for which the inclination is close to the line of sight (including blazars) and that some of the variability in such RLQs may be jet-linked. The similarity between BAL RLQs and BAL RQQs might then support some geometric dependence to BAL outflows (i.e., our RLQs are not viewed down the jet). This is consistent with the intranight optical variability analysis of Joshi \& Chand (2013), which found similar low variability in BAL RLQs and BAL RQQs. Regardless of the underlying physical explanation, the optical continuum results further support that variability in BAL RLQs is similar to (or modestly less than) that in BAL RQQs. \subsection{Influence of radio properties} The absolute change and absolute fractional change in equivalent width are plotted versus radio luminosity and radio loudness in Figure~16. No strong dependencies of BAL variability upon radio properties are apparent. The median and mean values of $|{\Delta}EW|$, or $|{\Delta}EW|/{\langle}EW{\rangle}$ are similar for RLQs split at either ${\ell}_{\rm r}=33$ or $R^{*}=2$, and KS tests find no significant differences in their distributions (Table~5). This is confirmed by Kendall and Spearman correlation tests (Table~6), which show no significant correlation (probability $<0.5$ in all cases) of $|{\Delta}EW|$ or $|{\Delta}EW|/{\langle}EW{\rangle}$ with either ${\ell}_{\rm r}$ or $R^{*}$. At most there is a very slight tendency, not statistically significant, for increased $|{\Delta}EW|/{\langle}EW{\rangle}$ toward higher $R^{*}$ values. However, this may be influenced by the relatively large (within our sample) $|{\Delta}EW|/{\langle}EW{\rangle}$ values of a few lobe-dominated quasars, for which the $R^{*}$ values tend to be high (Figure~16) and might indeed be somewhat overestimated due to inclusion of lobe emission or intrinsic reddening depressing the optical continuum. Groupings of RLQs divided by ${\Delta}\tau$ or ${\langle}EW{\rangle}$ also do not show any significant trends, with the exception of an apparent anti-correlation between $|{\Delta}EW|$ and $R^{*}$ for RLQs with ${\langle}EW{\rangle}>20$~\AA~that may again be related to sample inhomogeneity (in this case, two varying RLQs that have large ${\Delta}\tau$ timescales). It may be noted from Figure~16 that the lobe-dominated RLQs tend toward greater fractional variability than the core-dominated RLQs. Indeed, the mean $|{\Delta}EW|/{\langle}EW{\rangle}$ is $0.24\pm0.07$ ($0.09\pm0.02$) for lobe-dominated (core-dominated) RLQs, and KS tests support marginal ($p=0.04$) inconsistency. However, the small number of lobe-dominated BAL RLQs in our sample, as well as their generally greater ${\Delta}\tau$ and smaller ${\langle}EW{\rangle}$ values (median 1500~d versus 600~d and 11~\AA~versus 20~\AA, respectively), indicates additional study is required to confirm these conclusions. Anecdotally, other cases of notable BAL variability in lobe-dominated RLQs are known; for example, Hall et al.~(2011) report dramatic variability in the Mg~II and Fe~II absorption features in the lobe-dominated RLQ FBQS J1408+3054 (the redshift of $z=0.848$ precludes optical coverage of the \ion{C}{IV} region; this is a ``FeLoBAL'' object that shows absorption within lower ionization features, in this case including iron). 
Studies of the radio spectral indices of non-BAL and BAL RLQs have found that BAL RLQs tend to have steeper values of ${\alpha}_{\rm r}$, suggestive of greater inclinations to the line of sight (DiPompeo et al.~2012; Bruni et al.~2012). This is consistent with a geometrical dependence to BAL structure, although it does appear that outflows can exist at equatorial-to-polar latitudes. Within our sample, there is no strong dependence between ${\alpha}_{\rm r}$ and BAL variability in core-dominated BAL RLQs; lobe-dominated RLQs, with generally steep radio spectral indices, may tend toward somewhat greater absorption variability as discussed above. \subsection{Relevance to outflow models} In a disk-wind scenario, outflows launched from a rotating disk could maintain an approximately Keplerian transverse velocity while traveling radially, and consequent changes in the covering factor as clouds move across the (extended) source can provide an explanation for the observed minor shifts in depths at constant line-of-sight velocity that characterize BAL variability in RQQs (Gibson et al.~2008; G10; Capellupo et al.~2012). If lobe-dominated RLQs, known to be more inclined than core-dominated RLQs, indeed show enhanced BAL variability, then (particularly given the lack of correlation between variability and general radio properties) this requires some geometrical dependence of the BAL outflow structure. The very presence of BALs in flat-spectrum, core-dominated RLQs is likely incompatible with a strictly equatorial outflow\footnote{Note, however, determination of BAL RLQs as possessing polar outflows based solely on radio variability and inferred brightness temperature may be problematic (Hall \& Chajet~2011).}, but simulations of line-driven disk winds indicate that material may be ejected at a range of angles relative to the accretion disk (e.g., Giustini \& Proga 2012). For the BAL RLQs considered here, it appears unnecessary to invoke an evolutionary phase, in which the quasar is nearly completely enshrouded, to explain the presence of BALs. Recall, however, that our sample is composed almost exclusively of HiBALs, and LoBALs may have distinct properties ($\S$1; White et al.~2007). \begin{figure} \includegraphics[scale=0.38]{ew_lr.ps} \caption{\small Absolute change in BAL equivalent width and absolute fractional change in BAL equivalent width versus radio luminosity (left) and radio loudness (right) for BAL RLQs. Core-dominated and lobe-dominated objects are plotted as smaller blue and larger red circles, respectively (with the lobe-dominated PG~1004+130 in purple; here and for SDSS J004323.43$-$001552.4 shorter-separation measurements are shown as open circles). There are no obvious strong trends with ${\ell}_{\rm r}$ or $R^{*}$. The lobe-dominated RLQs tend to display greater absolute and fractional variability than the core-dominated objects. } \end{figure} The lack of any apparent correlation between BAL variability and (core plus lobe) radio loudness or luminosity would seem to suggest that the strength of the jet does not exercise a controlling influence upon the absorbing outflow. While the scarcity of BAL RLQs with both high values of $R^{*}$ and large \ion{C}{IV} absorption $EW$ (Figure~1) could indicate a physical connection (see also Shankar et al.~2008 for modeling of the radio-loud and BAL fractions), it might alternatively be due simply to a low likelihood for any given object to possess, independently, extreme radio and BAL properties.
If the jet and wind are not intimately connected in BAL RLQs, this provides an interesting contrast with the situation for \hbox{X-ray} binaries, for which it is found that the jet-aided development of a radiation-driven wind can remove sufficient material to starve a jet (e.g., Neilsen \& Lee 2009), in a feedback cycle between the ``low/hard'' and ``high/soft'' states. The longer timescales (scaling with black hole mass) in quasars, perhaps in concert with a greater influence of the corona upon the accretion structure than operates in \hbox{X-ray} binaries, appear to permit dual-mode feedback with both the high-velocity, low-mass jets and the relatively lower-velocity, higher-mass winds (e.g., Proga et al.~2010) capable of significant energy injection into their surroundings. If indeed mechanical power is ejected primarily in the form of jets below $L_{\rm bol}\sim10^{-2}L_{\rm Edd}$ and as winds at higher accretion luminosities (King et al.~2013), RLQs may sit near this boundary. RLQs also hosting BALs\footnote{If BALs in RLQs are only detectable along a particular line of sight, the intrinsic fraction of RLQs hosting BALs could be much larger than observed.} might therefore be expected to be particularly efficient at quenching star-formation within their host galaxies. | 14 | 3 | 1403.0958 |
1403 | 1403.3056_arXiv.txt | Hot Jupiters (HJs) are usually defined as giant Jovian-size planets with orbital periods $P \le 10$ days. Although they lie close to the star, several have finite eccentricities and significant misalignment angles with respect to the stellar equator, leading to $\sim 20\%$ of HJs in retrograde orbits. More than half, however, seem consistent with near-circular and planar orbits. In recent years two mechanisms have been proposed to explain the excited and misaligned sub-population of HJs: Lidov-Kozai migration and planet-planet scattering. Although both are based on completely different dynamical phenomena, at first glance they appear to be equally effective in generating hot planets. Nevertheless, there has been no detailed analysis comparing the predictions of both mechanisms, especially with respect to the final distribution of orbital characteristics. In this paper we present a series of numerical simulations of Lidov-Kozai trapping of single planets in compact binary systems that suffered a close fly-by of a background star. Both the planet and the binary component are initially placed in coplanar orbits, although the inclination of the impactor is assumed random. After the passage of the third star, we follow the orbital and spin evolution of the planet using analytical models based on the octupole expansion of the secular Hamiltonian. We also include tidal effects, stellar oblateness and post-Newtonian perturbations. The present work aims at the comparison of the two mechanisms (Lidov-Kozai and planet-planet scattering) as explanations for the excited and inclined HJs in binary systems. We compare the results obtained in this paper with results in Beaug\'e \& Nesvorn\'y 2012, where the authors analyze how the planet-planet scattering mechanism works in order to form these hot Jovian-size planets. We find that several of the orbital characteristics of the simulated HJs are caused by tidal trapping from quasi-parabolic orbits, independent of the driving mechanism (planet-planet scattering or Lidov-Kozai migration). These include both the 3-day pile-up and the distribution in the eccentricity vs semimajor axis plane. However, the distribution of the inclinations shows significant differences. While Lidov-Kozai trapping favors a more random distribution (or even a preference for near polar orbits), planet-planet scattering shows a large portion of bodies nearly aligned with the equator of the central star. This is more consistent with the distribution of known hot planets, perhaps indicating that scattering may be a more efficient mechanism for producing these bodies. | \label{sec1} More than one hundred Hot Jupiters (HJs) are presently known around main sequence stars. Although there is no precise definition, for our purposes we will include in this group those planets with observed masses $m > 0.8 m_{\rm Jup}$ and orbital periods $P \le 10$ days. The lower limit for the mass may appear arbitrary, and is dictated more by dynamical considerations than by physical properties of planetary bodies. The upper limit on orbital periods, however, is more easily justified, but also dynamical in nature. For instance, around solar-type stars, this period corresponds to a limit below which giant planets in circular orbits suffer significant tidal effects. The origin of these planets is still a matter of debate. It appears very unlikely that they formed in-situ (e.g. Lin et al.
1996), so their present location must have been achieved after a significant orbital decay from outside the ice line. Although several evolutionary mechanisms were proposed, including disk-planet interactions (e.g. Lin et al. 1996, Ben\'itez-Llambay et al. 2011) and planet-planet scattering (e.g. Rasio \& Ford 1996, Juric \& Tremaine 2008), a smooth planetary migration due to disk-planet interactions appeared as the best candidate. Until fairly recently, all detected HJs were consistent with circular orbits and, more importantly, with misalignment angles consistent with aligned systems (see Winn et al. 2010). These orbital characteristics are expected from disk-induced migration, which lent further credibility to this scenario. In the past few years, however, the picture changed. A larger population of HJs was analyzed for the so-called Rossiter-McLaughlin effect, leading to a large portion of bodies displaying significant values of the misalignment angle (currently $\sim 40 \%$), including about $\sim 15 \%$ of planets in retrograde orbits with respect to the stellar spin. So, instead of having a rather simple and ``cold'' population of HJs, we are now faced with more complex dynamics. Since significant misalignment angles are not consistent with smooth planetary migration, their origin must lie elsewhere. At this point we are faced with two questions: (i) what other driving mechanism could explain highly inclined and even retrograde planets, and (ii) are all HJs consistent with this new scenario, or must we assume two separate populations of HJs? Tidal effects tend to align orbits, so any observed misalignment of the orbits must have been caused by the migration mechanism itself. This speaks of a high excitation mechanism which must have affected both the inclination and eccentricity, although the latter may have been later damped by tides. Two scenarios have been proposed for such a mechanism: Lidov-Kozai trapping with a binary companion (e.g. Naoz et al. 2011, 2012), and planet-planet scattering within an initially cold but dynamically unstable planetary system (Nagasawa et al. 2008, Nagasawa \& Ida 2011, Beaug\'e \& Nesvorn\'y 2012). Although both scenarios are completely different, the end result is the same. An initially circular orbit of a giant planet beyond the HJ region is excited to high eccentricities (usually close to parabolic orbits) in such a way that the pericentric distance is so close to the star that tidal effects are not only significant but dominant over the gravitational perturbation that generated the excitation. If this process also affected the inclination, then the subsequent orbital evolution of the planet would damp the eccentricity and semimajor axis, leaving as a final product a hot planet with a highly inclined but near-circular orbit. In Beaug\'e \& Nesvorn\'y (2012) we showed that planet-planet scattering may explain many of the observed orbital characteristics of HJs, including the eccentricity-semimajor axis distribution, the so-called 3-day pile-up, and the distribution of misalignment angles. Obviously the result depends on the tidal model and the adopted values for the tidal parameters, but this may actually provide observational constraints on these little-known parameters. Although Lidov-Kozai trapping has also proved an efficient mechanism, and also explains the existence of highly misaligned HJs, there has been significantly less comparison of its predictions with the observed HJ population.
For example, it is not clear that this scenario explains the 3-day pile-up, or the eccentricity distribution. In short, which of the observed orbital characteristics are due to the excitation mechanism and which to the subsequent tidal evolution? In this work we wish to address precisely these issues. The main idea is to explore the Lidov-Kozai model using similar tools and dynamical models as developed in Beaug\'e \& Nesvorn\'y (2012), replacing planet-planet scattering with the Lidov-Kozai resonance. Our aim is then to present a consistent comparison between the predictions of both scenarios, and try to deduce which observed characteristics of the planets are robust and which are model-dependent. | In this paper we have analyzed the formation scenario of HJs based on tidal trapping from quasi-parabolic orbits excited by Lidov-Kozai resonances with a binary stellar component. We have used the same tidal and gravitational model as in Beaug\'e \& Nesvorn\'y (2012), where we discussed the same problem but assuming planet-planet scattering as the catalyst. Our aim has been two-fold. First, compare the final distribution of HJs predicted by both mechanisms using similar dynamical tools, models and parameters. Second, try to understand the origin of some of the observed characteristics of the simulations and the observed planets. We have found that several of the final orbital properties of the simulated HJs (as well as the real bodies) are mainly caused by the process of tidal trapping, and independent of the excitation mechanism. This includes both the distribution in the $(e_p,a_p)$ plane and the 3-day pile-up. Both may be considered as observational evidence that a significant portion (or even most) of the observed HJs originated from tidal trapping and not from smooth disk-induced migration. \begin{figure}[t!] \centerline{\includegraphics*[width=16.0cm]{marti_beauge_fig6.eps}} \caption{Distribution of the final inclination between the planet and the stellar equator, obtained from our Lidov-Kozai simulations (left plot), as compared with the results from planet-planet scattering experiments (Beaug\'e \& Nesvorn\'y 2012) (center plot) and with the distribution of the sky-projected misalignment angle $\lambda$ for observed HJs (right plot).} \label{fig6} \end{figure} However, we have also found that the final distribution of the inclinations differs from that obtained in planet-planet scattering experiments. Figure \ref{fig6} shows a comparison of both, together with the observed distribution of the sky-projected misalignment angles currently available for HJs. While the Lidov-Kozai mechanism shows a notable absence of low-inclination orbits, the distribution obtained from scattering and the observed values show a greater proportion of orbits nearly coplanar with the stellar equator. The reason behind this difference may lie in the excitation mechanism itself. In planet-planet scattering there is no direct correlation between the excitation in eccentricity and that in inclination, and it is possible (indeed likely) that high-eccentricity orbits remain with low inclinations (e.g. below $\sim 45^\circ$). Thus, in this scenario, most of the HJs will preferentially remain aligned with the equator of the central star, as observed in real planets. In Lidov-Kozai trapping, formation of HJs requires a high inclination with respect to the perturbing mass.
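For reference, this inclination requirement can be made quantitative with a standard quadrupole-order result (not derived in this paper): for a test particle on an initially circular orbit with mutual inclination $i_0$ relative to the perturber,
\begin{equation}
\sqrt{1-e^{2}}\,\cos i \simeq {\rm const} \quad\Longrightarrow\quad e_{\rm max} \simeq \sqrt{1 - \frac{5}{3}\cos^{2}i_{0}},
\end{equation}
valid for $39.2^{\circ} \lesssim i_{0} \lesssim 140.8^{\circ}$. Reaching the quasi-parabolic pericentric distances needed for tidal trapping therefore requires $i_{0}$ close to $90^{\circ}$, with the octupole terms and short-range forces included in the simulations modifying the details.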
Unless the binary is conveniently located in a polar orbit with respect to the equator of $m_A$, the final inclinations of the hot planets will show a more random distribution, with no preference for almost aligned orbits. Naoz et al. (2012) proposed to solve this issue by arguing in favor of the existence of a second population of HJs generated by smooth disk-induced planetary migration and, consequently, containing low inclinations. However, as noted from Figure \ref{fig4}, this second population should not show a 3-day pile-up in orbital periods, and thus cannot explain the fact that most of the real HJs in this pile-up show small misalignment angles. Planet-planet scattering suffers from none of these limitations, and thus appears to be more consistent with the distribution of real planets. Even so, it is indeed possible that all three proposed mechanisms (Lidov-Kozai, scattering and smooth planetary migration) could have contributed to the complete sample of known HJs. Only future work, including combined scenarios and additional data, will allow us to assess the effective role of each of them. \vspace*{0.5cm} | 14 | 3 | 1403.3056 |
1403 | 1403.1579_arXiv.txt | Growing evidence for shocks in nova outflows includes (1) multiple velocity components in the optical spectra; (2) hard X-ray emission starting weeks to months after the outburst; (3) an early radio flare on timescales of months, in excess of that predicted from the freely expanding photo-ionized gas; and, perhaps most dramatically, (4) $\sim$ GeV gamma-ray emission. We present a one-dimensional model for the shock interaction between the fast nova outflow and a dense external shell (DES) and its associated thermal X-ray, optical, and radio emission. The lower velocity DES could represent an earlier stage of mass loss from the white dwarf or ambient material not directly related to the thermonuclear runaway. The forward shock is radiative initially when the density of shocked gas is highest, at which times radio emission originates from the dense cooling layer immediately downstream of the shock. Our predicted radio light curve is characterized by sharper rises to maximum and later peak times at progressively lower frequencies, with a peak brightness temperature that is approximately independent of frequency. We apply our model to the recent gamma-ray producing classical nova V1324 Sco, obtaining an adequate fit to the early radio maximum for reasonable assumptions about the fast nova outflow and assuming the DES possesses a characteristic velocity $\sim 10^{3}$ km s$^{-1}$ and mass $\sim$ few $10^{-4} M_{\odot}$; the former is consistent with the velocities of narrow line absorption systems observed previously in nova spectra, while the total ejecta mass of the DES and fast outflow is consistent with that inferred independently by modeling the late radio peak as uniformly expanding photo-ionized gas. Rapid evolution of the early radio light curves requires the DES to possess a steep outer density profile, which may indicate that the onset of mass loss from the white dwarf was rapid, providing indirect evidence that the DES was expelled as the result of the thermonuclear runaway event. Reprocessed X-rays from the shock absorbed by the DES at early times are found to contribute significantly to the optical/UV emission, which we speculate may be responsible for the previously unexplained `plateaus' and secondary maxima in nova optical light curves. | \label{sec:intro} Novae are sudden outbursts powered by runaway nuclear burning on the surface of a white dwarf accreting from a stellar binary companion (e.g.~\citealt{Gallagher&Starrfield78}; \citealt{Shore12} for a recent review). Novae provide nearby laboratories for studying the physics of nuclear burning and accretion. Accreting systems similar to those producing novae are also candidate progenitors of Type Ia supernovae (e.g.~\citealt{dellaValle&Livio96}; \citealt{Starrfield+04}) and other transients such as `.Ia' supernovae (e.g.~\citealt{Bildsten+07}) and accretion-induced collapse (e.g.~\citealt{Metzger+09}; \citealt{Darbha+10}). Major open questions regarding novae include the quantity and time evolution of mass ejected by the thermonuclear outburst and its possible relationship to the immediate environment of the white dwarf or its binary companion. Detailed hydrodynamical simulations of nova outbursts find that matter is unbound from the white dwarf in at least two distinct stages, driven by different physical processes and characterized by different mass loss rates and outflow velocities (e.g.~\citealt{Prialnik86}; \citealt{Yaron+05}; \citealt{Starrfield+09}).
The details of this evolution, however, depend sensitively on theoretical uncertainties such as the efficiency of convective mixing in the outer layers of the white dwarf following runaway nuclear burning (\citealt{Starrfield+00}; \citealt{Yaron+05}; \citealt{Casanova+11}). Radio observations provide a useful tool for studying nova ejecta (e.g.~\citealt{Hjellming&Wade70}; \citealt{Seaquist+80}; \citealt{Hjellming87}; \citealt{Bode&Seaquist87}; \citealt{Sokoloski+08}; \citealt{Roy+12} for a recent review). The standard scenario for nova radio emission invokes thermal radiation from freely expanding ionized ejecta of uniform temperature $\sim 10^{4}$ K (e.g.~\citealt{Seaquist&Palimaka77}; \citealt{Hjellming+79}; \citealt{Seaquist+80}; \citealt{Kwok83}; see \citealt{Seaquist&Bode08} for a recent review). This model predicts a radio flux $F_{\nu}$ that increases $\propto t^{2}$ at early times with an optically-thick spectrum ($\alpha = 2$, where $F_{\nu} \propto \nu^{\alpha}$) and which, after reaching its peak on a timescale of $\sim$ a year, decays with a flat spectrum ($\alpha = -0.1$) at late times, once the ejecta have become optically thin to free-free absorption. The simplest form of the standard model, which assumes a density profile corresponding to homologous expansion (the `Hubble flow'), provides a reasonable fit to the late radio data for most novae (\citealt{Hjellming+79}), from which total ejecta masses $\sim 10^{-4}M_{\odot}$ are typically inferred for classical novae (e.g.~\citealt{Seaquist&Bode08}). Although the standard model provides a relatively satisfactory picture of the late radio emission from novae, deviations from this simple picture are observed at early times. A growing sample of nova radio light curves shows a second maximum (an early `bump') on a timescale $\lesssim 100$ days after the visual peak (\citealt{Taylor+87b}; \citealt{Krauss+11}; \citealt{Chomiuk+12}; \citealt{Nelson+12}; \citealt{Weston+13}). The high brightness temperature of this emission, $\gtrsim 10^{5}$ K, and its flat spectrum relative to the standard model prediction have supported the interpretation that it results from shock interaction between the nova ejecta and a dense external shell (DES). The DES may represent matter ejected earlier in the nova outburst (`internal shocks'; e.g.~\citealt{Taylor+87b}; \citealt{Lloyd+96}; \citealt{Mukai&Ishida01}). Alternatively, the DES could represent ambient material which is not directly related to the current nova eruption, such as mass loss from the binary system associated with the white dwarf accretion process (e.g.~\citealt{Williams+08}; \citealt{Williams&Mason10}). Evidence for shocks in novae is present at other wavelengths. In addition to broad P Cygni absorption lines originating from the fast $\sim$ few $10^{3}$ km s$^{-1}$ primary ejecta, the optical spectra of novae near maximum light also contain narrow absorption lines. These lines originate from dense gas ahead of the primary outflow with lower velocities $\lesssim 10^{3}$ km s$^{-1}$ (e.g.~\citealt{Williams+08}; \citealt{Shore+13}). This slow-moving material, which is inferred to reside close to the white dwarf and to possess a high covering fraction, is likely to experience a subsequent collision with the fast outflow coming from behind (\citealt{Williams&Mason10}).
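For orientation, the two asymptotic regimes of this standard model can be summarized in one line (nothing beyond the scalings quoted above and the Rayleigh--Jeans law is assumed):
\begin{equation}
F_{\nu} \;\simeq\; \frac{2 k_{\rm B} T_{e} \nu^{2}}{c^{2}}\,\Omega(t) \;\propto\; \nu^{2} t^{2} \quad (\tau_{\nu} \gg 1); \qquad F_{\nu} \propto \nu^{-0.1} \quad (\tau_{\nu} \ll 1),
\end{equation}
where $\Omega(t) \propto t^{2}$ is the solid angle of the freely expanding $T_{e} \sim 10^{4}$ K photosphere and $\tau_{\nu}$ is the free-free optical depth.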
Other evidence for shocks in novae includes the deceleration of the ejecta inferred by comparing the high velocities of the early (pre-collision) primary nova ejecta with the lower velocities inferred from the late (post-collision) nebular emission (e.g.~\citealt{Friedjung&Duerbeck93}; \citealt{Williams&Mason10}). Some novae produce $\gtrsim$ keV X-ray emission, with peak luminosities $L_{X} \sim 10^{34}-10^{35}$ erg s$^{-1}$ on typical timescales $\gtrsim 20-300$ days after the optical maximum (\citealt{Lloyd+92}; \citealt{OBrien+94}; \citealt{Orio04}; \citealt{Sokoloski+06}; \citealt{Ness+07}; \citealt{Mukai+08}; \citealt{Krauss+11}). This emission, which requires much higher temperatures than thermal emission from the white dwarf surface, has also been interpreted as being shock powered (e.g.~\citealt{Brecher+77}; \citealt{Lloyd+92}). Given the relatively sparse X-ray observations of novae, a substantial fraction of novae may be accompanied by X-ray emission of similar luminosities to the current detections (\citealt{Mukai+08}). Additional dramatic evidence for shocks in novae is the recent discovery of $\sim$ GeV gamma-rays at times nearly coincident with the optical peak (\citealt{Abdo+10}). The first gamma-ray nova occurred in the symbiotic binary V407 Cyg 2010, which appeared to favor a scenario in which the DES was the dense wind of the companion red giant (\citealt{Abdo+10}; \citealt{Vaytet+11}; \citealt{Martin&Dubus13}). However, gamma-rays have now been detected from four ordinary classical novae (\citealt{Cheung+12}; \citealt{Cheung+13}; \citealt{Hill+13}; \citealt{Hays+13}; \citealt{Cheung&Hays13}; \citealt{Cheung&Jean13a}; \citealt{Cheung&Jean13b}), which demonstrates the presence of dense external material even in systems that are not embedded in the wind of an M giant or associated with recurrent novae. Observations across the electromagnetic spectrum thus indicate that shocks are common, if not ubiquitous, in the outflows of classical novae. Theoretical models of nova shocks have been developed in previous works, but most have been applied to specific events or have been focused on the emission at specific wavelengths. \citet{Taylor+87b}, for example, model the early radio peak of Nova Vulpeculae 1984 as being powered by the shock interaction between a high-velocity outflow from the white dwarf and slower earlier ejecta. \citet{Lloyd+92} model the X-ray emission of Nova Herculis 1991 as being shock powered, while \citet{Lloyd+96} calculate the free-free radio emission from hydrodynamical simulations of nova shocks. \citet{Contini&Prialnik97} calculate the effects of shocks on the optical line spectra of the recurrent nova T Pyxidis. In this paper we present a one-dimensional model for shock interaction in novae and its resulting radiation. Many of the above works, though ground-breaking, neglect one or more aspects of potentially important physics, such as the influence of the ionization state of the medium on the radio/X-ray opacity, or the effects of radiative shocks on the system dynamics and radio emission. Here we attempt to include these details (even if only in a simple-minded way), in order to provide a unified picture that simultaneously connects the signatures of shocks at radio, optical and X-ray frequencies. Our goal is to provide a flexible framework for interpreting multi-wavelength nova data in order to constrain the properties of the DES and to help elucidate its origin.
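A back-of-the-envelope estimate makes the shock interpretation of the X-rays quantitative. Using the standard strong-shock jump conditions (and assuming, for illustration, a mean molecular weight $\mu \approx 0.6$; $v_{\rm sh}$ denotes the shock velocity):
\begin{equation}
k_{\rm B} T_{\rm sh} \;=\; \frac{3}{16}\,\mu m_{\rm p} v_{\rm sh}^{2} \;\approx\; 1.2 \left(\frac{v_{\rm sh}}{10^{3}\,{\rm km\,s^{-1}}}\right)^{2}\,{\rm keV},
\end{equation}
so shocks at the observed outflow velocities of a few $10^{3}$ km s$^{-1}$ naturally yield the $\gtrsim$ keV temperatures required by the hard X-ray detections, far above the white dwarf photospheric temperature.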
This work (Paper I) is focused on thermal emission from the shocks, motivated by (1) its promise in explaining the qualitative features of the observed X-ray and radio emission and (2) the virtue that thermal processes can be calculated with greater confidence than non-thermal processes. A better understanding of the mass and radial scale of the DES is also requisite to exploring non-thermal processes, such as the high-energy particle acceleration necessary to produce the observed gamma-rays. Non-thermal emission from nova shocks will be addressed in Paper II once the thermal framework is in place. The rest of this paper is organized as follows. In $\S\ref{sec:overview}$ we overview the model for nova shocks and its thermal radiation. In $\S\ref{sec:model}$ we present the details of our dynamical model of the shock-DES interaction. In $\S\ref{sec:emission}$ we describe the resulting shock radiation at X-ray ($\S\ref{sec:Xrays}$), optical ($\S\ref{sec:optical}$), and radio ($\S\ref{sec:radio}$) frequencies. In $\S\ref{sec:results}$ the results of our calculations are presented and compared to available data, focusing on the case of V1324 Sco. In $\S\ref{sec:discussion}$ we discuss our results and in $\S\ref{sec:conclusions}$ we summarize our conclusions. \subsection{Physical Picture} \label{sec:overview} Figure \ref{fig:schematic} summarizes the physical picture. A fast outflow from the white dwarf of velocity $v_w \sim$ few $\times 10^{3}$ km s$^{-1}$ collides with the slower DES of velocity $v_4 < v_w$. In this initial treatment the DES is assumed to originate from small radii (e.g. the white dwarf surface or the binary companion) starting at the time of the initial optical rise ($t = -\Delta t$, where $\Delta t$ is defined below).\footnote{An outflow starting near the optical onset is not necessarily incompatible with scenarios in which the DES represents pre-existing ambient matter unrelated to the thermonuclear runaway if the DES is accelerated by radiation pressure from the nova explosion (\citealt{Williams72}).} The mass of the DES is concentrated about the radius $r_0 \sim v_4(t + \Delta t)$, with its density decreasing as a power law $\propto (r-r_0)^{-k}$ at larger radii (Fig.~\ref{fig:schematic_density}). The fast nova outflow is assumed to begin around the time of the optical maximum ($t = 0$), typically days to weeks after the optical onset. This is justified by the fact that a substantial fraction of the optical light curve may be shock powered ($\S\ref{sec:optical}$), as evidenced in part by the coincidence between the optical peak and the peak of the gamma-ray emission in most of the {\it Fermi-}detected novae. The fast nova outflow drives a forward shock into the DES, while a reverse shock simultaneously propagates back into the nova outflow. Both shocks are radiative at the times of interest, such that the swept-up gas accumulates in a cool shell between the shocks. Radiation observed from the shocks depends sensitively on the ionization state of the unshocked DES lying ahead. Ionizing UV/X-ray photons produced by free-free emission at the forward shock penetrate the upstream gas to a depth that depends on the balance between photo-ionization and radiative recombination. The structure of the resulting ionized layers controls the escape of X-rays (absorbed by neutral gas) and radio emission (absorbed by ionized gas). At early times, when the forward shock is passing through the densest gas, X-rays are absorbed by the neutral medium ahead of the shock before reaching the observer.
A portion of the shock luminosity is re-emitted as optical/soft UV radiation, which freely escapes because of the much lower opacity at these longer wavelengths. As we will show, this shock-heated emission may contribute appreciably to the optical/UV light curves of novae. At later times, as the forward shock moves to larger radii and lower densities, X-rays are able to escape, with their peak luminosity and timescale depending sensitively on the metallicity $X_Z$ of the DES. Gas heated by the forward shock also produces radio emission originating from the dense cooling layer behind the shock. This emission is free-free absorbed by the cooler $\sim 10^{4}$ K ionized layer just ahead of the shock (Fig.~\ref{fig:layer_schematic}). Radio emission peaks once the density ahead of the shock decreases sufficiently to reduce the free-free optical depth to a value of order unity. The peak time and shape of the radio light curve thus depend on the density and radial profile of the DES. | \label{sec:conclusions} We have developed a model for the shock interaction between the fast outflows from nova eruptions and a `dense external shell' or DES, which may represent an earlier episode of mass loss from the white dwarf. This interaction is mediated by a forward shock driven ahead into the DES and a reverse shock driven back into the nova ejecta. Our results are summarized as follows: \begin{itemize} \item{Shocks heat the gas to X-ray temperatures (Fig.~\ref{fig:schematic}). The shocks are radiative at early times, producing a dense cooling layer downstream, before transitioning to become adiabatic at late times. Cooled gas swept up by the shocks accumulates in a central shell, the inertia of which largely controls the blast-wave dynamics.} \item{X-ray/UV photons from the forward shock can penetrate the downstream cooling layer ($\S\ref{sec:postshock}$). By photo-ionizing the gas, these photons reduce the neutral fraction below that in collisional ionization equilibrium; we speculate that line cooling is thereby suppressed, in which case free-free emission dominates the cooling of the post-shock gas down to lower temperatures than is usually assumed.} \item{At early times, when the forward shock is radiative, radio emission originates from a dense cooling layer immediately downstream of the shock (Fig.~\ref{fig:layer_schematic}). The predicted radio maximum is characterized by sharper rises, and later peak times, at progressively lower frequencies. The brightness temperature at peak flux is approximately independent of frequency at a characteristic value $\sim 10^{6}$ K (eq.~[\ref{eq:T0nu}]).} \item{X-rays from the forward shock are absorbed by the neutral DES at early times. Re-radiation of this luminosity at optical/UV frequencies may contribute appreciably to the optical light curves of novae.} \item{At late times, when the absorbing column of the DES ahead of the shock decreases, X-rays escape to the observer. The predicted X-ray luminosities peak at values $\lesssim 10^{32}-10^{34}$ erg s$^{-1}$ on timescales of weeks to months, consistent with observations of classical novae.} \item{Our model provides an adequate fit to the radio data of V1324 Sco for reasonable assumptions about the properties of the nova outflow motivated by the optical observations, if one assumes a DES with a velocity $\sim 1,300$ km s$^{-1}$ and mass $\sim 2.5\times 10^{-4}M_{\odot}$.
The total ejecta mass (fast nova outflow + DES) of $\sim$ few $\times 10^{-4}M_{\odot}$ is consistent with that inferred independently by modeling the late radio emission as uniformly expanding photo-ionized gas (\citealt{Finzell+14}, in prep).} \item{The sharp early peak in the radio light curve of V1324 Sco requires a steep outer radial density profile for the DES, which may provide evidence for the rapid onset of mass loss from the white dwarf following the thermonuclear runaway.} \end{itemize} | 14 | 3 | 1403.1579 |
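To make the free-free photosphere argument of the physical picture concrete, the following minimal numerical sketch evaluates the optical depth ahead of the shock for an illustrative DES shell. The shell geometry (fractional thickness 0.3), full ionization, pure hydrogen, and $T_e = 10^{4}$ K are assumptions for illustration only, not the paper's full one-dimensional model; the opacity uses the common Mezger \& Henderson (1967) fit.
\begin{verbatim}
import numpy as np

def tau_ff(nu_ghz, em_pc_cm6, t_e=1.0e4):
    # Free-free optical depth, Mezger & Henderson (1967) approximation;
    # em_pc_cm6 is the emission measure int n_e^2 dl in pc cm^-6.
    return 3.28e-7 * (t_e / 1.0e4) ** -1.35 * nu_ghz ** -2.1 * em_pc_cm6

MSUN, MP, PC = 1.989e33, 1.673e-24, 3.086e18
m_des, v_des = 2.5e-4 * MSUN, 1.3e8        # g, cm/s (values quoted above)

for t_days in (30, 100, 300, 1000):
    r = v_des * t_days * 86400.0           # shell radius at time t (cm)
    dr = 0.3 * r                           # assumed fractional thickness
    n_e = m_des / (MP * 4.0 * np.pi * r ** 2 * dr)
    em = n_e ** 2 * dr / PC                # emission measure, pc cm^-6
    taus = {nu: tau_ff(nu, em) for nu in (1.4, 5.0, 30.0)}  # GHz
    print(t_days, {k: f"{v:.1e}" for k, v in taus.items()})
\end{verbatim}
With these toy numbers the shell stays optically thick at GHz frequencies for hundreds of days and becomes transparent first at the highest frequency, qualitatively reproducing the frequency ordering of the predicted radio maxima.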
1403 | 1403.6673_arXiv.txt | The future generation of telescopes will be equipped with multi-conjugate adaptive optics (MCAO) systems in order to obtain high angular resolution over large fields of view. MCAO comes in two flavors: star- and layer-oriented. Existing solar MCAO systems rely exclusively on the star-oriented approach. Earlier we have suggested a method to implement the layer-oriented approach, and in view of recent concerns we now explain the proposed scheme in further detail. We note that in any layer-oriented system one sensor is conjugated to the pupil and the others are conjugated to higher altitudes. For the latter, not all of the sensing surface is illuminated by the entire field-of-view. The successful implementation of nighttime layer-oriented systems shows that this field reduction is no crucial limitation. In the solar approach the field reduction is directly noticeable because it causes vignetting of the Shack-Hartmann sub-aperture images. It can be accounted for by a suitable adjustment of the algorithms that calculate the local wave-front slopes. We dispel a further concern related to the optical layout of a layer-oriented solar system. | To understand the behavior of the solar magnetic fields, high angular resolution of the solar surface must be attained over large fields-of-view. This calls for MCAO correction on ground-based solar telescopes\,\cite{Collados}. In MCAO systems several deformable mirrors are optically conjugated to different turbulent layers in the atmosphere. Each mirror corrects the wavefront distortions introduced close to its conjugate layer. The adaptive control of the mirrors requires the 3D-distribution of the distortions to be reconstructed from a set of wavefront-sensor measurements. There are two different approaches to achieve the required MCAO wavefront sensing: {\it star-oriented\/} and {\it layer-oriented\/}. Up to now, solar MCAO has relied exclusively on a procedure that corresponds to the star-oriented approach, insofar as each sensor measures the integrated wavefront distortions along one direction. This ``directional'' method entails two difficulties: \begin{itemize} \item Adequate inference of the 3D-turbulence from measurements along a few discrete directions is an ill-conditioned problem, and large field-sizes are therefore difficult to correct\,\cite{Berkefeld}. The number of sensing directions can be increased, but eventually the computational load associated with the rapid tomographic reconstruction becomes prohibitive. \item A Shack-Hartmann (SH) wavefront sensor is employed to determine the integral wavefront distortion in the specified direction. In the resulting profiles the high-altitude contribution is most critical for the layer-specific adaptive correction in wide fields of view. In the actual nighttime star-oriented method the integration is straightforward, because each lenslet integrates the distortions along a well-defined direction. In the solar application of the method each lenslet must form an image of the solar surface with an angular resolution $\sim 0.4''$, which is then correlated against a reference image in order to obtain a local wavefront shift. Since the correlation requires an image sampled over typically $16\times16$ pixels, the viewing field of a lenslet must have an opening angle of $\sim 6.4''$. For adequate resolution the lenslet needs to have a diameter $\sim 0.10$\,m at the telescope pupil. The sub-aperture diameter will then be $\sim 0.41$\,m at altitude 10\,km (a worked check of these numbers is given below).
Sampling regions at the critical high altitude that overlap substantially and are much larger than at ground level are thus an undesirable attribute of the directional, star-oriented method. \end{itemize} As an alternative to the star-oriented approach we have proposed a layer-oriented set-up for solar MCAO\,\cite{LO}. In the star-oriented approach the lenslets of the SH sensors are conjugated to the telescope pupil. In the layer-oriented approach they are conjugated to a number of turbulent layers above the pupil, and each deformable mirror is paired with an SH sensor conjugated to the same altitude. This has several advantages: \begin{itemize} \item For an SH sensor conjugated to a layer at altitude $h$, the effective sub-aperture size is smallest at this altitude $h$ and increases with distance from that layer. The fluctuations are thus determined with the best resolution near the layer of interest, while the contributions from other layers are attenuated, i.e. averaged out over larger sub-apertures. This attenuation is largest for the large fields of view required in solar observations. While the directional, star-oriented approach is limited to fairly narrow fields of view, the layer-oriented approach works best with wide fields of view. \item Each SH sensor images the entire science field. The sensor measurements for the AO correction thus cover the entire field, while in the star-oriented approach the correction needs to be extrapolated from measurements along a few discrete directions. \item The cross-correlation is done over large fields, e.g. over $100\times100$ pixels for a $40''$ field sampled at $0.4''$. The quality of the cross-correlation is thus far better than with $16\times16$ pixels in the star-oriented approach. \end{itemize} The main difficulty of the layer-oriented approach is the need for fast detectors with a large number of pixels. | In a layer-oriented adaptive optical system a deformable mirror and a wavefront sensor are conjugated to a dominant turbulent layer. A complication of this method is that, for sensors conjugated to the high altitudes, the sensing surface is illuminated by only part of the field-of-view. This field reduction is inherent to any layer-oriented approach. It weakens the attenuation, i.e. the averaging out of the fluctuations in distant layers. However, in the solar application the signal is averaged over a continuous field, rather than a limited number of directions, and the attenuation of distant layers therefore remains considerably more efficient than in the pyramid-based nighttime systems\,\cite{Ragazzoni, Arcidiacono}. Since these latter systems are successfully used on-sky, the field reduction will be even less critical in the solar application. Compared to the directional, `star-oriented' method, the essential advantage of the layer-oriented method is that it focusses on -- and thereby achieves optimal resolution at -- the conjugated layers. A second feature of the solar layer-oriented method is that the field reduction causes the images behind the SH sensors to be vignetted. The vignetting requires an adjustment of the correlation algorithm to determine the local wavefront slopes, but this adjustment is straightforward in view of the broader field images that are attained with the method. Under the implicit assumption of a constant lenslet pitch it has been claimed that lenslet arrays with excessively small focal lengths are needed on large fields-of-view\,\cite{MW}.
However, if detector pixels of constant size are used, the focal length of the lenslets increases with field size and the lenslet pitch increases likewise. As it happens, this facilitates system alignment and thereby improves image quality. In conclusion, field reduction is inherent to any layer-oriented approach and is no critical limitation. The specific complication of vignetted SH images can be accounted for by standard methods. The main difficulty of the new layer-oriented multi-conjugate adaptive optical system is the need for fast detectors with a large number of pixels. While this is a limitation today, technological advances will resolve it before long. | 14 | 3 | 1403.6673 |
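As the worked check promised in the introduction (small-angle geometry only; $d_0$, $\theta$, and $h$ are the quoted lenslet diameter, field opening angle, and conjugation altitude): the footprint of a pupil lenslet viewing a field of opening angle $\theta$ grows linearly with altitude,
\begin{equation}
d(h) \;\simeq\; d_0 + \theta\,h \;=\; 0.10\,{\rm m} + \left(6.4'' \times 4.85\times10^{-6}\,{\rm rad}/''\right) \times 10^{4}\,{\rm m} \;\approx\; 0.41\,{\rm m},
\end{equation}
which reproduces the sub-aperture diameter quoted for a 10\,km conjugation altitude.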
1403 | 1403.1115_arXiv.txt | We analyze the spatial and velocity distributions of confirmed members in five massive clusters of galaxies at intermediate redshift ($0.5 < z < 0.9$) to investigate the physical processes driving galaxy evolution. Based on spectral classifications derived from broad- and narrow-band photometry, we define four distinct galaxy populations representing different evolutionary stages: red sequence (RS) galaxies, blue cloud (BC) galaxies, green valley (GV) galaxies, and luminous compact blue galaxies (LCBGs). For each galaxy class, we derive the projected spatial and velocity distribution and characterize the degree of subclustering. We find that RS, BC, and GV galaxies in these clusters have similar velocity distributions, but that BC and GV galaxies tend to avoid the core of the two $z\approx0.55$ clusters. GV galaxies exhibit subclustering properties similar to RS galaxies, but their radial velocity distribution is significantly platykurtic compared to the RS galaxies. The absence of GV galaxies in the cluster cores may explain their somewhat prolonged star-formation history. The LCBGs appear to have recently fallen into the cluster based on their larger velocity dispersion, absence from the cores of the clusters, and different radial velocity distribution than the RS galaxies. Both LCBG and BC galaxies show a high degree of subclustering on the smallest scales, leading us to conclude that star formation is likely triggered by galaxy-galaxy interactions during infall into the cluster. | 14 | 3 | 1403.1115 |
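The `platykurtic' classification above rests on the excess kurtosis of the line-of-sight velocity distribution. The following is a minimal, hedged sketch of that diagnostic on synthetic data; the distributions, dispersions, and sample sizes are invented for illustration and are not the survey's.
\begin{verbatim}
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
v_rs = rng.normal(0.0, 900.0, 400)          # Gaussian stand-in (RS-like)
v_gv = rng.uniform(-1500.0, 1500.0, 400)    # flat-topped stand-in (GV-like)

for name, v in (("RS-like", v_rs), ("GV-like", v_gv)):
    # Fisher convention: excess kurtosis = 0 for a Gaussian, < 0 platykurtic
    print(name, round(v.std(), 1), round(kurtosis(v, fisher=True), 2))
\end{verbatim}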
1403 | 1403.0325_arXiv.txt | Unveiling the intergalactic magnetic field (IGMF) in filaments of galaxies is a very important and challenging subject in modern astronomy. In order to probe the IGMF from rotation measures (RMs) of extragalactic radio sources, we need to separate RMs due to other origins such as the source, intervening galaxies, and our Galaxy. In this paper, we discuss observational strategies for the separation by means of Faraday tomography (Faraday RM Synthesis). We consider an observation of a single radio source such as a radio galaxy or a quasar viewed through the Galaxy and the cosmic web. We then compare the observation with another observation of a neighboring source with a small angular separation. Our simulations with simple models of the sources suggest that it would not be easy to detect an RM due to the IGMF of order $\sim 1~{\rm rad~m^{-2}}$, the expected value for the IGMF through a single filament. By contrast, we find that an RM of at least $\sim 10~{\rm rad~m^{-2}}$ could be detected with the SKA or its pathfinders/precursors, if we achieve the selection of ideal sources. These results would be improved if we incorporated decomposition techniques such as RMCLEAN and QU-fitting. We discuss the feasibility of the strategies for cases with complex Galactic emission as well as with the effects of observational noise and radio frequency interference. | \label{section1} The intergalactic medium (IGM) in the cosmic web of filaments and clusters of galaxies is expected to be permeated with the intergalactic magnetic field (IGMF) (\cite{wrsst11,rsttw11}). The IGMF plays crucial roles in various subjects of astrophysics: the propagation of ultra-high energy cosmic-rays and $\gamma$-rays (\cite{mur08,das08,tnys09,rdk10,tak08,tak11,tak12}), radio emission in galaxy clusters (\cite{fer12,fo13}), substructures during cluster mergers (\cite{asa05,taki08}), and the configuration of magnetic fields in spiral galaxies (\cite{smk10}). Seed IGMFs could be generated in the inflation, phase transition, and recombination eras (\cite{gr01,tak05,ich06}), during cosmic reionization (\cite{gfz00,lap05,xu08,ads10}), and through cosmological shock waves (\cite{kcor97,rkb98}). Seed fields of any origin could be further amplified through compression and turbulent dynamo action during structure formation (\cite{rkcd08,dbd08,cr09,sbsa10}). In addition, leakage of magnetic fields and cosmic rays from galaxies should be taken into consideration (\cite{ddlm09,mb11}). The above diverse processes underline the importance of observational tests of the IGMF. One of the few possible methods to probe cosmic magnetic fields is to utilize Faraday rotation in radio polarimetry (\cite{ct02,gbf04,beck09,gov10}). The rotation of the polarization angle is proportional to the square of the wavelength, and the proportionality constant, the rotation measure (RM), is an integral of the magnetic field weighted by the electron density. This conventional method, however, works only in the case of observing a single polarized radio source. Otherwise, in cases of multiple emitters along the line-of-sight (LOS), the rotation of the polarization angle traces a complex curve (\cite{bd05}, hereafter BD05), and the RM cannot be easily estimated. Moreover, RMs of a few to several tens ${\rm rad~m^{-2}}$ are usually associated with radio sources (\cite{sc86,osu12}) and the Galaxy (\cite{mao10,opp12}).
These RMs are larger than the expected RMs through filaments, $\sim 1-10~{\rm rad~m^{-2}}$ (\cite{ar10,ar11,agr13}), and cannot be easily separated from an observed RM by the conventional method. Therefore, we need to establish alternative methods which allow us to estimate hidden RM components along the LOS. As a method to separate multiple sources and RMs along the LOS, a revolutionary technique, called Faraday RM synthesis or Faraday tomography\footnote{We consider one-dimensional reconstruction in this paper. Although the phrase ``tomography'' generally denotes an attempt to reconstruct the actual 3D distribution from observed integrals through the volume, we refer to this technique as Faraday tomography throughout this paper, foreseeing future 3D imaging of the cosmic web.}, was first proposed by \citet{burn66} and extended by BD05. Previous works on the interstellar medium (\cite{skb07,skb09}), the Galaxy (\cite{mao10}), external galaxies (\cite{hbe09}), and active galactic nuclei (\cite{osu12}) have demonstrated that the technique is powerful in resolving RM structures along the LOS. It is thus a promising tool for studying the IGMF in the era of wide-band radio polarimetry with the Square Kilometer Array (SKA) and its pathfinders/precursors such as the Low Frequency Array (LOFAR), the Giant Metrewave Radio Telescope (GMRT), and the Australian SKA Pathfinder (ASKAP) (see \cite{bfss12} for a summary of telescopes). In this paper, we discuss observational strategies to probe the IGMF by means of Faraday tomography. We consider the frequency coverage and number of channels of future observations. Faraday tomography is in general improved by incorporating decomposition techniques such as RMCLEAN (\cite{hea09}) and QU-fitting (\cite{osu12}). However, decomposition techniques have their own uncertainties (\cite{far11,kum13}). Therefore, we concentrate on a standard method of Faraday tomography without any corrections to see its original potential. In fact, decomposition is powerful for our purpose; this is addressed in a separate paper (\cite{ide14a}). The rest of this paper is organized as follows. In section 2, we introduce the method of Faraday tomography and describe our model. The results are shown in section 3, and the discussion and summary follow in sections 4 and 5, respectively. | \label{section4} \begin{figure}[tp] \begin{center} \FigureFile(80mm,40mm){f8.eps} \end{center} \caption{Same as Figure \ref{f4} but for the models with $F_{\rm c0}/F_{\rm d0}=1$ and ${\rm RM_{IGMF}}=10~{\rm rad~m^{-2}}$ and $n=0,1,2,3,4$ (see text) for the SKA observation, from the top to the bottom panel, respectively.} \label{f8} \end{figure} We have presented the cases for a purely real $F(\phi)$, obtained if the intrinsic polarization angle, $\chi_0$, does not depend on $\phi$. $\chi_0$ is, however, determined by the structure of the magnetic fields, and could be a function of $\phi$. In order to see the effects of $\chi_0(\phi)$ on the reconstruction, we consider a variable $\chi_0(\phi)$ in our model. We multiply the real function $F(\phi)$ in Equation (\ref{eq1}) by a phase factor $e^{2i \chi_0(\phi)}$, keeping the absolute value of the model FDF the same. For $\chi_0(\phi)$, although its general profile is not known, some characteristic behaviors can be understood by using a simple analytic function. We consider a periodic function $\chi_0(\phi) = \cos (2\pi \phi \times 0.1n)$ for $n=$ 1, 2, 3, and 4, since periodicity is expected from multiple reversals of turbulent magnetic fields.
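Two standard relations and a minimal numerical sketch may help to make this setup concrete. The relations (standard Faraday-rotation and FDF conventions, as in BD05) are
\begin{equation}
\chi(\lambda^2) \;=\; \chi_0 + {\rm RM}\,\lambda^2, \qquad
{\rm RM} \;\simeq\; 0.81 \int \left(\frac{n_{\rm e}}{{\rm cm^{-3}}}\right)\left(\frac{B_{\parallel}}{\mu{\rm G}}\right)\frac{dl}{{\rm pc}}~~{\rm rad~m^{-2}},
\end{equation}
\begin{equation}
\tilde{P}(\lambda^2) \;=\; \int_{-\infty}^{\infty} F(\phi)\, e^{2i\phi \lambda^2}\, d\phi .
\end{equation}
A toy reconstruction for the $n=1$ model above follows; the Gaussian source widths, band, channel count, and noise level are illustrative assumptions, not the paper's exact configuration.
\begin{verbatim}
import numpy as np

C = 299792458.0                               # speed of light, m/s
phi = np.linspace(-50.0, 60.0, 2201)          # Faraday depth grid (rad m^-2)
rm_igmf = 10.0                                # assumed gap between components
f_true = (np.exp(-0.5 * (phi / 1.0) ** 2)     # compact source at phi = 0
          + np.exp(-0.5 * ((phi - rm_igmf) / 3.0) ** 2))
f_true = f_true * np.exp(2j * np.cos(2.0 * np.pi * 0.1 * phi))  # chi0, n = 1

nu = np.linspace(0.3e9, 3.0e9, 1200)          # toy wide band, Hz
lam2 = np.sort((C / nu) ** 2)                 # even in nu, uneven in lambda^2
p_obs = np.trapz(f_true[None, :] * np.exp(2j * np.outer(lam2, phi)),
                 phi, axis=1)
rng = np.random.default_rng(1)
p_obs = p_obs + 0.3 * np.abs(p_obs).mean() * (
    rng.normal(size=lam2.size) + 1j * rng.normal(size=lam2.size)) / np.sqrt(2.0)

lam2_0 = lam2.mean()                          # RM synthesis inversion (BD05)
f_rec = np.array([np.trapz(p_obs * np.exp(-2j * p * (lam2 - lam2_0)), lam2)
                  for p in phi]) / (lam2.max() - lam2.min())
print("RMSF FWHM ~ 2*sqrt(3)/Dlambda^2 =",
      round(2.0 * np.sqrt(3.0) / (lam2.max() - lam2.min()), 2), "rad m^-2")
\end{verbatim}
The separation of the two peaks of $|F(\phi)|$ in the reconstruction then estimates ${\rm RM_{IGMF}}$; with this band the RMSF width is $\approx 3.5~{\rm rad~m^{-2}}$, so a $10~{\rm rad~m^{-2}}$ gap is resolvable even at the assumed 30\% noise level, consistent with the conclusions quoted in the abstract.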
The results with $F_{\rm c0}/F_{\rm d0}=1$ and ${\rm RM_{IGMF}}=10$ ${\rm rad~m^{-2}}$ are shown in Figure \ref{f8}. We find that the profiles of the reconstructed FDFs depend strongly on $n$. Nevertheless, the edges of the sources are rather sharp compared with the fiducial model ($n=0$). This may be ascribed to the cancellation of polarized emission in the tails due to the rotation of the intrinsic polarization angle. Therefore, our main simulations can be regarded as conservative cases with the largest extension of the skirts, which would be somewhat reduced in realistic situations. Ultimately, ${\rm RM_{IGMF}}$ could be better estimated from the gap between the two sources. A real FDF of the Galaxy would be much more complex. It would have $n\gg 4$, based on the coherence length of magnetic fields of several tens of pc (see \cite{arkg13} and references therein). Even in such a case, our observational strategies would still be applicable, since the intrinsic polarization angle does not alter the key feature: two sources and the gap between them (Figure \ref{f8}). Multiple peaks may introduce an ambiguity in identifying which peak is of extragalactic origin and which gap is caused by the IGMF. But we could resolve this ambiguity by carrying out an ``off-source'' observation to obtain the FDF of the Galaxy alone. We note that a real FDF of the Galaxy should depend on Galactic longitude and latitude as well as on properties of the turbulent magnetic fields such as the driving scale, the Mach number, the plasma $\beta$, and so on (\cite{arkg13}). Although such considerations are beyond the scope of this paper, developing realistic FDFs of the Galaxy is an important task for making the detection of the gap more reliable. Realistic FDFs of the Galaxy based on Akahori et al. (2013) will be presented in a separate paper (\cite{ide14b}). \begin{figure}[tp] \begin{center} \FigureFile(80mm,40mm){f9.eps} \end{center} \caption{Same as Figure \ref{f4} but for the models with $F_{\rm c0}/F_{\rm d0}=1$ and ${\rm RM_{IGMF}}=10~{\rm rad~m^{-2}}$ and with observational effects: the top three panels are for the cases with noise amplitudes of 30, 50, and 100 \% of the polarized intensities, and the bottom two panels are for the cases with RFIs at sites X and Y, where unshadowed regions are wavelength coverages for the SKA without strong RFIs.} \label{f9} \end{figure} Another simplification in this paper was the neglect of observational effects. In particular, there is significant noise on the polarized intensities, and some frequencies are probably missing due to radio frequency interferences (RFIs). We demonstrate these effects as follows. To assess the effects of observational noise, we add noise to the observable polarized intensity, $\tilde{P}(\lambda^2)$, and compute the reconstructed FDF. We consider random Gaussian noise in each $\lambda^2$ channel. The results with noise amplitudes of 30, 50, and 100 \% of the polarized intensities for representative cases are shown in the top panels of Figures \ref{f9} and \ref{f10}. We see that a noise amplitude of 30 \% does not dramatically alter the overall profile of the FDF, and the reconstructed FDF would be useful up to a noise amplitude of $\sim 50~\%$. Such a noise requirement would limit the sample of radio sources that could be considered.
\begin{figure}[tp] \begin{center} \FigureFile(80mm,40mm){f10.eps} \end{center} \caption{Same as Figure \ref{f7} but for the models with the RM of the cosmic web B, $\phi_{f,C}-\phi_{f,B}=$ 10 ${\rm rad~m^{-2}}$, and with observational effects: the top three panels are for the cases with noise amplitudes of 30, 50, and 100 \% of the polarized intensities, and the bottom two panels are for the cases with RFIs at sites X and Y.} \label{f10} \end{figure} To assess the effects of RFIs, we discard the data at frequencies where significant RFIs exist and compute the reconstructed FDF. We refer to the recent assessment report\footnote{http://www.skatelescope.org/wp-content/uploads/2012/06/78g\_SKAmon-Max.Hold\_.Mode\_.Report.pdf} of RFIs for the SKA candidate sites, X and Y. The results for representative cases are shown in the bottom panels of Figures \ref{f9} and \ref{f10}. We can see less-peaked profiles for compact sources due to the lack of data at low frequencies, around $\sim 86-108$ MHz and $\sim 170-270$ MHz. Such a broadening of the FDF for the compact source would produce uncertainties of a few ${\rm rad~m^{-2}}$ in the estimation of the gap. Reconstructed FDFs have significant skirts and side lobes originating from the RMSF. Such skirts and side lobes are a major source of ambiguity in probing the IGMF. The issues related to the RMSF could, however, be mitigated by using decomposition techniques such as RMCLEAN \citep{hea09} and calibrations of the RMSF by phase correction (BD05) and symmetry assumptions \citep{fssb11}. Also, wavelet-based tomography \citep{fssb11,bfss12} would allow a better representation of localized structures in the data, unlike decompositions into harmonic functions in the Fourier transform. QU-fitting (\cite{osu12,ide14a}) and compressive sampling/sensing \citep{don06,ct06,lbcd11,ast11} would also be promising for probing the gap caused by the IGMF. Another important route to better reconstructions of the FDF is even sampling in $\lambda^2$ space. Although we have assumed even sampling in the simulations, observations so far sample the data evenly in $\lambda$ space. Such data are unevenly sampled in $\lambda^2$ space and cause large numerical artifacts in the Fourier transform. In order to minimize numerical errors in the Fourier transform, the development of flexible receiver systems which allow us to sample the data evenly in $\lambda^2$ space would be a key engineering task for future radio astronomy (e.g., CASPER/ROACH\footnote{https://casper.berkeley.edu/wiki/ROACH}). | 14 | 3 | 1403.0325 |
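The even-$\lambda^2$ channelization advocated here is easy to specify. A hypothetical layout follows (the function name and band edges are invented for illustration; ordinary correlators space channels evenly in frequency instead):
\begin{verbatim}
import numpy as np

C = 299792458.0  # m/s

def channels_even_in_lam2(nu_lo_hz, nu_hi_hz, n_chan):
    # Channel center frequencies whose squared wavelengths are
    # evenly spaced between the band edges.
    lam2 = np.linspace((C / nu_hi_hz) ** 2, (C / nu_lo_hz) ** 2, n_chan)
    return C / np.sqrt(lam2)

print(np.round(channels_even_in_lam2(0.3e9, 3.0e9, 8) / 1e9, 3))
# -> channel centers crowd toward the low-frequency (long-wavelength) end
\end{verbatim}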
1403 | 1403.2927_arXiv.txt | The X-ray luminosity function, which is closely related to the cluster mass function, is an important statistic of the census of galaxy clusters in our Universe. It is also an important means to probe the cosmological model of our Universe. Based on our recently completed {\sf REFLEX II} cluster sample comprising 910 galaxy clusters with redshifts, we construct the X-ray luminosity function of galaxy clusters for the nearby Universe and discuss its implications. We derive the X-ray luminosity function of the {\sf REFLEX II} clusters on the basis of a precisely constructed selection function for the full sample and for several redshift slices from $z = 0$ to $z = 0.4$. In this redshift interval we find no significant signature of redshift evolution of the luminosity function. We provide the results of fits of a parameterized Schechter function and extensions of it, which provide a reasonable characterization of the data. We also use a model for structure formation and galaxy cluster evolution to compare the observed X-ray luminosity function with the theoretical predictions for different cosmological models. The most interesting constraints can be derived for the cosmological parameters $\Omega_m$ and $\sigma_8$. We explore the influence of several model assumptions on which our analysis is based. We find that the scaling relation of X-ray luminosity and mass introduces the largest systematic uncertainty. From the statistical uncertainty alone we can constrain the matter density parameter, $\Omega_m \sim 0.27 \pm 0.03$, and the amplitude parameter of the matter density fluctuations, $\sigma_8 \sim 0.80 \pm 0.03$. Marginalizing over the most important uncertainties, the normalisation and slope of the $L_X - M$ scaling relation, we obtain larger error bars and a result of $\Omega_m \sim 0.29 \pm 0.04$ and $\sigma_8 \sim 0.77 \pm 0.07$ ($1\sigma$ confidence limits). We compare our results with those of the SZ-cluster survey provided by the {\sf PLANCK} mission and find very good agreement with the results using {\sf PLANCK} clusters as cosmological probes, but some tension with the {\sf PLANCK} cosmological results from the microwave background anisotropies, which we discuss in the paper. We also make a comparison with results from the SDSS cluster survey, several cosmological X-ray cluster surveys, and recent Sunyaev-Zel'dovich effect surveys. We find good agreement with these previous results and show that the {\sf REFLEX II} survey provides a significant reduction in the uncertainties compared to earlier measurements. | Galaxy clusters, as the largest clearly defined objects in our Universe, are interesting astrophysical laboratories and important cosmological probes (e.g. Sarazin 1986, Borgani et al. 2001, Voit 2005, Vikhlinin et al. 2009, Allen et al. 2011, B\"ohringer 2011). They are particularly good tracers of the large-scale structure of the cosmic matter distribution and its growth with time. While most of the precise knowledge on the galaxy cluster population has come from X-ray observations, as detailed in the above references, recent progress has also been made by optical cluster surveys (e.g. Rozo et al. 2010) and millimeter wave surveys using the Sunyaev-Zel'dovich effect (Reichardt et al. 2012, Benson et al. 2013, Marriage et al. 2011, Sehgal et al. 2011, PLANCK-Collaboration 2011, 2013b).
X-ray surveys for galaxy clusters are still the most advanced, providing statistically well-defined, approximately mass-selected cluster samples, since: (i) X-ray luminosity is tightly correlated to mass (e.g. Reiprich \& B\"ohringer 2002, Pratt et al. 2009), (ii) bright X-ray emission is only observed for evolved clusters with deep gravitational potentials, (iii) the X-ray emission is highly peaked and projection effects are minimized, and (iv) for all these reasons the survey selection function can be accurately modeled. The {\sf ROSAT} All-Sky Survey (RASS, Tr\"umper 1993) is the only existing full-sky survey conducted with an imaging X-ray telescope, providing a sky atlas in which one can search systematically for clusters in the nearby Universe. The largest high-quality sample of X-ray selected galaxy clusters has so far been provided by the {\sf REFLEX} Cluster Survey (B\"ohringer et al. 2001, 2004, 2013), based on the southern extragalactic sky of RASS at declination $\le 2.5$ degrees. The quality of the sample has been demonstrated by showing that it can provide reliable measures of the large-scale structure (Collins et al. 2000, Schuecker et al. 2001a, Kerscher et al. 2001), yielding cosmological parameters (Schuecker et al. 2003a, b; B\"ohringer 2011) in good agreement, within the measurement uncertainties, with the subsequently published WMAP results (Spergel et al. 2003, Komatsu et al. 2011). The {\sf REFLEX} data have also been used to study the X-ray luminosity function of galaxy clusters (B\"ohringer et al. 2002), the galaxy velocity dispersion--X-ray luminosity relation (Ortiz-Gil et al. 2004), the statistics of Minkowski functionals in the cluster distribution (Kerscher et al. 2001), and to select statistically well-defined subsamples like HIFLUGCS (Reiprich \& B\"ohringer 2002) and {\sf REXCESS} (B\"ohringer et al. 2007). The latter is particularly important as a representative sample of X-ray surveys to establish X-ray scaling relations (Croston et al. 2008, Pratt et al. 2009, 2010, Arnaud et al. 2010) and the statistics of the morphological distribution of galaxy clusters in X-rays (B\"ohringer et al. 2010).
The {\sf REFLEX II} cluster sample has also recently been used to construct the first supercluster catalog for clusters with a well defined selection function (Chon \& B\"ohringer 2013), showing among other results that the X-ray luminosity function of clusters in superclusters is top-heavy in comparison to that of clusters in the field. A preliminary sample of {\sf REFLEX II} which had 49 redshifts less than used here, has been applied to the study of the galaxy cluster power spectrum by Balaguera-Antolinez et al. (2011). The results show a very good agreement with the cosmological predictions based on cosmological parameters determined from WMAP 5 yr data. In a second paper (Balaguera-Antolinez et al. 2012), in which the construction of {\sf REFLEX} mock samples from simulations used in the earlier paper is described, a preliminary X-ray luminosity function of {\sf REFLEX II} has been determined. Here we use a completely new approach with updates on the cluster sample, the scaling relations, and the missing flux correction used in the sample construction, and the survey selection function based on the procedures described in B\"ohringer et al. (2013). Other previous determinations of the X-ray luminosity function of galaxy clusters include: Piccinotti et al. (1982), Kowalski et al. (1984), Gioia et al. (1984), Edge et al. (1990), Henry et al. (1992), Burns et al. (1996), Ebeling et al. (1997), Collins et al. (1997), Burke et al. (1997), Rosati et al. (1998), Vikhlinin et al. (1998), De Grandi et al. (1999), Ledlow et al. (1999), Nichol et al. (1999), Gioia et al (2001), Donahue et al. (2001) Allen et al. (2003), Mullis et al. (2004), B\"ohringer et al. (2007), Koens et al. (2013). The paper is organized as follows. In chapter 2 we introduce the REFLEX II galaxy cluster sample and the survey selection function. In section 3 we use the parameterized Schechter function fitted to our data to describe the resulting X-ray luminosity function. In section 4 we outline the cosmological modeling used for the theoretical prediction of the cluster mass and X-ray luminosity function. In section 5 we discuss the results of the model comparison to the data for different cosmological models. The effect of the uncertainties in the used cluster scaling relations on the results is discussed in section 6 and other systematic uncertainties of our analysis are discussed in section 7. In section 8 we compare our results to findings from other surveys and section 9 closes the paper with the summary and conclusions. If not stated otherwise, we use for the calculation of physical parameters and survey volumes a geometrically flat $\Lambda$-cosmological model with $\Omega_m = 0.3$ and $h_{70} = H_0/70$ km s$^{-1}$ Mpc$^{-1}$ = 0.7. All uncertainties without further specifications refer to 1$\sigma$ confidence limits. | We used the {\sf REFLEX II} catalog and the well defined selection function to construct the X-ray luminosity function for the sample at a median redshift of $z = 0.102$. For the flux limit of $F_X = 1.8 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$, a luminosity limit of $L_X = 0.03 \times 10^{44}$ erg s$^{-1}$ (for 0.1 - 2.5 keV), a lower photon count limit of 20 source photons, and a redshift range of $z = 0 - 0.4$, some 819 clusters are included in the luminosity function determination. The cluster catalog is better than 90\% complete, with a best estimate of about 95\% as detailed in B\"ohringer et al. 
(2013), and we also expect a few percent contamination by clusters whose X-ray luminosity may be boosted by an AGN. Since this fraction is small, and since an incompleteness even of the order of 10\% causes only a small change in the derived cosmological parameters, as detailed in section 7, we have not included any correction for incompleteness in the presented results. Inspecting the XLF in different redshift shells reveals no significant evolution. We showed for our best-fitting cosmological model that this undetectable change is consistent with the theoretical expectation. This does not imply that there is no evolution in the cluster mass function. There are two competing effects: the evolution of the mass function and the evolution of the $L_X - M$ relation. The relation of X-ray luminosity to mass evolves because clusters have on average been more compact in the past, which increases the X-ray luminosity (through the square-law dependence on the density), so that clusters of the same mass become brighter. This compensates for the loss of massive clusters at higher redshifts and suppresses the evolution in X-ray luminosity. In search of a good analytical description of the XLF, we found that the Schechter function does not describe the data with sufficient precision. We therefore propose a modified Schechter function for a good description of the data. The most interesting application of the XLF is to test theoretical predictions of this function within the framework of different cosmological models. These tests are based on the theory of cosmic evolution and structure formation and rely, among other things, on the description of the transfer function of the power spectrum by Eisenstein \& Hu (1998), on the numerical-simulation-calibrated recipe for the cluster mass function by Tinker et al. (2008), and on scaling relations that enable the connection of cluster mass and X-ray luminosity. Apart from the scaling relations, the rest of the theoretical framework has been intensively tested and is believed to be accurate at about the 5\% level. We find that we can get a very good match of the observed XLF with the theoretical predictions for a very reasonable cosmological model, in particular if we restrict the fitting to the luminosity range $L_X \ge 0.25 \times 10^{44}$ erg s$^{-1}$ (0.1 - 2.4 keV) where we have an observationally calibrated $L_X - M$ relation (note the good match of the predicted and observed XLF in Fig.~\ref{fig7}). In using the observational data to constrain cosmological parameters we have not pursued a comprehensive marginalization over all relevant parameters in this paper. We postpone this to later work. Rather, we wanted to gain an overview and an understanding of the effect of the different parameters by studying them individually. From this investigation it becomes clear that by far the largest uncertainty in the constraints on cosmological parameters is introduced by the $L_X - M$ scaling relation, specifically its slope and normalization. We give a detailed account of the influence of the other parameters and then concentrate on a marginalization study including these two most important input parameters, which provides a good account of the overall uncertainties. The important constraints that we derive from the {\sf REFLEX II} data are on the matter density parameter, $\Omega_m = 0.29 \pm 0.04$, and the amplitude parameter of the matter density fluctuations, $\sigma_8 = 0.77 \pm 0.07$.
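For orientation, the baseline form referred to here is the Schechter parameterization; the modified function proposed by the paper adds shape freedom beyond this, and only the standard expression, which the text takes as its starting point, is reproduced:
\begin{equation}
\frac{dn}{dL_X} \;=\; \frac{A}{L_X^{*}} \left(\frac{L_X}{L_X^{*}}\right)^{-\alpha} \exp\!\left(-\frac{L_X}{L_X^{*}}\right),
\end{equation}
with normalization $A$, faint-end slope $\alpha$, and characteristic luminosity $L_X^{*}$.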
The currently most interesting comparison of our findings with other results is that with the recently published cosmological constraints from clusters detected with {\sf PLANCK} (Planck Collaboration 2013b). The {\sf PLANCK} results show a tension between the cosmological constraints on $\Omega_m$ and $\sigma_8$ from clusters and from the cosmic microwave background (CMB) anisotropies, which has caused a lively debate. We find that our results agree perfectly with the {\sf PLANCK} cluster data, and it would be very hard to reconcile them with the CMB-derived results. However, we find that our results are consistent with the constraints from the CMB study with WMAP (Hinshaw et al. 2013). Since there is also some tension between the implications from the CMB data of WMAP and {\sf PLANCK}, the source of which is currently under investigation, we are confident that the resolution of these problems will bring a closer agreement of all the data in the future without a significant change of our results. The good agreement of our results with the recent work on cluster cosmology in the literature is encouraging. But we should point out here that our new results provide tighter constraints on the two tested parameters than the previous studies and constitute a significant improvement. We have, however, reached a limit where a further increase of the cluster sample and of the overall statistics will not lead to much further improvement if we cannot better calibrate the scaling relations. A major reason for our poor knowledge of the scaling relations lies in several facts. On the one hand, the cluster samples with very well-defined selection criteria used in the scaling relation studies are still very small, with typically 30 - 50 objects. Another source of uncertainty is revealed by the differences in the results for mass, temperature, or X-ray luminosity determined for the same set of clusters by different authors (e.g. Reichert et al. 2011). And there are still some calibration uncertainties for the XMM-Newton and Chandra instruments, for which there is an ongoing resolution effort (e.g. Kettula et al. 2013). Therefore one of the next major efforts of the authors will be to increase the sample size and the data reduction quality of the cluster samples to obtain better constraints on the important scaling relations. | 14 | 3 | 1403.2927 |
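The volume and luminosity bookkeeping underlying the REFLEX II analysis follows from the stated reference cosmology (flat $\Lambda$CDM, $\Omega_m = 0.3$, $h_{70} = 0.7$). A minimal sketch using astropy follows; the K-correction and Galactic absorption, which the survey pipeline applies, are deliberately ignored here.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0 * u.km / u.s / u.Mpc, Om0=0.3)
f_lim = 1.8e-12 * u.erg / u.cm ** 2 / u.s      # REFLEX II flux limit

for z in (0.05, 0.102, 0.3):
    d_l = cosmo.luminosity_distance(z)
    l_lim = (4.0 * np.pi * d_l ** 2 * f_lim).to(u.erg / u.s)
    print(f"z = {z:5.3f}: L_X(lim) ~ {l_lim:.2e}")

# Full-sky comoving volume to the z = 0.4 cut (the survey covers only
# the southern extragalactic sky, so its volume is a fraction of this).
print(cosmo.comoving_volume(0.4).to(u.Gpc ** 3))
\end{verbatim}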
1403 | 1403.5670_arXiv.txt | The quest for the origin of matter in the Universe has been the subject of philosophical and theological debates throughout the history of mankind, but quantitative answers could be found only through the scientific achievements of the last century. A first important step along this way was the development of spectral analysis by Kirchhoff and Bunsen in the middle of the 19$^{\rm th}$ century, which provided the first insights into the chemical composition of the sun and the stars. The energy source of the stars and the related processes of nucleosynthesis, however, could be revealed only with the discoveries of nuclear physics. A final breakthrough came eventually with the compilation of elemental and isotopic abundances in the solar system, which reflect the various nucleosynthetic processes in detail. This review focuses on the mass region above iron, where the formation of the elements is dominated by neutron capture, mainly in the slow ($s$) and rapid ($r$) processes. Following a brief historic account and a sketch of the relevant astrophysical models, emphasis is put on the nuclear physics input, where the status and perspectives of experimental approaches are presented in some detail, complemented by the indispensable role of theory. | \subsection{Milestones and basic concepts \label{sec:1.1}} In 1938, the quest for the energy production in stars was solved by the work of Bethe and Critchfield \cite{BeC38}, von Weizs\"acker \cite{Wei38}, and Bethe \cite{Bet39a}, but the origin of the heavy elements remained a puzzle for almost two more decades. It was finally the discovery of the unstable element technetium in the atmosphere of red giant stars by Merrill in 1952 \cite{Mer52b} that settled this issue in favor of stellar nucleosynthesis, thus questioning a primordial production in the Big Bang. A stellar origin of the heavy elements was strongly supported by the increasingly reliable compilations of the abundances in the solar system by Suess and Urey \cite{SuU56} and Cameron \cite{Cam59a}, because the pronounced features in the abundance distribution could be interpreted in terms of a series of nucleosynthesis scenarios associated with stellar evolution models. This key achievement is summarized in the famous fundamental papers published in 1957 by Burbidge, Burbidge, Fowler and Hoyle (B$^2$FH) \cite{BBF57} and by Cameron \cite{Cam57,Cam57b}. While the elements from carbon to iron were found to be produced by charged-particle reactions during the evolutionary phases from stellar He to Si burning, all elements heavier than iron are essentially built up by neutron reactions in the slow ($s$) and rapid ($r$) neutron capture processes, as they were termed by B$^2$FH. The $s$ process, which takes place during He and C burning, is characterized by comparatively low neutron densities, typically a few times 10$^8$ cm$^{-3}$, so that neutron capture times are much longer than most $\beta$-decay times. This implies that the reaction path of the $s$ process follows the stability valley, as sketched in Figure~\ref{fig:1}, with the important consequence that the neutron capture cross sections averaged over the stellar spectrum are of pivotal importance for the resulting $s$ abundances. Although the available cross sections under stellar conditions were very scarce and rather uncertain, B$^2$FH could already infer that the product of cross section times the resulting $s$ abundance represents a smooth function of mass number $A$.
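For later reference: the `cross sections averaged over the stellar spectrum' are the Maxwellian-averaged cross sections (MACS), conventionally quoted at $kT = 30$ keV and defined as
\begin{equation}
\langle\sigma\rangle_{kT} \;=\; \frac{\langle\sigma v\rangle}{v_T} \;=\; \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^2} \int_0^{\infty} \sigma(E)\,E\,\exp(-E/kT)\,dE ,
\end{equation}
where $v_T = \sqrt{2kT/\mu}$ is the mean thermal velocity and $\mu$ the reduced mass of the neutron-nucleus system.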
In the following decade, the information on cross section data was significantly improved by dedicated measurements \cite{MaG65}, leading to a first compilation of stellar ($n, \gamma$) cross sections by Allen, Gibbons and Macklin in 1971 \cite{AGM71}. Meanwhile, Clayton et al. \cite{CFH61} had worked out a phenomenological model of the $s$ process, assuming a seed abundance of $^{56}$Fe exposed to an exponential distribution of neutron exposures, with the cross section values of the isotopes involved in the reaction path as the essential input. \begin{center} \begin{figure}[tb] \includegraphics[width=15cm]{fig1} \caption{The formation processes of the elements between iron and the actinides. The neutron capture path of the $s$ process follows the valley of stability and ends in the Pb/Bi region by $\alpha$-recycling. Due to the much higher neutron densities, the $r$-process path is shifted to the far neutron-rich region, from where the reaction products decay back to stability. The solar abundances are essentially composed of contributions from both processes, except for the $s$-only and $r$-only isotopes, which are shielded by stable isobars against the $r$-process region or lie outside the $s$-process path, respectively. An additional minor component is ascribed to the $p$ (or $\gamma$) process to describe the rare, stable proton-rich isotopes. The magic neutron number $N=50$ is shown to indicate the strong impact of nuclear structure effects, which give rise to pronounced maxima in the observed abundance distribution as indicated in the inset. \label{fig:1} } \end{figure} \end{center} As the cross section database was improved, this classical model turned out to be extremely useful for describing the $s$-process component in the solar abundance distribution. In fact, it emerged that the $s$ process itself is composed of different parts, i.e. the weak, main, and strong components, as shown by Seeger et al. \cite{SFC65}. This $s$-process picture was eventually completed by including the effect of important branchings in the reaction path, which arise from the competition between neutron capture and $\beta^-$-decay of sufficiently long-lived isotopes \cite{ClW74}. The appealing property of the classical approach was that a fairly comprehensive picture of the $s$ process could be drawn with very few free parameters and that these parameters are directly related to the physical conditions typical of the $s$-process environment, i.e. neutron fluence, seed abundance, neutron density, and temperature. Moreover, it was found that reaction flow equilibrium is achieved in mass regions of the main component between magic neutron numbers, where the characteristic product of cross section and $s$ abundance, $\sigma N(A)$, is nearly constant. In spite of its schematic nature, the classical $s$ process could be used to reproduce the solar $s$ abundances within a few percent, as illustrated in Figure~\ref{fig:2}. \begin{center} \begin{figure}[tb] \includegraphics[width=0.95\textwidth]{show_sigma_n_all} \hfill \caption{The characteristic product of cross section times $s$-process abundance plotted as a function of mass number. The thick solid line was obtained via the classical model for the main component, and the symbols denote the empirical products for the $s$-only nuclei. Some important branchings of the neutron capture chain are indicated as well. A second, weak component had to be assumed for explaining the higher $s$ abundances between Fe and $A\approx90$.
Note that reaction flow equilibrium has only been achieved for the main component in mass regions between magic neutron numbers (where $\sigma N$ values are nearly constant). \label{fig:2} } \end{figure} \end{center} Nevertheless, as more accurate cross section data became available, particularly around the bottle-neck isotopes with magic neutron numbers and in $s$-process branchings, inherent inconsistencies of the classical model came to light \cite{KGB90,WVK98a}, indicating the need for a more physical prescription based on stellar evolution \cite{AKW99b}. This transition started with early models for stellar He burning by Weigert \cite{Wei66} and Schwarzschild and H{\"a}rm \cite{ScH67}, which were used by Sanders \cite{San67} to verify implicit $s$-process nucleosynthesis. The connection to the exponential distribution of neutron exposures postulated by the classical approach was ultimately provided by Ulrich \cite{Ulr73}, who showed that this feature follows naturally from the partial overlap of $s$-process zones in subsequent thermal instabilities during the He shell burning phase in low-mass asymptotic giant branch (AGB) stars. Consequently, the classical approach was abandoned as a serious $s$-process model, but continued to serve as a convenient approximation in the mass regions between magic neutron numbers with constant $\sigma N_s$ products. The second half of the solar abundances above iron is contributed by the $r$ process. In this case, the neutron densities are extremely high, resulting in neutron capture times much shorter than average $\beta$-decay times. This implies that the reaction path is shifted into the neutron-rich region of the nuclide chart until the ($n, \gamma$) sequence is halted by inverse ($\gamma, n$) reactions induced by the hot photon bath. Contrary to the $s$ process, where the abundances are dependent on the cross section values, the $r$ abundances are determined by the $\beta$-decay half-lives of the waiting points close to the neutron drip line. As a consequence of the explosive supernova scenario suggested by B$^2$FH, prescriptions of the $r$-process abundances were severely challenged by the fact that the required nuclear physics properties for the short-lived, neutron-rich nuclei forming the comprehensive reaction network far from stability were essentially unknown. This information includes $\beta$-decay rates and nuclear masses, neutron-induced and spontaneous fission rates, cross section data, and $\beta$-delayed neutron emission for several thousand nuclei. First attempts to reproduce the $r$-process abundances that had been inferred by subtraction of the $s$ abundances from the solar values \cite{AGM71} started with a simplified static approximation, assuming constant neutron density and temperature ($n_n\geq10^{20}$ cm$^{-3}$, $T\geq10^9$ K) during the explosion and neglecting neutron-induced reactions during freeze-out \cite{SFC65}. Early dynamic $r$-process models faced not only enormous computational problems, but also had to deal with the many unknowns of the possible scenarios. In general, supernovae were preferred over supermassive objects and novae as potential $r$-process sites \cite{TAT68}, but the relevant features of such explosions, i.e. the temperature and density profiles, the velocity distribution during and shortly after the explosion, and the initial seed composition, were too uncertain to draw a plausible picture of the $r$ process by the end of the 1970s \cite{Hil78}.
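The physics of these waiting points can be sketched in compact form. Under the schematic assumption of ($n,\gamma$)$\rightleftharpoons$($\gamma,n$) equilibrium within an isotopic chain (with $G$ the nuclear partition functions and $S_n$ the neutron separation energy), the abundance ratio of neighboring isotopes follows a Saha-type relation, \begin{equation} \frac{N(Z,A+1)}{N(Z,A)} = n_n\,\frac{G(Z,A+1)}{2\,G(Z,A)} \left(\frac{A+1}{A}\right)^{3/2} \left(\frac{2\pi\hbar^2}{m_u k_{\rm B} T}\right)^{3/2} \exp\left(\frac{S_n(Z,A+1)}{k_{\rm B} T}\right), \end{equation} so that, for given $n_n$ and $T$, the reaction flow accumulates at isotopes with a characteristic neutron separation energy (of the order of $2-3$ MeV for the conditions quoted above) and has to wait there for $\beta$ decay. This is why the $r$ abundances are governed by half-lives and masses rather than by cross sections.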
As discussed by B$^2$FH, about 35 proton-rich isotopes cannot be produced by neutron captures, because they are shielded from the reaction networks of the $s$ and $r$ processes as shown in Figure~\ref{fig:1}. The initial idea that these isotopes were produced by proton capture ($p$ process) in the hydrogen-rich envelope of massive stars during the supernova explosion \cite{BBF57} had to be abandoned because it led to unrealistic assumptions for densities, temperatures and timescales. In 1978, Woosley and Howard \cite{WoH78} suggested the shock-heated Ne/O shell in core-collapse supernovae as the site of the $p$ process, where temperatures are high enough for modifying a preexisting seed distribution by a sequence of photo-disintegration reactions. Therefore, this approach is sometimes also referred to as the $\gamma$ process \cite{RPA90}. \subsection{Solar abundances \label{sec:1.2}} The abundance distribution in the solar system has served as an important source of information for the nucleosynthesis concepts \cite{BBF57,Cam57,Cam57b}. Following the pioneering work of Goldschmidt \cite{Gol37}, detailed abundance tables have been reported by Suess and Urey \cite{SuU56} and were then continuously improved by the combination of meteoritic isotope abundances, essentially based on C1 chondrites, and of elemental abundances from spectroscopy of the solar photosphere. This series started with Cameron \cite{Cam82}, Anders and Ebihara \cite{AnE82}, Anders and Grevesse \cite{AnG89} and continued until now with compilations by Lodders \cite{Lod03}, Grevesse et al. \cite{GAS07}, and Lodders, Palme, and Gail \cite{LPG09}. The distribution plotted in Figure~\ref{fig:3} shows the solar system abundances as a function of mass number, which clearly exhibits the influence of nuclear effects characteristic of the various nucleosynthesis sites. The distribution is by far dominated by the primordial H and He abundances from the Big Bang, followed by the rare elements Li, Be, and B. Because these are difficult to produce due to the stability gaps at $A=5$ and 8, but are easily burnt in stars, the present abundances are essentially produced via spallation by energetic cosmic rays \cite{Ree94}. Stellar nucleosynthesis of heavier nuclei starts with $^{12}$C and $^{16}$O, the products of He burning, which were partly converted to $^{14}$N by the CNO cycle in later stellar generations. \begin{center} \begin{figure}[tb] \includegraphics[width=0.95\textwidth]{show_abundance_data} \caption{The isotopic abundance distribution in the solar system with indications for the main production processes (data from \cite{LPG09}). The $s$ and $r$ maxima reflect the effect of the magic neutron numbers $N=50, 82, 126$. \label{fig:3} } \end{figure} \end{center} The light elements up to mass 50 are the result of charged-particle reactions during the advanced C, Ne, and O burning phases. This mass region is characterized by the enhancement of the more stable $\alpha$ nuclei and by the exponentially decreasing abundances due to the increase of the Coulomb barrier with atomic number. The last stage dominated by charged-particle reactions is Si burning, where densities and temperatures are high enough to reach nuclear statistical equilibrium. Under these extreme conditions only the most stable nuclei survive, leading to the distinct abundance peak around $A=56$. Any further build-up of heavier elements was then provided by neutron capture reactions starting on these abundant isotopes, i.e.
essentially on $^{56}$Fe as a seed, as discussed in the following section. | } Neutron reactions are of pivotal importance for our understanding of how the heavy element abundances are formed during the late stellar phases. Maxwellian averaged cross sections are particularly important to constrain $s$-process models related to the H/He shell burning in AGB stars and also for the He and C shell burning phases in massive stars. The fact that the respective $s$-process abundance distributions can be deduced in quantitative detail is crucial for defining the abundances produced by explosive nucleosynthesis and for deriving a reliable picture of Galactic chemical evolution. Continued improvements of laboratory neutron sources and of measurement techniques were instrumental for establishing a comprehensive collection of neutron-induced reaction rates in the astrophysically relevant energy range from a few up to about 300~keV. Apart from very few exceptions, experimental data are available for all stable isotopes between Fe and Pb, although not always with sufficient accuracy and in the entire energy range of interest. Such deficits are particularly found for the very small cross sections of the abundant light elements, which represent potential neutron poisons, and for neutron magic nuclei, which are the bottle-necks in the $s$-process reaction flow. For the unstable species, which are needed under special $s$-process conditions characterized by high neutron densities and for explosive nucleosynthesis, experimental data are still very scarce and must so far be complemented by theory. The main role of theory, however, refers to corrections concerning the stellar environment, i.e. with respect to the effect of thermally populated excited states and to the enhancement of weak interaction rates in the stellar plasma. The main challenges for the future will be related to the further improvement of the laboratory neutron sources and to the development of advanced experimental methods. Progress in these fields is mandatory for tackling the yet unsatisfactory situation with neutron reactions of unstable isotopes. It appears that promising developments in both areas are presently under way, with the potential for innovative solutions. As a consequence, future experiments can be performed with much higher sensitivities, i.e. by using very small amounts of sample material. This is crucial for dealing with unstable isotopes, because the sample activity can be reduced and the stringent problem of sample preparation can be solved by using the intense radioactive beam facilities. Within the next decade, these options will provide ample opportunities to extend neutron reaction studies into the region of unstable isotopes. \begin{center} {\bf Acknowledgements} \end{center} The authors would like to thank C. Guerrero, F. Gunsing, V. Vlachoudis, as well as J. Heyse and P. Schillebeeckxs for providing neutron fluxes and further information about n\_TOF and GELINA. Thanks are also due to R. Gallino and S. Bisterzo for their permission to use Figure~\ref{fig:5}. This work was supported by the Helmholtz Young Investigator project VH-NG-327 and the BMBF project 05P12RFFN6 and the Helmholtz International Center for FAIR. | 14 | 3 | 1403.5670 |
1403 | 1403.2266_arXiv.txt | Mid-infrared (MIR) spectra observed with Gemini/Michelle were used to study the nuclear region of the Compton-thick Seyfert~2 (Sy~2) galaxy Mrk~3 at a spatial resolution of $\sim$200\,pc. No polycyclic aromatic hydrocarbon (PAH) emission bands were detected in the N-band spectrum of Mrk~3. However, intense [Ar{\sc\,iii]}\,8.99\,$\mu$m, [S{\sc\,iv]}\,10.5\,$\mu$m and [Ne{\sc\,ii]}\,12.8\,$\mu$m ionic emission lines, as well as a silicate absorption feature at 9.7$\mu$m, have been found in the nuclear extraction ($\sim$200\,pc). We also present a subarcsecond-resolution Michelle N-band image of Mrk\,3, which resolves its circumnuclear region. This diffuse MIR emission shows up as wings along the east--west direction, closely aligned with the S-shaped morphology of the Narrow Line Region (NLR) observed in the optical [O{\sc\,iii]}\,$\lambda$5007\AA\ image with \emph{Hubble}/FOC. The nuclear continuum spectrum can be well represented by a theoretical torus spectral energy distribution (SED), suggesting that the nucleus of Mrk~3 may host the dusty toroidal structure predicted by the unified model of active galactic nuclei (AGN). In addition, the hydrogen column density (N$_H\,=\,4.8^{+3.3}_{-3.1}\times\,10^{23}$\,cm$^{-2}$) estimated with a torus model for Mrk~3 is consistent with the value derived from X-ray spectroscopy. The torus model geometry of Mrk~3 is similar to that of NGC~3281, both Compton-thick galaxies, as confirmed through fitting the 9.7$\mu$m silicate band profile. This result might provide further evidence that the silicate-rich dust can be associated with the AGN torus and may also be responsible for the absorption observed at X-ray wavelengths in those galaxies. | The currently favoured unified models of active galactic nuclei (AGN) are ``orientation-based models''. They propose that the differences between different classes of objects arise because of their different orientations to the observer. These models propose the existence of a dense concentration of absorbing material around the central engine in a toroidal distribution, which blocks the broad line region (BLR) from the line of sight in Type~2 objects \citep[see][for a review]{antonucci93,urry95}. However, a key but not well understood issue in AGN physics is the composition and nature of this dusty torus. For example, several of these models predict that the silicate emission/absorption features at 9.7$\mu$m and 18$\mu$m are related to the observer's viewing angle, in the framework of the AGN unified model \citep[e.g.][and references therein]{pier92,granato94,granato97,rowan95,nenkova02,nenkova08a,nenkova08b,schartmann05,schartmann08,dullemond05,fritz06,honig06,honig10,stalevski12,heymann12,efstathiou13}. The strengths of these features are sensitive to the dust distribution and could be direct evidence of a connection between mid-infrared (MIR) optically-thick galaxies and Compton-thick AGNs \citep[see][]{shi06,mushotzky93,wu09,georgantopoulos11}. In fact, we fitted the silicate feature at 9.7$\mu$m of the Compton-thick galaxy NGC\,3281 \citep{sales11}, using the {\sc clumpy} torus models \citep{nenkova02,nenkova08a,nenkova08b}, and found that the hydrogen column density derived from the silicate profile is similar to that derived from the X-ray spectrum originally employed to classify this galaxy as a Compton-thick source \citep[see also][]{shi06,mushotzky93,thompson09}.
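For reference, a common way of quantifying the feature discussed here (one convention among several in the literature, not necessarily the exact definition adopted in the works cited above) is the apparent silicate strength, \begin{equation} S_{9.7} \equiv \ln \frac{f_{\rm obs}(9.7\,\mu{\rm m})}{f_{\rm cont}(9.7\,\mu{\rm m})}, \end{equation} where $f_{\rm obs}$ is the observed flux density at the center of the feature and $f_{\rm cont}$ the interpolated underlying continuum; $S_{9.7}<0$ corresponds to absorption (with apparent optical depth $\tau_{9.7}\approx -S_{9.7}$) and $S_{9.7}>0$ to emission, which is what makes the feature a convenient proxy for the obscuring column.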
Such a result is further supported by the findings of \citet{shi06}, who, using observations of 9.7$\mu$m silicate features in 97 AGNs, found that the strength of the silicate feature correlates with the H{\sc i} column density estimated from fitting the X-ray data, with high H{\sc i} columns corresponding to silicate absorption while low ones correspond to silicate emission. On the other hand, \citet{thompson09}, for instance, suggested that, even more informative than the 9.7$\mu$m feature alone, the combination of it with the 18$\mu$m silicate feature reveals the geometry of the reprocessing dust around the AGNs, discriminating between smooth and clumpy distributions \citep[see also][]{sirocky08}; moreover, comparing IRS spectra of 31 Sy~1 galaxies with those of 21 higher luminosity QSOs, these authors concluded that the weak emission lines observed are a consequence of clumpy AGN surroundings. In addition, \citet{goulding12}, studying the 20 nearest bona fide Compton-thick AGNs with hard X-ray measurements, showed that only about half of nearby Compton-thick AGNs have strong Si-absorption features and concluded that the dominant contribution to the observed MIR dust extinction is not solely related to the compact dusty obscuring structure surrounding the central engine but can instead originate in the host galaxy. This paper is a part of a project that investigates the possibility that the presence of a silicate absorption feature at 9.7$\mu$m would be a signature of a heavily obscured AGN; we present here a study of the nuclear spectrum of Mrk\,3, an SB0 galaxy hosting an optically classified Seyfert~2 (Sy~2) nucleus with a BLR detected in polarized light \citep{adam77,miller90,tran95,collins05}. \citet{capetti95} show that the narrow line region (NLR) of Mrk\,3 has an S-shaped morphology, extended over nearly 2\arcsec, with a large number of resolved knots. They suggest that this morphology may be a consequence of the strong interaction between the NLR and the radio-emitting plasma \citep[see][]{capetti95,ruiz01,schmitt03}. It is also known that Mrk\,3 has a complex X-ray spectrum with heavily absorbed and cold reflection components, accompanied by a strong iron K$\alpha$ line at $\sim$6.4\,keV \citep{awaki91,awaki08,cappi99,turner97,sako00}. \citet{awaki08} obtained an intrinsic 2-10\,keV luminosity of $\sim1.6\,\times\,10^{43}$ erg s$^{-1}$, and suggested it was direct emission from Mrk\,3. In addition, they found that there is a heavily absorbed dust/gas component of N$_{H}\sim1.1\,\times\,10^{24}$\,cm$^{-2}$ obscuring the direct line of sight to the nucleus, which led them to classify Mrk\,3 as a Compton-thick galaxy \citep[see also][]{awaki90,awaki91,iwasawa94,sako00}. Nevertheless, the derived value of the hydrogen column density of Mrk\,3 is arguable in the light of the studies developed by \citet{winter09}, who have shown that this target reveals a complex X-ray spectrum and a peculiar position in the color-color diagram of $F_{0.5-2keV}/F_{2-10keV}$ versus $F_{14-195keV}/F_{2-10keV}$, suggesting that it has a high column density with a complex changing-look behaviour \citep[see][for more details]{winter09}. In this paper we present ground based, high spatial resolution, MIR spectra of the Compton-thick galaxy Mrk\,3. Such observations allowed the dust distribution to be studied in the central $\sim$200\,pc of this galaxy.
As stated above, the main goal is to investigate whether the presence of the silicate absorption feature at 9.7$\mu$m in Mrk\,3 can be interpreted as a signature of a heavily obscured AGN caused by the dusty torus of the unified model. In addition, we briefly discuss the connection of the dusty torus material with the Compton-thick scattering material found in the Sy~2 galaxies Mrk\,3 and NGC\,3281. This paper is organized as follows: in Section~\ref{observation}, we briefly describe the observations and data reduction; in Section~\ref{results}, we discuss the results. Concluding remarks are given in Section~\ref{conclusions}. | In this work we present a study using high spatial resolution ($\sim$193\,pc) spectra in the N-band wavelength range (8--13$\mu$m) of the well known Compton-thick galaxy Mrk~3, in order to investigate the correlation between the Compton-thick material seen at X-ray wavelengths and the silicate grain signature at 9.7$\mu$m. We also compare the results found here for Mrk~3 with those for the Compton-thick galaxy NGC~3281, where the silicate absorption properties could be linked to the Compton-thick material inferred from X-ray spectra. Our main conclusions are: \begin{enumerate} \item No polycyclic aromatic hydrocarbon (PAH) emission features were detected in the Mrk~3 spectra. However, strong [Ar{\sc\,iii]}\,8.9\,$\mu$m, [S{\sc\,iv]}\,10.5\,$\mu$m and [Ne{\sc\,ii]}\,12.8\,$\mu$m ionic emission lines as well as a silicate absorption feature at 9.7$\mu$m have been detected in the nuclear spectrum. \item By analysis of the N-band image of Mrk\,3 we are able to detect two emitting regions, of which the brightest one is dominated by the unresolved central source that might emerge from the dusty torus of the unified model. However, we should note that the spatial resolution of our Gemini/Michelle spectrum could not actually resolve the nuclear torus emission. The second component is an extended MIR emission from the circumnuclear region of Mrk\,3. This diffuse dust emission shows up as wings along the E-W direction, mimicking the S-shaped morphology of the Narrow Line Region seen in the optical image of [O{\sc\,iii]}\,$\lambda$5007\AA. \item The nuclear spectrum was compared with $\sim10^6$ SEDs of {\sc clumpy} torus models; the result suggests that the nuclear region of Mrk~3 hosts a dusty toroidal structure with an angular cloud distribution of $\sigma = 50^{+11}_{-15}$ degrees, an observer's viewing angle of $i = 66^{+4}_{-13}$ degrees, and an outer radius of R$_{0}\sim$7$^{+5}_{-2.2}$\,pc. The hydrogen column density along the line of sight, derived from Nenkova's torus models, is N$_H\,=\,4.8^{+3.3}_{-3.1}\,\times\,10^{23}$\,cm$^{-2}$. The torus models also provide an estimate for the X-ray luminosity (L$_{X-ray}$ $\approx\,1.35\,\times\,10^{43}\,$erg s$^{-1}$) of the AGN in Mrk\,3, and this value is comparable to that derived from observed X-ray spectra, L = 6.2\,$\times10^{43}\,$erg s$^{-1}$. \item By comparing the torus properties of the Compton-thick Sy~2 galaxies Mrk~3 and NGC~3281, it turns out that their torus model geometries are similar. This result perhaps provides further evidence that the silicate dust is associated with the torus predicted by the unified model of AGN, and could also be responsible for the absorption observed at X-ray wavelengths in those galaxies classified as Compton-thick sources. However, better spatial resolution is necessary in order to test this assumption. \end{enumerate} | 14 | 3 | 1403.2266 |
1403 | 1403.7396_arXiv.txt | Following earlier authors, we re-examine constraints on the radial velocity anisotropy of generic stellar systems using arguments for phase space density positivity, stability, and separability. It is known that although the majority of commonly used systems have a maximum anisotropy of less than half of the logarithmic density slope, \emph{i.e.} $\beta < \gamma/2$, there are exceptions for separable models with large central anisotropy. Here we present a new exceptional case with above-threshold anisotropy locally, but with an isotropic center nevertheless. These models are non-separable and we maintain positivity. Our analysis suggests that regions of above-threshold anisotropy are more related to regions of possible secular instability, which might be observed in self-consistent galaxies in a short-lived phase. | Real and simulated stellar systems are often anisotropic, as the lack of two-body collisions allows anisotropy from the initial configuration of phase space to persist in equilibrium. Radial anisotropy is difficult to measure observationally because of the lack of 3D velocity information. This, in turn, widens the uncertainty of our estimates of the mass and gravity of galaxies and black holes using the traditional Jeans equation approach. It is thus desirable to set some limits on this anisotropy from arguments such as positivity, stability, and even separability of the underlying phase space density. Usually a particular potential or density profile will be chosen to model a particular system of interest. The most effective and powerful presentation of such a system is the phase-space distribution function (DF), which is connected to observable, real-space quantities of a system via various integral relations. Because the DF is a probability distribution that describes the phase-space of a system, there are some fundamental requirements for a DF that produces a viable system. The most basic constraint is the positivity of the DF over the entire permitted domain of the system: while a system with a positive DF may not be stable, a system with a negative DF cannot even be created. The relationship between a density profile and a DF is complicated and is not even one-to-one \citep{Dejonghe1987}. Since the DF describes the full six-dimensional shape of the system, there are multiple possible DFs that can produce the same density profile and that differ only through, for example, their anisotropy profiles. Accordingly, it is very important to be able to derive unambiguous analytical expressions for a system of interest so that the positivity can be known precisely. The main problem here is that the process of finding an expression for the DF of an arbitrary system is highly non-trivial and can usually not be done analytically. The most reliable method of finding a DF is through Eddington's formula \citep{Eddington1916}, which inverts the integral relationship between the density and the DF; however, even this is analytic only for a selection of density profiles and parameters. So, in general, while specific models and schemes to produce analytical DFs for a given density do exist, there is a pressing need for simple, fundamental relationships between the quantities of a system that can constrain the positivity of a DF. A way to look at a particular model and know, without having to work through the inversions, whether or not the DF is likely to be positive would be ideal.
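For concreteness, the Eddington inversion mentioned above can be written, in the standard notation for an isotropic spherical system (with $\Psi$ the relative potential and $\mathcal{E}$ the relative energy), as \begin{equation} f(\mathcal{E}) = \frac{1}{\sqrt{8}\,\pi^2}\left[\int_0^{\mathcal{E}} \frac{{\rm d}^2\rho}{{\rm d}\Psi^2}\,\frac{{\rm d}\Psi}{\sqrt{\mathcal{E}-\Psi}} + \frac{1}{\sqrt{\mathcal{E}}}\left(\frac{{\rm d}\rho}{{\rm d}\Psi}\right)_{\Psi=0}\right], \end{equation} which makes the difficulty explicit: positivity of $f$ is a condition on an integral of ${\rm d}^2\rho/{\rm d}\Psi^2$ rather than on the density itself, and the inversion is analytic only for special $\rho(\Psi)$ pairs. This is precisely the motivation for the simpler, directly checkable relationships discussed below.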
One particularly important result was that of \citet{Ciotti1992}, who found a simple criterion for the consistency of models using an Osipkov-Merritt anisotropy scheme. This paved the way for a dramatic expansion in the scope of such relations, bringing us to the birth of the result we will be examining. The first major step towards a completely general analytical constraint was made by \citet{Hansen2004} in the form of hard constraints on the conditions in the centre of a dark halo under reasonable assumptions of spherical symmetry, a power law phase-space density \citep{Taylor2001}, and the requirement for physical solutions to the Jeans Equations. They found that any system with an inner density profile $\rho \propto r^{-\gamma}$ would obey $1+\beta\leq\gamma\leq 3$, where $\beta$ is the velocity anisotropy parameter. This was subsequently improved until the relation could constrain a non-negative DF \citep{An2006} in multi-component systems \citep{Ciotti2009} and Cuddeford models \citep{Cuddeford1991,Ciotti2010a}, which contain the Osipkov-Merritt models as a special case. After the discovery that the constraints held even for systems outside these model groups \citep{Ciotti2010b} there was an effort made to define exactly how universal such constraints could be. This led to the significant result of \citet{Ciotti2010} where it was proven that a large class of models obey the relation: \begin{equation} \label{eqn:GDSAI} \gamma\geq 2\beta \end{equation} This relationship was termed the Global Density Slope-Anisotropy Inequality (GDSAI) and was shown to be strongly connected to the positivity of the DF in this broad class of multi-component Cuddeford models as well as in a variety of other anisotropic systems. Specifically, the work of \citet{vanhese2011} showed that obeying the GDSAI is a necessary condition for DF positivity in models where the central anisotropy was $\beta_0\leq0$, but did demonstrate counter-examples for larger anisotropies. All the systems that had been investigated and had a proven relationship to the GDSAI fall into the category of models with separable augmented density. An augmented density is one that can be described only in terms of a potential as a function of radius and the radius itself. A separable model of this kind can be described thusly: \begin{equation} \rho(r)=\rho_{aug}(\psi(r),r)=f(\psi)g(r)\quad 0\leq\psi\leq\psi_0 \end{equation} where we alter the usual notation for the augmented density to avoid later confusion with our dimensionless variables. Since the GDSAI has been proved for all separable augmented systems with $\beta_0\leq0$ and is understood in such systems with $\beta_0>0$, we will investigate the behaviour of augmented systems which are non-separable. We accomplish this by using mono-energy DFs that produce non-separable density profiles which, while highly artificial, are also comparatively easy to understand and analyse. We present a simple spherical model that significantly violates the GDSAI over a range of radii, produces systems with $\beta_0=0$, and has a globally positive DF. The DF is a mono-energy halo that is separable in E and $\text{L}^2$. We suggest that this is evidence that the GDSAI cannot be extended to all non-separable systems and cannot be used to constrain the positivity of their DFs. We instead suggest that, since our DF is not guaranteed to be dynamically stable, system stability is still the principal measure that can confirm whether such non-separable systems can be created and kept in equilibrium.
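A minimal worked example may help fix ideas (the numbers below are purely illustrative). For a model with constant anisotropy $\beta$ and a central density cusp $\rho \propto r^{-\gamma_0}$, the inequality applied at the centre reads \begin{equation} \beta \leq \frac{\gamma_0}{2}, \end{equation} so an NFW-like cusp with $\gamma_0=1$ can support at most $\beta=1/2$ (mildly radial orbits), a steeper $\gamma_0=2$ isothermal-like cusp can support anisotropies up to the purely radial limit $\beta=1$, and a harmonic core with $\gamma_0=0$ forces isotropy or tangential bias at the centre. It is exactly this kind of local, easily checkable bound that the present work probes in the non-separable regime.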
In \S2 we briefly confirm the inadequacies of a purely Jeans Equation-based approach; \S3 shows our construction of a simple system that does not follow the inequality; \S4 examines the practical implications of the system; \S5 examines the stability of the system; \S6 describes the generalisation of our model; and \S7 concludes. | We have managed to construct a non-separable, equilibrium system with $\beta_0<1/2$ using a globally positive DF which demonstrates behaviours inconsistent with an application of the GDSAI. The magnitude of the departure from the GDSAI is dependent on the value of the angular momentum threshold $\text{L}^2_{cut}$, which is also the parameter that controls the stability of the model. It is possible to pick values of this parameter where the majority of the system fails to agree with an extension of the GDSAI but is also unstable, or where the failure is highly local and the instability is negligible. This is a significant expansion on previous work proving the efficacy of the GDSAI in separable systems \citep{Ciotti2010,vanhese2011} and suggests that the GDSAI may not be applicable to models with non-separable augmented densities. We conclude that whether or not a non-separable system obeys the GDSAI does not constitute proof of the positivity or otherwise of the system's DF. We do, however, note that there is a non-trivial relationship between disagreement with an extended GDSAI and the stability of the system. We suggest that the GDSAI may not imply phase-space consistency in such systems but may be able to make some predictions of model stability. Exploring generalisations of the simple system has shown that this approach will not be able to yield a system that is stable under the H\'{e}non criteria. Future work will therefore focus on mechanisms beyond the removal of high angular momentum orbits. In conclusion, we feel that while the GDSAI remains a useful guide for non-separable systems, it should not be considered a definitive criterion in discussions of DF positivity in such systems. | 14 | 3 | 1403.7396 |
1403 | 1403.0055_arXiv.txt | We present a physical model for the evolution of the ultraviolet (UV) luminosity function of high redshift galaxies taking into account in a self-consistent way their chemical evolution and the associated evolution of dust extinction. Dust extinction is found to increase rapidly with halo mass. A strong correlation between dust attenuation and halo/stellar mass for UV selected high-$z$ galaxies is thus predicted. The model yields good fits of the UV and Lyman-$\alpha$ (Ly$\alpha$) line luminosity functions at all redshifts at which they have been measured. The weak observed evolution of both luminosity functions between $z=2$ and $z=6$ is explained as the combined effect of the negative evolution of the halo mass function, of the increase with redshift of the star formation efficiency, due to the faster gas cooling, and of dust extinction, differential with halo mass. The slope of the faint end of the UV luminosity function is found to steepen with increasing redshift, implying that low luminosity galaxies increasingly dominate the contribution to the UV background at higher and higher redshifts. The observed range of UV luminosities at high-$z$ implies a minimum halo mass capable of hosting active star formation $M_{\rm crit}\lesssim 10^{9.8}\,M_\odot$, consistent with the constraints from hydrodynamical simulations. From fits of Ly$\alpha$ line luminosity functions plus data on the luminosity dependence of extinction and from the measured ratios of non-ionizing UV to Lyman-continuum flux density for samples of $z\simeq 3$ Lyman break galaxies and Ly$\alpha$ emitters, we derive a simple relationship between the escape fraction of ionizing photons and the star formation rate. It implies that the escape fraction is larger for low-mass galaxies, which are almost dust-free and have lower gas column densities. Galaxies already represented in the UV luminosity function ($M_{\rm UV}\lesssim -18$) can keep the universe fully ionized up to $z\simeq 6$. This is consistent with (uncertain) data pointing to a rapid drop of the ionization degree above $z\simeq 6$, such as indications of a decrease of the comoving emission rate of ionizing photons at $z\simeq 6$, of a decrease of sizes of quasar near zones, and of a possible decline of the Ly$\alpha$ transmission through the intergalactic medium at $z>6$. On the other hand, the electron scattering optical depth, $\tau_{\rm es}$, inferred from Cosmic Microwave Background (CMB) experiments favors an ionization degree close to unity up to $z\simeq 9$--10. Consistency with CMB data can be achieved if $M_{\rm crit}\simeq 10^{8.5}\,M_\odot$, implying that the UV luminosity functions extend to $M_{\rm UV}\simeq -13$, although the corresponding $\tau_{\rm es}$ is still on the low side of CMB-based estimates. | \label{sec:intro} One of the frontiers of present day astrophysical/cosmological research is the understanding of the transition from the ``dark ages'', when the hydrogen was almost fully neutral, to the epoch when stars and galaxies began to shine and the intergalactic hydrogen was almost fully re-ionized. Observations with the Wide Field Camera 3 (WFC-3) on the Hubble Space Telescope \citep[HST,][]{Finkelstein2012,Bouwens2012b,Oesch2013a,Oesch2013b,Ellis2013,Robertson2013} have substantially improved the observational constraints on the abundance and properties of galaxies at cosmic ages of less than 1\,Gyr.
Determinations of the ultraviolet (UV) luminosity function (LF) of galaxies at $z=7$--8 have been obtained by \citet{Bouwens2008,Bouwens2011b}, \citet{Smit2012}, \citet{Oesch2012}, \citet{Schenker2013}, \citet{McLure2010,McLure2013}, \citet{Yan2011,Yan2012}, \citet{Lorenzoni2011,Lorenzoni2013}, and \citet{Bradley2012}. Estimates over limited luminosity ranges were provided by \citet{Bouwens2008}, \citet{Oesch2013a}, and \citet{McLure2013} at $z=9$, and by \citet{Bouwens2011b} and \citet{Oesch2013a,Oesch2013b} at $z=10$. Constraints on the UV luminosity density at redshifts up to 12 have been presented by \citet{Ellis2013}. Since galaxies at $z\geqslant 6$ are the most likely sources of the UV photons capable of ionizing the intergalactic hydrogen, the study of the early evolution of the UV luminosity density is directly connected with the understanding of the cosmic reionization. Several studies \citep{Robertson2013,KuhlenFaucherGiguere2012,Alvarez2012,HaardtMadau2012} have adopted {\it parameterized} models for the evolving UV luminosity density. These models are anchored to the observed high-$z$ LFs and are used to investigate plausible reionization histories, consistent with other probes of the redshift-dependent ionization degree and primarily with the electron scattering optical depth measured by the Wilkinson Microwave Anisotropy Probe \citep[WMAP,][]{Hinshaw2013}. There are also several theoretical models for the evolution of the LFs of Ly$\alpha$ emitters (LAEs) and of Lyman break galaxies (LBGs), using different approaches. These include various semi-analytic galaxy formation models \citep{Tilvi2009,LoFaro2009,Kobayashi2010,Raicevic2011,Garel2012,Mitra2013,Lacey2011,GonzalezPerez2013}, smoothed particle hydrodynamics (SPH) simulations \citep{Dayal2009,Dayal2013,Nagamine2010,Salvaterra2011,Finlator2011b,Jaacks2012a,Jaacks2012b} as well as analytic models \citep{Mao2007,Dayal2008,Samui2009,MunozLoeb2011,Munoz2012,SalimLee2012,Tacchella2013}. Each approach has known strengths and weaknesses. Given the complexity and the large number of variable parameters in many models, an analytic physical approach is particularly useful in understanding the role and the relative importance of the ingredients that come into play. We adopt such an approach, building on the work by \citet{Mao2007}. As mentioned above, spectacular advances in direct determinations of the UV LFs of galaxies at epochs close to the end of reionization have been recently achieved. These imply much stronger constraints on models, particularly on the analytic ones, which therefore need to be revisited. Further constraints, generally not taken into account by previous studies, have been provided by far-IR/(sub-)millimeter data, which probe other phases of the early galaxy evolution and contribute to delineating the complete picture. A key novelty of the model is that it includes a self-consistent treatment of dust absorption and re-emission, anchored to the chemical evolution of the interstellar medium (ISM). This allows us to simultaneously account for the demography of both UV- and far-IR/(sub-)millimeter-selected high-$z$ star-forming galaxies. In addition, the model incorporates a variety of observational constraints that are only partially taken into account by most previous studies.
Specifically, we take into account constraints on: the escape fraction of ionizing photons coming from both continuum UV and Ly$\alpha$ line LFs and from measurements of the ratio of non-ionizing to ionizing emission from galaxies; the luminosity/stellar mass--metallicity \citep{Maiolino2008,Mannucci2010} and the stellar mass--UV luminosity \citep{Stark2009,Stark2013} relations; the amplitude of the ionizing background at several redshifts up to $z=6$ \citep{DallAglio2009,WyitheBolton2011,Calverley2011,KuhlenFaucherGiguere2012,BeckerBolton2013,Nestor2013}. The successful tests of the model against a wide variety of data constitute a solid basis for extrapolations to luminosity and redshift ranges not yet directly probed by observations. The plan of the paper is the following. In Section\,\ref{sect:model}, we outline the model, describing its basic ingredients. In Section\,\ref{sect:UV_LFs}, we exploit it to compute the cosmic-epoch dependent UV LF, allowing for the dust extinction related to the chemical evolution of the gas. In Section\,\ref{sect:Ionizing_luminosity}, we compute the production rate of ionizing photons and investigate their absorption rates by both dust and neutral hydrogen (HI), as constrained by measurements of the Ly$\alpha$ line LFs at various redshifts and of the ratios of non-ionizing UV to Lyman-continuum luminosities. We then derive the fraction of ionizing photons that can escape into the intergalactic medium (IGM) and use the results to obtain the evolution with redshift of the volume filling factor of the intergalactic ionized hydrogen. The main conclusions are summarized in Section\,\ref{sect:conclusions}. Throughout this paper we adopt a flat $\Lambda \rm CDM$ cosmology with matter density $\Omega_{\rm m} = 0.32$, $\Omega_{\rm b} = 0.049$, $\Omega_{\Lambda} = 0.68$, Hubble constant $h=H_0/100\, \rm km\,s^{-1}\,Mpc^{-1} = 0.67$, spectrum of primordial perturbations with index $n = 0.96$, and normalization $\sigma_8 = 0.83$ \citep{PlanckCollaborationXVI2013}. All the quoted magnitudes are in the AB system \citep{OkeGunn1983}. | \label{sect:conclusions} We have worked out a physical model for the evolution of the UV LF of high-$z$ galaxies and for the reionization history. The LF is directly linked to the formation rate of virialized halos and to the cooling and heating processes governing the star formation. For the low halo masses and young galactic ages of interest here it is not enough to take into account SN and AGN feedback, as usually done for halo masses $M_{\rm vir}\gtrsim 10^{11}\,M_\odot$, because other heating processes, such as the radiation from massive low-metallicity stars, stellar winds, and the UV background, can contribute to reducing and eventually quenching the SFR. We have modeled this by increasing the efficiency of cold gas removal and introducing a lower limit, $M_{\rm crit}$, to halo masses that can host active star formation. Another still open issue is the production rate of UV photons per unit halo mass at high-$z$, which is influenced by two competing effects. On one side, the expected increase with redshift of the Jeans mass, hence of the characteristic stellar mass, entails a higher efficiency in the production of UV photons. On the other side, more UV photons imply more gas heating, i.e., a decrease of the SFR.
We find that the observed UV LFs up to the highest redshifts are very well reproduced with the SFRs yielded by the model and the extinction law of Equation~(\ref{eq|extgigi}) for a production rate of UV photons corresponding to a \citet{Chabrier2003} IMF. The observed UV LFs (Figure~\ref{fig:LF_LBGs_1350}) constrain $M_{\rm crit}$ to be $\lesssim 10^{10}\,M_\odot$, consistent with estimates from simulations. Figure~\ref{fig:LF_LBGs_1350} highlights several features of the model: i) dust extinction is higher for higher luminosities, associated with more massive halos, which have a faster metal enrichment; ii) the higher feedback efficiency in less massive halos makes the slope of the faint end of the LF flatter than that of the halo formation rate; yet the former reflects to some extent the steepening with increasing $z$ of the latter; this has important implications for the sources of the ionizing background at high $z$; iii) the evolution of the LF from $z=2$ to $z=6$ is weak because the decrease with increasing redshift of the halo formation rate in the relevant range of halo masses is largely compensated by the increase of the star formation efficiency due to the faster gas cooling and by the increase of dust extinction with increasing halo mass. Another key property of the model (Figure~\ref{fig:sph_evol_oqs1}) is the fast metal enrichment of the more massive galaxies that translates into a rapid increase of dust obscuration. Therefore these galaxies show up mostly at far-IR/(sub-)millimeter wavelengths, a prediction successfully tested against observational data (Figures~\ref{fig:SFRF_z} and \ref{fig:SFRD_8d5_18d0}). The model thus predicts a strong correlation between dust attenuation and halo/stellar mass for UV selected high-$z$ galaxies. The ratio of dust-obscured to unobscured star formation has a broad maximum at $z\simeq 2$--3. The decrease at lower redshifts is due to the decreasing amount of ISM in galaxies; at higher redshifts it is related to the fast decrease of the abundance of massive halos where the metal enrichment and, correspondingly, the dust extinction grow fast. Similarly, good fits are obtained for the Ly$\alpha$ line LFs (Figure~\ref{fig:LF_LAEs_c}) that provide information on the production rate of ionizing photons and on their absorption by neutral interstellar hydrogen. Further constraints on the attenuation by dust and HI are provided by recent measurements \citep{Nestor2013,Mostardi2013} of the observed ratios of non-ionizing UV to Lyman-continuum flux densities for LAEs and LBGs. These data have allowed us to derive a simple relationship between the optical depth for HI absorption and SFR. Taking this relation into account, the model reproduces the very weak evolution of the Ly$\alpha$ line LF between $z=2$ and $z=6$, even weaker than in the UV. The derived relationships linking the optical depths for absorption of ionizing photons by dust and HI to the SFR and, in the case of dust absorption, to the metallicity of the galaxies, imply higher \textit{effective} escape fractions for galaxies with lower intrinsic UV luminosities or lower halo/stellar masses, and also a mild increase of the escape fraction with increasing redshift at fixed luminosity or halo/stellar mass. Redshift- or mass-dependencies of the escape fraction were previously empirically deduced by, e.g., \citet{Alvarez2012} and \citet{Mitra2013}.
Our model provides a physical basis for these dependencies. At this point we can compute the average injection rate of ionizing photons into the IGM as a function of halo mass and redshift. To reconstruct the ionization history of the universe we further need the evolution of the clumping factor of the IGM, for which we have adopted, as our reference, the model $C_{\rm HII,T_b, x_{\rm HII}>0.95}$ by \citet{Finlator2012}, but also considered alternative models discussed in the literature. With our recipe for the escape fraction of ionizing photons we find that galaxies already represented in the observed UV LFs, i.e., with $M_{\rm UV}\lesssim -18$, hosted by halo masses $\gtrsim 10^{10}\,M_\odot$, can account for a complete ionization of the IGM up to $z\simeq 6$. To get complete ionization up to $z\simeq 7$ the population of star-forming galaxies at this redshift must extend in luminosity to $M_{\rm UV}\sim -13$ or fainter, in agreement with the conclusions of other analyses \citep[e.g.,][]{Robertson2013}. The surface densities of $M_{\rm UV}\sim -13$ galaxies would correspond to those of halo masses of $\sim 10^{8.5}\,M_\odot$, not far from the lower limit on $M_{\rm crit}$ from hydrodynamical simulations. A complete IGM ionization up to $z\simeq 7$ is disfavoured by some (admittedly uncertain) data at $z\simeq 6$--7 collected by \citet{Robertson2013}, which point to a fast decline of the ionization degree at $z\gtrsim 6$. However, an even more extended ionized phase is implied by the determinations of electron scattering optical depths, $\tau_{\rm es}$, from CMB experiments. Our model adopting the critical halo mass $M_{\rm crit} = 10^{8.5}\ M_\odot$, yielding complete ionization up to $z\simeq 7$, gives a $\tau_{\rm es}$ consistent with the determination by \citet{PlanckCollaborationXVI2013} and less than $2\sigma$ below those by \citet{Hinshaw2013} and \citet{PlanckCollaborationXVI2013}. Raising $M_{\rm crit}$ to $10^{10}\ M_\odot$ limits the fully ionized phase to $z \lesssim 6$ and decreases $\tau_{\rm es}$ to a value almost $3\,\sigma$ below the estimates by \citet{Hinshaw2013} and \citet{PlanckCollaborationXVI2013} and $2\,\sigma$ below that by \citet{PlanckCollaborationXV2013}. Since all these constraints on the reionization history are affected by substantial uncertainties, any firm conclusion is premature. Better data are needed to resolve the issue. | 14 | 3 | 1403.0055 |
1403 | 1403.2044_arXiv.txt | Motivated by reported claims of the measurements of a variation of the fine structure constant $\alpha$ we consider a theory where the electric charge, and consequently $\alpha$, is not a constant but depends on the Ricci scalar $R$. We then study the cosmological implications of this theory, considering in particular the effects of dark energy and of a cosmological constant on the evolution of $\alpha$. Some low-redshift expressions for the variation of $\alpha(z)$ are derived, showing the effects of the equation of state of dark energy on $\alpha$ and observing how future measurements of the variation of the fine structure constant could be used to determine indirectly the equation of state of dark energy and test this theory. In the case of a $\Lambda$CDM Universe, according to the current estimations of the cosmological parameters, the present value of the Ricci scalar is $\approx 10\%$ smaller than its future asymptotic value determined by the value of the cosmological constant, setting also a bound on the future asymptotic value of $\alpha$. | | | 14 | 3 | 1403.2044 |
1403 | 1403.0870_arXiv.txt | In this work UV and white light (WL) coronagraphic data are combined to derive the full set of plasma physical parameters along the front of a shock driven by a Coronal Mass Ejection. Pre-shock plasma density, shock compression ratio, speed and inclination angle are estimated from WL data, while pre-shock plasma temperature and outflow velocity are derived from UV data. The Rankine-Hugoniot (RH) equations for the general case of an oblique shock are then applied at three points along the front located between $2.2-2.6$~R$_\odot$ at the shock nose and at the two flanks. Stronger field deflection (by $\sim 46^\circ$), plasma compression (factor $\sim 2.7$) and heating (factor $\sim 12$) occur at the nose, while heating at the flanks is more moderate (factor $1.5-3.0$). Starting from a pre-shock corona where protons and electrons have about the same temperature ($T_p \sim T_e \sim 1.5 \cdot 10^6$ K), temperature increases derived with RH equations could better represent the proton heating (by dissipation across the shock), while the temperature increase implied by adiabatic compression (factor $\sim 2$ at the nose, $\sim 1.2-1.5$ at the flanks) could be more representative of electron heating: the transit of the shock causes a decoupling between electron and proton temperatures. Derived magnetic field vector rotations imply a draping of field lines around the expanding flux rope. The shock turns out to be super-critical (sub-critical) at the nose (at the flanks), where derived post-shock plasma parameters can be very well approximated with those derived by assuming a parallel (perpendicular) shock. | The study of interplanetary shocks accelerated by Coronal Mass Ejections (CMEs) is very important to provide a better understanding of the fundamental plasma physics processes involved, such as the acceleration of energetic particles at the shock and the wave-particle interactions that replace binary collisions in collisionless plasmas. After a long debate in the scientific community, it is now widely accepted that Solar Energetic Particles (SEPs - electrons and ions propagating at energies ranging from a few keV up to some GeV) are accelerated by two different sources (involving different physical acceleration mechanisms): solar flares (producing the so-called impulsive SEP events) and CME-driven shocks \citep[producing the so-called gradual events; see recent review by][]{reames2013}. Nevertheless, SEPs accelerated by interplanetary shocks are much more important regarding their space weather implications: particles accelerated in gradual events reach the highest energies and strongest fluxes, and, due to the extension of interplanetary shock waves, these particles are also injected over a much broader region of the interplanetary space with respect to SEPs accelerated in flares. Thus, the interaction of these particles with the Earth environment as they propagate along the interplanetary Parker spiral is much more common with respect to SEPs associated with impulsive events, whose sources are clearly concentrated on the western half of the Sun magnetically connected with the Earth. Nevertheless, the acceleration of SEPs by CME-driven shocks as well as their propagation in the interplanetary medium are still not well understood, and one of the main open problems is the location in the corona of the seed particles being accelerated.
Over the last decades, much information on SEPs and associated interplanetary shock waves was derived from in situ data acquired from many different spacecraft located at many different heliocentric distances, with the closest approach to the Sun ever reached being around 0.29~AU, thanks to data acquired by the Helios 1 and Helios 2 spacecraft \citep[see e.g.][]{kallenrode1993}. It has been pointed out that CME-driven shocks are likely most efficient in accelerating electrons in the heliocentric distance range between $1.5-4.0$~R$_\odot$ \citep[e.g.][]{gopalswamy2009a}, hence quite close to the Sun, in a region so far unexplored by in situ data. This could be due to a combination of the CME speed and the characteristic speeds of the medium crossed by the CME, leading to the production of strong shocks only closer to the Sun, while shocks become too weak or decay by the time the CME reaches the outer corona. This idea is in agreement with the observational evidence that type-II radio bursts (due to $\sim 10$~keV electron beams accelerated by the shocks and able to generate plasma waves at the local plasma frequency $f_{pe} \propto \sqrt{n_e}$) are excited only when CMEs are closer to the Sun. In fact, the theory of piston-driven shock waves induced in collisionless plasmas requires that the driver (i.e. the CME in this case) propagates in the medium (i.e. the solar corona) faster than the local Alfv\'en or magnetosonic speeds. The Alfv\'en speed $v_A$ of a plasma with mass density $\rho$ permeated by a magnetic field $B$ is given by $v_A = B/\sqrt{\mu_0 \rho}$, while the magnetosonic speed $v_{ms}$ depends on the wave propagation angle $\theta$ with respect to the magnetic field and is given by $v_{ms} = \sqrt{v_A^2 + c_s^2}$ (with $c_s$ the sound speed) only in the special case of a perpendicular shock ($\theta = 90^\circ$). As CMEs propagate and expand from the lower corona to the interplanetary medium, they are expected to meet a plasma with a local minimum of $v_A$ (hence of $v_{ms}$) around $1.2 - 1.4$ R$_\odot$ and a local maximum around 3.5~R$_\odot$ \citep[see e.g.][]{mann2003}, and then $v_A$ progressively decays at larger distances (mainly because of the radial decay of the magnetic field), allowing the shock wave to survive very far from the Sun. The fundamental parameter controlling the strength of the shock is the Alfv\'enic shock Mach number $M_A$, given by the ratio of the upstream flow speed along the shock normal $v_{un}$ (in a reference frame at rest with the shock) to the upstream Alfv\'en speed, $M_A = v_{un}/v_A$. It is well known that when the Mach number exceeds a certain (angle-dependent) critical value $M_A^*$ the shock cannot be sustained by purely resistive dissipation like anomalous resistivity and viscosity alone. The excess energy is then rejected from the shock by reflecting part of the incoming plasma back up-stream. The up-stream plasma can thus cross the shock surface multiple times, being in turn accelerated up to SEP energies \citep[see e.g.][]{edminstonkennel1984}. Hence, the supercriticality of shocks is considered a good indicator of their ability to accelerate particles, and accurate methods for deducing shock strengths are indispensable, even if other authors pointed out that a determining parameter could also be the existence of seed supra-thermal particles located in the coronal regions crossed by the shock \citep[see e.g.][]{mason1999,lee2007}.
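As a simple order-of-magnitude sketch of the quantities just defined (the values of $B$ and $n_p$ below are purely illustrative round numbers, chosen only to be of the order of those derived later in this work), for a fully ionized hydrogen corona with $\rho \simeq m_p n_p$ one has \begin{equation} v_A = \frac{B}{\sqrt{\mu_0\, m_p\, n_p}} \simeq 850 \;{\rm km\,s^{-1}} \quad {\rm for} \quad B \simeq 0.3\;{\rm G}, \;\; n_p \simeq 6\times 10^{5}\;{\rm cm^{-3}}, \end{equation} so that a CME front moving at $\sim 1500$ km~s$^{-1}$ through such a plasma would drive a shock with $M_A$ of order 2 at most, close to the critical regime; this illustrates why even modest changes of the ambient field and density along the front can decide whether the shock is super- or sub-critical.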
Because, as mentioned, no in situ data are available close to the Sun, shock properties in the lower corona have, so far, only been explored with remote sensing data. Over the last decade, unique information on CME-driven shocks propagating in the corona has been derived from the analysis of White-Light (WL) coronagraphic images \citep[e.g.][]{vourlidas2003,rouillard2011}. These data have proven to be very useful to derive shock speeds, shock compression ratios $X = \rho_d / \rho_u$ \citep[i.e. the ratio between the down-stream $\rho_d$ and the up-stream $\rho_u$ densities; e.g.][]{ontiverosvourlidas2009}, and strengths of coronal magnetic fields crossed by the shocks \citep[e.g.][]{gopalswamyyashiro2011}, and have also allowed statistical studies, for instance, on the correlation between the peak SEP intensities and associated CME speeds \citep[e.g.][]{kahler2001}. The study of decameter-hectometric to kilometric type-II radio bursts was also of fundamental importance, providing information about the shock compression ratios, shock speeds and strengths \citep[e.g.][]{mann2003,vrsnak2004,mancusoabbo2004}. Moreover, unique information (not available from the analysis of WL and radio data) on post-shock plasma heating and acceleration was provided by the analysis of UV spectra \citep[e.g.][]{raymond2000,mancuso2002}. Nevertheless, much information on shocks has been derived only when comparative analyses were performed by using remote sensing data acquired at very different wavelengths, like radio and UV \citep[e.g.][]{mancuso2002,mancusoavetta2008}, radio and WL \citep[e.g.][]{reiner2003}, and more recently UV and WL \citep[e.g.][]{bemporadmancuso2010}. In this work we demonstrate how UV and WL data can be used to derive the full set of plasma physical parameters all along the front of a shock, including not only the strength of the pre- and post-shock magnetic and velocity fields, but also the rotation of the magnetic and velocity field vectors induced by the transit of the shock. This information is of fundamental importance for our understanding of the physical processes occurring during the propagation of interplanetary shock waves and possibly to reveal the location of SEP acceleration in the corona. The CME-driven shock associated with the event studied here has already been analyzed by \citet[][]{bemporadmancuso2011} by using SOHO/LASCO WL data alone. In this work we extend the results previously obtained by also taking advantage of UV spectra acquired by the UV Coronagraph Spectrometer \citep[UVCS; see][]{kohl1995}. This paper is the first one in a sequence of two dealing with CME-driven shocks: the second one will focus on MHD simulations and comparison with observations. The paper is organized as follows: first (\S~2) we summarize for the reader's convenience the relevant techniques we developed and the results we obtained in previous works and applied here; then we describe how the data have been analyzed to derive the required up-stream plasma parameters from WL (\S~3.1) and UV (\S~3.2) data. In what follows, we explain how the full set of down-stream plasma parameters has been derived from these parameters with the Rankine-Hugoniot (RH) equations (\S~4). Then, the obtained results are summarized and discussed (\S~5). | In this work we demonstrate that UV and WL data can be combined to derive unique information on the interaction between the coronal plasma and shock waves.
This analysis allows us to derive not only the strength of the pre-shock magnetic field at the shock nose \citep[as done by][through the analysis of WL data alone]{gopalswamyyashiro2011}, but also the strength of the pre-shock field at the flanks, the strength of the post-shock field at the nose and at the flanks, together with the rotation of the field vector induced by the transit of the shock itself. In fact, this analysis can be performed not only at the center of the shock, but all along its front, thus allowing the determination of coronal fields at different latitudes and altitudes at the same time. The main results of this work can be summarized as follows: \begin{itemize} \item Analysis of WL coronagraphic images (SOHO/LASCO) can be employed to derive not only the up-stream coronal plasma density and the shock compression ratio $X$, but also the angle $\theta_{sh}$ between the normal to the shock front and the radial direction, and the shock speed $v_{sh}$, together with an approximate estimate of the shock Mach number $M_{A\angle}$ for the general case of an oblique shock. All these parameters have been derived here (and in the previous Paper~1 and Paper~2) all along the shock front, and hence at different latitudes and altitudes in the corona. \item The resulting shock speed $v_{sh}$ is, as expected, largest at the center of the shock ($v_{sh} \sim 1570 - 1580$~km~s$^{-1}$), and decreases towards the shock flanks ($v_{sh} \sim 1340 - 1350$~km~s$^{-1}$ about $15^\circ - 25^\circ$ away from the center). The shock compression ratio $X$ and Mach number $M_A$ are also largest at the shock nose (where $X \simeq 3.0$, $M_{A\angle} \simeq 1.8$) and decrease towards the shock flanks (where $X \simeq 1.2$, $M_{A\angle} \simeq 1.1$ about $15^\circ - 25^\circ$ away from the center). \item Analysis of UV data (SOHO/UVCS) can be employed to derive the plasma physical parameters missing from the analysis of the WL data: the pre-shock plasma temperature $T$ and outflow velocity $v_{out}$. This work focused on the three coronal points where UV and WL data were available at the same locations in the pre-shock corona: two points at the northward and southward shock flanks and one point at the shock nose. \item The resulting pre-shock temperatures and velocities are around $T \sim 1.0-1.7 \cdot 10^6$~K and $v_{out} \sim 40-70$~km~s$^{-1}$, consistent with values expected in the analyzed range of heliocentric distances ($2.2-2.6$~R$_\odot$) and latitudes ($30^\circ - 70^\circ$ N). These parameters have been derived from the observed UV (\ovi\ $\lambda$ 1031.91 \AA\ and \hi\ \lya\ $\lambda$ 1215.67 \AA) integrated line intensities alone. Hence, no spectroscopic information (e.g. line FWHM and line centroid) is required in order to repeat this analysis. \item The above results from the WL and UV data can be combined in order to derive (with the MHD RH equations) the full set of post-shock plasma parameters, including the pre- ($B_u$) and post-shock ($B_d$) magnetic field strengths and the post-shock outflow velocity, together with the magnetic and velocity field vector rotation angles across the shock surface. \item The resulting pre-shock coronal magnetic field is around $B_u \simeq 0.2-0.5$~G, hence compatible with values expected in the analyzed range of heliocentric distances ($2.2-2.6$~R$_\odot$), with a latitudinal variation by a factor $\simeq 2$ between the northward and southward coronal points.
The Alfv\'en speed $v_A$ is around $v_A \simeq 810-820$~km~s$^{-1}$ at 2.2~R$_\odot$ and increases up to $v_A \simeq 1090$~km~s$^{-1}$ at 2.6~R$_\odot$, with a plasma $\beta \sim 0.01-0.04$. \item The shock transit corresponds to a magnetic field compression by a factor $\simeq 1.6-1.7$ at the northward flank and the nose, while a weaker compression by a factor $\simeq 1.2$ occurs at the southward flank. Nevertheless, the strongest field rotation occurs at the shock nose, where the field is deflected by $\simeq 46^\circ$ and the strongest plasma compression (factor $\sim 2.7$) and heating (factor $\sim 12$) occur. Weaker deflections by $\simeq 14-16^\circ$ occur at the flanks, where more moderate, but still significant (factor $1.5-3.0$), plasma heating occurs. Magnetic field deflections along the shock front are plotted in Figure~\ref{fig06}. \item Shock Mach numbers $M_A$ measured from the combined WL and UV data analysis are in good agreement with the $M_{A\angle}$ values estimated from WL data alone with an empirical formula (Equation~1), which is then validated here. Shock Mach numbers at the nose (flanks) are very close to those expected for a parallel (perpendicular) shock; hence, shock conditions at the nose (flanks) are very well approximated by a parallel (perpendicular) shock (see also the numerical sketch below). We also confirm that the shock is super-critical ($M_A \sim M_{A\angle} > M_A^*$) at the nose and sub-critical ($M_A \sim M_{A\angle} < M_A^*$) at the flanks. \item The shock transit induces a clockwise rotation of the magnetic field vector at the southward flank and at the nose, while a counter-clockwise rotation occurs at the northward flank. This results in a draping of magnetic field lines around the expanding CME. On the other hand, the clockwise rotation of the velocity field vector occurs at both flanks and also at the nose, resulting in an asymmetric post-shock velocity field being met by the expanding CME (Figure~\ref{fig06}). \end{itemize} In order to better describe the above results, we have drawn a cartoon (Figure~\ref{fig07}, left panel) showing the overall possible distribution of pre- and post-shock magnetic and velocity fields all along the shock front. The cartoon is drawn starting from the vector rotations derived for the three points considered in this analysis (Figure~\ref{fig06} and blue filled circles in Figure~\ref{fig07}), and then by assuming continuity over all other latitudes. The resulting magnetic field deflections due to the shock transit correspond to a draping of field lines around the expanding flux rope. Very interestingly, this result is in good agreement with post-shock magnetic field rotations recently obtained by \citet[][]{liu2011} with a 3D MHD numerical simulation. In addition, these authors found that in the CME sheath regions closer to the shock surface (what they call layer~1) ``the magnetic field lines remain in the coplanarity layer as if they are unaffected by the draping field line''. This means that if the pre-shock field lines were lying mainly on the POS, the same would also be true for post-shock field lines, hence strongly supporting our assumption that the measured magnetic field deflections occur mainly on that plane. It is also interesting to note that the asymmetry in the deflection of velocity vectors (Figure~\ref{fig07}, left panel) is in agreement with the asymmetry of the shock front shape, which is also expanding northward in latitude (Figure~\ref{fig02}, left panel).
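As the numerical sketch referenced in the bullet list above, the snippet below inverts the standard Rankine-Hugoniot relation for a perpendicular fast-mode shock with $\gamma = 5/3$, $M_{A\perp}^2 = X(X+5+5\beta)/[2(4-X)]$; this is a textbook relation appropriate for the quasi-perpendicular flanks, not the empirical oblique formula (Equation~1) used in the paper, and the $X$ and $\beta$ inputs are purely illustrative.
\begin{verbatim}
import numpy as np

def mach_perp(X, beta=0.0):
    """Alfvenic Mach number of a perpendicular MHD shock (gamma = 5/3) from the
    Rankine-Hugoniot jump conditions: M_A^2 = X (X + 5 + 5 beta) / [2 (4 - X)].
    Valid for 1 <= X < 4; beta is the up-stream plasma beta."""
    X = np.asarray(X, dtype=float)
    return np.sqrt(X * (X + 5.0 + 5.0 * beta) / (2.0 * (4.0 - X)))

# Illustrative values bracketing those quoted above (beta ~ 0.01-0.04):
for X in (1.2, 1.5, 2.0):
    print("X = %3.1f  ->  M_A(perp) = %4.2f" % (X, mach_perp(X, beta=0.02)))
# X = 1.2 gives M_A ~ 1.2, comparable to the flank values quoted above, where
# conditions are quasi-perpendicular; the nose is quasi-parallel, so this
# perpendicular relation should not be applied there.
\end{verbatim}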
This cartoon illustrates, in general, how the physical parameters of the post-shock plasma strongly depend not only on the inclination of the pre-shock magnetic and velocity fields with respect to the shock surface, but also on the shock compression ratio. Moreover, Figure~\ref{fig07} (right panel) shows where the shock is sub- and super-critical, hence the expected coronal regions where seed particles for SEP acceleration could be located. Because, as determined in Paper~1 and Paper~2, the fraction of the front where the shock is super-critical decreases as the shock expands in the corona (see Figure 3 in Paper~1), broader or narrower coronal regions could serve as SEP sources at different times. As anticipated in the description of the data analysis, the plasma temperatures $T$ derived from UV line intensities are representative of electron temperatures $T_e$, mainly because collisions with electrons are responsible both for the determination of the atomic ionization stage and for the collisional excitation of coronal ions. Electron temperatures have been used here (and in the previous Paper~3) as input plasma temperatures for the MHD RH equations, thus assuming that electrons and protons are both heated across the shock discontinuity. Nevertheless, as recently pointed out by \citet[][]{manchester2012}, the thermodynamics of the protons and electrons are expected to be different because ``the shock is only supersonic relative to the proton fast-mode speed and not that of the electrons'', hence ``protons receive the kinetic energy dissipated at the shock, while electrons are only heated by their adiabatic compression at the shock''. This is expected to be true also in the case of the event reported here, because the thermal speed of electrons ($v_e \sim 6740$~km~s$^{-1}$ for $T_e =1.5\cdot 10^6$~K) is much larger than the measured shock speed $v_{sh}$, while the proton thermal speed ($v_p \sim 160$~km~s$^{-1}$ for $T_p =1.5\cdot 10^6$~K) is much smaller than $v_{sh}$ (see Figure~\ref{fig02}, right panel); hence we expect only protons to be directly heated by the shock. This means that, even if the pre-shock corona is close to thermodynamic equilibrium with $T_p \sim T_e \sim 1.5 \cdot 10^6$ K, the transit of the shock will cause a decoupling between electron and proton temperatures, with $T_p > T_e$ after the transit of the shock. For this reason, in Table~\ref{tab03} we also provide the down-stream plasma temperatures $T_{d\gamma}$ expected from simple adiabatic compression of the considered particles. Following \citet[][]{manchester2012}, we suggest that the down-stream plasma temperatures derived here with the RH equations ($T_d$ in Table~\ref{tab03}) could be more representative of post-shock proton temperatures ($T_p \simeq T_d$), while temperatures given by adiabatic compression ($T_{d\gamma}$ in Table~\ref{tab03}) could be more representative of post-shock electron temperatures ($T_{d\gamma} \sim T_e$). In this interpretation, both our proton temperature increase (by a factor $\sim 12$ at the shock nose and by a factor $1.5-3.0$ at the flanks) and our electron temperature increase (by a factor $\sim 2$ at the shock nose and by a factor $1.2-1.5$ at the flanks) across the shock are in good agreement with the simulation results by \citet[][]{manchester2012} above 1.5~R$_\odot$.
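The thermal speeds quoted above follow from the most probable speed $v_{th} = \sqrt{2 k_B T/m}$; the short sketch below reproduces the numbers in the text.
\begin{verbatim}
import numpy as np

k_B = 1.3807e-23   # Boltzmann constant [J/K]
m_e = 9.1094e-31   # electron mass [kg]
m_p = 1.6726e-27   # proton mass [kg]

T = 1.5e6          # pre-shock temperature [K], as in the text

v_e = np.sqrt(2.0 * k_B * T / m_e)   # most probable electron speed
v_p = np.sqrt(2.0 * k_B * T / m_p)   # most probable proton speed

print("electron thermal speed: %6.0f km/s" % (v_e / 1e3))  # ~6740 km/s
print("proton   thermal speed: %6.0f km/s" % (v_p / 1e3))  # ~ 160 km/s
\end{verbatim}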
After the transit of the shock, electrons and protons are only weakly coupled by collisions: their energy equipartition time $\tau_{pe}$ \citep[as we estimated with the formula given by][]{spitzer1962} is on the order of $\tau_{pe} \sim$ 18~hours at the shock nose; hence the post-shock plasma located in the CME sheath and being met by the expanding flux rope will have protons and electrons with significantly different temperatures. To our knowledge, this is the first time that such a deep knowledge of plasma physical parameters across an interplanetary shock is provided from remote sensing data. This work (and the previous analysis in Paper~3) also demonstrates that UV and WL data relative to interplanetary shocks can be combined in order to derive reliable measurements of coronal magnetic fields. In particular, the field values derived here are in very good agreement with those provided in the same range of heliocentric distances ($2.6-2.2$~R$_\odot$) by the empirical models of \citet[][]{dulkmclean1978} ($\sim 0.25-0.38$~G), \citet[][]{patzold1987} ($\sim 0.51-0.81$~G) and more recently \citet[][]{mancusogarzelli2013} ($\sim 0.48-0.75$~G). As recently shown by \citet[][]{gopalswamyyashiro2011}, coronal fields can be inferred simply from the analysis of WL coronagraphic images of CME-driven shocks, but when this technique is applied to intensities observed above heliocentric distances of $\sim 2$~R$_\odot$, it requires an assumption for the unknown solar wind velocity, which is otherwise negligible at lower altitudes \citep[e.g.][]{gopalswamy2012}. On the other hand, we showed here how the pre-shock plasma parameters were derived from UV data (electron temperature, density and outflow velocity), while other shock parameters were derived independently from WL data (shock compression ratio, shock velocity and inclination of the shock front surface); no coronal physical parameters were assumed. As mentioned in the Introduction, before this work, information on shock heating of heavy ions was derived from the analysis of UVCS data by measuring the broadening of spectral lines whose emission is also due to collisional excitation (in particular the \ovi\ $\lambda$ 1031.91 \AA\ line), because significant dimming of the radiative components is expected after the shock transit. A direct measurement of the post-shock proton temperatures is not straightforward: neutral H atoms do not directly feel the transit of the shock wave, but only indirectly through collisions with the post-shock accelerated and heated electron and proton populations. This will significantly increase the ionization rates due to collisions with electrons and to resonant charge transfer with accelerated protons, thus significantly reducing the fraction of neutrals and producing H atoms traveling with the velocity of the post-shock plasma, whose \lya\ emission (due to radiative excitation alone) is thus subject to severe Doppler dimming. Hence the detection of this faint post-shock emission is possible \citep[e.g.][]{mancuso2002}, but not simple. Moreover (as we pointed out), the reliability of $T_p$ measurements from \hi\ \lya\ profiles has also been questioned \citep[][]{labrosse2006}. For this analysis only the pre-shock integrated intensities of UV lines were employed, hence no additional information that could be provided by UV spectroscopic data (e.g. spectral line broadening, line Doppler shifts, etc.) was required.
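For reference, a sketch of the Spitzer (1962) electron-proton energy equipartition time is given below; the down-stream density and temperatures used are illustrative placeholders (the measured values enter Table~3 and are not repeated here), so the output only needs to land at the tens-of-hours order of magnitude quoted above.
\begin{verbatim}
import numpy as np

# CGS constants
k_B = 1.3807e-16   # erg/K
m_e = 9.1094e-28   # g
m_p = 1.6726e-24   # g
e   = 4.8032e-10   # statcoulomb

def t_equipartition(n_p, T_e, T_p, lnLambda=20.0):
    """Spitzer (1962) electron-proton energy equipartition time [s]:
    t_eq = 3 m_e m_p (k T_e/m_e + k T_p/m_p)^{3/2}
           / (8 sqrt(2 pi) n_p e^4 lnLambda).
    n_p in cm^-3, temperatures in K; lnLambda is the Coulomb logarithm."""
    num = 3.0 * m_e * m_p * (k_B * T_e / m_e + k_B * T_p / m_p) ** 1.5
    den = 8.0 * np.sqrt(2.0 * np.pi) * n_p * e**4 * lnLambda
    return num / den

# Illustrative post-shock values at the nose (placeholders, not Table 3 entries):
n_d = 1.0e6    # down-stream density [cm^-3]
T_e = 3.0e6    # post-shock electron temperature [K] (adiabatic compression)
T_p = 1.8e7    # post-shock proton temperature [K] (RH heating)

tau = t_equipartition(n_d, T_e, T_p)
print("tau_pe ~ %9.2e s ~ %4.1f hours" % (tau, tau / 3600.0))  # ~18 hours
\end{verbatim}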
Hence, this technique seems very promising for future application to the UV (\hi\ \lya) and WL coronagraphic images that will be provided by the METIS coronagraph \citep[][]{antonucci2012} onboard the ESA--Solar Orbiter mission, due to launch in 2017-2018. The technique described here could also be combined with that proposed by \citet[][]{gopalswamyyashiro2011} in order to provide (by imposing the same magnetic field values) an interesting new method to estimate the solar wind velocities at different heliocentric distances in the corona being crossed by CME-driven shocks. The observational evidence reported here will be used in a validation effort of simulation results in a future paper. The observed shock is closer to the Sun than the inner boundaries of the most commonly used codes for space weather forecasting \citep[e.g. ENLIL initiates its computational domain at 21.5 or 30 R$_\odot$,][]{xie2004}. The present results instead allow one to test the validity of models of CME eruption and shock production at closer distances to the Sun. The needed model must extend inward of the sonic point, in a stratified atmosphere where the density, magnetic field and pressure are chosen according to observational determinations of the average properties of the solar corona. The CME can then be initiated and the ensuing shock can be modeled. The results of the model can then be compared with the observational evidence reported here, focusing especially on the dependence of the shock speed on the relative angle with the magnetic field (assumed initially radial). | 14 | 3 | 1403.0870
1403 | 1403.2758_arXiv.txt | We have used {\it XMM-Newton's Optical Monitor} (OM) images to study the local environment of a sample of 27 Ultraluminous X-ray Sources (ULXs) in nearby galaxies. UVW1 fluxes were extracted from 100~pc regions centered on the ULX positions. We find that at least 4 ULXs (out of 10 published) have spectral types that are consistent with previous literature values. In addition, the colors are similar to those of young stars. For the highest-luminosity ULXs, the UVW1 fluxes may have an important contribution from the accretion disk. We find that the majority of ULXs are associated with recent star formation. Many of the ULXs in our sample are located inside young OB associations or star-forming regions (SFRs). Based on their colors, we estimated ages and masses for star-forming regions located within 1~kpc of the ULXs in our sample. The resolution of the OM was insufficient to detect young dense super-clusters, but some of these star-forming regions are massive enough to contain such clusters. Only three ULXs have no associated SFRs younger than $\sim$50~Myr. The age and mass estimates for clusters were used to test runaway scenarios. The data are in general compatible with stellar-mass binaries accreting at super-Eddington rates and ejected by natal kicks. We also tested the hypothesis that ULXs are sub-Eddington accreting IMBHs ejected by three-body interactions; however, this is not well supported by the data. | Ultraluminous X-ray Sources - extremely bright X-ray sources with bolometric luminosities exceeding the Eddington limit for a 20\,M$_{\sun}$ object - continue to puzzle astronomers. Because they are located in external galaxies, it is difficult to identify optical counterparts, even with the Hubble Space Telescope \citep{ptak06,ram06, rob08,tao11,glad13}. Indeed, very few objects have been detected in the optical with high confidence, compared with the hundreds detected in the X-rays\footnote{The following is a list of references for some of the most famous objects with optical counterparts: NGC~5204~X-1 \citep{liu04}, NGC~1313~X-1 \citep{yang11}, ESO~243-49 HLX-1 \citep{sor12}, NGC~1313~X-2 \citep{zam04,liu07,rip11,zam12}, M81 X-6 \citep{liu02b,swa03,moon11}, Holmberg~IX X-1 \citep{gris06,moon11,gris11}, Holmberg~II~X-1 \citep{kaa04,tao12b}, NGC~5408 X-1 \citep{lang07,gris12}, M101 X-1 \citep{kun05,liu09M101}, NGC~4559 \citep{sor05}, two ULXs in M51 \citep{ter06}, NGC~2403 X-1 \citep{rob08}, IC~342 X-1 \citep{feng08}, the ULX in NGC~247 \citep{tao12a}, NGC 6946 X-1 \citep{kaa10} and ULX P13 in NGC 7793 \citep{pak10,motch11}.}. In addition, only three objects have measured optical periods: M82 \citep{kaa06,kaa07}, NGC~1313 X-2 \citep{liu09,zam12} and NGC~5408 X-1 \citep{stro09,han12}. Multiple authors have shown that ULXs are associated with star formation in their host galaxies \citep{ran03, gri03}. Indeed, such a correlation has been observed for high-mass X-ray binaries (HMXBs) in our own Galaxy \citep{gri03}. \citet{bod12} found an average offset of $\sim$400~pc between Galactic HMXBs and nearby SFRs \citep[see also][]{col13}. Many authors have found evidence that ULXs are generally associated with large clusters and that the ULX is near these clusters. For instance, \citet{kaa04} found that in starburst galaxies, X-ray sources are, in general, located near star clusters, but also that X-ray sources with luminosities $>$~10$^{38.0}$~erg~s$^{-1}$ tend to be even closer to clusters.
Similar results were found by \citet{ran11} in the starburst galaxy NGC~4449. Eleven X-ray binaries were found to be located near or inside very young clusters. In the Antennae galaxy, 10 out of 14 ULXs seem to be associated with young stellar clusters \citep{zez02}. \citet{cla07} found 7 ULXs associated with clusters in the Antennae galaxy using infrared images. These authors found that, in general, X-ray sources tend to be close to large clusters \citep[see also][]{clark11}. \citet{swa09} used photometric data from the Sloan Digital Sky Survey to look for possible associations of 47 ULXs with SFRs or young superclusters. They found that statistically ULXs are indeed associated with recent star formation (within a 100~pc distance), but no superclusters were detected given the poor spatial resolution of the instrument. \citet{pou12} performed spectral and photometric analyses of clusters associated with the Antennae ULXs and found that almost all are very young (2.4 to 3.2 Myr), and that only one resides inside a cluster \citep[see also][]{ran12}. Originally M82 X-1 was thought to be located inside a supercluster, until \citet{voss11} showed that it is actually offset from the cluster. However, in the same paper, the authors found that a ULX in NGC~7479 was associated with a young supercluster, so such objects are known to exist. In theory, the location of a ULX in relation to its surrounding star clusters can tell us something about the environment in which the black hole was born, as well as constrain some of the properties of the black hole. For instance, if a small black hole is born in a cluster of stars, the initial explosion can be asymmetric enough to kick the black hole out of the cluster. This is known as the runaway binary scenario \citep{zefa02}. This theoretical scenario does not work for larger black holes, such as intermediate-mass black holes \citep[IMBH;][]{col99}, since a) the black hole is too big to be susceptible to such kicks and b) even if such a black hole were kicked, it would return to the cluster on very short timescales because of the gravitational pull \citep[i.e. $v_{kick} < v_{escape}$;][]{por99}. However, there are ways to kick intermediate-mass black holes out of clusters using 3-body interactions \citep[e.g.][]{pou12}. In this scenario the intermediate-mass black hole and donor star can be kicked out of the cluster by another young massive interloper star. Assuming this theory is correct, we would expect to find most potential intermediate-mass black holes inside young, dense clusters of stars. Such an environment would also readily explain the growth of an intermediate-mass black hole through stellar collisions with the stars in the parent cluster \citep[e.g.][]{gur06}. Thus, in summary, all of this information can be combined to estimate characteristics of the ULX, including the age and mass of a stellar companion, assuming the black hole originated in a nearby cluster of stars or parent cluster \citep{zefa02,kaa04}. The observational evidence for a possible association between ULXs and the star formation in the host galaxy is still controversial. Some starburst galaxies such as the Antennae galaxy, the Cartwheel galaxy and M82 contain an unusually large number of very bright ULXs. On the other hand, there are many star-forming galaxies without any known ULXs. In addition, there are dwarf galaxies (such as Holmberg~II and Holmberg~IX) and even some ellipticals that contain bright ULXs. The ULXs found in elliptical galaxies seem to be fainter.
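A quick way to see the dynamical argument behind the runaway scenario sketched above is to compare a plausible kick velocity with the escape velocity from the host cluster; the cluster mass, radius and kick value in the Python sketch below are illustrative round numbers, not measurements from this paper.
\begin{verbatim}
import numpy as np

G     = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30    # solar mass [kg]
pc    = 3.086e16    # parsec [m]

# Illustrative young-cluster parameters (round numbers, not from this paper):
M_cl, r_cl = 1.0e5 * M_sun, 3.0 * pc
v_esc = np.sqrt(2.0 * G * M_cl / r_cl)
print("cluster escape velocity: %4.1f km/s" % (v_esc / 1e3))   # ~17 km/s

# A natal kick imparts roughly a fixed momentum, so v_kick scales as 1/M_BH.
# If a ~10 M_sun black hole binary receives ~50 km/s, the same impulse gives:
v_kick_10 = 50.0e3                       # assumed kick for 10 M_sun [m/s]
for M_bh in (10.0, 1000.0):
    v_kick = v_kick_10 * 10.0 / M_bh
    status = "ejected" if v_kick > v_esc else "retained"
    print("M_BH = %6.0f M_sun -> v_kick ~ %5.1f km/s (%s)"
          % (M_bh, v_kick / 1e3, status))
\end{verbatim}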
Some of the bright ULXs that appear may actually be interlopers \citep{irw04}. Interestingly, most of the galaxies mentioned here (Antennae, Cartwheel, M82, Holmberg II, and Holmberg IX) are either merging or interacting with other galaxies. More recent examples are the colliding galaxy pair NGC~2207/IC~2163, where 21 ULXs were detected \citep{min13}, Arp~147 with 9 ULXs \citep{rap10}, and NGC~922 with 12 ULXs \citep{pre12}. The latter two are drop-through ring galaxies, similar to the Cartwheel. In this paper we use {\it XMM-Newton} archival images taken with the Optical Monitor (OM) to explore ULX environments in nearby galaxies. Ultraviolet (UV) emission is well suited to studying star-forming regions (SFRs) and young clusters around ULXs. The ULX sample presented here was extracted from the XMM sources analyzed in \citet[][hereafter, WMR]{win06}. We selected all the ULXs with unabsorbed luminosities L$_X$~$\ge$~2.7$\times$10$^{39}$~erg~s$^{-1}$, as estimated by the authors. This limit corresponds to a 20~M$_{\odot}$ black hole radiating at the Eddington limit (see the sketch below). Our ULX sample consists of all the sources for which OM UV data were found in the archives. We added to this sample the ULX associated with the MF16 nebula in NGC~6946 as a standard, since it is well studied in the literature. The original ULX sample in WMR was selected from galaxies closer than 8~Mpc. The goal was first to examine whether ULXs are located inside clusters or SFRs, or are possibly related to such nearby regions. The second goal was to use photometry to impose constraints on the nature of the optical companions and the accretion mechanism of the ULX. We measured fluxes for 100~pc regions at the ULX positions and for bright sources (SFRs) detected nearby. Section~2 describes the data found in the archives and the photometry procedure. We present the photometry results for the 100~pc region centered on the ULXs in Section~3. The results for the SFRs found close to ULXs are presented in Section~4, together with the population synthesis modeling used to estimate ages and masses. These are used in Section~5 to test runaway binary scenarios. Finally, in Sections~6 and 7 we discuss our results and present the main conclusions. | The summary of the main results presented in this paper is as follows: \begin{itemize} \item Using UVW1 photometry, we have found that 7 sources with UVW1 magnitudes measured from 100~pc regions have emission that is consistent with a single star (see also Table~\ref{table2}, col.~11 and Figure~\ref{bh}). Indeed, for four of these objects our UVW1-predicted spectral types are identical to those found in the literature using other methods. We note that all of these objects (even those with consistent published data in the literature) may be contaminated by or dominated by an accretion disk component. Indeed, for at least two sources the UV emission likely comes predominantly from an accretion disk (see also the second bullet). \item By comparing the theoretical UV fluxes expected from accretion disks with the UVW1 measurements, we found that, for most of the ULXs in our sample (esp. NGC~1313 XMM3 and Holmberg~IX XMM1), a significant part of the measured UV flux could come from accretion disk emission (Fig.~\ref{bh}). However, even if the accretion disk is at its brightest (e.g. irradiated), it is not bright enough in most cases to be responsible for all of the UV emission.
In those cases (the cases in Table~2, column~11 where the flux is consistent with multiple sources), it is likely that a star-forming region, an accretion disk, and a donor star all contaminate the aperture, none of which can be isolated using the UVW1 photometry alone. \item We looked at 3-5 SFRs around most ULXs. We derived statistics for the SFRs closest to each ULX, the youngest SFRs around the ULX, and the most massive SFRs around the ULX. Roughly half of the closest SFRs actually overlap with the ULX. Of the youngest star-forming regions, 17 out of 21 are less than 10~Myr in age, which might imply that young regions are intrinsically related to ULXs. Finally, there are also 17 sources with at least one region that is more massive than 10$^5$ M$_{\odot}$. \item OM color-color plots show relatively little reddening for most of the SFRs close to our ULXs, implying that the ULXs are likely not IMBHs accreting from molecular clouds. The extinction is also, in general, much less than suggested by the absorption columns obtained from X-ray modeling, indicating that the X-ray absorption is located close to the ULX. \item We tested runaway scenarios for both stellar-mass BHs and IMBHs. Specifically, the two scenarios are: 1) stellar-mass black hole binaries emitting at ten times the Eddington limit, ejected by natal kicks, or 2) ULXs with IMBHs accreting within the Eddington limit, ejected by three-body interactions in dense environments. We have found that the first scenario fits the data best (Fig.~\ref{runaway1}). On the other hand, that does not completely rule out that some ULXs in this sample are IMBHs, though the masses we found in this analysis were somewhat small. \end{itemize} | 14 | 3 | 1403.2758
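As the sketch referenced in the introduction, the snippet below checks the sample-selection threshold (L$_X \ge 2.7\times10^{39}$~erg~s$^{-1}$ for a 20~M$_{\odot}$ black hole at Eddington) against the standard electron-scattering Eddington luminosity; the small offset from the quoted threshold presumably reflects a slightly different assumed opacity or composition.
\begin{verbatim}
# Eddington luminosity for pure electron-scattering opacity (CGS units)
import math

G       = 6.674e-8      # cm^3 g^-1 s^-2
c       = 2.998e10      # cm s^-1
m_p     = 1.673e-24     # g
sigma_T = 6.652e-25     # Thomson cross-section [cm^2]
M_sun   = 1.989e33      # g

def L_edd(M_msun):
    """L_Edd = 4 pi G M m_p c / sigma_T, i.e. ~1.26e38 (M/M_sun) erg/s."""
    return 4.0 * math.pi * G * (M_msun * M_sun) * m_p * c / sigma_T

print("L_Edd(20 M_sun) = %9.2e erg/s" % L_edd(20.0))   # ~2.5e39 erg/s
# Minimum BH mass implied by the threshold, if emission is Eddington-limited:
L_thr = 2.7e39
print("M_min = %4.1f M_sun" % (L_thr / L_edd(1.0)))    # ~21 M_sun
\end{verbatim}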
1403 | 1403.6721_arXiv.txt | This letter expands the stability criterion for radially stratified, vertically unstratified accretion disks incorporating thermal relaxation. We find a linear amplification of epicyclic oscillations in these disks that depends on the effective cooling time, i.e.\ an overstability. The growth rates of the overstability vanish in both extreme cases of infinite cooling time and instantaneous cooling, i.e.\ the adiabatic and fully isothermal cases. However, for thermal relaxation times $\tau$ on the order of the orbital timescale, $\tau\Omega \sim 1$, modes grow at a rate proportional to the square of the \BV frequency. The overstability is based on epicyclic motions, with the thermal relaxation causing gas to heat while radially displaced inwards, and cool while radially displaced outwards. This causes the gas to have a lower density when moving outwards compared to when it moves inwards, so it feels the outwards directed pressure force more strongly on that leg of the journey. We suggest the term ``Convective Overstability'' for the phenomenon, which has already been numerically studied in the non-linear regime in the context of amplifying vortices in disks, under the name ``Subcritical Baroclinic Instability''. The point of the present paper is to make clear that vortex formation in three-dimensional disks is neither subcritical, i.e.\ it does not need a finite perturbation, nor baroclinic in the sense of geophysical fluid dynamics, which relies on vertical shear. We find that Convective Overstability is a linear instability that will operate under a wide range of physical conditions in circumstellar disks. | The hydrodynamical stability of circumstellar accretion disks has been a long-standing problem in astrophysics because turbulence appears to be needed to drive the observed accretion flows \citep{SS73}. However, these disks are strongly stabilized by rotation and vertically stable stratification. While the identification of the role of the Magneto-Rotational Instability in accretion disks (MRI, \citealt{Vel59,BH91}) finally provided an incontrovertible linear instability, it was soon noted \citep{Gammie96} that significant portions of circumstellar disks are too poorly ionized to allow the MRI to act. The upper and outer edges for MRI turbulence appear to be set by ambipolar diffusion \citep{2011ApJ...727....2P, 2011ApJ...739...50B} and by the stiffness of the fields at high magnetic pressure \citep{2000ApJ...534..398M, 2000ApJ...540..372K, 2010ApJ...708..188T}. These obstacles to the otherwise robust MRI continue to motivate studies of hydrodynamical instabilities. The radial temperature structure of modestly accreting circumstellar disks is largely controlled by irradiation from the central star, although any accretion flow will lead to increased heating and, depending on the local opacity, to a non-trivial radial temperature and density profile \citep{bell95, D'Alessio05}. The radial temperature gradient can overpower the expected density gradient, leading to a negative radial entropy gradient \citep{KRL13}, which can drive the formation of vortices through a baroclinic mechanism, i.e.\ the non-vanishing baroclinic term in the vorticity equation \citep{KB03}. Disks without a thermal relaxation process appear to be linearly stable \citep{K04}, but \citet{Petersen07} showed that vortices will form and grow if, on top of the radial entropy gradient, there is also sufficiently fast thermal relaxation and a strong enough initial perturbation.
\citet{LP2010} and \citet{LK2011} presented 3D results of vortex amplification for vertically unstratified disks with imaginary radial \BV (buoyancy) frequencies and short relaxation times on the order of the orbital period. \citet{LP2010} referred to this process as a Subcritical Baroclinic Instability (SBI) because it appeared that one needs finite-size perturbations to create the first vortices, which can then be amplified by a convective radial entropy flux. Thus, while the problem of amplifying vortices to sizes and strengths where they could drive significant accretion flows even in MRI-inactive regions was solved, the question of the origin of the vortices remained open. In the present paper we perform a linear stability analysis for radially stratified accretion disks. The results are not suited to explain the results for 2D vertically integrated accretion disks as presented in \citet{Petersen07} and \citet{RLK13}, because we need the vertical dimension to achieve pressure equilibrium. However, this paper explains the behavior of the 3D yet vertically unstratified disk models \citep{LP2010,LK2011}. Vertical stratification, on the other hand, is accompanied by vertical shear, which introduces additional potential sources of instability in a flow, e.g. the Goldreich-Schubert-Fricke instability. A full 3D stratified analysis is under way and shall be presented in a separate paper. We will start in section 2 with the linear analysis for an anelastic ansatz, i.e.\ a disk without pressure fluctuations in which the density is a pure function of the background pressure and the local temperature. In this approximation, the continuity equation is incorporated in the equation for the specific entropy. In section 3 we compare with numerical and previous results. Our code and numerical setup are described in the Appendix. We conclude in section 4. | We were able to put forward a linear theory for instability in accretion disks that are radially stratified and subject to radiatively driven thermal relaxation. This is, to our knowledge, the first analysis that incorporates finite thermal relaxation into the Solberg-Hoiland criteria for rotating fluids such as accretion disks. For realistic parameter ranges of the radial temperature gradient (see \citet{andrews10}) and thermal relaxation \citep{KRL13}, we find the amplification of radial epicyclic oscillations on a time scale of 100 to 1000 orbits. We tested our analytic approximations by comparing the results to numerical simulations of the growth of small perturbations in cylindrical, unstratified and axisymmetric accretion disks. Most importantly, we tested our assumption about the fixed pressure background a posteriori: pressure fluctuations are measured to be an order of magnitude lower than the adiabatic pressure variations that would follow from local compression, i.e.\ from the density fluctuations. The agreement between the numerical and analytical results is striking, especially when one takes the numerical viscosity of the code into account. Saturation values for the axisymmetric and non-axisymmetric cases shall be obtained in future simulations at higher resolution, which hopefully will be less hampered by numerical dissipation. As a result we have shown that the Subcritical Baroclinic Instability is, in 3D simulations, not necessarily subcritical, nor a baroclinic instability in the traditional sense, nor any kind of stationary instability, but an overstability.
As an excerpt from Chandrasekhar's book (1961) we quote: `Eddington explains this choice of terminology as follows: ``In the usual kinds of instability, a slight displacement provokes restoring forces tending away from equilibrium; in an overstability it provokes restoring forces so strong as to overshoot the corresponding position on the other side of the equilibrium.''' In stable non-dissipative, conservative systems, all perturbations lead to undamped oscillations. Yet in dissipative systems oscillations can get amplified, and, for the convective overstability, the relevant criterion is that the Prandtl number is significantly smaller than 1 but sufficiently larger than 0. In other words, one needs thermal conductivity, or equivalently heat transport by radiation, that occurs on the timescales of the dynamical system, while at the same time acting much more efficiently than the viscosity of the underlying fluid. The destabilizing influence of a finite thermal time on epicyclic oscillations that we are describing in this paper is analogous to the usual heat-engine explanation of the $\kappa$ and $\epsilon$ mechanisms in stars \citep{Eddington_1926,Cox_1980}. In the anelastic approximation, if the radial pressure gradient is negative, a fluid element undergoing epicyclic oscillations experiences a negative Lagrangian temperature perturbation ($\delta T$, measured with respect to the element's initial temperature) when it is displaced outward, and a positive one when displaced inward: i.e.~$\delta T \delta R < 0$. If the radial gradient of entropy is also negative, however, then the Eulerian temperature perturbation ($T_1$, measured with respect to the background gas) has the opposite sign to $\delta T$, so that the element loses heat to its surroundings when $\delta T < 0$ and gains it when $\delta T > 0$. Thus, if the entropy of the element returns to its original value after a complete oscillation, the element rejects less heat during the outward half of the cycle than it absorbs during the inward part. The difference appears as an increase in the mechanical energy of the oscillation.
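The heat-engine argument above can be condensed into a schematic growth rate that vanishes in both the isothermal ($\tau \to 0$) and adiabatic ($\tau \to \infty$) limits and peaks for $\tau\Omega \sim 1$; the closed form used in the sketch below, $\gamma \propto -N^2\tau/[1+(\Omega\tau)^2]$, is an illustrative interpolation consistent with the limits stated in the abstract, not the dispersion relation derived in this paper, and the value of $N^2$ is a placeholder.
\begin{verbatim}
import numpy as np

Omega = 1.0                 # orbital (and Keplerian epicyclic) frequency
N2    = -0.005 * Omega**2   # radial buoyancy frequency squared (N^2 < 0: unstable)

def growth_rate(tau):
    """Schematic overstability growth rate: vanishes as tau -> 0 (isothermal)
    and tau -> infinity (adiabatic), peaks near tau*Omega ~ 1, and there is
    proportional to |N^2|.  Illustrative interpolation only."""
    return -N2 * tau / (2.0 * (1.0 + (Omega * tau) ** 2))

for tau in (0.01, 0.1, 1.0, 10.0, 100.0):
    g = growth_rate(tau)
    print("tau*Omega = %6.2f -> gamma = %8.5f Omega (e-folding ~ %7.1f orbits)"
          % (tau, g, 1.0 / (2.0 * np.pi * g)))
# For |N^2|/Omega^2 of a few x 1e-3, the fastest e-folding time lands in the
# 100-1000 orbit range quoted above.
\end{verbatim}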
Another instability prefers vertically isothermal disks with as little cooling as possible, e.g.\ the ``critical-layer instability'' \citep[C.L.I.,][]{Marcus13}. Thus the G.S.F. and the C.L.I. are mutually exclusive, whereas the convective overstability falls in the middle of the parameter range with respect to the cooling time, and has no preference regarding the vertical stratification of the disk. One can now easily extrapolate that, much like the case in complicated climate systems such as the Earth's atmosphere, there is not just one single instability responsible for all weather phenomena, but rather a zoo of instabilities that operate both independently and hand-in-hand. Future work on hydrodynamical (as opposed to magneto-hydrodynamical) instabilities and overstabilities in protoplanetary disks has to focus on two questions: 1) how do the three above-mentioned instabilities interact, and how far from their sweet spots can they still operate; and 2) what is the actually occurring range of vertical and radial stratification plus cooling efficiencies in protoplanetary disks. | 14 | 3 | 1403.6721
1403 | 1403.4809_arXiv.txt | The ``holy grail'' in planet hunting is the detection of an Earth-analog: a planet with a similar mass as the Earth and an orbit inside the habitable zone. If we can find such an Earth-analog around one of the stars in the immediate solar neighborhood, we could potentially even study it in such great detail as to address the question of its potential habitability. Several groups have focused their planet detection efforts on the nearest stars. Our team is currently performing an intensive observing campaign on the $\alpha$ Centauri system using the {\sc Hercules} spectrograph at the 1-m McLellan telescope at Mt John University Observatory (MJUO) in New Zealand. The goal of our project is to obtain such a large number of radial velocity measurements, with sufficiently high temporal sampling, that we become sensitive to signals of Earth-mass planets in the habitable zones of the two stars in this binary system. Over the past years, we have collected more than 45,000 spectra for both stars combined. These data are currently being processed by an advanced version of our radial velocity reduction pipeline, which eliminates the effect of spectral cross-contamination. Here we present simulations of the expected detection sensitivity to low-mass planets in the habitable zone for the {\sc Hercules} program for various noise levels. We also discuss our expected sensitivity to the purported Earth-mass planet in a 3.24-d orbit announced by Dumusque et al.~(2012). | The search for a true Earth-analog planet is one of the boldest scientific and intellectual endeavors ever undertaken by humankind. If such a planet can be found orbiting a nearby Sun-like star, it will constitute an ideal target for extensive follow-up studies from the ground and with future space missions. These follow-up studies can include a detailed characterization of the planetary system and, ultimately, a search for bio-signatures in the atmosphere of an Earth-like planet in the habitable zone. For the far future, many decades to centuries from now, one can even imagine that the first interstellar probe will be launched to travel to one of the systems where we have found evidence for a nearby Earth twin. The discovery of such a planet will have an unprecedented cultural as well as scientific impact. NASA's {\it Kepler} mission (Borucki et al.~2010) has been extremely successful in finding small, possibly rocky planets orbiting stars in the {\it Kepler} search field. Some of them even reside within the circumstellar habitable zones: e.g. Kepler-22b (Borucki et al.~2012) and two planets in the Kepler-62 system (Borucki et al.~2013). One of the most significant results from {\it Kepler} is the planet occurrence rates, which show that small-radius planets are quite frequent and outnumber the giant planets to a large extent (e.g. Howard et al.~2012, Fressin et al.~2013, Dressing et al.~2013, Petigura et al.~2013). We are tempted to extrapolate from these high {\it Kepler} planet frequencies to the immediate solar neighborhood and conclude that many nearby stars, possibly even the nearest star to the Sun, are orbited by one or more Earth-like planets.
The precision of stellar radial velocity (RV) measurements has steadily improved from a modest 15\,m\,s$^{-1}$ (Campbell \& Walker~1979), more than three decades ago, to a routine 3\,m\,s$^{-1}$ (Butler et al.~1996), and in the best case, with the highly stabilized HARPS spectrograph (Mayor et al.~2003), even 1\,m\,s$^{-1}$ or better. The discovery space of the RV method was, therefore, extended from the giant-planet domain down to Neptunes and super-Earths (with minimum masses between 2 and 10~M$_{\oplus}$). In terms of RV precision, we are still more than an order of magnitude away from the 0.09\,m\,s$^{-1}$ RV amplitude of an Earth at 1~AU orbiting a G-type star. Future projects aim for an RV precision of $0.1$\,m\,s$^{-1}$, but they are still years away from being operational. ESPRESSO (Pepe et al.~2014) is currently under construction for the ESO Very Large Telescope, and G-CLEF (Szentgyorgyi et al.~2012) has been selected as a first-light instrument for the Giant Magellan Telescope (GMT). However, there is an alternative to extreme precision: with a large enough number of measurements, even signals with amplitudes orders of magnitude below the individual measurement uncertainties can be detected with high significance. Instead of waiting for the new instruments to be deployed, several groups have started ambitious RV programs that observe a small sample of suitable stars in the solar neighborhood with high temporal cadence. Owing to the extreme observational effort, these searches have to be dedicated to a few systems rather than include as many targets as possible. The need for RV searches to focus on nearby bright stars has an attractive side-effect: the targets are all very close to the Sun, unlike the {\it Kepler} targets or microlensing systems, which are at typical distances of several hundreds or thousands of parsecs. The HARPS team has focused on ten solar-type stars and reported very low-mass planets around HD~20794, HD~85512 and HD~192310 (Pepe et al.~2011). In order to properly perform such an RV search for low-mass planets, it is important to pay careful attention to RV signals that are {\it intrinsic} to the star and that can have a larger amplitude than a planetary signal. Special attention has been given to our closest neighbor in space, $\alpha$~Centauri, and several groups (see Table\,\ref{groups}) have chosen this star system as the prime target of their planet detection efforts. Recently, Dumusque et al.~(2012) presented the case for the presence of a low-mass planet in a 3.2~d orbit around $\alpha$~Cen~B using 459 highly precise RV measurements with HARPS obtained over a time span of 4 years. Tuomi et al.~(2012) also discussed the possible existence of a 5-planet system around $\tau$~Ceti, another very close Sun-like star for which no planets have yet been reported. These important results need to be confirmed by independent data and analysis. Indeed, Hatzes~(2013) re-analysed the HARPS RV results for $\alpha$~Cen~B using an approach to filtering out the stellar activity signals different from that of Dumusque et al., and cast serious doubt on the reality, or at least the planetary nature, of the 3.2~d signal. In this paper we describe our $\alpha$~Cen program with the {\scshape Hercules} spectrograph at the McLellan 1\,m telescope at Mt~John University Observatory (MJUO) in New Zealand. \begin{deluxetable}{lll} \tablecolumns{3} \tablewidth{0pt} \tablecaption{Precise radial velocity surveys that target the $\alpha$\,Cen system.
\label{groups}} \tablehead{ \colhead{Site} & {Spectrograph/Telescope} & {Reference/Project website} } \startdata La Silla & HARPS / ESO 3.6\,m & Pepe et al.~(2011), Dumusque et al.~(2012)\\ CTIO & CHIRON / SMARTS 1.5\,m & {\tiny http://exoplanets.astro.yale.edu/instrumentation/chiron.php} \\ MJUO & HERCULES / 1\,m McLellan & {\tiny http://www2.phys.canterbury.ac.nz/$\sim$physacp/index.html}\\ \hline \enddata \end{deluxetable} | The field of exoplanets has steadily moved forward to allow the detection of planets with masses similar to Earth's and with orbital periods inside the circumstellar habitable zone of their host star. Owing to its proximity, the $\alpha$~Centauri system is a very attractive target for such an intensive search for rocky planets using the RV technique. Any planets around the nearest stars to the Sun would allow a large variety of follow-up investigations to study these planets for their potential habitability. If the planet even happens to transit, we would be able to use JWST and the next generation of extremely large telescopes to probe the atmosphere of such a planet using transmission spectroscopy. Recently, Dumusque et al.~(2012) announced the discovery of a very low-mass planet in a short 3.2~day orbit around $\alpha$~Cen~B using 4 years of HARPS data. However, Hatzes (2013) casts doubt on the existence of this planet. Clearly, an independent falsification or confirmation is needed. We are performing a concentrated observing campaign on the $\alpha$~Centauri system with the {\sc Hercules} spectrograph at the 1\,m McLellan telescope at Mt~John University Observatory in New Zealand. As of January 2014 we have observed over 26,000 spectra of $\alpha$~Cen~A and over 19,000 spectra of $\alpha$~Cen~B. The goal of our program is to achieve sensitivity to RV signals of rocky planets with orbital periods inside the circumstellar habitable zone. Since 2010, we have seen the effect of cross-contamination of the spectra by the second star. We have developed an advanced version of our RV code that can include and compensate for contamination in the data modeling to compute the RV of the main target. The 45,000 {\sc Hercules} $\alpha$~Cen spectra are currently in the process of being re-reduced with this new pipeline. We explored the expected sensitivity of our program as a function of the final noise level (after the removal of the systematic noise of contamination). These simulations demonstrated that we should be able to confirm the purported Earth-mass planet with a period of 3.24~d even with a high noise level of 5\,m\,s$^{-1}$. To be sensitive to super-Earths in the habitable zone we require the noise budget to be below 3\,m\,s$^{-1}$. | 14 | 3 | 1403.4809
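The sensitivity claims above can be checked against the standard circular-orbit RV semi-amplitude scaling, $K \approx 28.4\,{\rm m\,s^{-1}}\,(m_p \sin i/M_{\rm Jup})(P/1\,{\rm yr})^{-1/3}(M_*/M_\odot)^{-2/3}$, together with the rough rule that the uncertainty of a sinusoid amplitude fit to $N$ points of white noise $\sigma$ scales as $\sigma\sqrt{2/N}$; in the Python sketch below, $K \approx 0.51$\,m\,s$^{-1}$ for the purported $\alpha$~Cen~B planet is the value published by Dumusque et al.~(2012), while the $\sqrt{2/N}$ scaling assumes white, evenly phased noise and is only an order-of-magnitude estimate, not the simulations described above.
\begin{verbatim}
import math

def K_rv(m_planet_earth, P_years, M_star_sun, sini=1.0):
    """Circular-orbit RV semi-amplitude:
    K ~ 28.4 m/s (m sin i / M_Jup) (P/yr)^(-1/3) (M_*/M_sun)^(-2/3)."""
    m_jup = m_planet_earth / 317.8   # Earth masses -> Jupiter masses
    return 28.4 * m_jup * sini * P_years ** (-1.0/3.0) * M_star_sun ** (-2.0/3.0)

print("Earth @ 1 AU around the Sun: K = %5.3f m/s" % K_rv(1.0, 1.0, 1.0))
# -> ~0.09 m/s, the amplitude quoted in the introduction.

# Detectability of a K = 0.51 m/s signal (Dumusque et al. 2012) in N points:
# sigma_K ~ sigma * sqrt(2/N) for white noise evenly sampling the phase.
K, sigma, N = 0.51, 5.0, 19000   # m/s, m/s, alpha Cen B spectra (see text)
sigma_K = sigma * math.sqrt(2.0 / N)
print("sigma_K = %5.3f m/s -> K/sigma_K ~ %4.1f" % (sigma_K, K / sigma_K))
# -> ~10 sigma, consistent with the claim that 5 m/s noise still suffices.
\end{verbatim}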
1403 | 1403.1271_arXiv.txt | The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching, which allows an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method to calculate and update the proposal covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than $95\%$ and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to better understand the workability of SCoPE, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data. | Precision measurements in cosmological experiments have improved dramatically in the past few decades. Several ground-based and space-based high-precision cosmological experiments have been undertaken and many other future experiments have been proposed. After the WMAP-9 and Planck data releases, an ample amount of data is now available in the hands of cosmologists. The goal of cosmologists is to extract the maximum amount of information from these data about the different cosmological parameters. Thus, techniques for the robust and efficient estimation of cosmological parameters are one of the most important tools needed in the cosmologist's arsenal. Markov Chain Monte-Carlo (MCMC) methods are widely used to sample the multi-dimensional space of the parameters and estimate the best-fit parameters from a cosmological dataset. One of the most widely used MCMC algorithms for sampling the posterior is the Metropolis-Hastings (MH) sampler \cite{Hestings1970,Lewis2002,Metropolis1953,Lewis2013}. However, MH samplers typically require several thousand model evaluations, and only a fraction of them gets accepted. Hence, it is challenging to apply the algorithm to problems where the model evaluation is computationally time consuming. Also, due to the intrinsic serial nature of the MH chains, it often takes a long time to map the posterior. Therefore, even if multi-processor parallel compute clusters are available, they are not utilized efficiently. In this paper, we present an efficient implementation of the MCMC algorithm, dubbed SCoPE (Slick Cosmological Parameter Estimator), where an individual chain can also be run in parallel on multiple processors. Another major drawback of the MH method is the choice of step-size. If the step-size is not chosen properly, the rejection rate increases and the progress of the individual chain becomes slower.
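For orientation, a minimal Metropolis-Hastings sampler is sketched below for a toy two-dimensional Gaussian posterior; this is the textbook algorithm that the modifications described in this paper build upon (including the step-size sensitivity just noted), not SCoPE itself.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    """Toy log-posterior: correlated 2D Gaussian standing in for -chi^2/2."""
    cov = np.array([[1.0, 0.8], [0.8, 2.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def metropolis_hastings(n_steps, step=0.5, ndim=2):
    chain = np.empty((n_steps, ndim))
    theta, lp = np.zeros(ndim), log_post(np.zeros(ndim))
    accepted = 0
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(ndim)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # MH acceptance rule
            theta, lp = prop, lp_prop
            accepted += 1
        chain[i] = theta          # a rejected step repeats the current sample
    return chain, accepted / n_steps

chain, acc = metropolis_hastings(20000)
print("acceptance rate: %.2f; posterior mean ~" % acc, chain[5000:].mean(axis=0))
\end{verbatim}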
The step size of the MCMC method is usually chosen by trial and error. However, for cases where the model evaluations are computationally time consuming, such as in cosmology, this type of trial-and-error method is computationally uneconomical. Therefore, several authors have proposed different statistical methods for choosing the optimum step-size. An adaptive proposal Monte Carlo method was proposed by Haario et al.\cite{Haario1999}, which uses the history of the chains to shape the next moves of the chains and improve the acceptance of the steps. The concept of inter-chain adaptation has been proposed in \cite{Craiu2009}. Several other theoretical proposals for choosing the optimal step size are also available in the literature \cite{Dunkley2005}. There are several codes available for cosmological parameter estimation. The publicly available CosmoMC \cite{Lewis2002,Lewis2013} and AnalizeThis \cite{Doran2004} codes are MCMC codes widely used for posterior mapping of the cosmological parameters. There are other codes, such as CosmoPSO \cite{Prasad2012} and CosmoHammer \cite{Akeret2012}, which can find the optimum cosmological parameters very fast; however, they fail to sample the posterior fairly. Hence, the statistical quantities (mean, variance, covariance etc.) derived from the sample cannot readily yield unbiased estimates of the population mean, variance etc. Also, CosmoMC uses the local MH algorithm \cite{Doran2004}, which fairly samples the posterior only asymptotically, i.e. practically only for a `sufficiently' long run. Hence, if the sample runs are not long enough, the posterior may not get sampled fairly. In this work we devise and implement methodological modifications to the MCMC technique that lead to a better acceptance rate. The algorithm proposed in this paper is a standard global MCMC algorithm combined with \begin{itemize} \item A delayed rejection method that allows us to increase the acceptance rate. \item Pre-fetching, incorporated to make the individual chains faster by computing likelihoods ahead of time. \item An adaptive inter-chain covariance update, added to allow the step-sizes to automatically adapt to the optimum value. \end{itemize} (An illustrative sketch of the first ingredient is given below.) As a demonstration, we use SCoPE to carry out parameter estimation in different cosmological models, including the `standard' 6-parameter $\Lambda$CDM model. There are many reasons to explore well beyond the simple 6-parameter $\Lambda$CDM model. A comprehensive comparison calls for the ability to undertake efficient estimation of cosmological parameters, owing both to the increased number of parameters and to the increased computational expense of each sample evaluation. For example, recent data from WMAP-9 and Planck confirm that the power at the low multipoles of the CMB angular power spectrum is lower than that predicted in the best-fit standard $\Lambda$CDM model and is unlikely to be caused by some observational artefact. This has motivated the study of a broader class of inflationary models that have an infra-red cutoff or lead to related desired features in the primordial power spectrum \cite{Sinha2006,Jain2009}. Another interesting cause of this power deficiency at the low multipoles can be the ISW effect in a modified expansion history of the universe \cite{Das2013a}. It is therefore important to check whether any scenario in the vast space of dark energy models provides a better fit to the observational data \cite{Das2013}.
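As the sketch referenced in the itemized list above, the snippet below illustrates delayed rejection on the same kind of toy posterior, following the symmetric-kernel form of Mira (2001): a rejected large step is followed by a smaller one whose acceptance probability is corrected to preserve detailed balance. The proposal scales are arbitrary placeholders, and pre-fetching (not shown) simply amounts to speculatively evaluating the likelihoods of the possible next states on idle cores.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    return -0.5 * float(theta @ theta)   # toy isotropic Gaussian posterior

def log_q(a, b, s):
    """Log density (up to a constant) of a Gaussian step from a to b, scale s."""
    d = b - a
    return -0.5 * float(d @ d) / s**2

def dr_step(x, lpx, s1=2.0, s2=0.5):
    """One delayed-rejection step with two symmetric Gaussian stages."""
    y1 = x + s1 * rng.standard_normal(x.size)
    lpy1 = log_post(y1)
    a1 = min(1.0, np.exp(lpy1 - lpx))
    if rng.random() < a1:
        return y1, lpy1
    y2 = x + s2 * rng.standard_normal(x.size)       # smaller second-stage step
    lpy2 = log_post(y2)
    a1_rev = min(1.0, np.exp(lpy1 - lpy2))          # alpha_1(y2 -> y1)
    num = lpy2 + log_q(y2, y1, s1) + np.log(max(1.0 - a1_rev, 1e-300))
    den = lpx + log_q(x, y1, s1) + np.log(max(1.0 - a1, 1e-300))
    if np.log(rng.random()) < num - den:            # corrected DR acceptance
        return y2, lpy2
    return x, lpx

x, lp, kept = np.zeros(2), log_post(np.zeros(2)), 0
for _ in range(20000):
    x_new, lp = dr_step(x, lp)
    kept += not np.array_equal(x_new, x)            # count stage-1 or stage-2 moves
    x = x_new
print("fraction of moves accepted (stage 1 or 2): %.2f" % (kept / 20000.0))
\end{verbatim}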
In this paper, we analyze, as an illustration, two standard dark energy models: the first is the constant equation-of-state dark energy model \cite{Lewis2002,Hannestad2005,Bean2004} with a constant sound speed; the second is the CPL dark energy parametrization proposed in \cite{Chevallier2001,Linder2003}, with a linearly varying equation of state. Our analysis shows that both dark energy models provide marginally better fits to the data than the standard $\Lambda$CDM model. Another important subject in cosmology is the primordial Helium abundance, denoted by $Y_{He}$. A number of researchers have attempted to pin down the Helium fraction using different data sets. Though the primordial Helium abundance does not directly affect the perturbation spectrum, it affects the recombination and re-ionization processes and consequently changes the CMB power spectrum. The theoretical prediction of the primordial Helium abundance from standard Big Bang nucleosynthesis (BBN) is $Y_{He}\thickapprox0.24$ \cite{Ade2013,Trotta2004}. We have carried out the parameter estimation for $Y_{He}$ together with the other standard $\Lambda$CDM cosmological parameters to assess the constraint from current CMB data and check whether the allowed range is consistent with the BBN prediction. Our analysis shows that the data from WMAP-9 and Planck can put a fairly tight constraint on the cosmological Helium fraction, which matches the theoretical BBN value. SCoPE is a C code written completely independently from scratch. The mixing of multiple chains in SCoPE is better and convergence is achieved faster. The paper is organized as follows. The second section provides a brief overview of the standard Metropolis-Hastings algorithm. In the third section, we discuss the modifications to the MCMC algorithm incorporated in SCoPE to make it more efficient and economical. In the fourth section of the paper, we provide illustrative results from our analysis of different cosmological models with WMAP-9 and Planck data. Our work also provides a completely independent parameter estimation analysis of the data using an independent MCMC code. The final section is devoted to conclusions and discussions. | We have developed a new MCMC code named SCoPE that can sample the posterior probability distribution more efficiently and economically than conventional MCMC codes. In our code, the individual chains can run in parallel, and a rejected sample can be used to locally modify the proposal distribution without violating the Markovian property. The latter increases the acceptance probability of the samples in the chains. The prefetching algorithm allows us to increase the acceptance probability as much as required, provided the requisite number of cores is available in the computer. Apart from these, due to the introduction of the inter-chain covariance update, the code can start without specifying any input covariance matrix. The mixing of the chains is also faster in SCoPE. The workability of the code is demonstrated by analyzing different cosmological models. A 19-dimensional parameter estimation using SCoPE shows that the method can be used to estimate high-dimensional cosmological parameters extremely efficiently. | 14 | 3 | 1403.1271
1403 | 1403.6667_arXiv.txt | A compact cryogenic calibration target is presented that has a peak diffuse reflectance, $R \le 0.003$, from $800-4,800\,{\rm cm}^{-1}$ $(12-2\,\mu$m). Upon expanding the spectral range under consideration to $400-10,000\,{\rm cm}^{-1}$ $(25-1\,\mu$m) the observed performance gracefully degrades to $R \le 0.02$ at the band edges. In the implementation described, a high-thermal-conductivity metallic substrate is textured with a pyramidal tiling and subsequently coated with a thin lossy dielectric coating that enables high absorption and thermal uniformity across the target. The resulting target assembly is lightweight, has a low geometric profile, and has survived repeated thermal cycling from room temperature to $\sim4\,$K. Basic design considerations, governing equations, and test data for realizing the structure described are provided. The optical properties of selected absorptive materials -- Acktar Fractal Black, Aeroglaze Z306, and Stycast 2850 FT epoxy loaded with stainless steel powder -- are characterized and presented. | \section{Introduction} Absorptive targets and light traps find widespread use in flux calibration, termination of residual reflections in optical systems, and establishing the zero point in reflectance spectrometry.~\cite{Palchetti2008} Approximating a near-ideal absorber in a finite volume presents a challenge for such applications -- the absence of reflected light, or how ``black'' an object appears, is not a unique material property or treatable as a boundary condition within the framework of electromagnetic theory~\cite{Sommerfeld1950} -- but an object's reflectance and absorptance are intimately tied to the surface's material properties and the underlying geometry of the structure.~\cite{Hultst1957} Thus, to implement a low-reflectance calibrator design, the absorber material's dielectric and magnetic properties need to be either known or experimentally determined in order to suitably tailor the target's geometry. At microwave through sub-millimeter wavelengths, precision absorptive standards find use in radiometric flux calibration and low-reflectance targets for remote sensing,~\cite{Gaidis1999} astrophysics,~\cite{Gush1992,Mather1999} and other metrology applications.~\cite{Janz1987} Elements of the design techniques used in these microwave structures can be applied to realize compact, light-weight, and low-reflectance absorbers while maintaining the overall manufacturability for far-infrared use at cryogenic temperatures. In this work, a calibration target for cryogenic applications is explored and its performance is described in detail. The design, manufacture, and optical characterization of the target structure and selected coatings of potential interest are summarized in Sections~\ref{sec:design}, \ref{sec:fabrication}, and \ref{sec:characterization}, respectively. | \section{Discussion} From the observed diffuse reflectance of the witness samples in Figure~\ref{Fig4_Reflectance_Diffuse} one notes that both Acktar FB coatings have a slightly lower reflection than Aeroglaze Z306 paint in the spectral range of interest. The calibrator constructed with $56\,\mu$m of Acktar FB was noted to have the best overall optical performance of the structures tested, with an observed diffuse reflectance of $R<0.003$ between $800-4,800\,{\rm cm}^{-1}$ ($12-2\,\mu$m). The minimum in the diffuse reflectance, $R\simeq 0.002$, occurs at $2000\,{\rm cm}^{-1}$ ($5\,\mu$m).
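As an aside, the benefit of the pyramidal texturing can be appreciated with a toy geometric-optics estimate of our own (it is not part of the paper's analysis): if the grooves force each incoming ray to undergo $N$ specular bounces, and each bounce reflects the flat-coating fraction $R_{\rm flat}$, the net reflectance is roughly $R_{\rm flat}^{N}$.
\begin{verbatim}
import math

# Toy estimate (illustrative assumptions, not the paper's analysis):
# N specular bounces off a coating of flat reflectance R_flat give a
# net reflectance of about R_flat**N.
R_target = 0.003   # measured target reflectance quoted above
R_flat = 0.15      # assumed per-bounce reflectance of the flat coating
N = math.log(R_target) / math.log(R_flat)
print(round(N, 1))  # -> ~3 bounces would explain the measured level
\end{verbatim}
A handful of bounces is plausible for steep pyramidal facets, and the assumed $R_{\rm flat}\simeq0.15$ is consistent with the factor-of-$\sim$50 reduction over a flat witness coating noted next.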
This minimum represents a reduction in the diffuse reflectance by a factor of $\sim 50$ over a simple flat with an identical coating. The low-frequency response of the calibrator samples is set by the pyramidal tiling's pitch and the finite coating thickness. At greater than $\sim10^\circ$ from normal incidence, the grey lines seen in Figure~\ref{Fig2_Disk_Photo_SEM}, which originate from specular reflectance off the bottom of the calibrator's slots, are no longer discernible. This artifact can be further mitigated by gradually turning the slot and removing its endpoint from view, as indicated in Figure~\ref{Fig1_Disk_crossection}. Incorporation of this fabrication detail is recommended to improve the overall emittance. The calibrator is broadband, mechanically robust, and has survived multiple cool-downs to $\sim4\,$K. Due to the structure's low reflectance it has been adopted as the preferred absorptive light trap for general laboratory use. | 14 | 3 | 1403.6667
1403 | 1403.4986_arXiv.txt | Obtaining lensing time delay measurements requires long-term monitoring campaigns with a high enough resolution ($<1^{\prime\prime}$) to separate the multiple images. In the radio, a limited number of high-resolution interferometer arrays make these observations difficult to schedule. To overcome this problem, we propose a technique for measuring gravitational time delays which relies on monitoring the total flux density with low-resolution but high-sensitivity radio telescopes to follow the variation of the brighter image. This is then used to trigger high-resolution observations in optimal numbers which then reveal the variation in the fainter image. We present simulations to assess the efficiency of this method together with a pilot project observing radio lens systems with the Westerbork Synthesis Radio Telescope (WSRT) to trigger Very Large Array (VLA) observations. This new method is promising for measuring time delays because it uses relatively small amounts of time on high-resolution telescopes. This will be important in the future, when instruments that have high sensitivity but limited resolution, combined with an optimal usage of follow-up high-resolution observations from appropriate radio telescopes, may be used for gravitational lensing time delay measurements by means of this new method. | The strong gravitational lensing effect occurs when light from a background source (a galaxy or a quasar) is deflected by the gravitational field of an intervening mass, such as a galaxy or cluster of galaxies, forming multiple images of the background source \citep{1992book1,2006book2}. This phenomenon is widely used in astrophysics and cosmology as a tool because it provides information about mass distributions in the lensing object \citep[e.g.][]{1991new36,2002new37,2007new42} as well as magnified views of the sources \citep[e.g.][]{2007new9,2011ref00}. \cite{1964ref3} demonstrated that lensing time delays can be used to measure cosmological distances, in particular the Hubble constant $H_{0}$. If the background source is variable, this can be done by measuring time delays between variations of the images, thereby deducing an absolute distance scale provided the redshifts of the source and lens, and the mass model of the lens potential, can be determined. The time delay in a lens system scales with the size of the Universe and inversely with $H_{0}$; in a given system, it also depends on other cosmological parameters such as the matter density $\Omega_{m}$ and dark energy density $\Omega_{\Lambda}$, although this dependence is relatively weak. Consequently, large-scale time delay studies in the future may allow these parameters to be determined as well \citep{2009new29,2010time13,2013ref77}. It is worthwhile to note that these parameters affect the $H_0$ determination at a relatively low level, and in principle gravitational lensing is therefore a useful one-step method for $H_0$ determination on cosmological scales. A number of groups are currently carrying out monitoring campaigns to determine time delays for lenses in the optical \citep[e.g.][]{2005new22,2006time34,2007new23,2008new27,2008new24,2011new25,2013ref76,2013ref79}. Time delays measured by these projects generally suggest $63<H_{0}<82$ km s$^{-1}$Mpc$^{-1}$. See e.g. \cite{2007ref7} and \cite{2010ref64} for more general reviews of measurements of the Hubble constant.
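The time-delay extraction used both for the simulations and for the pilot data below rests on minimising a dispersion statistic over trial delays (the Pelt dispersion statistic introduced in the next section). A schematic Python version may help; the triangular pair weights, the 30-day decorrelation length, and the crude mean-magnitude alignment are illustrative placeholders rather than the actual statistic of \cite{1996method9}:
\begin{verbatim}
import numpy as np

def pelt_dispersion(tA, mA, tB, mB, tau, delta=30.0):
    # Dispersion of the combined light curve after shifting image B by
    # the trial delay tau (days); pairs closer in time than delta
    # contribute, with simple triangular weights standing in for the
    # decorrelation-length weights of the real statistic.
    tB_shift = np.asarray(tB, float) - tau
    # Crude magnitude alignment; real analyses fit the image flux ratio.
    dB = np.asarray(mB, float) - (np.mean(mB) - np.mean(mA))
    d2 = w = 0.0
    for ti, mi in zip(tA, mA):
        sep = np.abs(tB_shift - ti)
        near = sep < delta
        if near.any():
            wts = 1.0 - sep[near] / delta
            d2 += np.sum(wts * (dB[near] - mi) ** 2)
            w += np.sum(wts)
    return d2 / (2.0 * w) if w > 0 else np.inf

# The delay estimate is the minimum of the dispersion spectrum:
# taus = np.arange(0.0, 200.0, 1.0)
# best = taus[np.argmin([pelt_dispersion(tA, mA, tB, mB, t)
#                        for t in taus])]
\end{verbatim}
Minimising such a statistic over trial delays, applied to artificial light curves, is also how we quantify how many high-resolution epochs the method needs.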
A difficulty with obtaining lensing time delay measurements is that it requires monitoring campaigns of months to years with a high enough resolution ($<1^{\prime\prime}$) to separate the multiple images. The four-image lens system B1608+656 \citep{1995ref58,1995ref59}, for instance, required observations for multiple seasons with the VLA. After almost 3 years' monitoring of B1608+656, the accuracy of the time delays improved by factors of 2-3 due to an increase of the flux density of the background source by 25$\%$ \citep{1999time3,2002time4}. To minimise the problems mentioned above, we propose a new method for gravitational lens time delay measurements. In asymmetric double image and long-axis quadruple image lens systems we can take advantage of the fact that the brighter image(s) varies first and dominates the total flux. This method builds on a suggestion by \cite{1996method1}, who proposed using low-resolution observations only. Low-resolution but high-sensitivity observations are used, which are sufficient to recognise the variation of the brighter image. Afterwards, observations with a high-resolution interferometer array are triggered to see the variation of the fainter images. In order to assess the efficiency of our technique we performed cross-correlation simulations using the Pelt dispersion statistic \citep{1996method9} and artificial light curves. We also used the Pelt dispersion statistic to evaluate the results of our pilot project. This paper is organised as follows. A description of our proposed technique, together with results from simulations performed to assess its efficiency, is presented in Section 2. As a pilot project, a flux monitoring campaign was carried out with the WSRT at 5 GHz, including 39 epochs of observations. VLA observations at 5 GHz giving 1$^{\prime\prime}$ resolution were triggered to resolve the images of the system B1030+074, which showed a possible variability feature during the flux monitoring. These results are shown in Sections 3 and 4. Finally, in Section 5 we discuss this technique and the results. \begin{table*} \begin{tabular}{cccccccc} \hline \hline Object&Type&Separation&Flux&Flux&Likely&Phase&References\\ &&(arc-sec)&Brighter&Fainter&delay&Calibrators&\\ &&&Image&Image&&\\ &&&(mJy)&(mJy)&(days)&\\ \hline \hline CLASS B0445+123&D&1.2&25&4&30&3C138&\cite{2003ref1}\\ CLASS B0631+519&D&1.2&34&5&15&3C147&\cite{2005ref2}\\ CLASS B0850+054&D&0.7&55&9&18&J0907+037&\cite{2003ref4}\\ CLASS B0739+366&D&0.6&27&5&10&J0736+331&\cite{2001ref3}\\ JVAS B1030+074&D&1.6&200&13&110&J1015+089&\cite{1998ref5}\\ CLASS B1152+199&D&1.6&50&18&30&J1142+185&\cite{1999ref6}\\ JVAS B1422+231&Q&1.2&500&5&25&J1429+218&\cite{1992ref7}\\ CLASS B2319+051&D&1.4&56&11&25&J2398+034&\cite{2001ref8}\\ \hline \end{tabular} \caption[Features of the lens systems.]{The table shows the features of the target lenses. The lenses are selected among double or long-axis quadruple CLASS lenses with the highest flux ratios. D and Q refer to $\textit{Double lenses (2-image lenses)}$ and $\textit{Quadruple lenses (4-image lenses)}$, respectively. The separations between the images of the sources are given in column 3. Column 4 gives the flux density of the component which varies first. Column 5 gives the flux density of the delayed component and column 6 the time delay if $H_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$ (assuming an isothermal mass profile for the lens galaxy).
The calibrator sources used during the total flux monitoring can be seen in column 7.} \label{lens} \end{table*} | In this work we have proposed an alternative method for gravitational lens time delay measurements. This technique does not rely only on high-resolution observations, which are typically required for lensing time delay measurements. It primarily uses low-resolution observations, and this enables us to utilise high-resolution observations at an optimum level. The efficiency of this technique, defined as the number of high-resolution observations that it requires, was evaluated by performing cross-correlation simulations using the Pelt dispersion statistic. Our results show that, for typical lightcurves, the true time delay can be recovered with 5-8 high-resolution observations, an order of magnitude fewer than required in traditional approaches. As a pilot project, we used the WSRT to perform total flux monitoring for 8 radio lens systems and triggered VLA observations for the one object, B1030+074, that showed variability during the total flux monitoring. For this object, the expected trend of decreasing flux density with time was not seen convincingly in the fainter component's light curve. Analysis of the possible time delay concluded that a wide range of time delays are consistent with the available data. Despite the lack of a clear result on an initial trial, this new method is potentially useful because it predominantly uses time on low-resolution telescopes. This is important because new, highly sensitive but low-resolution instruments are under construction, such as MeerKAT (an RMS noise level of $\sim$ 7$\mu$Jy/beam in 24 hours with 500 MHz) and ASKAP (an RMS noise level of $\sim$ 37$\mu$Jy/beam in an hour with 300 MHz). Since these arrays are not linear, confusion due to neighbouring sources will not be a big problem. Such instruments, together with a modest amount of high-resolution observational follow-up, may in the future be useful for gravitational lensing time delay measurements by means of this new method. | 14 | 3 | 1403.4986
1403 | 1403.5272_arXiv.txt | {{The opacity due to grains in the envelope of a protoplanet $\kappa_{\rm gr}$ regulates the accretion rate of gas during formation, meaning that the final bulk composition of planets with a primordial H/He envelope is a function of it. Observationally, for extrasolar planets with known mass and radius it is possible to estimate the bulk composition via internal structure models.} } {{We want to study the global effects of $\kappa_{\rm gr}$ {as a poorly known, but important quantity} on synthetic planetary populations.}} {{We first determine the reduction factor of the ISM grain opacity $\fopa$ that leads to a gas accretion timescale consistent with grain evolution models {for specific cases}. In the second part we compare the mass-radius relationship of low-mass planets and the heavy element content of giant planets for different values of the reduction factor with observational constraints.}} {{For $\fopa$=1 (full ISM opacity) the synthetic super-Earth and Neptunian planets have too small radii (i.e., too low envelope masses) compared to observations, {because at such high opacity, they can not efficiently accrete H/He during the formation phase.} At $\fopa$=0.003, the value calibrated with the grain evolution models, the synthetic and actual planets occupy a similar mass-radius domain. Another observable consequence is the metal enrichment of giant planets relative to the host star, $\zp/\zstar$. We find that the mean enrichment of giant planets as a function of mass $M$ can be approximated as $\zp/\zstar = \beta(M/\mj)^{\alpha}$ both for synthetic and actual planets. The decrease of $\zp/\zstar$ with mass follows $\alpha$$\approx$-0.7 independent of $\fopa$ in synthetic populations, in agreement with the value derived from observations (-0.71$\pm$0.10). The absolute enrichment level $\beta$ decreases from 8.5 at $\fopa$=1 to 3.5 at $\fopa$=0. At $\fopa$=0.003, one finds $\beta$=7.2, which is similar to the result derived from observations (6.3$\pm$1.0).}} {{We find observational {hints} that the opacity in protoplanetary atmospheres is much smaller than in the ISM {even if the specific value of $\kappa_{\rm gr}$ can not be constrained in this first study as $\kappa_{\rm gr}$ is found by scaling the ISM opacity}. Our results for the enrichment of giant planets are also important {to distinguish between} core accretion and gravitational instability. In the simplest picture of core accretion, where first a critical core forms, and afterwards only gas is added, $\alpha$$\approx$-1. If a core accretes all planetesimals inside the feeding zone also during runaway gas accretion, $\alpha$$\approx$-2/3. The observational result (-0.71$\pm$0.10) lies between these two values, pointing to core accretion as the likely formation mechanism.}} | \label{sect:grainopa} It is well known (e.g., Ikoma et al. \cite{ikomanakazawa2000}) that a reduction of the opacity $\kappa$ in the gaseous envelope of a forming giant planet leads to a reduction of the formation timescale. {In simulations of concurrent core and envelope accretion} (e.g., Pollack et al. \cite{pollackhubickyj1996}, hereafter P96) the reason is that a lower opacity during the so-called phase II (where gas must be accreted in order to allow further core growth) leads to a more efficient transport of released potential energy out of the envelope. {Phase II is an intermediate phase that occurs for in situ formation between the moment the core has reached the isolation mass, and the moment when rapid gas accretion starts.
This happens when the core has reached the crossover mass, at which envelope and core mass are equal.} For in situ calculations the overall formation timescale of the planet is dominated by the duration $\t2$ of phase II, which means that at low $\kappa$ the overall formation timescale is reduced, too. For example, P96 find that an (arbitrary) reduction of the grain opacity $\kappa_{\rm gr}$ to 2\% of the interstellar medium (ISM) value leads to $\t2=2.2$ Myrs for Jupiter formation, while with the full ISM value, $\t2=6.97$ Myrs, longer than the mean disk lifetime. Obviously, one is therefore interested in knowing an {estimate} for the effective grain opacity in the envelope, instead of having to use arbitrary scalings like the 2\%. The grain opacity ({with opacity we mean in this work always the Rosseland mean}) can be calculated from the microphysics of grain growth via coagulation, grain settling, and evaporation at high temperatures. Podolak (\cite{podolak2003}) presented such a numerical model, finding that grain growth leads to opacities up to three orders of magnitude smaller than in the ISM because grains grow efficiently. The limitation of this work was a lack of self-consistency, as the envelope structure in which the grain growth was studied was calculated with pre-specified different opacities. Only recently, Movshovitz et al. (\cite{mbpl2010}; hereafter MBPL10) presented the first self-consistently coupled calculations of grain evolution and giant planet growth. In their work, the envelope structure is used to {calculate} at each radius the evolution of the grains at each time step. From the grain size distribution the Rosseland mean opacity is calculated, which is then {fed} back into the envelope calculation. The main result of these calculations is that the grain opacity is much reduced. For Jupiter formation, this leads to a duration of phase II of only 0.52 Myrs. This corresponds to a reduction by a factor 13.4 relative to the P96 full opacity case, and still a factor 4.2 relative to the P96 ``low opacity case'' with a 2\% ISM opacity. As forming a giant planet within the typical lifetime is a timing issue (at least if migration is not taken into account, cf. Alibert et al. \cite{alibertmordasini2004}), these factors matter. The complex {(and computationally heavy)} calculations of grain evolution made by MBPL10 are {beyond} the scope of this first paper. Instead, we here follow a practical approach, and determine in {the first part of} this work the reduction factor $\fopa$ by which interstellar opacities must be reduced in order to obtain the same duration of phase II as found by MBPL10. As will be shown, a grain opacity of only about 0.3\% of the ISM value leads to the best reproduction of the MBPL10 results. {Using one uniform reduction factor of the ISM opacity can of course not reproduce the complex structure of the opacity {which depends on planetary properties like the core or envelope mass} as found in grain evolution models (Movshovitz \& Podolak \cite{movshovitzpodolak2008}). {The simple ISM scaling approach therefore has some important limitations (see the discussion in Sect. \ref{sect:generality})}. Our interest in still deriving a global, uniform $\fopa$ is to have an {intermediate value for the opacity between the two extremes (full ISM opacity vs. grain free)} at a low computational cost for population synthesis simulations.} {The goal of this work is rather to study the global effects of $\kappa$ on planetary populations.
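Schematically, this calibration amounts to inverting the monotonic relation between $\t2$ and $\fopa$ obtained from a grid of formation-model runs. In the sketch below the grid values are entirely made up for illustration; the real numbers come from the calculations presented later:
\begin{verbatim}
import numpy as np

# Hypothetical grid of formation-model runs: duration of phase II (Myr)
# for several trial reduction factors f_opa.  These numbers are
# placeholders, NOT actual simulation output.
f_grid  = np.array([1e-4, 1e-3, 3e-3, 1e-2, 1e-1, 1.0])
t2_grid = np.array([0.30, 0.40, 0.52, 0.80, 1.90, 7.00])

t2_target = 0.52   # Myr, duration of phase II found by MBPL10

# t2 grows monotonically (roughly linearly over a wide range) with
# f_opa, so the calibrated factor follows from interpolation:
f_cal = np.exp(np.interp(t2_target, t2_grid, np.log(f_grid)))
print(f_cal)       # -> 3e-3 with this fake grid, i.e. ~0.3% of ISM
\end{verbatim}
With $\fopa$ fixed in this way, the global effects of the opacity on planetary populations can then be studied.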
For this, we compare in the second part of the paper important statistical properties of synthetic planets found with different values of $\fopa$ with observational constraints from extrasolar planets.} {The opacity controls the rate at which a core of given mass accretes gas during the formation phase; therefore, different $\fopa$ lead to different bulk compositions that manifest in the mass-radius relationship that we can observe after a few Gyrs of evolution. Population synthesis calculations that are built on self-consistently coupled planet formation (Alibert et al. \cite{alibertmordasini2005}) and evolution models (Mordasini et al. \cite{mordasinialibert2011}) can predict the synthetic mass-radius relationship. While there are other observational constraints on $\fopa$ that can be derived from the mass distribution of exoplanets alone (in particular the frequency of giant planets), transiting exoplanets are of special interest in this study. What is needed for the comparison are transiting extrasolar planets with a well-defined mass and radius, and a rather large semimajor axis $\gtrsim0.1$ AU to minimize the impact of stellar irradiation. The extrasolar planets of this class must also have a mass-radius relationship that implies that they contain significant amounts of primordial H/He, since this is the envelope type considered here. It also means that we avoid the part of the mass-radius relationship that is most degenerate (e.g., Valencia et al. \cite{valenciaikoma2010}, Rogers \& Seager \cite{rogersseager2010}). Instead, for such planets it is possible to infer, at least in a rough way, the bulk composition (global heavy element content) as has been demonstrated by, e.g., Guillot et al. (\cite{guillotsantos2006}), Burrows et al. (\cite{burrowshubeny2007}), Guillot (\cite{guillot2008}), or Miller \& Fortney (\cite{millerfortney2011}). The results from the latter work are used in the second part of this study. Our simulation therefore also {aims} at establishing a bridge between physical processes like grain evolution that govern gas accretion during formation, and observable quantities. This information can be used in the ideal case to feed back into the microphysical models for the grains (or into specialized models in general). The structure of the paper is as follows: in Section \ref{updatedopacity} we show the modifications of our giant planet formation model to take into account reduced grain opacities. The formation model itself was first described in Alibert et al. (\cite{alibertmordasini2005}), while the version used here is described in Mordasini et al. (\cite{mordasinialibert2011}). In Section \ref{sect:determinationfopa}, we determine $\fopa$ by comparison with MBPL10. {We then turn to the population synthesis calculations (Sect. \ref{sect:obsconstr}) and use them to study the envelope mass as a function of core mass (Sect. \ref{sect:menveofmcore}). The associated mass-radius relationship, mainly of low-mass planets, is addressed in Sect. \ref{sect:MRR}. In Section \ref{sect:zpzstar} we compare the enrichment of giant planets relative to the host star for different $\fopa$ with the results derived by Miller \& Fortney (\cite{millerfortney2011}) for actual extrasolar planets.} Finally, in Sect. \ref{sect:conclusion} we give a summary and present our conclusions. {In Appendix \ref{sect:semianalyticalsolution} we derive a semi-analytical model for the core and envelope mass in phase II and determine its parameters in Appendix \ref{sect:paramstkh}.
} {In the second paper of this series (Mordasini \cite{mordasini2014}), hereafter Paper II, we present a simple analytical model for the opacity due to grains in protoplanetary atmospheres. It calculates the grain opacity based on the comparison of the timescales of the governing microphysical processes like grain settling, coagulation and evaporation (Rossow \cite{rossow1978}). It therefore takes into account that the grain opacity is a dynamically changing quantity that depends on planetary properties like core and envelope mass, or the accretion rates. In a subsequent work, we will couple this model with our upgraded population synthesis code that can now simulate the concurrent formation of many embryos in one disk during the formation phase (Alibert et al. \cite{alibertcarron2013}) and includes atmospheric escape during the long term evolution (Sheng et al. \cite{shengmordasini2014}).} | \label{sect:conclusion} {In this series of papers we investigate the impact of the opacity due to grains in protoplanetary atmospheres on the predicted bulk composition (H/He fraction) of extrasolar planets. In this first paper, we study the impacts found by scaling the ISM opacity. In Paper II, we present an analytical model to calculate $\kappa_{\rm gr}$. In future work, we will couple the analytical model to our updated population synthesis calculations.} The results of this first paper are summarized as follows: \begin{itemize} \item In the context of the core accretion paradigm, we studied the duration $\t2$ of phase II for in situ formation of Jupiter as a function of the reduction factor of grain opacity $\fopa$ relative to ISM grain opacity. We found that over an important range of $\fopa$, there is a linear relationship between $\fopa$ and $\t2$ as expected from theoretical considerations (Sect. \ref{durationphase2fopa}). \item We compared the duration $\t2$ as a function of $\fopa$ with the corresponding duration found by Movshovitz et al. (\cite{mbpl2010}), who conducted simulations of combined giant planet formation and grain evolution. We found that the ISM grain opacity must on average be reduced by a factor $\fopa$$\approx$0.003 to reproduce their results (Sect. \ref{sect:finalresfopa}). As a caveat one must keep in mind that one uniform reduction factor cannot reproduce the complex radial opacity structure found in grain evolution calculations, and also that the calibration was only made in a small part of the parameter space of possible core masses, luminosities, and outer boundary conditions (Sect. \ref{sect:generality}). \item Without migration and planetesimal drift, there is a unique relation between isolation, core, and total mass during phase II if the planet accretes all planetesimals in the feeding zone. The reason is that the core mass $\mz$ is given by the size of the feeding zone that depends via the Hill sphere on the total mass $M$. This means that for a given core and isolation mass, the envelope mass $\mxy$ during this phase is independent of $\fopa$. The opacity merely determines how quickly a planet evolves through the different $\mz-\mxy$ states (Sect. \ref{sect:relationmzmxyphaseII}). \item We studied the global consequences of a scaled ISM opacity on planets forming via core accretion with population synthesis calculations (Sect. \ref{populationsynthesis}). {In reality, the grain opacity in protoplanetary atmospheres can not be obtained as the ISM opacity simply reduced by one general $\fopa$.
This is because the grain dynamics depend on a planet's properties (Movshovitz \& Podolak \cite{movshovitzpodolak2008}; Paper II). Therefore, no constraints for the value of $\kappa_{\rm gr}$ for a specific planet can be derived from our calculations. Due to this limitation, we} considered besides the nominal value $\fopa$=0.003 also populations with full ISM grain opacity ($\fopa$=1) and grain-free opacity ($\fopa$=0) {as limiting cases. This allows us to see if the opacity leads at all to potentially observable imprints, which is the main goal of the paper}. \item Grain opacity leaves clear imprints in the planetary bulk composition (H/He envelope mass as a function of core mass). For sub-critical cores, the envelope mass for a given core mass increases with decreasing $\kappa_{\rm gr}$. In the synthesis with $\fopa=0.003$, the envelope masses of sub-critical cores ($\mz\lesssim10\mearth$) are about four times larger than for full ISM opacity. There is a large spread of at least one order of magnitude in $\mxy$ for a given $\mz$ (Sect. \ref{sect:menveofmcore}). \item The critical core mass for runaway gas accretion decreases with $\fopa$. The lowest core masses $M_{\rm Z,min}$ in giant planets (envelope mass $\mxy\geq 100 \mearth$) are 6, 16 and 29 $\mearth$ for $\fopa$=0, 0.003, and 1, respectively. These results are for populations where orbital migration is included. In a synthesis without orbital migration and $\fopa=0.003$, $M_{\rm Z,min}=7 \mearth$. This difference is due to the lower mean luminosity without migration and the dependency of the critical mass on both the luminosity and opacity (Sect. \ref{giantsandmcrit}). \item If planets continue to accrete planetesimals in the disk-limited gas accretion phase, {and if the planetesimal random velocities stay low}, then the maximal heavy element content in giant planets we find can be up to $\sim$400 $\mearth$. If, in contrast, no planetesimals are accreted in the disk-limited phase, then the maximal heavy element content is about 50 to 100 $\mearth$ depending on the semimajor axis (Sect. \ref{sect:maximalcoremass}). \item Grain opacity leaves a clear imprint in the mass-radius relationship of low-mass planets. At a low opacity, planets in the super-Earth and Neptunian mass domain have radii for a given mass that are much larger than for ISM opacity because of higher envelope mass fractions. At $M=10 \mearth$, for example, the maximal radii at 5 Gyrs in the syntheses with migration are 4, 5.5 and 7.5 $\rearth$ for $\fopa$=1, 0.003, and 0, respectively (Sect. \ref{sect:MRR}). \item We compared the envelope covered by actual and synthetic planets in the mass-radius plane assuming an age of 5 Gyr for the synthetic planets. One finds that for $\fopa=1$, the synthetic super-Earth and Neptunian planets have too small radii (i.e., too low envelope masses) to cover the same domain as the observations. At $\fopa=0.003$, the value calibrated with a grain evolution model, the synthetic and actual planets occupy similar loci in the mass-radius plane. This is a {hint} that the opacity in protoplanetary atmospheres is much smaller than in the ISM (Sect. \ref{sect:compmrrobs}). \item A second consequence of grain opacity that can be compared with observational data is the bulk composition of giant planets.
We compared the bulk composition of synthetic planets as expressed in the enrichment of a planet relative to its host star $\zp/\zstar$ ($\zp=\mz/M$, $\zstar=Z_{\odot} 10^{\rm [Fe/H]}$) with the results derived from observations (Miller \& Fortney \cite{millerfortney2011}). Miller \& Fortney (\cite{millerfortney2011}) derived, by internal structure modeling, the heavy element content of 14 transiting extrasolar planets receiving low irradiation fluxes (Sect. \ref{sect:enrichmenrobs}). \item We found that there is a clear imprint of opacity on the bulk composition of giant planets. The mean relative enrichment $\zp/\zstar$ is about 2.4 times higher for full ISM grain opacity compared to the grain-free case (Sect. \ref{zpzstarmass}). \item The mean relative enrichment of giant planets as a function of planet mass $M$ can be approximated as $\zp/\zstar =\beta (M/\mj)^{\alpha}$. The decrease of $\zp/\zstar$ with mass follows $\alpha\approx$-0.7 independently of $\fopa$ for all three populations with orbital migration. This slope is in good agreement with the value derived from observations ($-0.71\pm0.10$). In the simplest picture of core accretion, where first a critical core forms, and afterwards only gas is added, $\alpha\approx-1$, while in the simplest picture of gravitational instability $\alpha\approx0$. The $\alpha\approx$-0.7 found in the simulations indicates a weak positive correlation of $\mz$ with total mass. A similar exponent ($\alpha\approx$-2/3) is expected if giant planets efficiently accrete all solids in their feeding zone (Sect. \ref{zpzstarmass}). \item The absolute level of enrichment $\beta$ decreases significantly from 8.5 at $\fopa$=1 to 3.5 at $\fopa$=0. When we compare these values with the result derived from observations ($\beta=6.3\pm1.0$) we see that a full ISM grain opacity leads to overly enriched planets, while a grain-free opacity leads to an enrichment that is too low. At $\fopa=$0.003, $\beta=7.2$, which is similar to the observational result. The bulk composition of giant planets thus seems to {hint} that the opacity in protoplanetary envelopes is much smaller than in the ISM, but that there is possibly still a non-zero contribution from the grains (Sect. \ref{zpzstarmass}). This result is similar to the one found for the mass-radius relationship of low-mass planets. \item Giant planets with masses less than $\sim$10 $\mj$ formed by core accretion are enriched due to the accretion of solids, with the relative enrichment $\zp/\zstar$ following roughly $6 (M/\mj)^{-0.7}$ in the observational sample and similarly in the nominal synthetic population. The spread around this relation at a given mass is large, about one order of magnitude. For more massive giant planets the composition of the gas becomes important in determining the enrichment, and both enriched and depleted planets are possible. In the limit that the accreted gas is free of solids, planets more massive than $\sim$10-15 $\mj$ are depleted relative to the host star (Sect. \ref{sect:effectgascomp}). \item We derived a semi-analytical expression for the core and envelope mass as a function of time in phase II using a two-parameter expression for the gas accretion timescale $\tg$ (Appendix \ref{sect:semianalyticalsolution}). We determined the parameters of $\tg$ for different core masses and $\fopa$ by comparison with numerical results (Appendix \ref{sect:paramstkh}).
\end{itemize} We have found in this work that the ISM grain opacity must be reduced by a factor $\fopa$$\approx$0.003 to reproduce the formation timescales of Movshovitz et al. (\cite{mbpl2010}). This value is almost an order of magnitude lower than the previously considered ``low opacity case'' with 2\% ISM opacity (P96, Lissauer et al. \cite{lissauerhubickyj2009}). It is however clear that uniform reduction factors {can not capture the physical} effects of grain evolution that in reality lead to complex opacity profiles in the envelope (Movshovitz \& Podolak \cite{movshovitzpodolak2008}, {Paper II}) {which differ from the ones found by scaling the ISM opacity}. The interest in a calibrated $\fopa$ is therefore merely to have an {intermediate} case for the magnitude of $\kappa_{\rm gr}$ in simulations that study the global impact of opacity in a parametrized way {between the limiting $\fopa$=0 and 1 cases}. With this grain reduction factor, cores with a low mass can trigger runaway gas accretion during the typical lifetime of a protoplanetary nebula, the exact value depending on luminosity. As discussed by Movshovitz et al. (\cite{mbpl2010}) and Hori \& Ikoma (\cite{horiikoma2010}), the lower critical core masses found with more realistic opacities are an important result for the core accretion paradigm, as several studies (e.g., Fortier et al. \cite{fortierbenvenuto2007}; Ormel \& Kobayashi \cite{ormelkobayashi2012}) indicate that building up a 10 $\mearth$ core at 5.2 AU within the typical lifetime of a disk is a critical timing issue. These lower opacities contribute substantially to making this timing issue less stringent, even without taking into account additional mechanisms like migration or envelope pollution (Alibert et al. \cite{alibertmordasini2004}; Levison et al. \cite{levisonthommes2010}; Hori \& Ikoma \cite{horiikoma2011}). {Also, at such low $\fopa$, cores of just a few $\mearth$ can accrete quite significant H/He envelopes, making them potential progenitors of the recently detected low-mass, low-density planets.} The global consequences of grain opacity on planetary populations are strong and multiple. We have studied these imprints by running population synthesis calculations with a wide range of $\fopa$. We focused on two possibly observable consequences, namely the mass-radius relationship of low-mass, subcritical planets, and the bulk composition of giant planets. Additional imprints also exist in the planetary mass and radius distributions, and in particular in the frequency of giant planets. This frequency increases in the synthetic populations approximately by a factor three when reducing $\fopa$ from 1 to 0. The resulting comparison of synthetic and (statistical) observational results establishes a connection between the opacity in protoplanetary envelopes and observable quantities. {In this work, we cannot directly derive constraints on the value of $\kappa_{\rm gr}$ because we find it by scaling the ISM opacity instead of calculating it based on planet properties.} {However, in the future, a similar approach should allow us} to test microphysical models of grain evolution that are otherwise difficult to test observationally. The result of this first paper, that the observed mass-radius relationship can not be reproduced with a full ISM opacity, is interesting.
It could be an observational hint that the prediction of grain growth models {(Podolak \cite{podolak2003}, Movshovitz \& Podolak \cite{movshovitzpodolak2008}, MBPL10, and Paper II)}, namely that the opacity in protoplanetary atmospheres is much smaller than in the ISM, is correct. {However, it is clear that our results for the predicted bulk composition of extrasolar planets are preliminary.} Too complex and too numerous are the actual physical {mechanisms} occurring during formation and too simple is the model in comparison that uses, e.g., one uniform reduction factor, neglects the impact of the chemical composition of the gas, and only handles one embryo per disk. {For a better understanding, it is necessary to couple a physically motivated grain opacity model like the analytical model of Paper II with planet formation codes, and to take into account additional important factors like the pollution of the envelope (Hori \& Ikoma \cite{horiikoma2011}), the concurrent formation of many planets in one disk (Alibert et al. \cite{alibertcarron2013}), or the loss of the H/He envelope during evolution due to atmospheric escape (Sheng et al. \cite{shengmordasini2014}).} On the observational side, a significant extension of the sample of relatively cold, transiting planets with well-constrained mass and radius would be very important, e.g., with CHEOPS (Broeg et al. \cite{broegfortier2013}) {and PLATO (Rauer et al. \cite{rauercatala2013})}. Then it should become possible to understand the processes that determine the opacity in protoplanetary atmospheres much better than today. The associated statistical results for the enrichment of the planets will allow us to better distinguish different formation mechanisms like core accretion and gravitational instability, or to understand from a theoretical point of view the transition from solid to gas-dominated planets (Marcy et al. \cite{marcyetal2014}). | 14 | 3 | 1403.5272
1403 | 1403.0727_arXiv.txt | We present a deep multiwavelength imaging survey ($UGR$) in 3 different fields, Q0933, Q1623, and COSMOS, for a total area of $\sim$1500arcmin$^2$. The data were obtained with the Large Binocular Camera on the Large Binocular Telescope. } {To select our Lyman break galaxy (LBG) candidates, we adopted the well established and widely used color-selection criterion (U-G vs. G-R). One of the main advantages of our survey is that it has a wider dynamic color range for U-dropout selection than in previous studies. This allows us to fully exploit the depth of our R-band images, obtaining a robust sample with few interlopers. In addition, for 2 of our fields we have spectroscopic redshift information that is needed to better estimate the completeness of our sample and interloper fraction. } {Our limiting magnitudes reach 27.0(AB) in the R band (5$\sigma$) and 28.6(AB) in the U band (1$\sigma$). This dataset was used to derive LBG candidates at z$\approx$3. We obtained a catalog with a total of 12264 sources down to the 50\% completeness magnitude limit in the R band for each field. We find a surface density of $\sim$3 LBG candidates arcmin$^{-2}$ down to R=25.5, where completeness is $\ge$95\% for all 3 fields. This number is higher than in the original studies, but consistent with more recent samples.} {} | Lyman-break galaxies (LBGs) are star-forming galaxies that emit very little flux in the observed UV when they are at redshifts higher than z=2.5. This is because the stellar radiation with energy beyond the Lyman limit (912\AA) is absorbed by the surrounding neutral hydrogen and by the intervening neutral clouds between the galaxy and the observer. Thus the SEDs of these galaxies are characterized by a sharp drop at wavelengths shorter than the 912\AA~rest frame \citep{Mad95} and by a steep increase between the 912\AA~and the 1216\AA~rest frame. Such features have been used extensively during the past decades to create substantial samples of LBGs at high redshifts. More specifically, the filters $UGR$ have been used for selecting U dropouts that are candidate LBGs at z$\approx$3 \citep[e.g.,][]{Steidel96,Giava02,Steidel03,Capak04,Sawicki05,Noni09}. \citet{Steidel03} applied this method to 17 high Galactic-latitude fields and presented a sample of 2347 photometrically selected LBG candidates down to a magnitude limit of 25.5 in the R band, corresponding to $\sim$1500$\AA$ rest frame at z$\approx$3, in an area of $\sim$3200arcmin$^2$. After a spectroscopic follow-up \citep{Steid04}, the success rate for LBGs at redshift z$\sim$3 was on the order of 78\%. Thus the adopted color selection provides samples with low contamination that can be used for spectroscopic follow-up to do statistical analyses of the LBG population and to study the physical properties of LBGs, such as stellar masses and the UV slope. A statistically significant LBG sample, associated with deep U-band imaging, can also be used to derive stringent upper limits for the escape fraction of UV ionizing radiation from LBGs. In addition, such a dataset is suitable for studying clustering by applying a two-point correlation function analysis, as well as for deriving the fraction of AGNs embedded in such galaxies by combining it with X-ray observations. After the first effort by \citet{Steidel03}, similar surveys have been conducted that reach different magnitude limits.
\citet{Sawicki05} covered a relatively small area (169arcmin$^2$), but reached a deeper magnitude limit of R=27.0 (50\% of point sources detected). More recently, \citet{Raf09} presented a sample of LBGs at z$\sim$3, using both photometric redshifts and color selection. Their color-selection criterion uses a filter set that is slightly different (u-V vs. V-z) from the one established by \citet{Steidel03} (U-G vs. G-R), and their sample is complete up to V$\approx$27.0, which corresponds to R$\approx$26.5 for this type of source. \citet{Noni09} present deep imaging in the GOODS area (630arcmin$^2$), with 50\% completeness in LBG selection at R$\approx$26.0. \citet{ly11} used Subaru images, covering an area of 870arcmin$^2$, with 5$\sigma$ limiting magnitudes of R=27.3, but limited the search for LBG candidates at R=25.5. An extended survey has been presented by \citet{VDBurg10}, who used data from the Deep CFHT survey, which covers 4~sq. deg and reaches R=27.9 at 5$\sigma$, although their U-dropout number counts only seem to be complete up to R=26.0. The most recent and extended survey is the one conducted by \citet{Bian13}, which covers 9~sq. deg in the NOAO Bo$\ddot{o}$tes Field, although it is rather shallow, selecting LBG candidates down to R=25.0. Because of the variety of instruments and filters used to select these LBG candidates at z$\sim$3, the selection biases in the various samples are difficult to quantify and at times they lead to diverging results. For example, \citet{LeFev05} find that the number density of galaxies between z=1.4 and z=5 is 1.6 to 6.2 times higher than earlier estimates based mainly on the work of \citet{Steid04}. Such discrepancies in the number density also lead to discrepancies in the derived LFs \citep[e.g.,][]{Iwat07,Sawic06,Reddy08}. We used the Large Binocular Camera \citep[LBC,][]{gial08} at the Large Binocular Telescope (LBT) to obtain a multiwavelength dataset ($UGRIZ$) on three different fields to derive a new sample of LBG candidates at z$\sim$3 through photometric selection. One of the main advantages of our survey is that we have spectroscopic redshifts for two fields (Q0933 and Q1623) and accurate photometric redshifts for COSMOS. In this third field (COSMOS), spectroscopic redshifts are also available, but in a different redshift range than the one we are targeting in this study. These are useful, nonetheless, since we can use them to assess the interloper fraction of our selected candidates. Thus, this is one of the few surveys that combine deep data in a large area with spectroscopic data, giving us a direct way of assessing the completeness and contamination of our sample. The LBC is a wide-field binocular imager that gives us the opportunity to probe large areas with deep imaging, particularly in the U band, where it is extremely efficient. In fact, the total area covered by our survey is $\sim$1500arcmin$^2$. Moreover, LBC also includes a custom-made U-band filter, U$_{Special}$, that is particularly efficient and centered on bluer wavelengths ($\lambda_{central}$=355nm), making it more suitable for selecting LBG candidates compared to a standard U band. According to the standard color-selection criterion, established by \citet{Steidel03}, for an average color of G-R=0.5, LBG candidates should have U-R$\ge$2.1. This means that for selecting LBG candidates at R$\leq$26.5, we need a 1$\sigma$ magnitude limit in the U band of at least 28.6 in order to exploit the full dynamic color range.
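Once the photometry is in hand, cuts of this type are straightforward to apply. The sketch below is generic: the constant $c$ and the $G-R$ ceiling are placeholders for the survey-specific values, and the treatment of U-band non-detections is only indicated schematically.
\begin{verbatim}
import numpy as np

def u_dropout(U, G, R, c=1.0, gr_max=1.2):
    # (U-G) vs (G-R) cut of the kind established for z~3 LBGs; c and
    # gr_max are placeholder constants, not our adopted values.  For
    # U-band non-detections, U should be the 1-sigma lower limit, so
    # that U-G (and hence the selection) becomes a lower limit too.
    UG = np.asarray(U) - np.asarray(G)
    GR = np.asarray(G) - np.asarray(R)
    return (UG >= GR + c) & (GR <= gr_max)

# With G-R = 0.5 and c = 1.0 this requires U-G >= 1.5, i.e.
# U-R >= 2.0, close to the U-R >= 2.1 quoted above.
\end{verbatim}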
For fainter R-band magnitudes, the candidates could show up as upper limits because of incompleteness effects in the U band and not because of their intrinsic SED. The main purpose of this work is to present the full catalog of LBG candidates, selected in the three fields, down to a 50\% completeness magnitude limit (R=26.1-27.0). This catalog will serve as the database for future works that will estimate the UV slope and stellar mass of LBG candidates, especially in the COSMOS field where additional photometry is available. The presented U-band magnitudes can be used to improve photometric redshift estimates in the COSMOS field. The brightest part of this sample can be used for spectroscopic follow-up, and this new spectroscopic sample would help refine the color-selection criterion for LBGs further. In the following we focus on the number counts of galaxies in the $UGR$ bands and the counts of LBG candidates at z$\sim$3 in the R band, selected using the traditional color-color criteria, along with the slopes derived from the double power-law fit of the galaxy number counts. More precisely, in Section 2 we describe how the imaging data were obtained. In Section 3 we present the multiband photometry and the galaxy counts. In Section 4, we present the selection criteria for deriving the LBG candidates and the number counts. In Section 5 we discuss completeness and the effect of interlopers, while in Section 6 we summarize our results. Throughout the paper we adopt the AB magnitude system. | We presented a deep multiband imaging survey with the LBC, covering an area of $\sim$1500arcmin$^2$. We reobserved two fields used in Steidel's original survey \citep[Q0933 and Q1623,][]{Steidel03}, where we obtained deeper R- and U-band imaging. A similar dataset was also obtained for the COSMOS field, where there is available public multiband photometry as well as photometric and spectroscopic redshifts. We reached 50\% completeness at R magnitudes of 27.0, 26.1, and 26.5 for Q1623, Q0933, and COSMOS, respectively. The 1$\sigma$ magnitude limit in the U band is between 28.5 and 28.7 over the whole area which, compared to other surveys so far, is a good compromise between depth and total area. A significant advantage of our sample is that the U band is much deeper than in previous samples. For a limiting magnitude of R=27.0 (50\% completeness) in our deepest field and an average magnitude limit in the U band of 28.6 at 1$\sigma$ (2$\times$fwhm apertures), this is the only survey with a wide dynamic range in the color selection, allowing us to robustly select LBG candidates with minimum contamination. In comparison, the CFHT survey \citep{VDBurg10}, although reaching 27.9 in the R band (5$\sigma$ for point sources), is shallower in the U band than ours. At the bright end we are fairly consistent with the new survey presented by \citet{Bian13} in the NOAO Bo$\ddot{o}$tes Field, but the two surveys start diverging after R$>$24.0, since the latter one is two magnitudes shallower than our survey, although it covers a much larger area ($\sim$9~sq. deg).
Comparing our candidates with existing spectroscopy in the Steidel fields, where spectroscopic redshifts are available, we show that the deeper U-band dataset allows us to better separate confirmed LBGs at z$\approx$3 from lower redshift interlopers. Although we have less contamination by low-redshift sources, we can see in Fig. 8 that the slope of our LBG counts is actually steeper than in previous studies, suggesting that there are more LBGs at faint magnitudes. The slopes we derived are $\alpha$=1.04 at the bright end and $\beta$=0.13 at the faint end, with a break at m*=25.01 and $\log$N*=0.39. We find an average surface density of 3.15 LBG candidates per arcmin$^2$ down to R=25.5, which rises to 10.8 LBG candidates per arcmin$^2$ when we go as faint as R=27.0. This dataset will be the benchmark for a series of future analyses. We intend to obtain spectroscopic follow-up for our brightest candidates, to verify our contamination by interlopers. This extended spectroscopic sample, complemented with deep ULTRA-VISTA images in the COSMOS field, will be used for determining stellar masses, ages, and dust content of faint LBGs at z$\approx$3. It will also be possible to measure the UV slope of galaxies in the wavelength range from 1500\AA~to 3000\AA~(rest frame), using a method similar to the one we applied at z$\approx$4 \citep{Cast12}. In addition, based on this sample, we will update our measurement of the escape fraction of the Lyman continuum attributed to LBGs at this redshift, in an effort to understand their contribution to the reionization of the Universe. | 14 | 3 | 1403.0727
1403 | 1403.5334_arXiv.txt | By digitizing astronomical photographic plates one may extract the full information stored on them, something that could not be practically achieved with classical analogue methods. We are developing algorithms for variable-object searches using digitized photographic images and apply them to 30\,cm ($10^\circ \times 10^\circ$ field of view) plates obtained with the 40\,cm astrograph in the 1940--90s and digitized with a flatbed scanner. Having more than 100 such plates per field, we conduct a census of high-amplitude ($>0.3m$) variable stars changing their brightness in the range $13<m<17$ on timescales from hours to years in selected sky regions. This effort led to the discovery of $\sim 1000$ new variable stars. We estimate that $1.2 \pm 0.1$\,\% of all stars show easily-detectable light variations; $0.7 \pm 0.1$\,\% of the stars are eclipsing binaries ($64 \pm 4$\,\% of them are EA type, $22 \pm 2$\,\% are EW type and $14 \pm 2$\,\% are EB type); $0.3 \pm 0.1$\,\% of the stars are red variable giants and supergiants of M, SR and L types. \\ \\ \noindent \textbf{Keywords}: variable stars, photographic photometry | Historical sky photographs present a record of positions and brightness of astronomical objects. They are used to study the behaviour of objects as diverse as Solar system bodies \cite{2011MNRAS.415..701R,2013arXiv1310.7502K}, binary stars \cite{2011PZ.....31....1S,2012AJ....144...37Z}, and active galactic nuclei \cite{2010AJ....139.2425N,2013A&A...559A..20H} on timescales not accessible with CCD imaging data. A few authors used digitized photographic plates to identify previously unknown variable objects \cite{2001A&A...373...38B,2004A&A...428..925V,2008A&A...477...67H,2012ApJ...751...99T}. The Moscow collection contains about 60000 photographic plates (mostly direct sky images) dating back to 1895. The most important part of the collection, known as the ``A'' series, consists of 22300 plates taken in 1948--1996 with the 40\,cm astrograph \cite{2010ASPC..435..135S}. These are blue-sensitive 30\,cm by 30\,cm plates covering a $10^\circ \times 10^\circ$ field on the sky down to the limiting magnitude of $B\sim17.5$. The typical exposure time is 45\,min. The original aim of obtaining the ``A'' series plates was to study variable stars. We decided to extend this work using modern image analysis techniques. The first tests confirmed that it is possible to find variable objects using small sections of plates digitized with a flatbed scanner \cite{2006PZP.....6...18S,2006PZP.....6...34M,2007PZP.....7....3K,2007PZP.....7...24K} and we went ahead to process a series of full-sized $10^\circ \times 10^\circ$ plates \cite{2008AcA....58..279K,2010ARep...54.1000K}. Below we describe the current state of the project. For the original tests we used a pair of CREO/Kodak EverSmart Supreme~II scanners operating at 2540\,dpi resolution ($1.\!\!^{\prime\prime}2$/pix). While showing good photometric performance (typically $<0.1m$ accuracy of an individual measurement), the scanners suffered from problems common to many flatbed scanners, including poor out-of-the-box astrometric performance caused by irregular motion of the scanner drive (Fig.~\ref{fig:saw}) and stitches between image parts digitized during different passes of the scanning array over a photographic plate. It takes about 40~minutes to digitize half of a 30\,cm plate with the Supreme~II scanner. The time it takes to clean a plate and manually place it into a scanner is small compared to the scanning time.
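Independently of the scanner hardware, the variability census itself reduces to flagging stars whose light-curve scatter exceeds the photometric noise. A generic cut of the kind used in such searches can be sketched as follows (the $\chi^2$ threshold is a placeholder, and this is not the project's actual detection pipeline):
\begin{verbatim}
import numpy as np

def is_variable(mags, errs, chi2_min=3.0, amp_min=0.3):
    # Generic flag, not the project's actual algorithm: a star counts
    # as variable if its scatter exceeds the photometric noise
    # (reduced chi^2 above a placeholder threshold) AND it reaches
    # the >0.3 mag amplitude targeted by the census.
    mags = np.asarray(mags, float)
    errs = np.asarray(errs, float)
    mean = np.average(mags, weights=1.0 / errs**2)
    chi2_red = np.sum(((mags - mean) / errs) ** 2) / (mags.size - 1)
    return chi2_red > chi2_min and (mags.max() - mags.min()) > amp_min
\end{verbatim}
With more than 100 epochs per field and the $<0.1m$ single-measurement accuracy quoted above, a cut of this kind should comfortably recover the targeted amplitudes of a few tenths of a magnitude.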
The original Supreme~II scanners were recently replaced by the new Epson Expression~11000XL which provides a factor of two increase in scanning speed operating at 2400\,dpi resolution ($1.\!\!^{\prime\prime}4$/pix). The Supreme~II and Expression~11000XL scanners provide comparable results in terms of photometric and astrometric accuracy. Still, because the scanning process is so slow, we consider it to be more of a technology development tool and an opportunity to investigate a few individual fields rather than a practical way to digitize all the Moscow plate collection in reasonable time. | 14 | 3 | 1403.5334 |
|
1403 | 1403.2208_arXiv.txt | The accretion of matter onto a compact object (a neutron star onto which gas flows from the companion star in X-ray sources; a black hole that is the ``central engine'' in active galactic nuclei and quasars) is a classical problem of modern astrophysics (see Shapiro and Teukolsky 1983; Lipunov 1992; Bisnovatyi-Kogan 2011; and references therein). Beginning in the 1980s, the analytical approach whose foundation was laid back in the mid-twentieth century (Bondi and Hoyle 1944; Bondi 1952) began to be supplanted for natural reasons by numerical simulations (Hunt 1979; Petrich et al. 1989; Ruffert and Arnett 1994; Toropin et al. 1999; Toropina et al. 2012). Analytical solutions were found only in exceptional cases (Bisnovatyi-Kogan et al. 1979; Petrich et al. 1988; Anderson 1989; Beskin and Pidoprygora 1995; Beskin and Malyshkin 1996; Pariev 1996). It should be emphasized that the focus of research has been shifted to magnetohydrodynamics, within which framework it has become possible to properly take into account the turbulent processes associated with magnetic reconnection, magnetorotational instability, etc. (Balbus and Hawley 1991; Brandenburg and Sokoloff 2002; Krolik and Hawley 2002). However, in our opinion, some of the important accretion regimes, which are simple enough for their main properties to be described analytically in terms of ideal hydrodynamics, still remain inadequately explored. These include the effects associated with the presence of angular momentum in the subsonic settling regime and for Bondi-Hoyle accretion. Such additional rotation naturally arises in binary systems when, for example, a neutron star interacts with the stellar wind from its companion, and when the gravitating center moves in a turbulent medium with significant vorticity. This paper is devoted to investigating such flows. In the first part, we formulate the basic equations of ideal steady-state axisymmetric hydrodynamics, which are known to be reduced to one second-order equation for the stream function. Then, in the second part, the subsonic settling accretion is considered. We show that in the presence of angular momentum, the nonradial velocity perturbations grow fairly rapidly as the gravitating center is approached, so that the flow in the inner regions can no longer be considered quasi-spherical. Finally, the third part is devoted to the Bondi-Hoyle accretion. We show that in the presence of axial rotation, a vacuum cylindrical cavity is formed at large distances from the gravitating center near the flow axis. The flow velocity outside this cavity is virtually independent of the distance to the rotation axis. | 14 | 3 | 1403.2208 |
1403 | 1403.2722_arXiv.txt | The search for diffuse non-thermal inverse Compton (IC) emission from galaxy clusters at hard X-ray energies has been undertaken with many instruments, with most detections being either of low significance or controversial. Because all prior telescopes sensitive at $E > 10$~keV are non-focusing instruments with degree-scale fields of view, their backgrounds are both high and difficult to characterize. The associated uncertainties result in lower sensitivity to IC emission and a greater chance of false detection. In this work, we present 266 ks \nustars observations of the Bullet cluster, which is detected in the energy range 3--30~keV. \nustar's unprecedented hard X-ray focusing capability largely eliminates confusion between diffuse IC and point sources; however, at the highest energies the background still dominates and must be well understood. To this end, we have developed a complete background model constructed of physically inspired components constrained by extragalactic survey field observations, the specific parameters of which are derived locally from data in non-source regions of target observations. Applying the background model to the Bullet cluster data, we find that the spectrum is well -- but not perfectly -- described as an isothermal plasma with $kT = 14.2 \pm 0.2$~keV. To slightly improve the fit, a second temperature component is added, which appears to account for lower temperature emission from the cool core, pushing the primary component to $kT \sim 15.3$~keV. We see no convincing need to invoke an IC component to describe the spectrum of the Bullet cluster, and instead argue that it is dominated at all energies by emission from purely thermal gas. The conservatively derived 90\% upper limit on the IC flux of $1.1 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ (50--100~keV), implying a lower limit on $B \ga 0.2$ $\mu$G, is barely consistent with detected fluxes previously reported. In addition to discussing the possible origin of this discrepancy, we remark on the potential implications of this analysis for the prospects of detecting IC in galaxy clusters in the future. | \label{sec:intro} A number of observations, mainly at radio frequencies, have established that relativistic particles and magnetic fields are part of the intracluster medium (ICM) of galaxy clusters \citep[e.g.,][]{GF04}. The large ($\sim$Mpc) scale, diffuse structures known as radio halos and relics are produced by relativistic electrons spiraling around $\sim$$\mu$G magnetic fields. The synchrotron emission is a product of both the particle and magnetic field energy densities, the latter of which is not well constrained globally from these or other observations. However, the electron population can be independently detected through inverse Compton (IC) scattering off of ubiquitous Cosmic Microwave Background (CMB) photons, which are up-scattered to X-ray energies and may be observable if that population is sufficiently large \citep{Rep79}. For single electrons or populations with power law energy distributions, the ratio of IC to synchrotron flux gives a direct, unbiased measurement of the average magnetic field strength $B$ in the ICM of a cluster. The magnetic field plays a potentially important role in the dynamics and structure of the ICM, such as in sloshing cool cores where $B$ may be locally amplified so that the magnetic pressure is comparable to the thermal pressure \citep{ZML11}.
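Quantitatively -- a standard relation not spelled out in this excerpt -- for a single electron population the ratio of synchrotron to IC luminosity equals the ratio of the magnetic to CMB photon energy densities,
\begin{equation}
\frac{L_{\rm sync}}{L_{\rm IC}} = \frac{u_B}{u_{\rm CMB}}
= \frac{B^2/8\pi}{a\,T_{\rm CMB}^4\,(1+z)^4},
\end{equation}
so a measured synchrotron flux combined with an IC detection (or upper limit) yields $B$ (or a lower limit on it) directly, independent of the normalization of the electron spectrum.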
Detections of IC emission, therefore, probe whether the non-thermal phase is energetically important and whether, particularly if the average magnetic field is large, it is sizable enough to affect the dynamics and structure of the thermal gas. The quest for the detection of IC emission associated with galaxy clusters began with the launch of the first X-ray sensitive sounding rockets and satellites, although the origin of extended, $\sim$~keV X-rays from clusters was soon recognized to be thermal \citep[e.g.,][]{ST72,MCD+76}. Even so, in clusters with radio halos or relics, IC emission {\it must} exist at some level, since the CMB is cosmological. Thermal X-ray photons are simply too numerous at $E \la 10$~keV for a reliable detection of the IC component; at higher energies, however, the bremsstrahlung continuum falls off exponentially, allowing the non-thermal IC emission to eventually dominate and produce ``excess'' flux in the spectrum. While the first IC searches with \heaos yielded only upper limits, and thus lower limits on the average strength of ICM magnetic fields, $B \ga 0.1 \mu$G \citep{Rep87, RG88}, the next generation of hard X-ray capable satellites -- \rxtes and \saxs -- produced detections in several clusters, although mostly of marginal significance \citep[for a review, see, e.g.,][]{RNO+08}. The most recent observatories -- \suzakus and \swifts -- however, have largely failed to confirm IC at similar levels \citep{Aje+09,Aje+10,Wik+12,Ota12}. One exception is the Bullet cluster (a.k.a.\ 1E 0657-56, RX J0658-5557), although the detection significance of the non-thermal component is marginal in either the \rxtes or \swifts data alone. The \rxtes observation of the Bullet cluster's hard X-ray emission was not very constraining, but the overall spectrum from the PCA and HEXTE instruments, fit jointly with \xmms MOS data, favored a non-thermal tail at not quite $3\sigma$ significance \citep{PML06}. A two-temperature model fit the data equally well, but the higher temperature component required a nearly unphysically high temperature ($\sim 50$~keV) for a large (10\%) fraction of the total emissivity. In a similar analysis, the \xmms data were simultaneously fit with a spectrum from the \swifts BAT all sky survey, and the non-thermal component was confirmed at the $5\sigma$ confidence level \citep{Aje+10}. However, a two-temperature model technically did a better job of describing the spectra, although the secondary temperature component was very low (1.1~keV), causing the authors to reject this interpretation. While this low temperature component is certainly not physical, the fact that a model can fit the data so well when an extra component is added solely at low energies indicates that the non-thermal component is not being strongly driven by the BAT data. Further confirmation of an IC component in the Bullet cluster is clearly necessary to rule out a purely thermal description of the hard band emission and uphold the implied magnetic field strength of $\sim 0.16 \mu$G. The intriguing evidence for a non-thermal excess at hard energies, coupled with its smaller angular size, makes the Bullet cluster an ideal galaxy cluster target for the \nustars X-ray observatory \citep{Har+13}. \nustar, with a bandpass between 3 and 80~keV, is the first telescope with the ability to focus X-rays in the hard X-ray band above 10~keV. It has an effective area at 30~keV of $2 \times 110$ cm$^{2}$ and an imaging half power diameter (HPD) of $58\arcsec$.
While the effective area is somewhat lower than that of previous instruments, the focusing capability vastly reduces the background level and its associated uncertainties. Whereas collimators onboard \rxte, \sax, and \suzakus have quite large, $\ga 1$\arcdeg fields of view (FOVs) that include substantial emission from cosmic X-ray background (CXB) sources, the equivalent region of the Bullet cluster within \nustars spans $\sim 100\times$ less solid angle on the sky. Also, for clusters that fit well within \nustar's $\sim 13\arcmin \times 13$\arcmin FOV, simultaneous offset regions can be used to precisely characterize the background to an extent not possible with collimated instruments. We describe the two \nustars observations and their generic processing in Section~\ref{sec:obs}. In Section~\ref{sec:cal}, the modeling of the background and its systematics and the overall flux calibration are briefly described (see Appendices~\ref{sec:appendixbgd} and \ref{sec:appendixsim} for details). We examine hard band images and the character of the global spectrum in Section~\ref{sec:analy}. Lastly, the implications of these results are discussed in Section~\ref{sec:disc}. We assume a flat cosmology with $\Omega_M = 0.23$ and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. Unless otherwise stated, all uncertainties are given at the 90\% confidence level. | \label{sec:disc} \subsection{Brief Summary} \label{sec:disc:summary} The Bullet cluster was observed by \nustars in two epochs for a cumulative 266 ks of conservatively-cleaned exposure time. The cluster is clearly detected below $\sim 30$~keV with an energy-dependent morphology consistent with the extrapolation of projected temperature maps obtained with \chandras and \xmm. Above $\sim 30$~keV, potential emission associated with the ICM makes up $< 10$\% of the counts per channel. The average temperature of the global spectrum is $14.2 \pm 0.3$~keV, in good agreement with estimates from \rosat$+$\ascas \citep[14.5~keV,][]{LHB+00} and \chandras \citep[14.8~keV,][]{MGD+02}, but somewhat higher than independent estimates from \xmms and \rxtes \citep[$\sim 12$~keV,][]{PML06}. Given the differences between instrument sensitivity and the accuracy of their respective calibrations, we do not suggest any significant discrepancy. In order to search for a non-thermal excess above the thermal emission at hard energies, we invested a good deal of effort to understand the largest source of uncertainty: the background. We constructed an empirical, spatial-spectral model of the background from blank sky data and applied it to our observations to derive a ``most likely'' model background spectrum for the region containing cluster emission. After evaluating the important systematic uncertainties in the model, we generated 1000 realizations of the background and subtracted each from the spectrum, which was then fit with three spectral models representing a simple (1T) or more realistic (2T) thermal-only origin for the emission, or a significant IC component at the highest detectable energies (T+IC). In over 98\% of the fits, the 2T model was statistically favored over the T+IC model, and reasonable values were obtained for both temperatures in the former. We therefore conclude that no significant non-thermal emission has been detected in the \nustars observations of the Bullet cluster and place an upper limit on the IC flux of $1.1 \times 10^{-12}$ ergs s$^{-1}$ cm$^{-2}$ (50--100~keV). This flux falls below that reported by \rxtes and \swift.
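The background-marginalization logic just described can be illustrated with a deliberately simplified sketch (ours, not the authors' pipeline: a toy continuum shape, a single systematic parameter, and Gaussian errors stand in for the full model of the appendices):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
E = np.linspace(3.0, 30.0, 200)              # keV channel grid (toy)

def therm(E, norm, kT):
    # crude bremsstrahlung-like continuum: no Gaunt factor, no lines
    return norm * np.exp(-E / kT) / E

bgd_nom = 0.02 * (E / 10.0) ** -1.3          # made-up nominal background
obs = therm(E, 1.0, 14.0) + bgd_nom          # "observed" = source + background
err = 0.03 * obs                             # toy measurement errors

kT_fits = []
for _ in range(1000):
    # each realization perturbs the background by a 4% systematic
    bgd = bgd_nom * (1.0 + 0.04 * rng.standard_normal())
    popt, _ = curve_fit(therm, E, obs - bgd, p0=[1.0, 10.0], sigma=err)
    kT_fits.append(popt[1])

print("kT = %.2f +/- %.2f keV" % (np.mean(kT_fits), np.std(kT_fits)))
\end{verbatim}
The spread of fitted parameters over the realizations then plays the role of the systematic background uncertainty in the quoted errors and in the model-comparison statistics.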
\subsection{Comparison to and Implications Regarding Previous Results} \label{sec:disc:prevresults} As mentioned in Section~\ref{sec:intro}, \citet{PML06} first suggested the existence of significant IC emission at hard energies in the Bullet cluster based on a joint analysis of \xmms and \rxtes spectra. The uncertainty in the measurement of $(3.1 \pm 1.9) \times 10^{-12} $ ergs s$^{-1}$ cm$^{-2}$ (50--100~keV) is too large to justify a claim of detection. However, a more recent analysis \citep{Aje+10}, using a \swifts BAT spectrum, found a flux of $(1.6 \pm 0.5) \times 10^{-12} $ ergs s$^{-1}$ cm$^{-2}$ (50--100~keV), roughly consistent with that from \citet{PML06}. Both fluxes are only barely in conflict with our conservative upper limit, but our most likely IC flux of $(0.58 \pm 0.52) \times 10^{-12} $ ergs s$^{-1}$ cm$^{-2}$ (50--100~keV) is clearly inconsistent with these previous measurements. The discrepancy has two potential explanations: either the spectra from the various instruments disagree, or the approaches to modeling the spectra disagree. While even minor calibration differences between the characterization of the telescope responses and of the backgrounds can significantly affect results, a comparison of the \rxte, \swift, and \nustars spectra fit to 1T or 2T models implies these are not responsible. None of the instruments on these satellites reliably detects emission above 30~keV from the Bullet, and below this energy there is no compelling excess above a reasonable thermal-only model in Figure~2 of \citet{PML06}, the lower left panel of Figure~5 of \citet{Aje+10}, or Figure~\ref{fig:specallmodels} of this paper. At higher energies, the background dominates the count rate and its treatment becomes crucial; even small fluctuations can result in a false IC signal. It is beyond the scope of this paper to evaluate the backgrounds from the other two missions, but no causes for worry are evident in the analyses of the \rxtes and \swifts data. If the spectra are all consistent with each other, we must attribute the conflicting conclusions to differences in how the spectra are modeled. In principle there should be no difference, since 1T, 2T, and T+IC models are each tried in all three analyses. The crucial distinction between them is the minimum energy used in the fits: 1~keV \citep{PML06}, 0.5~keV \citep{Aje+10}, or 3~keV (this work). The lower end of the energy range matters because the thermal gas of the Bullet cluster is decidedly {\it not} isothermal \citep{MGD+02}, and the fraction of the emission any temperature component contributes strongly varies with energy, with low temperature components dominating at soft energies but essentially disappearing from the hard band. Merging clusters, especially those like the Bullet where one subcluster hosts a cool core, may have components of roughly equal emission measure that span a factor of two in temperature. In particular, the emission coming from the cool core ranges from $kT \la 4$~keV up to 7~keV, has a higher abundance, and mostly contributes at the lowest energies. The gas associated with the main subcluster is hotter, with a central $kT \sim 12$ keV and shocked regions to the W and also slightly to the SE with $kT \ga 16$~keV (M. Markevitch, priv.\ comm.). Given the extreme range in temperatures, even a 2T model may provide an insufficient description of the data over a broad energy range.
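The band-dependence argument is easy to demonstrate with a toy calculation (ours; the exponential-over-energy shape below is a crude stand-in for a real thermal model): fitting a single temperature to a two-temperature spectrum returns a different $kT$ depending on the lower energy bound.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def therm(E, norm, kT):
    # same crude continuum shape as above; stands in for a real plasma model
    return norm * np.exp(-E / kT) / E

E = np.linspace(0.5, 30.0, 500)
spec = therm(E, 1.0, 6.0) + therm(E, 1.0, 14.0)   # cool core + hot ICM (toy)

for Emin in (0.5, 3.0):
    band = E >= Emin
    popt, _ = curve_fit(therm, E[band], spec[band], p0=[2.0, 10.0])
    print("1T fit over %4.1f-30 keV: kT = %.1f keV" % (Emin, popt[1]))
\end{verbatim}
Directionally, the fit restricted to the hard band lands closer to the hot component, which is why analyses starting at 0.5 or 1~keV and at 3~keV need not agree.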
Ironically, the T$+$IC model might fit the {\it purely thermal} emission more successfully in this case, since a power law with free photon index is able to simultaneously account for emission from components at either extreme of the temperature distribution \citep[e.g., A3112,][]{BNL07,LNB+10}. By including data below 3~keV in order to better constrain the thermal component, the larger consequence in all likelihood is to bias its characterization, since only simple spectral models are considered. Because the response of \xmm's EPIC instruments peaks between 1--2~keV and shot noise, whose fractional error is smallest where the counts peak, sets the signal-to-noise ratio, fit minimization routines are overly biased to find good fits at these lower energies. From this perspective, the second component in the multi-component fits of \citet{Aje+10} is focused on artificially ``fixing'' the residuals below 1 or 2~keV with either the second temperature or IC component, and the slope of the IC photon index is determined almost entirely by the \xmms data, given that the T$+$IC model over-predicts almost every BAT data point. This explanation is less compelling for the \xmm$+$\rxtes analysis of \citet{PML06}. In this case, the fact that fits to both the \xmms (over 1--10~keV) and \rxtes (over 3--30~keV) spectra yield the same temperature despite the different energy bands is worrisome; given the multi-temperature structure, one would expect the 3--10~keV temperature from \xmms to be hotter than this average, and the 3--10~keV temperature from \rxtes to be cooler or unchanged. In contrast, the temperatures in our 2T model roughly agree with the approximately bimodal temperature distribution seen with \chandra, lending credence to the still imperfect thermal model approximated with only two components. The much improved spectral resolution of \nustars over that of \rxtes and \swifts undoubtedly helps the fit find physical temperatures. For the T$+$IC model, when the photon index is left free, it tends toward a somewhat steeper value such that it only influences the lowest energy channels. The IC component, when exhibiting this behavior, mimics a lower temperature thermal component more than it tries to account for any excess emission at high energies, further refuting the existence of a significant non-thermal excess. By combining the synchrotron spectrum at radio frequencies with an IC estimate or upper limit, we can directly constrain the volume averaged magnetic field strength. Following the arguments and expression for $B$ in Equation~14 of \citet{WSF+09}, we use the total radio halo flux of 78 mJy at 1300 MHz and a radio spectral index of 1.2--1.4 \citep{LHB+00}. The radio spectrum exhibits no flattening at lower frequencies as in \citet{TKW03} for the Coma cluster, so we assume the spectrum continues as a power law to lower frequencies, where the electron population producing the synchrotron emission is the same as that producing the IC. The upper limit on IC emission translates to a lower limit on the magnetic field strength of $B \ga 0.2$ $\mu$G, which is comparable to values found in other clusters using \suzakus and \swifts data \citep[e.g.,][]{Ota12,Wik+12}. Unlike estimates of $B \sim 0.1$--0.2 $\mu$G, such lower limits are more consistent with equipartition estimates \citep[$\sim$ 1 $\mu$G for the Bullet cluster,][]{PML06} and Faraday rotation measure estimates in other clusters, which typically place the field strength at a few $\mu$G \citep[e.g.,][]{KKD+90,CKB01,BFM+10}.
While it is possible to reconcile these estimates with a lower volume averaged value of $B$, our lower limit does not require it. \subsection{Implications for Future IC Searches} \label{sec:disc:genconcl} In order to detect diffuse, faint IC emission in galaxy clusters, the IC signal must be teased from both thermal and instrumental ``backgrounds,'' both of which are likely to be brighter than the IC emission itself. While going to harder energies reduces contaminating emission from the thermal gas, it requires a large effective area at high energies and/or low and well-characterized instrumental and/or cosmic backgrounds. Regarding the background, focusing optics like those onboard \nustars have clear advantages over non-focusing ones, such as collimators and coded-mask telescopes. The effective area or equivalent sensitivity, however, remains a greater challenge for reflective optics due to the large number -- and thus weight -- of mirror shells needed. IC photon intensity also declines rapidly with energy, making it exceedingly difficult, without a very large effective area, to detect such emission at high energies given the statistical fluctuations of a realistic background level. In the foreseeable future, IC emission in hot clusters will only be detectable as a subtle inflection of the thermal tail. Such non-thermal inflections, however, are complicated by having plausible alternative origins, such as background AGN, clumps of super hot gas, and slightly underestimated overall backgrounds. These difficulties, combined with magnetic field equipartition estimates nearly an order of magnitude larger than the field strengths inferred by IC measurements, emphasize the need for a conservative approach. The recent history of IC searches seems to justify this view. \citet{Ota12} nicely summarizes some \rxte, \sax, \swift, and \suzakus detections and upper limits in their Figure~10, which shows that clusters may exhibit an IC signal in the dataset of one observatory but not another -- occasionally in outright contradiction. The reasons behind these differences are not always clear, but likely include some combination of relative instrumental calibration, background treatment, and telescope capabilities. Detections are only mildly statistically significant and are in danger of being compromised by the complications mentioned above. The clusters expected to host IC-producing electrons are those undergoing mergers, which produce -- possibly extreme -- multi-temperature distributions. Such distributions should in principle be straightforward to separate from a non-thermal component, {\it if} the IC component begins to dominate the spectrum at an energy where the signal-to-noise is sufficiently high, including systematic uncertainties. For the Bullet cluster, we reach this point around 20--30~keV. The next mission capable of detecting IC emission associated with radio halos is {\it Astro-H}, which will include a Hard X-ray Telescope (HXT) and Imager (HXI), with a sensitivity similar to \nustar, as well as substantial soft X-ray capabilities with the Soft X-ray Imager (SXI) and X-ray Calorimeter Spectrometer (XCS). Although the HXI alone provides no improvement over \nustar, the SXI and especially the XCS should allow for a more detailed and complete accounting of the thermal components of target clusters through emission line diagnostics.
A better understanding of the thermal continuum will make marginal non-thermal-like excesses at hard energies more significant and upper limits more constraining. If the average magnetic field strength in galaxy clusters hosting radio halos is typically closer to $\sim 1$ $\mu$G than the $\sim 0.2$ $\mu$G implied by past detections, even {\it Astro-H} is unlikely to be enough of a technical advance. Because the ratio of synchrotron to IC flux scales with the energy density of the magnetic field ($\propto B^2$), a $5\times$ stronger $B$ requires a $25\times$ more sensitive telescope than currently exists. IC emission at this level would only compete with the thermal emission of a Bullet-like cluster between 30--50~keV, and given how faint the cluster is at these energies relative to the background (e.g., Figures~\ref{fig:specallmodels} and \ref{fig:specsig}), it is likely that most of the sensitivity gain will come from increasing the effective area. An increase in effective area over \nustars of not quite an order of magnitude would be achieved by the proposed probe class {\it HEX-P} mission\footnote{http://pcos.gsfc.nasa.gov/studies/rfi/Harrison-Fiona-RFI.pdf}, so a substantial decrease in background and its systematic uncertainty would still be necessary. In terms of past IC detections, it may be the case that what has been measured is not IC emission associated with large scale radio halos. Instead of being associated with the electrons producing radio halos and relics, the IC emission might originate from electrons accelerated by accretion shocks at the virial radius \citep[e.g.,][]{KW10,KKL+12}. Non-imaging telescopes -- unlike \nustars -- would pick up this emission, which peaks in surface brightness $\ga$ Mpc from cluster centers. Given our restricted extraction region around the Bullet cluster, we are not sensitive to these electrons. However, the FOV does partially include the virial region, where we characterized the background, so in principle this IC emission could exist at very faint levels; a cursory check for a non-thermal component was made when the background was fit, but no such signal beyond the generic background model was apparent. Note that these observations are not ideally suited for searches for this emission, which would be better served by several offset pointings around the periphery of the cluster. Even so, the emission would be strongest at the low energy end, where we attribute extra flux detected in the background regions to scattered thermal photons. It should be feasible to constrain these models, but only after a more detailed accounting of the Bullet cluster's thermal structure has been undertaken, in order to separate local emission from scattered photons from various regions in the cluster. We will address this issue in a future paper focused on the hard X-ray weighted temperature structure, including extreme temperature shock regions. | 14 | 3 | 1403.2722 |
1403 | 1403.4935_arXiv.txt | We use broadband photometry extending from the rest-frame UV to the near-IR to fit the {\it individual\/} spectral energy distributions (SEDs) of 63 bright ($L({\rm Ly}\alpha) > 10^{43}$~ergs~s$^{-1}$) Ly$\alpha$ emitting galaxies (LAEs) in the redshift range $1.9 < z < 3.6$. We find that these LAEs are quite heterogeneous, with stellar masses that span over three orders of magnitude, $7.5 < \log M/M_{\odot} < 10.5$. Moreover, although most LAEs have small amounts of extinction, some high-mass objects have stellar reddenings as large as $E(B-V) \sim 0.4$. Interestingly, in dusty objects the optical depths for Ly$\alpha$ and the UV continuum are always similar, indicating that Ly$\alpha$ photons are not undergoing many scatterings before escaping their galaxy. In contrast, the ratio of optical depths in low-reddening systems can vary widely, illustrating the diverse nature of the systems. Finally, we show that in the star formation rate (SFR)-log mass diagram, our LAEs fall above the ``main-sequence'' defined by $z \sim 3$ continuum-selected star-forming galaxies. In this respect, they are similar to sub-mm-selected galaxies, although most LAEs have much lower mass. | \label{sec:intro} \cite{par67} originally predicted that the Ly$\alpha$ emission line could be a very useful probe of the high-redshift universe, and, while it took many years to detect this feature \citep{cow98, hu98}, Ly$\alpha$ emitting galaxies (LAEs) are now routinely observable from $z \sim 0.2$ \citep{deh08, cow10} to $z>7$ \citep{hu10, ouc10, lid12, ono12}. However, while the detection of Ly$\alpha$ in the high-redshift universe is relatively common, the physics of this emission is still not well understood. Since Ly$\alpha$ is a resonance transition, it is likely that each photon scatters many times off intervening neutral material before escaping into intergalactic space. As a result, even a small amount of dust should extinguish the line, and indeed, only $\sim 25$\% of Lyman-break galaxies (LBGs) at $z\sim2-3$ have enough Ly$\alpha$ in emission to be classified as an LAE \citep{sha03}. While it is possible for dusty galaxies to create an escape path for Ly$\alpha$ via supernova-blown bubbles and/or exotic geometry \citep[\eg][]{ver12}, most analyses suggest that the LAE population as a whole is made up of young, low-mass, low-metallicity systems, possessing relatively little interstellar dust \citep[\eg][]{gaw07, gua11}. To date, most Ly$\alpha$ emitters have been detected via deep narrow-band imaging with 4-m and 8-m class telescopes \citep[\eg][]{gro07, ouc08}. These surveys generally extend to low Ly$\alpha$ luminosities and sample a wide range of the high-redshift galaxy luminosity function. Unfortunately, in the continuum, LAEs are usually quite faint, which makes studying their spectral energy distributions (SEDs) difficult. As a result, most of our knowledge about those physical properties which are encoded in the objects' SEDs -- information such as stellar mass, extinction, and population age -- has come from stacking techniques \citep[\eg][]{gaw07, gua11}. These analyses only yield estimates for a ``typical'' LAE and may be subject to serious systematic biases associated with stacking \citep{var13}. Moreover, those few programs that have sought to measure the SEDs of individual LAEs \citep[\eg][]{fin09, nil09, yum10, nak12, mcl14} have generally been restricted to very small numbers of objects.
These efforts have been able to provide hints as to the range of properties exhibited by these galaxies, but have been unable to probe the statistics of the entire LAE population. Thus, while we have some idea about the mass and dust content of ``representative" LAEs, the distribution of physical parameters for the entire population remains poorly constrained. Here, we investigate the stellar populations of luminous Ly$\alpha$ emitters by analyzing the individual spectral energy distributions of 63 $1.9 < z < 3.6$ LAEs detected by the McDonald 2.7-m telescope's Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) Pilot Survey. In Section~\ref{sec:sample}, we summarize the HETDEX Pilot Survey and describe the ancillary ground-based, {\sl HST,} and {\it Spitzer\/} photometry which is available for analysis. In Section~\ref{sec:analysis}, we briefly describe the SED-fitting code {\tt GalMC} \citep{acq11} and the underlying assumptions used to derive stellar mass, extinction, and age from a set of broadband photometry which extends from the rest-frame UV through to the near-IR\null. We also outline the procedures used to measure the physical sizes of the LAEs in a manner that is insensitive to the effects of cosmological surface brightness dimming. In Section~\ref{sec:results}, we present our results and show that the population of luminous $z \sim 3$ LAEs is quite heterogeneous, with sizes $0.5$~kpc $ \lesssim r \lesssim 4$~kpc, stellar masses spanning $7.5 < \log M/M_{\odot} < 10.5$, and differential extinctions varying between $0.0 < E(B-V) < 0.4$. We illustrate several trends involving LAE physical parameters, including a positive correlation between reddening and stellar mass, a positive correlation between stellar mass and galactic age, and a positive correlation between galaxy size and Ly$\alpha$ luminosity. We also examine the possible evolution of physical properties with redshift and compare our LAEs to other $z \sim 3$ objects on the star-forming galaxy main sequence. We conclude by discussing the implications of our results for the underlying physical mechanisms of Ly$\alpha$ escape in high redshift galaxies. For this paper we adopt a cosmology with $H_0 = 70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm M} = 0.3$, and $\Omega_\Lambda = 0.7$ \citep{pla13, hin12}. | \label{sec:conclusion} Using broadband photometric data which extends from the rest-frame UV through to the near-IR, we have been able to measure the stellar masses, reddenings, and sizes for a sample of 63 luminous LAEs found in the HETDEX Pilot Survey. Our fits demonstrate that, contrary to popular belief, Ly$\alpha$ emitters are not exclusively low mass objects. In fact, HPS-selected LAEs are quite heterogeneous, and are drawn from almost the entire stellar mass range of high-redshift galaxies. Moreover, there is a striking similarity between the mass function of LAEs and the mass function expected for the star-forming galaxy population as a whole. This fact, and the lack of correlation between Ly$\alpha$ luminosity and stellar mass, suggests that searches for Ly$\alpha$ emission are an excellent way of sampling a large fraction of the mass function of high-redshift star-forming galaxies. Ly$\alpha$-emitting galaxies occupy a different part of stellar mass-SFR parameter space than galaxies found by other methods. Like the higher-mass sub-mm galaxies, LAEs fall above the main sequence of star-forming galaxies found by \cite{dad07}.
This suggests that there is a different slope for the main sequence of star-bursting galaxies. Interestingly, LAEs do fall along the main sequence defined by \cite{whi12}, though the $\sim 2$ dex extrapolation required to reach their masses introduces significant uncertainty. Due to the selection effects at work, the connection between the various classes of star-forming galaxies is murky at best. We also find that the range in observed $q$-factors is dependent on the reddening, with the widest range of $q$-values occurring at low extinction. Notably, the observed values of $q$ tend to unity as the reddening (or mass) increases, suggesting that in these objects, Ly$\alpha$ photons are not undergoing a large number of scattering events. This strongly implies that winds are an important component in the making of high-mass LAEs. Furthermore, we find that the half-light radius and the $q$-factor are positively correlated, implying that Ly$\alpha$ emission is enhanced in very small objects. | 14 | 3 | 1403.4935 |
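For readers who want to reproduce distance-dependent quantities (luminosities, physical sizes) under the cosmology adopted in the row above, a minimal sketch is below; it is our own illustration, not code from the paper.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H0, Om, OL = 70.0, 0.3, 0.7      # the cosmology adopted in the row above
c = 299792.458                   # speed of light, km/s

def E(z):
    return np.sqrt(Om * (1.0 + z) ** 3 + OL)

def lum_dist(z):
    # flat universe: D_L = (1+z) * (c/H0) * \int_0^z dz'/E(z')
    Dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * (c / H0) * Dc     # Mpc

print("D_L(z=3) = %.0f Mpc" % lum_dist(3.0))
\end{verbatim}
The same comoving-distance integral, divided by $(1+z)$ rather than multiplied, gives the angular diameter distance used to convert measured angular sizes to kpc.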
1403 | 1403.4797_arXiv.txt | White dwarfs (WDs) can increase their mass by accretion from companion stars, provided the mass-accretion rate is high enough to avoid nova eruptions. The accretion regimes that allow growth of the WDs are usually calculated assuming constant mass-transfer rates. However, it is possible that these systems are influenced by effects that cause the rate to fluctuate on various timescales. We investigate how long-term mass-transfer variability affects accreting WD systems. We show that, if such variability is present, it expands the parameter space of binaries where the WD can effectively increase its mass. Furthermore, we find that the supernova type Ia (SNIa) rate is enhanced by a factor of 2-2.5 to a rate that is comparable with the lower limit of the observed rates. The changes in the delay-time distribution allow for more SNeIa in stellar populations with ages of a few Gyr. Thus, mass-transfer variability gives rise to a new formation channel of SNIa events that can significantly contribute to the SNIa rate. Mass-transfer variability is also likely to affect other binary populations through enhanced WD growth. For example, it may explain why WDs in cataclysmic variables are observed to be more massive than single WDs, on average. | White dwarfs (WDs) in binaries can accrete from their companion stars. Such binaries are called cataclysmic variables (CVs) if the donor stars are low-mass main-sequence stars, symbiotic binaries (SBs) if they are evolved red giants, or AM CVn systems if the donor stars are low-mass helium WDs or helium stars. For CVs and SBs, the matter accreted by the WD consists mainly of hydrogen. As the matter piles up on the surface of the WD, it eventually reaches temperatures and densities high enough for nuclear burning. The burning can proceed in two ways, depending on the accretion rate and the mass of the WD. For high accretion rates and WD masses, the hydrogen burning on the surface of the WD is continuous \citep{Whe73,Nom82}, whereas for low accretion rates and WD masses the hydrogen is burned in thermonuclear runaway novae \citep{Sch50,Sta74}. In general, the high mass-transfer rates needed for continuous surface hydrogen burning can only be reached in SBs, where they can be driven by the expansion of the evolved star, and in systems with main-sequence donors more massive than the accreting WD \citep{Nom00}. The masses of WDs with high accretion rates can grow effectively, but at very high accretion rates close to the Eddington limit, the growth of the white dwarf is limited. At these rates a hydrogen red-giant-like envelope forms around the WD, and the hydrogen burning on its surface is strong enough for a wind to develop \citep{Kat94,Hac96}. On the other hand, at low accretion rates mass accretion onto the WD is not very efficient either, as the nova eruptions eject some or all of the accreted matter from the binary system, possibly along with some of the surface material of the WD itself \citep[e.g.][]{Pri95}. The average mass-transfer rate allowing growth of the white dwarf is therefore limited to a relatively narrow range (approximately $10^{-7}-10^{-6}M_{\odot}$ yr$^{-1}$). The growth of WD masses can have important consequences. In the single-degenerate (SD) scenario for type Ia supernova (SNIa) progenitors \citep{Whe73,Nom82} the accretion onto a carbon-oxygen WD pushes the mass above the critical mass limit for WDs (close to, but not equal to, the Chandrasekhar limit), at which point the WD explodes as an SNIa.
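For scale (our arithmetic, not the paper's): growing a $\sim 1\,M_\odot$ WD by the required $\Delta M \sim 0.4\,M_\odot$ at the steady-burning rate takes
\begin{equation}
t \sim \frac{\Delta M}{\dot{M}} \approx \frac{0.4\,M_\odot}{10^{-7}\,M_\odot\,\mathrm{yr^{-1}}} = 4\times10^{6}\ \mathrm{yr}
\end{equation}
even with full retention -- short compared with binary evolution timescales, but only if the narrow steady-burning window can actually be maintained for that long.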
In this scenario, it is necessary for the WD to retain several tenths of a solar mass of accreted material. It is not possible to achieve such mass growth for the majority of systems with mass-transfer rates in the nova regime, even if some of the accreted matter is retained. Following this theory, the rate and delay time distribution (DTD; the evolution of the rate as a function of time after a single star formation episode) can be estimated with the use of population synthesis models \citep[e.g.][]{Yun94,Too12,Bou13}. While there currently is no consensus among the models as to the shape of the DTD \citep{Nel13, Bou13}, the majority of models agree on two problems: 1) there are not enough systems with high mass-transfer rates to account for all the observed SNeIa, and 2) after an age of approximately $6-7$ Gyr, it is not possible to create SNIa explosions through this scenario, as only low-mass donors remain. The considerations above apply to systems where the mass-transfer rate is given by the evolutionary state of the system; i.e., two binaries with the same parameters will have the same mass-transfer rate. Observations of accreting WD systems indicate that the long-term average mass-transfer rates do indeed follow the expectations \citep[e.g.][]{Kni11}. However, it is possible that the mass-transfer rates are highly variable on intermediate timescales \citep{Pat84,Ver84,War87,Ham89}. In this paper, we discuss such variability and show that it affects the evolution of accreting WD systems. In particular, these effects can be of high importance for understanding SD SNIa progenitors, as the variability increases the volume of the parameter space of systems that can explode as SNeIa. | We have studied the effect of mass-transfer variability on WDs accreting from binary companion stars. Long-term mass-transfer variability can be induced by, e.g., irradiation of the donor star by the accreting WD or by cyclic variations of the Roche lobe from mass loss episodes \citep{Kni11}. The timescale of the variability should be longer than the thermal timescale of the non-degenerate surface layer of the WD so that the surface burning is affected. On the other hand, the timescale of the mass-transfer cycles should not be so long that the binary is affected in an observable way (e.g. strong bloating of donor stars by irradiation). Currently observations hardly constrain the theoretical models of mass-transfer variability \citep[e.g.][]{Bun04} and therefore we have constructed a number of models rather than studying the details of a particular mass-transfer variability model. We show that long-term mass-transfer variability can significantly affect the accretion process and retention efficiency of mass transfer towards WDs. Mass-transfer variability and the accompanying enhanced retention efficiencies are likely to impact the properties of accreting WD binaries. We find that, irrespective of the specific shape of the mass-transfer variability, for all variability models the WDs can effectively grow down to average mass-transfer rates a factor of $\beta$ lower than in the standard scenario without variability. As an example, we study the evolution of SNIa progenitors from the single-degenerate channel. We find that if mass-transfer cycles take place, the parameter space of systems that become SNIa events is increased towards low mass donor stars.
Furthermore, we find that the integrated SNIa rate increases by a factor of about 2-2.5, which is comparable with the lower limit of the observed rates \citep[see][]{Mao11b, Per12, Mao12, Gra13}. Variability models in which the maximum mass-transfer rate is not limited affect the SNIa rate less. In conclusion, mass-transfer cycles potentially open a new formation channel for SNIa events that can significantly contribute to the SNIa rate. | 14 | 3 | 1403.4797 |
1403 | 1403.1667_arXiv.txt | We present the average rest-frame spectrum of high-redshift dusty, star-forming galaxies from ${250-770}$\,GHz. This spectrum was constructed by stacking ALMA 3\,mm spectra of 22 such sources discovered by the South Pole Telescope and spanning $z=2.0-5.7$. In addition to multiple bright spectral features of \twco, [CI], and \hto, we also detect several faint transitions of \thco, HCN, HNC, \hcop, and CN, and use the observed line strengths to characterize the typical properties of the interstellar medium of these high-redshift starburst galaxies. We find that the \thco brightness in these objects is comparable to that of the only other $z > 2$ star-forming galaxy in which \thco has been observed. We show that the emission from the high-critical density molecules HCN, HNC, \hcop, and CN is consistent with a warm, dense medium with $\tkin \sim 55$\,K and $\nHt \gtrsim 10^{5.5}$\,\percc. High molecular hydrogen densities are required to reproduce the observed line ratios, and we demonstrate that alternatives to purely collisional excitation are unlikely to be significant for the bulk of these systems. We quantify the average emission from several species with no individually detected transitions, and find emission from the hydride CH and the linear molecule CCH for the first time at high redshift, indicating that these molecules may be powerful probes of interstellar chemistry in high-redshift systems. These observations represent the first constraints on many molecular species with rest-frame transitions from $0.4-1.2$\,mm in star-forming systems at high redshift, and will be invaluable in making effective use of ALMA in full science operations. | \label{intro} High redshift, dusty, star-forming galaxies (DSFGs) are a population of luminous ($\lir > 10^{12}\lsol$), dust-obscured objects undergoing short-lived intense starburst events \citep[e.g.,][]{blain02,lagache05}. First discovered by the SCUBA instrument on the James Clerk Maxwell Telescope at 850\,\um in the late 1990s \citep{smail97,barger98,hughes98}, these distant sources are sufficiently faint to make follow-up study at all wavelengths difficult. Additionally, the large beam sizes of single-dish submillimeter facilities have made the identification of optical or infrared counterparts to the submillimeter sources challenging. Their infrared luminosities imply star formation rates of hundreds to thousands of solar masses per year, making them capable of becoming massive, quiescent galaxies ($M_{*} \sim 10^{11}\msol$) in only 100\,Myr \citep{hainline11,michalowski12,fu13}. The space and redshift distributions of these extreme starbursts are clearly important diagnostics of the buildup of structure in the universe, but remain a challenge for current galaxy evolution models \citep[e.g.,][]{baugh05,swinbank08,dave10,hayward13}. In recent years, a picture has emerged in which the majority of gas-rich galaxies lie along a so-called `main sequence' in stellar mass vs. star formation rate, characterized by star formation in massive, secular disks \citep[e.g.,][]{noeske07b,daddi10,tacconi10,elbaz11,hodge12}. A minority of objects exhibit significantly enhanced star formation rates, and are characterized by star formation triggered by major mergers \citep[e.g.,][]{narayanan09,engel10}. Given the challenging nature of follow-up observations, the study of gravitationally lensed starburst systems continues to generate valuable insight into the properties and physics of high-redshift DSFGs.
Strong gravitational lensing creates gains in sensitivity or angular resolution which allow much more detailed studies than are possible for otherwise equivalent unlensed systems. Unfortunately, the brightest sub-mm sources have such low number density ($N < 1$\,$\mathrm{deg}^{-2}$ for $\sef > 100$\,mJy; \citealt{negrello07}) that large area surveys are the only way to build up a statistically significant sample. Large numbers of such objects have recently been uncovered by wide-field sub/millimeter surveys, including those conducted by the South Pole Telescope (SPT; \citealt{carlstrom11,vieira10,mocanu13}) and \textit{Herschel}/SPIRE \citep{negrello10,wardlow13}. High-resolution follow-up imaging at 870\,\um has confirmed that these objects are nearly all lensed \citep{hezaveh13,vieira13,bussmann13}. Lensed DSFGs offer the best chance to search these systems for spectral lines which would otherwise be too faint to detect at such great distances, allowing a more detailed characterization of the interstellar medium in these objects. The prodigious star formation rates of DSFGs require that they contain vast reservoirs of molecular gas ($M_{H_2} \sim 10^{10}\msol$; e.g., \citealt{greve05,bothwell13}) from which those stars form. Probing the density, thermodynamic state, and balance of heating and cooling of the interstellar gas then reveals the star-forming conditions in these extreme starbursts. Unfortunately, because H$_2$ has a low mass and no permanent electric dipole moment, direct observations of the cold molecular gas are difficult. Instead, a suite of molecular and atomic fine structure lines are typically used to diagnose the interstellar medium of galaxies both locally and at high redshift. Carbon monoxide (\twco) is by far the most common molecule observed at millimeter wavelengths in extragalactic objects, due to its high abundance relative to H$_2$, ease of excitation, and rotational lines at frequencies of high atmospheric transmission. The ground state rotational line of \twco(1-0) ($\nu_\mathrm{rest} = $115\,GHz) has been used for decades \citep[e.g.,][]{wilson70} as a tracer of the bulk of the molecular gas in the interstellar medium. However, the numerical conversion between gas mass and \twco luminosity can vary by more than an order of magnitude depending on the metallicity and gas conditions of the galaxy, and the appropriate value for most high-redshift systems is uncertain \citep[e.g.,][]{downes98,tacconi08,ivison11,narayanan12,bolatto13}. Additional consideration of optically thin species, such as \thco and \ceto, may allow for accurate gas mass estimates, if the relative abundances of those species can be estimated. Due to its low dipole moment ($\sim$0.15\,D), \twco rapidly becomes collisionally thermalized at densities of just $\nHt \sim \rm{few} \times 10^{2}$\,\percc. Spectral features of other molecules with higher dipole moments, such as HCN, HNC, and \hcop, are thought to arise from regions with higher densities ($\nHt \gtrsim 10^{4}$\,\percc) where stars are actively forming \citep{gao04b}. The extreme conditions required for these molecules to be collisionally excited, combined with abundances lower than that of \twco by multiple orders of magnitude \cite[e.g.,][]{wang04,martin06}, make their lines faint and observation difficult.
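The dividing line between these regimes can be made explicit with the (approximate) critical density -- a textbook relation not spelled out in the excerpt:
\begin{equation}
n_{\rm crit} \approx \frac{A_{ul}}{\gamma_{ul}},
\end{equation}
the ratio of the spontaneous emission rate to the collisional de-excitation rate coefficient. Since $A_{ul} \propto \mu^2 \nu^3$, HCN, with a dipole moment roughly 20 times that of \twco, has $n_{\rm crit} \sim 10^{5}-10^{6}$\,\percc\ for its $J=1-0$ line versus $\sim 10^{3}$\,\percc\ for \twco(1-0) (order-of-magnitude values, neglecting radiative trapping, which lowers the effective critical density in optically thick gas).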
The extremely wide spectral range and high sensitivity of the \textit{Herschel}/SPIRE-FTS instrument \citep{griffin10} have allowed for spectral observations of nearby Ultra-Luminous Infrared Galaxies (ULIRGs) over the entire far-IR wavelength range. In the prototypical ULIRG Arp~220, for example, the \twco spectral line energy distribution (SLED) is now complete up to $J=13-12$, and dozens of lines of species including [CI], \hto, HCN, and OH and their ions and/or isotopologues have been seen in emission and absorption \citep{rangwala11,gonzalezalfonso12}. With a sufficiently wide range of transitions observed, some degeneracies inherent in excitation modeling can be eliminated, and simple geometric models can be constructed to reproduce all the observed spectral features. Many of these lines can only be observed in local sources from space, making direct comparison between local starbursts and their high-$z$ counterparts challenging. At high redshift ($z \gtrsim 1$), observations of \twco and various far-IR fine structure lines have become increasingly common (for a recent review, see \citealt{carilli13}), with well-sampled CO SLEDs available for an increasing number of objects \citep[e.g.,][]{weiss07,bradford09,riechers13}. Observations of other molecular species, on the other hand, remain rare due to the faintness of their lines. Thus far, detections of fainter molecular lines have been largely confined to extraordinarily luminous, highly gravitationally magnified quasar host galaxies, and only four objects have been detected in multiple molecules or isotopes besides \twco: H1413+117 (the ``Cloverleaf'' quasar), \apm, a highly magnified quasar host, \smm (the ``Cosmic Eyelash''), a cluster-lensed ULIRG, and HFLS\,3, a \textit{Herschel}-selected starburst at $z=6.3$. Specific observations of these objects will be discussed in more detail below. Observations of the interstellar medium of high-$z$ galaxies are being revolutionized with the beginning of science operations by the Atacama Large Millimeter/submillimeter Array (ALMA). In particular, ALMA has already been used in Cycle~0 to conduct a blind \twco-based redshift survey of 26 high-$z$ star-forming galaxies \citep{vieira13,weiss13}, with spectral features seen in $\sim90$\% of the sample. Such redshift searches operate by scanning through large swaths of frequency space looking for bright lines of \twco, [CI], and/or \hto. As a byproduct, they also offer the opportunity to detect emission from a variety of species whose transitions lie in and amongst the brighter lines. In contrast to previous, narrow-bandwidth targeted studies of specific transitions, blind redshift searches offer information on \textit{all} transitions which fall within the rest-frame frequency range observed, allowing future follow-up observations to focus on detectable species. Here, we present the detection and analysis of several lines of \thco, HCN, HNC, \hcop, and the CN radical in a stacked spectrum of 22 gravitationally lensed DSFGs spanning $z = 2 - 5.7$ discovered by the SPT. The stacked spectrum was created utilizing the ALMA~3\,mm spectra obtained as part of the blind redshift search presented in \citet{weiss13}, and spans 250--770\,GHz (0.39--1.2\,mm) in the rest frame. This stacked spectrum represents a first attempt at quantifying the relative strengths of a host of faint lines in high-redshift DSFGs and addresses the typical ISM conditions which give rise to such lines. 
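As an illustration of the stacking approach just described (a minimal sketch of our own devising -- the actual scaling and weighting scheme is defined in \S\ref{stacking}, and the per-source normalizations here are placeholders):
\begin{verbatim}
import numpy as np

def stack_rest_frame(freqs_obs, fluxes, redshifts, scales, grid):
    """Shift each spectrum to its rest frame, scale it, and average.

    freqs_obs : list of observed-frequency arrays (GHz, ascending)
    fluxes    : list of matching flux-density arrays
    redshifts : one redshift per source
    scales    : per-source normalizations (placeholders here)
    grid      : common rest-frame frequency grid (GHz)
    """
    total = np.zeros_like(grid)
    nsrc = np.zeros_like(grid)
    for nu, f, z, s in zip(freqs_obs, fluxes, redshifts, scales):
        nu_rest = nu * (1.0 + z)         # nu_rest = nu_obs * (1 + z)
        shifted = np.interp(grid, nu_rest, f * s, left=np.nan, right=np.nan)
        good = np.isfinite(shifted)
        total[good] += shifted[good]
        nsrc[good] += 1.0
    return total / np.maximum(nsrc, 1.0)
\end{verbatim}
Shifting by $\nu_{\rm rest} = \nu_{\rm obs}(1+z)$ before averaging is what turns 22 individual 3\,mm spectra at different redshifts into continuous rest-frame coverage from 250--770\,GHz.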
The paper is organized as follows: in \S\ref{obs}, we briefly describe the sample selection and observations used in the construction of the stacked spectrum. In \S\ref{stacking}, we describe the method used to scale and stack the spectra of individual objects. We present the combined spectrum and analyze the average conditions of the ISM in these objects in \S\ref{results}, and conclude by comparing our derived properties to those of other high-redshift systems, constraining the average emission from individually undetected molecules, and discussing alternatives to pure collisional excitation in \S\ref{discussion}. Throughout this work we adopt a WMAP9 cosmology, with ($\Omega_m, \; \Omega_{\Lambda},\; \mathrm{H}_0) = (0.286,\; 0.713,\; 69.3$ km\,s$^{-1}$\,Mpc$^{-1}$) \citep{hinshaw12}. | 14 | 3 | 1403.1667 |
1403 | 1403.4873_arXiv.txt | The small satellites of the Pluto system (Styx, Nix, Kerberos, and Hydra) have very low surface escape velocities, and impacts should therefore eject a large amount of material from their surfaces. We show that most of this material then escapes from the Pluto system, though a significant fraction collects on the surfaces of Pluto and Charon. The velocity at which the dust is ejected from the surfaces of the small satellites strongly determines which object it is likely to hit, and where on the surfaces of Pluto and Charon it is most likely to impact. We also show that the presence of an atmosphere around Pluto eliminates most particle size effects and increases the number of dust impacts on Pluto. In total, Pluto and Charon may have accumulated several centimeters of small-satellite dust on their surfaces, which could be observed by the New Horizons spacecraft. | Pluto and its satellites form a uniquely complex dynamical system. Pluto and Charon are a true binary system with a mass ratio of 8.6 ($\pm$0.5) to 1, and a center of mass 890 km (0.77 Pluto radii) above the surface of Pluto \citep{Tholen2008}. Around them orbit four known small satellites, Nix and Hydra \citep{Weaver2006}, Kerberos \citep[formerly P4,][]{Showalter2011}, and Styx \citep[formerly P5,][]{Showalter2012}. The small satellites follow near-circular orbits centered on the Pluto-Charon center of mass and coplanar with the Pluto-Charon orbital plane. These orbits are stable, though only a slight change in eccentricity or inclination would lead to chaotic trajectories \citep{Youdin2012}. It is therefore not trivial to predict the behavior of dust in the Pluto system. Impacts onto Jupiter's innermost four satellites (from inner to outer, Metis, Adrastea, Amalthea, and Thebe) produce faint rings of short-lived dust particles \citep{Burns1999}. This dust is typically ejecta from impacts of interplanetary dust particles (IDPs) onto the inner satellites, and has a mean grain size of 5 $\mu{\rm m}$. Observations from the \textit{Spitzer} space telescope have shown that Saturn hosts a similar (but much larger) impact-generated ring, formed from material ejected from the retrograde irregular satellite Phoebe \citep{Verbiscer2009}. The Phoebe ring extends inward from Phoebe's orbit to at least 128 Saturn radii, and likely as far in as the orbit of Saturn's outermost regular satellite, Iapetus, at 60 Saturn radii \citep{Verbiscer2009}. The ring material is swept up by Iapetus, which accumulates approximately 20 cm of material, globally averaged, over the age of the solar system \citep{Tamayo2011}. Thermal segregation, driven by the accumulation of dark Phoebe material on the leading hemisphere in contrast with the relatively clean water ice of the trailing hemisphere, explains the wildly different albedos of the two hemispheres \citep{Spencer2010}. The transfer of dust between satellites has also been used to explain the surfaces of other icy satellites. \citet{Bottke2013} suggested that the dark albedos of the Galilean satellites are a result of dust ejected from the small irregular satellites of Jupiter. \citet{Schenk2011} trace the outer edges of the E-ring generated by Enceladus's plumes and find that E-ring dust could be responsible for equatorial features seen on Rhea.
\cite{Tamayo2013} showed that the dust influx from irregular satellites may cause the leading/trailing color asymmetry seen on Uranus's regular satellites. All icy satellites should therefore be considered in the context of their dust environment. \citet{Thiessenhusen2002} suggested that Pluto and Charon are surrounded by a cloud of dust ejected from their surfaces, similar to Jupiter's impact-generated rings. This was difficult to show for dust from Pluto or Charon, though, as ejecta would need to retain a significant fraction of the impact velocity to escape from their surfaces. \citet{Stern2006} then suggested that impact ejecta dust from the newly discovered small satellites Nix and Hydra could produce temporary dust rings. The small satellites have much lower surface escape velocities, and so could potentially eject much more dust into the system. \citet{Steffl2007} measured an upper limit on any dust in the system, which they used to limit the median dust particle lifetime to approximately 900 years. \citet{Poppe2011} and \citet{dosSantos2013} both performed numerical simulations of dust particles ejected from Nix and Hydra and perturbed by solar radiation pressure. \citet{dosSantos2013} showed that only a small fraction of dust particles smaller than 10 $\mu{\rm m}$ survive for more than 100 years, while \citet{Poppe2011} found much longer lifetimes for particles larger than 10 $\mu{\rm m}$. \citet{Stern2009} suggested that this dust may also be swept up by the other objects in the system. \citet{Poppe2011} and \citet{dosSantos2013} then showed that the small satellite ejecta generally transfers inward, producing secondary impacts on Pluto and Charon. Here, we reproduce their simulations, but focus on the impacts rather than the long-lived dust trajectories. In addition to estimating the fraction of small satellite ejecta which impacts Pluto and Charon, we also estimate the spatial distribution of those impacts on Pluto and Charon. Over time, the small satellites could have transferred a considerable amount of material to the surfaces of Pluto and Charon. We show that the spatial distribution of those impacts is sufficiently distinctive that it might be observable by NASA's \textit{New Horizons} spacecraft when it flies past Pluto and Charon. We also include for the first time the effect of air drag from Pluto's atmosphere, and show that it reduces the number of long-period dust particles interior to Charon's orbit. Finally, we directly compare these ejecta dust simulations to the trajectories of interplanetary dust particles through the system. | Through dynamical simulations, we have shown that dust ejected from the small satellites of the Pluto system can impact the surfaces of either Pluto or Charon. Dust ejected at lower velocities (\textless150 m/s) will preferentially impact Charon, but will also impact the trailing hemisphere of Pluto. Dust ejected at higher velocities (\textgreater150 m/s) is more likely to impact Pluto, especially on the anti-Charon hemisphere. High velocity interplanetary dust particles (\textgreater1 km/s) behave the same as the high velocity ejecta. Charon receives more of the small satellite dust overall than Pluto, and those impacts are primarily on the leading hemisphere. The low-velocity small satellite ejecta impacts Pluto in locations that correspond well to the dark albedo features observed in Pluto's equatorial regions, implying that the dust may help darken those areas.
Observations by the \textit{New Horizons} spacecraft may therefore show these regions painted with dark small satellite dust. | 14 | 3 | 1403.4873 |
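For a concrete sense of the escape-velocity hierarchy that drives this dust transfer, the surface escape speeds follow from $v_{\rm esc}=\sqrt{2GM/R}$. A minimal sketch in Python (the Nix-like radius and density are assumed, illustrative values, not those adopted in the paper):
\begin{verbatim}
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def v_escape(mass_kg, radius_m):
    # Surface escape velocity v = sqrt(2 G M / R), in m/s.
    return math.sqrt(2.0 * G * mass_kg / radius_m)

print("Pluto   : %4.0f m/s" % v_escape(1.303e22, 1.19e6))
print("Charon  : %4.0f m/s" % v_escape(1.586e21, 6.06e5))

# A Nix-like small satellite: radius and density are assumed values
r_nix = 2.0e4                                   # 20 km radius
m_nix = 4.0 / 3.0 * math.pi * r_nix**3 * 1.0e3  # icy density, 1 g/cm^3
print("Nix-like: %4.0f m/s" % v_escape(m_nix, r_nix))
\end{verbatim}
This gives roughly 1200 m/s for Pluto, 590 m/s for Charon, and only $\sim$15 m/s for the Nix-like body: impact ejecta easily leaves the small satellites while still moving slowly relative to the Pluto--Charon system, which is why the $\sim$150 m/s ejection-velocity threshold discussed above decides where the dust ends up.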
1403 | 1403.4222_arXiv.txt | {We develop a methodology for estimating parity-odd bispectra in the cosmic microwave background (CMB). This is achieved through the extension of the original separable modal methodology to parity-odd bispectrum domains ($\ell_1 + \ell_2 + \ell_3 = {\rm odd}$). Through numerical tests of the parity-odd modal decomposition with some theoretical bispectrum templates, we verify that the parity-odd modal methodology can successfully reproduce the CMB bispectrum, without numerical instabilities. We also present simulated non-Gaussian maps produced by modal-decomposed parity-odd bispectra, and show the consistency with the exact results. Our new methodology is applicable to all types of parity-odd temperature and polarization bispectra.} | Bispectrum estimation of the cosmic microwave background (CMB) is one of the most powerful ways to explore the non-Gaussianity of primordial fluctuations. While standard single-field slow-roll inflation predicts a tiny amount of non-Gaussianity (NG) of the primordial curvature perturbations~\cite{Acquaviva:2002ud,Maldacena:2002vr}, this is no longer true for a large number of extensions of the simplest inflationary paradigm (see e.g., refs.~\cite{Bartolo:2004if, Komatsu:2010hc} and references therein). Measurements of primordial NG thus provide a stringent test of the standard single-field slow-roll scenario, and allow us to place tight constraints on alternative models. The most stringent constraints on primordial NG to date have been obtained through bispectrum measurements of \textit{Planck} temperature data~\cite{Ade:2013ydc}. Future analyses, including correlations with E-mode polarization (and thus additional CMB bispectra of the type $\Braket{TTE}$, $\Braket{TEE}$ and $\Braket{EEE}$), will bring further improvements to the current observational bounds \cite{Babich:2004yc, Yadav:2007rk}. All CMB NG searches so far have been focused on parity-even bispectra, in which the condition $\ell_1 + \ell_2 + \ell_3 = {\rm even}$ is enforced. This is because, as long as we consider the bispectrum of primordial curvature perturbations, parity cannot be broken, due to the spin-0 nature of the scalar mode. On the other hand, several interesting models predict bispectra generated by vector or tensor perturbations. In these cases the parity-even condition might have to be removed, since the vector or tensor modes can create parity-odd NG due to their spin dependence. For example, Early Universe models with some parity-violating or parity-odd sources, such as the gravitational and electromagnetic Chern-Simons actions \cite{Maldacena:2011nz, Soda:2011am, Barnaby:2012xt, Zhu:2013fja, Cook:2013xea}, or large-scale helical magnetic fields \cite{Caprini:2003vc, Kahniashvili:2005xe}, generate NG with sizable CMB bispectrum signals in parity-odd configurations ($\ell_1 + \ell_2 + \ell_3 = {\rm odd}$) \cite{Kamionkowski:2010rb, Shiraishi:2011st, Shiraishi:2012sn, Shiraishi:2013kxa}. Vector or tensor modes also induce B-mode polarization. B-mode bispectra can thus be useful to probe tensor NG \cite{Shiraishi:2013vha, Shiraishi:2013kxa}. At the same time, the parity-odd property of the B-mode field can generate $\ell_1 + \ell_2 + \ell_3 = {\rm odd}$ configurations in $\Braket{TTB}$, $\Braket{TEB}$, $\Braket{EEB}$ and $\Braket{BBB}$ bispectra, even when primordial NG has even parity. B-mode bispectra are also generated via secondary CMB lensing effects \cite{Lewis:2011fk}.
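The origin of this selection rule can be made explicit with a standard result, summarized here for reference. Under a parity transformation, the temperature and E-mode multipoles transform as $a_{\ell m} \rightarrow (-1)^{\ell} a_{\ell m}$, while $a^{B}_{\ell m} \rightarrow (-1)^{\ell+1} a^{B}_{\ell m}$. For statistically isotropic spin-0 (scalar) sources, the angular part of the bispectrum reduces to the Gaunt integral, which is proportional to the Wigner symbol \begin{eqnarray} \left( \begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ 0 & 0 & 0 \end{array} \right), \end{eqnarray} and this symbol vanishes unless $\ell_1 + \ell_2 + \ell_3 = {\rm even}$. Parity-odd configurations therefore require spin-dependent (vector or tensor) or parity-violating sources, or an odd number of B-mode fields in the correlator.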
These theoretical predictions motivate us to investigate the CMB signals in $\ell_1 + \ell_2 + \ell_3 = {\rm odd}$ triangles using observational data; hence, in this paper, we want to develop a general framework for parity-odd bispectrum estimation. CMB bispectrum estimation is generally aimed at measuring the so-called non-linear parameter $f_{\rm NL}$. This can be done optimally by means of the following estimator \cite{Komatsu:2008hk}: \begin{eqnarray} {\cal E} = \frac{1}{N^2} \left[ \prod_{n=1}^3 \sum_{\ell_n m_n} \right] \left( \begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ m_1 & m_2 & m_3 \end{array} \right) B_{\ell_1 \ell_2 \ell_3} \left[ \left( \prod_{n=1}^3 \frac{a_{\ell_n m_n}^{\rm O}}{C_{\ell_n}} \right) - 6 \frac{C_{\ell_1 m_1, \ell_2 m_2}}{C_{\ell_1} C_{\ell_2}} \frac{a_{\ell_3 m_3}^{\rm O}}{C_{\ell_3}} \right] ~, \label{eq:estimator_def} \end{eqnarray} where $B_{\ell_1 \ell_2 \ell_3}$ is a theoretical template of the CMB angle-averaged bispectrum, $a_{\ell m}^{\rm O}$ are the observed CMB multipoles, $C_\ell$ is the CMB power spectrum, and $C_{\ell_1 m_1, \ell_2 m_2} = \Braket{a_{\ell_1 m_1}^{\rm G} a_{\ell_2 m_2}^{\rm G}}$ is the covariance matrix, obtained from simulated Gaussian maps $a_{\ell m}^{\rm G}$. Finally, \begin{eqnarray} N^2 \equiv \sum_{\ell_1 \ell_2 \ell_3} \frac{B_{\ell_1 \ell_2 \ell_3}^2 }{C_{\ell_1} C_{\ell_2} C_{\ell_3}}~, \label{eq:normalization} \end{eqnarray} is a normalization factor. The estimated $f_{\rm NL}$ parameter basically measures the degree of correlation between the theoretical template under study and the three-point function extracted from the data. The input $B_{\ell_1 \ell_2 \ell_3}$ and $C_\ell$, as well as the Monte Carlo simulations used for covariance matrix calculations, include all realistic experimental features such as instrumental beam, mask and noise. Note that the form of the estimator written above is derived under the ``diagonal covariance approximation'', i.e., we are replacing the general $C^{-1}$ filtering of the multipoles, where $C$ is the (in realistic experimental conditions non-diagonal) $a_{\ell m}$ covariance matrix, with a much simpler ${1/C_\ell}$ filtering. This in principle implies some loss of optimality. In the context of \textit{Planck} data analysis, it was however shown~\cite{Ade:2013ydc} that it is possible in practice to retain optimality using the simplified estimator above, provided the CMB map is pre-filtered by means of a recursive inpainting technique. For this reason, we will work in the diagonal covariance approximation throughout the rest of this work (in any case all of our derivation readily applies to the full-covariance expressions, by simply making the replacement ${a_{\ell m}/C_\ell} \rightarrow (C^{-1} a)_{\ell m}$). One important and well-known practical issue with the estimator of eq.~(\ref{eq:estimator_def}) is that its brute-force numerical computation leads to ${\cal O}(\ell_{\rm max}^5)$ operations. This requires huge CPU time and, for the large $\ell_{\rm max}$ achieved in current and forthcoming observations, it makes a direct approach of this kind totally unfeasible. A similar issue also appears when simulating NG maps with a given bispectrum, using the following formula originally introduced in ref.~\cite{Smith:2006ud}: \begin{eqnarray} a_{\ell_1 m_1}^{\rm NG} = \frac{1}{6} \left[ \prod_{n=2}^3 \sum_{\ell_n m_n} \frac{a_{\ell_n m_n}^{{\rm G}*}}{C_{\ell_n}} \right] \left( \begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ m_1 & m_2 & m_3 \end{array} \right) B_{\ell_1 \ell_2 \ell_3}~.
\label{eq:almNG_def} \end{eqnarray} Such numerical issues can be solved if the theoretical bispectrum is given by a separable form in terms of $\ell_1$, $\ell_2$ and $\ell_3$. Using a general technique originally introduced in ref.~\cite{Komatsu:2003iq}, and often dubbed the KSW method, the estimator can then be written in terms of products of filtered maps in pixel space, thus massively reducing the computational cost to ${\cal O}(\ell_{\rm max}^3)$ operations. For the parity-even case, many bispectra can be directly written in separable form. In particular, the so-called local, equilateral and orthogonal bispectra, encompassing a vast number of NG scenarios, can be described in terms of separable templates. The KSW approach is directly applicable in this case. On the other hand, the parity-odd bispectra under study here originate from complicated spin and angle dependences in the vector or tensor NG, or from lensing effects, and hence are generally given by a complex non-separable form. A natural way to circumvent this issue is to adopt the separable modal methodology, originally developed by \cite{Fergusson:2009nv, Fergusson:2010dm, Fergusson:2011sa} for parity-even templates, extending it to parity-odd bispectrum domains. In the modal approach, a general non-separable bispectrum shape is expanded in terms of a suitably constructed, complete basis of separable bispectrum templates in harmonic or Fourier space. Provided we use enough templates in the expansion (with convergence speed depending on the choice of basis and the shape of the bispectrum to expand), we can always reproduce the starting template with as high a degree of accuracy as needed, and the new expanded shape will be separable by construction. In order to extend the methodology to parity-odd bispectra, we will have to introduce a new weight function to account for spin dependence, and redefine a reduced bispectrum which is not restricted by $\ell_1 + \ell_2 + \ell_3 = {\rm even}$. After obtaining analytical expressions for our parity-odd estimator, we will numerically implement it for the three Early Universe models described in \cite{Shiraishi:2011st, Shiraishi:2012sn, Shiraishi:2013kxa}. This will allow us to confirm that the modal decomposition can be successfully applied to parity-odd bispectra. We also use the modal technique to produce NG maps including the bispectra under study. This paper is organized as follows. In the next section, we summarize the original modal decomposition for the parity-even case. In section~\ref{sec:modal_odd}, we extend it to parity-odd models. In section~\ref{sec:example}, we discuss the numerical implementation of the method, showing several applications, and we draw our conclusions in the final section. | Despite the fact that there are several theoretical primordial scenarios predicting the existence of parity-odd bispectra, no observational constraint on this type of NG has been placed so far. Generally, parity-odd bispectra are written in non-separable form, and this has made data analysis impractical, due to large CPU-time requirements. This paper has developed a new framework for parity-odd CMB bispectrum estimation by extending the separable modal decomposition methodology, already developed and used for parity-even analyses, to parity-odd domains.
The analytical extension to the case of interest has been obtained by defining a new reduced bispectrum and a new inner product weight function, in such a way as to account for spin dependence, and to change selection rules in order to include $\ell_1 + \ell_2 + \ell_3 = {\rm odd}$ configurations. In this way, we can achieve separability of parity-odd NG estimators and obtain a fast NG map-making algorithm, in strict analogy with the parity-even modal expansion procedure. Our parity-odd modal decomposition has been numerically implemented and tested by expanding temperature bispectra predicted by several parity-odd Early Universe models. We have checked that the numerical algorithm is stable and achieves convergence using a reasonable number of templates in a few CPU-hours. The exact convergence efficiency depends on the bispectrum shape and the type of modal eigenfunctions. Using decomposed separable bispectra, we have also produced NG simulations and checked the consistency with the exact results from a slow brute-force approach. As expected, we get massive computational gains when working with separable modal bispectra. The algorithm for bispectrum estimation developed in this paper is applicable to all types of parity-odd bispectra (i.e., bispectra enforcing the condition $\ell_1 + \ell_2 + \ell_3 = {\rm odd}$). Our numerical approach so far has included only temperature bispectra. Future interesting applications will include actual estimation of parity-odd NG from CMB data, and the extension of our method to polarized bispectra, which are generally predicted in parity-odd scenarios, while taking care of biases due to experimental systematics and imperfect sky coverage. | 14 | 3 | 1403.4222
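For orientation, the separability gain that underlies the modal machinery above can be sketched in the parity-even case. If the reduced bispectrum factorizes as $b_{\ell_1 \ell_2 \ell_3} = X_{\ell_1} Y_{\ell_2} Z_{\ell_3} + {\rm perms.}$, the Gaunt integral collapses the cubic part of the estimator into a pixel-space integral over products of filtered maps, \begin{eqnarray} M_X(\hat{n}) = \sum_{\ell m} X_{\ell} \frac{a_{\ell m}}{C_{\ell}} Y_{\ell m}(\hat{n}), \qquad {\cal E}_{\rm cubic} \propto \int d^2 \hat{n} \, M_X(\hat{n}) M_Y(\hat{n}) M_Z(\hat{n}), \end{eqnarray} which is computable with fast spherical harmonic transforms in ${\cal O}(\ell_{\rm max}^3)$ operations; the parity-odd case generalizes this with the modified weight function and reduced bispectrum introduced above.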
1403 | 1403.3402_arXiv.txt | \noindent We investigate potential systematic effects in constraining the amplitude of primordial fluctuations $\sigma_8$ arising from the choice of halo mass function in the likelihood analysis of current and upcoming galaxy cluster surveys. We study the widely used $N$-body simulation fit of Tinker et al. (T08) and, as an alternative, the recently proposed analytical model of Excursion Set Peaks (ESP). We first assess the relative bias between these prescriptions when constraining $\sigma_8$ by sampling the ESP mass function to generate mock catalogs and using the T08 fit to analyse them, for various choices of survey selection threshold, mass definition and statistical priors. To assess the level of absolute bias in each prescription, we then repeat the analysis on dark matter halo catalogs in $N$-body simulations designed to mimic the mass distribution in the current data release of Planck SZ clusters. This $N$-body analysis shows that using the T08 fit \emph{without} accounting for the scatter introduced when converting between mass definitions (alternatively, the scatter induced by errors on the parameters of the fit) can systematically \emph{over-estimate} the value of $\sigma_8$ by as much as $2\sigma$ for current data, while analyses that account for this scatter should be close to unbiased in $\sigma_8$. With an increased number of objects as expected in upcoming data releases, regardless of accounting for scatter, the T08 fit could over-estimate the value of $\sigma_8$ by $\sim1.5\sigma$. The ESP mass function leads to systematically more biased but comparable results. A strength of the ESP model is its natural prediction of a weak non-universality in the mass function which closely tracks the one measured in simulations and described by the T08 fit. We suggest that it might now be prudent to build new unbiased ESP-based fitting functions for use with the larger datasets of the near future. | \label{sec:intro} \noindent Cosmology is now a precision science. The wealth of cosmological data from measurements of the Cosmic Microwave Background (CMB), Large Scale Structure and related probes is well described by the simple $6$-parameter Lambda-Cold dark matter ($\Lambda$CDM) model, whose parameters are now known with unprecedentedly small errors. The last decade in particular has witnessed a ten-fold increase in precision in recovering the values of these parameters \cite{jaffe01,planck13-XVI-cosmoparams}. Cosmological analyses have reached the stage where the error budget on parameter constraints is starting to be dominated by systematic rather than statistical uncertainties. Understanding these systematic effects -- in both data analysis as well as theoretical modeling -- is a pressing challenge, particularly in light of assessing the importance of tensions when constraining a given parameter from different data sets and complementary probes. We focus here on cosmological constraints from the abundance of galaxy clusters (see \cite{borgani08,aem11} for reviews). The sensitivity of cluster number counts to parameters such as $\sig_8$ (the strength of the primordial density fluctuations) and $\Om_{\rm m}$ (the fractional budget of non-relativistic matter) means that these remain a competitive probe even today \cite{hhm01,bw03,mb06,sahlen+09,chf09,fcmc11,weinberg+13}. 
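Schematically, such constraints follow from comparing the observed counts with the model prediction \begin{eqnarray} N_{\rm exp} = \int {\rm d}z \, \frac{{\rm d}V}{{\rm d}z} \int {\rm d}M \, \chi(M,z) \, \frac{{\rm d}n}{{\rm d}M}(M,z), \end{eqnarray} where $\chi$ is the survey selection function and ${\rm d}n/{\rm d}M$ the halo mass function, typically through an unbinned Poisson (Cash-type) likelihood of the form $\ln {\cal L} = \sum_i \ln \left[ {\rm d}^2 N / ({\rm d}z \, {\rm d}M) \right]_i - N_{\rm exp}$; this standard construction is quoted here only for reference, with the specific pipeline choices described in the sections below.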
Recent results from the Planck Collaboration \cite{planck13-XX-SZcosmo} suggest that there is a $2$-$3\sigma$ tension between the value of $\sig_8$ recovered from measurements of the CMB and from galaxy cluster counts determined using the Sunyaev-Zel'dovich (SZ) effect. It has been suggested that this tension could arise due to systematic choices in the CMB data analysis pipeline \cite{sfh13}, or due to mis-calibration of the mass-observable relation \cite{planck13-XX-SZcosmo,cbm14,vdL+14}, or even through more non-standard effects such as those due to massive neutrinos \cite{planck13-XX-SZcosmo,hh13,bm13} (although see \cite{castorina+13}). In this paper we investigate another potential source of systematic biases, namely, the halo mass function. The complexity of the nonlinear gravitational effects that lead to the formation of gravitationally bound, virialised `halos' has meant that, despite considerable analytical progress over the last several years, the gold standard for estimating the halo mass function continues to be measurements in numerical simulations. In addition to accounting for this complexity, simulations also allow for calibrations of the mass function for the various choices of mass definition that are suited to the specific observational probe (such as SZ flux/X-ray luminosity/optical richness) rather than being restricted to theoretical approximations and assumptions such as spherical or ellipsoidal collapse (see \cite{bk09} for a review). However, the nature of parameter recovery through likelihood maximisation or Bayesian techniques means that it is crucial to use analytical approximations that accurately capture the effect of cosmology on the mass function. Since it is unfeasible to run an $N$-body simulation for every combination of parameter values, the standard compromise has been the use of analytical fits to the results of simulations \cite{st99,jenkins+01,Tinker08,watson+13} (although, in principle, it should be possible to directly interpolate between simulations along the lines of \cite{heitmann+14,kwan+13}). As we emphasize below, these fits are routinely used in analyses of cluster abundances \emph{without} accounting for the error covariance matrices of the fit parameters \cite{vanderlinde+10,benson+13,hasselfield+13,reichardt+13,planck13-XX-SZcosmo}, and this opens the door to potential systematic biases \cite{ce10,bhattacharya+11,pmw13}. In the following we will set up a pipeline for analysing mock cluster catalogs, including various choices of survey selection threshold, mass-observable relation and priors on cosmological parameters, with a focus on the effect of the halo mass function model. Our catalogs will be based on both Monte Carlo sampling of analytical mass functions as well as halos identified directly in $N$-body simulations of CDM, and will allow us to explore the interplay between the nonlinear systematics inherent in the chosen mass function model and the other ingredients mentioned above. Although we do not explicitly model baryonic effects (these are expected to systematically alter the mass function at the $10$-$20\%$ level; see, e.g., \cite{stanek+10,bp13,martizzi+14,cbm14,velliscig+14}), our examples below will include biased mass-observable relations that show similar features. The paper is organised as follows. In Section~\ref{sec:analytical} we discuss the main analytical approximations used in typical cluster analyses, namely, the cluster likelihood and the halo mass function. 
We will focus on two prescriptions for the latter, namely the $N$-body fits of \citet{Tinker08} and the theoretical Excursion Set Peaks (ESP) prescription of \cite{psd13}. In Section~\ref{sec:montecarlo} we perform an in-depth statistical comparison of the $N$-body fits and the ESP mass function by using the former to analyse Monte Carlo mock catalogs generated by sampling the latter. In Section~\ref{sec:Nbody} we repeat the analysis using both these prescriptions to analyse catalogs built from halos identified in $N$-body simulations of CDM that were designed to mimic the mass distribution in the current data release of Planck SZ clusters. We conclude in Section~\ref{sec:conclude}. Appendix~\ref{app:masses} gives various technical details regarding mass calibration issues while Appendix~\ref{app:lightcones} describes our procedure for generating lightcones from the $N$-body halos. We assume a flat $\Lambda$CDM cosmology with Gaussian initial conditions. Unless stated otherwise, for our fiducial cosmology we set the fraction of total matter $\Om_{\rm m}=0.315$, the baryonic fraction $\Om_{\rm b}=0.0487$, the Hubble constant $H_0=100h\,{\rm km/s/Mpc}$ with $h=0.673$, the scalar spectral index $n_s=0.96$ and the linearly extrapolated r.m.s. of matter fluctuations in spheres of radius $8\Mpc$, $\sig_8=0.83$, which are compatible with the analysis of Planck CMB data \cite{planck13-XVI-cosmoparams}. We use the transfer function prescription of \citet{eh98} for all our calculations. We denote the natural logarithm of $x$ by $\ln(x)$ and the base-10 logarithm by $\log(x)$. | \label{sec:conclude} \noindent The quality and quantity of cosmological data are now at the stage where systematic effects at the few per cent level can potentially be mistaken for new physics \cite{planck13-XX-SZcosmo,sfh13,hh13,bm13}. In this paper we focused on cosmological analyses using galaxy clusters; these involve several ingredients amongst which the assumed halo mass function plays a key role. We have presented an in-depth statistical analysis to test the performance of two analytical mass function prescriptions, the \citet[][T08]{Tinker08} fit to $N$-body simulations and the Excursion Set Peaks (ESP) theoretical model of \citet{psd13}. Such an analysis is particularly timely in light of recent results showing a $2$-$3\sig$ tension between the values of $\sig_8$ recovered from cluster analyses such as those using the Planck SZ catalog \cite{planck13-XX-SZcosmo} or data from the SPT \cite{reichardt+13}, and the Planck CMB analysis \cite{planck13-XVI-cosmoparams}. Our basic strategy involved generating mock cluster catalogs and running them through a likelihood analysis pipeline that mimics what is typically used for real data. This includes the conversion between the observable and the true halo mass, which we modelled using two mass definitions $m_{\rm 200b}$ and $m_{\rm 500c}$, treating one as the true mass and the other as the observable and accounting for the relative scatter and mean offset between the two. We first used Monte Carlo catalogs generated assuming the ESP mass function to be the `truth', which we analysed using the T08 mass function. This allowed us to explore statistical differences between these mass functions for various choices of observable-mass relations, survey selection criteria and priors on parameters degenerate with $\sig_8$. 
For example, we showed that although these mass functions agree at the $\sim10\%$ level at any given redshift, for a Planck-like survey the constraints on $\sig_8$ recovered from each could be different by as much as $2\sig$ (see Section~\ref{sec:mc:sub:ESPvsTinker} and Table~\ref{tab:mocks} for details). While we used survey selection thresholds (limiting masses) inspired by Sunyaev-Zel'dovich surveys such as Planck and SPT, our results are also relevant for other surveys with similar limiting masses as a function of redshift. We then repeated the analysis with mock Planck-like cluster catalogs built using halos identified in $N$-body simulations and organised into lightcones. This is an important consistency check for the T08 mass function fit which is routinely used in galaxy cluster analyses \emph{without} accounting for the errors inherent in the fit parameter values, which could have significant effects due to scatter across the mass selection threshold. Indeed, we saw that ignoring the intrinsic scatter between $m_{\rm 500c}$ and $m_{\rm 200b}$ -- which is similar to (but likely more extreme than) ignoring the scatter due to parameter errors -- leads to an \emph{over-estimation} of the value of $\sig_8$ by as much as $2\sig$ (see the columns marked ``T08 $m_{\rm 500c}$ (no scatter)'' in Table~\ref{tab:Nbody}, and the discussion towards the end of Section~\ref{sec:Nbody:sub:analysis}). When the intrinsic scatter is accounted for, the significance of this bias is considerably reduced and the T08 analysis becomes essentially unbiased. However, we saw that increasing the number of clusters analysed (by switching from $m_{\rm 500c}$ to $m_{\rm 200b}$ while using the same selection threshold) leads to similar values of the absolute bias $\bar\sig_8-\sig_{8,{\rm fid}}$ while obviously decreasing the typical width $\Sigma_{\sig_8}$ of the $\sig_8$ posterior, thereby leading once again to a systematic over-estimation of $\sig_8$. Moreover, this $m_{\rm 200b}$ analysis gives a much cleaner comparison between the simulations and the T08 fit, since it avoids making any of the assumptions regarding mass conversion discussed in Appendix~\ref{app:masscal}. Similar trends for the absolute bias and significance are obtained when the selection threshold is altered to allow more objects at higher redshifts (see Tables~\ref{tab:Nbody} and~\ref{tab:Nbody-absbias}, and the discussion in Sections~\ref{sec:Nbody:sub:analysis} and~\ref{sec:Nbody:sub:results}). We concluded that (a) the T08 fit -- \emph{provided one accounts for the scatter when converting from $m_{\Delta{\rm b}}$ to $m_{\rm 500c}$} -- should be close to unbiased in $\sig_8$ for a current Planck-like survey and (b) with an increased number $n_{\rm clusters}$ as might be expected from upcoming Planck data releases, the T08 fit could lead to $\sig_8$ values biased high at $>1.5\sig$, which would exacerbate the current tension between cluster analyses and the Planck CMB results \cite{planck13-XVI-cosmoparams}. Additionally, we analysed the $N$-body based catalogs with the ESP mass function, and found that it leads to comparable but systematically more biased results than the T08 fit. The ESP model, however, was a proof-of-concept example presented by \citet{psd13} with minor tuning and was not intended for high-performance precision cosmology. As we discussed, one of the strongest features in this model is the natural prediction of mild non-universality in the mass function with no free parameters. 
The T08 fit, by contrast, needed several parameters specifically to describe this behaviour of the mass function, since the basic template for this fit was the \emph{universal} prediction of the original excursion set approach. In light of our findings above, this suggests that it might now be more economical to build new fitting functions based on the non-universal ESP prescription instead, with the aim of obtaining an analytical function that remains unbiased even in the face of the better-quality data that will soon be available. Conceivably, such a fit could be tailored for the high-mass regime relevant for specific cluster surveys. We leave such a calibration to future work. | 14 | 3 | 1403.3402
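For reference, the T08 fitting form at the centre of this discussion expresses the mass function as \begin{eqnarray} \frac{{\rm d}n}{{\rm d}M} = f(\sigma) \, \frac{\bar{\rho}_{\rm m}}{M} \, \frac{{\rm d}\ln \sigma^{-1}}{{\rm d}M}, \qquad f(\sigma) = A \left[ \left( \frac{\sigma}{b} \right)^{-a} + 1 \right] e^{-c/\sigma^{2}}, \end{eqnarray} with the parameters $A$, $a$, $b$ and $c$ calibrated against $N$-body simulations as functions of the halo overdensity $\Delta$ and of redshift; it is precisely this explicit redshift dependence of the fitted parameters that encodes the departure from universality referred to above.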
1403 | 1403.3919_arXiv.txt | {Relic gravitational waves (RGWs) leave well-understood imprints on the anisotropies in the temperature and polarization of cosmic microwave background (CMB) radiation. In the TT and TE information channels, which have been well observed by the WMAP and Planck missions, RGWs compete with density perturbations mainly at low multipoles. It is dangerous to include high-multipole CMB data in the search for gravitational waves, as the spectral indices may not be constants. In this paper, we repeat our previous work [W.Zhao \& L.P.Grishchuk, Phys.Rev.D {\bf 82}, 123008 (2010)] by utilizing the Planck TT and WMAP TE data in the low-multipole range $\ell\le100$. We find that our previous result is confirmed with higher confidence. The constraint on the tensor-to-scalar ratio from Planck TT and WMAP TE data is $r\in [0.06,~0.60]$ (68\% C.L.) with the maximum likelihood at around $r\sim 0.2$. Correspondingly, the spectral index at the pivot wavenumber $k_*=0.002$ Mpc$^{-1}$ is $n_s=1.13^{+0.07}_{-0.08}$, which is larger than 1 at more than the $1\sigma$ level. So, we conclude that the newly released CMB data indicate a stronger hint of RGWs with amplitude $r\sim 0.2$, which we hope will be confirmed by the imminent BICEP and Planck polarization data. } | Relic (primordial) gravitational waves generated in the early Universe are a basic prediction of modern cosmology, relying only on the validity of General Relativity and Quantum Mechanics \cite{grishchuk1974,starobinsky1980}. The relic gravitational waves (RGWs) leave imprints in all the cosmic microwave background (CMB) anisotropy power spectra, including the TT, TE, EE and BB. In the near future, these imprints provide the only way to detect RGWs in observations. If the amplitude of the RGWs is large (i.e., the tensor-to-scalar ratio $r>0.1$), the CMB TT and TE information channels can dominate the detection, since the amplitudes of these spectra generated by RGWs are much larger than those of EE and BB \cite{turner1994,zhao2009a}. However, if $r<0.1$, these channels become useless due to the cosmic variance, and the detection can only be done through the B-mode polarization \cite{zaldarriaga1997,kamionkowski1997}. In the era before the release of the Planck polarization data, the detection of (or constraints on) RGWs mainly depends on the CMB TT and TE channels; such analyses have been carried out by many groups, including the WMAP and Planck teams. It is well known that the TT and TE power spectra generated by RGWs are significant only on large scales, i.e., at the low multipoles $\ell\lesssim 100$. However, in the previous analyses, nearly all the groups utilized the full CMB data up to very high multipoles ($\ell_{\max}\sim 1200$ for WMAP and $\ell_{\max}\sim 2500$ for Planck), and assumed density perturbations with a constant or a running spectral index. This can easily overlook the contribution of RGWs, due to the degeneracies among various cosmological parameters (in particular, the degeneracy between $r$ and $n_s$). In 2006, Baskaran, Grishchuk and Polnarev noticed that the WMAP TE data are systematically smaller than the predictions of the best-fit cosmological model \cite{baskaran2006,grishchuk2007}, where the RGWs are absent, and argued that this might hint at the existence of RGWs.
In 2009, for the first time, one of us (W.Zhao), with Baskaran and Grishchuk, carefully analyzed the three-year WMAP TE data in the low multipoles $\ell\le 100$, and obtained constraints on the quadrupole ratio $R=0.149^{+0.247}_{-0.149}$ (note that the tensor-to-scalar ratio is $r\simeq 2R$) \cite{zbg2009a}. In addition, we extended this analysis to the five-year and seven-year WMAP TT and TE data in the low multipoles $\ell\le 100$, and found that the indication of RGWs persisted: the five-year data give $R=0.266\pm 0.171$ \cite{zbg2009b}, and the seven-year data yield $R=0.273^{+0.185}_{-0.156}$ \cite{zbg2010}. In these analyses, we adopted approximate effective noise and likelihood functions for the WMAP data, based on the exact Wishart distribution for full-sky observables. However, these approximations were questioned by some authors (see for instance \cite{blame}). To clarify this, in \cite{zg2010} we adopted the commonly used CosmoMC numerical package to repeat the WMAP7 analysis. We found the maximum-likelihood (ML) values $r=0.285$ and $n_s=1.052$, and the one-dimensional (1d) marginalized likelihoods give the constraints: $r=0.20^{+0.25}_{-0.20}$ and $n_s=1.064^{+0.058}_{-0.059}$. The CosmoMC approach reduced the confidence of the indications from approximately the 2$\sigma$ level to approximately the 1$\sigma$ level, but the indications did not disappear altogether. Recently, the Planck team released their CMB TT data, which show some differences at low multipoles compared with the WMAP data \cite{planck2013,planck2013_2}. In this paper, we shall repeat the analysis in \cite{zg2010} based on the combination of the Planck TT data and the nine-year WMAP TE data \cite{wmap9}, and investigate the hint of RGWs in these new data, using the public CosmoMC numerical package for the data analysis. As anticipated, we find that the new data favor gravitational waves with $r\sim 0.2$, and a blue-tilted spectrum of density perturbations with $n_s\sim1.08$. Thus, the new data reinforce what we found in our previous work \cite{zg2010}. | Relic gravitational waves provide a unique antenna with which to study the expansion history of the very early Universe. The detection of RGWs through their imprints in the CMB temperature and polarization anisotropies is the only possibility in the near future, and has been considered one of the key tasks for current and future CMB observations. In the well-observed CMB TT and TE information channels, RGWs compete with density perturbations only in the low-multipole range. It is therefore sensible to utilize only the low-multipole data in the search for RGWs, which helps to avoid unwarranted assumptions about density perturbations and prevents RGWs from being overlooked in the data analysis. In this paper, we repeated our previous analysis in \cite{zg2010} by considering the low-multipole Planck TT data, as well as the nine-year WMAP TE data. We found that the new data give the constraint $r\in[0.06,~0.60]$ at the $68\%$ confidence level, which deviates from zero at more than the $1\sigma$ confidence level. Meanwhile, the data favor a blue-tilted spectrum of primordial density perturbations with spectral index $n_s=1.13^{+0.07}_{-0.08}$ on large scales. All these results are consistent with what we found in \cite{zg2010}. We hope the forthcoming CMB polarization data of the BICEP experiment and the Planck mission will confirm our expectations.
\vspace{5mm} \noindent {\bf Note:} on the same day, BICEP2 \cite{bicep2} released its data, which indicate a detection of primordial gravitational waves with $r=0.20_{-0.05}^{+0.07}$, with $r=0$ disfavored at $7.0\sigma$. \vspace{5mm} {\it Acknowledgments:} WZ would like to dedicate this article to his friend Leonid Petrovich Grishchuk, who passed away on 13 September 2012. We acknowledge the use of the Planck Legacy Archive, and of the ITP and Lenovo Shenteng 7000 supercomputers at the Supercomputing Center of CAS for providing computing resources. WZ is supported by project 973 under Grant No.2012CB821804, by NSFC No.11173021, 11322324 and project of KIP of CAS. QGH is supported by NSFC No.10821504, 11322545, 11335012 and project of KIP of CAS. | 14 | 3 | 1403.3919
1403 | 1403.7221_arXiv.txt | DIRAC (Distributed Infrastructure with Remote Agent Control) is a general framework for the management of tasks over distributed heterogeneous computing environments. It was originally developed to support the production activities of the LHCb (Large Hadron Collider Beauty) experiment and today is extensively used by several particle physics and biology communities. Current (\Fermi Large Area Telescope -- LAT) and planned (Cherenkov Telescope Array -- CTA) new generation astrophysical/cosmological experiments, with very large processing and storage needs, are investigating the usability of DIRAC in this context. Each of these use cases has some peculiarities: \Fermi-LAT will interface DIRAC to its own workflow system to allow access to the grid resources, while CTA is using DIRAC as its workflow management system for Monte Carlo production and analysis on the grid. We describe the prototyping effort that we have led toward deploying a DIRAC solution for some aspects of \Fermi-LAT and CTA needs. | \label{intro} The Large Area Telescope (LAT) is the primary instrument on the \emph{Fermi Gamma-ray Space Telescope} mission, launched on June 11, 2008. It is the product of an international collaboration between DOE, NASA and academic US institutions as well as international partners in France, Italy, Japan and Sweden. The LAT is a pair-conversion detector of high-energy gamma rays covering the energy range from 20 MeV to more than 300 GeV \cite{LATinstrument}. It has been designed to detect gamma rays in a broad energy range, with good position resolution ($<$10 arcmin) and an energy resolution of $\sim$10\%. The LAT has been routinely monitoring the gamma-ray sky and has shed light on the extreme, non-thermal Universe. A brief and recent review of \Fermi-LAT discoveries can be found in \cite{Thompson2013}. The LAT response to gamma rays is parametrized by the so-called ``instrument response functions'' (IRFs), which together with the data from the instrument are provided to the scientific community\footnote{Data release and software maintenance is done via the Fermi Science Support Center http://fermi.gsfc.nasa.gov.}. As described in \cite{LATpass7}, IRFs are derived using Monte-Carlo (MC) simulations and also corrected for discrepancies observed between flight and simulated data, as the LAT team gains insight into the in-flight performance of the instrument. In the near future, major improvements are expected from the new ``Pass 8'' data, such as an increased effective area with respect to the current ``Pass 7'' public data \cite{LATpass8}. These improvements correspond to a radical revision of the LAT event-level analysis. The optimization of the event reconstruction and of the background rejection, and the full characterization of the new IRFs, require the production of large simulated data sets including gamma rays and charged cosmic backgrounds (protons, heavy ions, electrons). These simulations are also fundamental for high-level analyses which will require a proper evaluation of the residual backgrounds (e.g., the extragalactic diffuse emission \cite{LATdiffuse} or the cosmic electron-positron spectra \cite{LATe+e-}).\\
It will consist of two arrays of 50-100 telescopes of different sizes, located one in each hemisphere. The CTA consortium gathers more than 1000 scientists and engineers from more than a hundred institutions world-wide. The project is currently in its preparatory phase. The construction is planned to be completed around 2018-2020. During the current CTA preparatory phase, large computing and storage resources are needed mostly for MC studies. In particular, the selection of the CTA sites (North and South) has a significant impact on the final sensitivity of the instrument. The CTA MC working group is studying the impact of the various site parameters by means of detailed MC simulations of the detector response to extensive air showers. Large sets of simulated events are generated for different primary particles. Moreover, once the CTA sites have been selected and the construction phase has started, more detailed simulations will be produced in order to test analysis algorithms and to determine the final performance of the instrument. \\ In order to fulfill present and future requirements for massive MC production and data analysis of \Fermi-LAT and CTA, we have proposed the use of the EGI grid infrastructure and of the DIRAC (Distributed Infrastructure with Remote Agent Control) \cite{DIRAC} framework for both current and future experiments. The DIRAC system, originally developed to support production activities of the LHCb experiment, today serves several communities. Compared to the LHCb DIRAC installation, which spans half a dozen powerful servers, the CTA installation is still rather modest, and it is currently being upgraded. The work presented in this paper was served by a DIRAC installation running on two virtual servers having, in total, 6 cores, 6 GB of RAM and 1.5 TB of local disk, plus a third machine hosting the web portal. \Fermi-LAT is using the French NGI multi-community DIRAC installation running on five servers \cite{FG-DIRAC}. In section \ref{Fermi} we describe the developments that have been necessary to extend the \Fermi-LAT pipeline to the grid through the DIRAC system. The context for CTA is quite different, since the project is in its preparatory phase and no existing production system was available for the management of the different computing activities. In section \ref{CTA} we present the work done to migrate both the CTA MC production and its analysis by the CTA physicists to the grid within the DIRAC framework. The first results from the MC campaigns in 2013, in terms of resource usage, are also presented. Section \ref{conclusions} is devoted to conclusions and perspectives for future work.
The performance obtained shows that DIRAC is well adapted to both CTA production and analysis activities. Future developments will aim to further automate the management of the MC production, implementing automatic job and data operations according to predefined scenarios. | 14 | 3 | 1403.7221
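To give a flavour of the user-facing workflow, job submission through the DIRAC Python API looks roughly as follows (a minimal sketch assuming a standard client installation; the job name, wrapper script and sandbox contents are placeholders, not the actual \Fermi-LAT or CTA production code):
\begin{verbatim}
from DIRAC.Core.Base import Script
Script.parseCommandLine()   # initialize the DIRAC client environment

from DIRAC.Interfaces.API.Job import Job
from DIRAC.Interfaces.API.Dirac import Dirac

job = Job()
job.setName('mc_production_test')           # placeholder name
job.setExecutable('run_simulation.sh')      # placeholder wrapper script
job.setInputSandbox(['run_simulation.sh'])  # shipped with the job
job.setOutputSandbox(['*.log'])             # retrieved on completion

result = Dirac().submitJob(job)
print(result)   # S_OK-style dictionary containing the job identifier
\end{verbatim}
Production systems then build on the same API to automate the submission and monitoring of large batches of such jobs.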
1403 | 1403.5154_arXiv.txt | Supernova remnants (SNRs) retain crucial information about both their parent explosion and circumstellar material left behind by their progenitor. However, the complexity of the interaction between supernova ejecta and ambient medium often blurs this information, and it is not uncommon for the basic progenitor type (Ia or core-collapse) of well-studied remnants to remain uncertain. Here we present a powerful new observational diagnostic to discriminate between progenitor types and constrain the ambient medium density of SNRs solely using Fe K-shell X-ray emission. We analyze all extant {\it Suzaku} observations of SNRs and detect Fe K$\alpha$ emission from 23 young or middle-aged remnants, including five first detections (IC\,443, G292.0+1.8, G337.2--0.7, N49, and N63A). The Fe K$\alpha$ centroids clearly separate progenitor types, with the Fe-rich ejecta in Type Ia remnants being significantly less ionized than in core-collapse SNRs. Within each progenitor group, the Fe K$\alpha$ luminosity and centroid are well correlated, with more luminous objects having more highly ionized Fe. Our results indicate that there is a strong connection between explosion type and ambient medium density, and suggest that Type Ia supernova progenitors do not substantially modify their surroundings at radii of up to several parsecs. We also detect a K-shell radiative recombination continuum of Fe in W49B and IC\,443, implying a strong circumstellar interaction in the early evolutionary phases of these core-collapse remnants. | \label{sec:intro} Supernova remnants (SNRs) provide unique insights into both the supernova (SN) explosion that generated them and the ambient medium that surrounded their progenitors at the time of the explosion. Unfortunately, the complex physical processes involved in the interaction between ejecta and ambient medium often blur this information, to the point that the explosion type (i.e., Type Ia or core-collapse: Ia and CC hereafter) of several well-studied SNRs still remains controversial. The X-ray emission from young and middle-aged SNRs is ideally suited to disentangle the contributions from the SN explosion and circumstellar interaction \citep[see][for a recent review]{Vink12}. Their thermal X-ray spectra are often dominated by strong optically-thin emission lines from ejecta that retain the nucleosynthetic signature of their birth events. On the other hand, the X-ray emitting plasma is in a state of non-equilibrium ionization (NEI), and its time-dependent ionization degree is controlled by the ambient medium density, which is a sensitive diagnostic of the presence of circumstellar material (CSM) left behind by the SN progenitor \citep[e.g.,][]{Badenes05,Badenes07}. Indeed, much progress has been made in the typing of SNRs using their X-ray emission. Using {\it ASCA} data, \cite{Hughes95} showed that it is possible to distinguish Ia remnants from CC ones by virtue of their ejecta composition; Fe-rich and O-poor SNRs are likely Ia, while SNRs dominated by O and Ne lines with weak Fe L emission are likely CC. More recently, \cite{Lopez09b,Lopez11} argued that {\it Chandra} images of Ia SNRs show a higher degree of symmetry than those of CC SNRs. This result implies that CC SNe are more asymmetric than Ia SNe, and/or CC SNRs expand into more asymmetric CSM. These methods are promising, but require sophisticated analysis techniques whose results might lead to ambiguous interpretations. 
Abundance determination in NEI plasmas is notoriously uncertain \citep[see][for a discussion]{Borkowski01}, and neither of these methods easily leads to placement of quantitative constraints on the presence of CSM in a SNR. In this {\it Letter}, we present a new, straightforward observational diagnostic for typing SNRs in X-rays that relies only on the centroid and flux of a single spectral line -- the Fe K$\alpha$ emission at 6.4--6.7\,keV. The Fe K line blend is well separated from emission lines of other abundant elements. Since the production of Fe occurs at the heart of an SN explosion, reverse shock heating of this element can be delayed compared to elements synthesized in the outer layers. This often results in an ionization state lower than He-like (Fe$^{24+}$) in young or middle-aged SNRs. The ionization state in turn determines the Fe K$\alpha$ centroid \citep[e.g.,][]{Yamaguchi14}, which is easily measured using current CCD instruments. Furthermore, the Fe K emission is largely unaffected by foreground extinction, unlike Fe L-shell blends. These spectral advantages and simplicities make our method more straightforward than the existing ones, and especially attractive for current and future X-ray missions with high throughput, like {\it Suzaku}, {\it XMM-Newton}, and {\it Astro-H}. Here we show that the Fe K$\alpha$ centroids (hence the Fe ionization state) clearly discriminate the progenitor type and place strong limits on the presence of CSM in SNRs at radii of several parsecs, which has important consequences for SN progenitor studies. | \label{sec:conclusion} We have presented a systematic analysis of Fe \Ka\ emission from 23 Galactic and LMC SNRs observed by {\it Suzaku}. We find that the Fe \Ka\ line luminosities of Type Ia and CC SNRs are distributed in a similar range ($L_{\rm K}$ = $10^{40-43}$\,photons\,s$^{-1}$), but the Fe \Ka\ centroid energies clearly distinguish Ia from CC SNRs, with the former always having centroids below $\sim$6.55\,keV and the latter always above. We interpret this separation as a signature of different mass-loss rates in Ia and CC SN progenitors. The Fe \Ka\ emission of all the Ia objects in our sample is compatible with SNR models that expand into a uniform ambient medium, which suggests that Ia progenitors do not modify their surroundings as strongly as CC progenitors do. This is in line with known limits from prompt X-ray \citep{Hughes07} and radio \citep{Chomiuk12} emission from Ia SNe, but our results probe a different regime, constraining the structure of the CSM to larger radii (several pc) and progenitor mass loss rates further back in the pre-SN evolution of the progenitor. A quantification of these constraints and a more detailed analysis of the CC SNR sample are left for future work. The full potential of our method will be realized when it is applied to larger samples of higher quality data, as will be accessible to high resolution spectrometers like those on {\it Astro-H} and other future missions with large effective areas in the Fe \Ka\ band like {\it Athena}. These instruments will open the possibility of studying statistically significant samples of X-ray emitting SNRs in nearby galaxies with resolved stellar populations like M31, which will in turn dramatically increase our knowledge of both Type Ia and CC SN progenitors. | 14 | 3 | 1403.5154 |
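The atomic physics behind this diagnostic is worth recalling as a worked note. The Fe K$\alpha$ centroid rises from $\simeq$6.40 keV for near-neutral Fe, through the intermediate ionization stages, to $\simeq$6.70 keV for He-like Fe XXV (with H-like Fe XXVI Ly$\alpha$ at 6.97 keV), and the ionization state of the shocked plasma is governed by the ionization age $\tau = n_{e}t$, with collisional ionization equilibrium approached only for $\tau \gtrsim 10^{12}$ cm$^{-3}$ s. For example, a remnant of age $10^{3}$ yr expanding into a medium with $n_{e} \sim 1$ cm$^{-3}$ has $\tau \approx 3 \times 10^{10}$ cm$^{-3}$ s, far short of equilibrium, so its Fe remains underionized and its K$\alpha$ centroid sits well below 6.7 keV; a centroid measurement thus translates directly into a constraint on the ambient medium density.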
1403 | 1403.0771_arXiv.txt | A fundamental ingredient in wormhole physics is the presence of exotic matter, which involves the violation of the null energy condition. In this context, we investigate the possibility that wormholes could be supported by quark matter at extreme densities. Theoretical and experimental investigations of the structure of baryons show that strange quark matter, consisting of the $u$, $d$ and $s$ quarks, is the most energetically favorable state of baryonic matter. Moreover, at ultra-high densities, quark matter may exist in a variety of superconducting states, namely, the Color-Flavor-Locked (CFL) phase. Motivated by these theoretical models, we explore the conditions under which wormhole geometries may be supported by the equations of state considered in the theoretical investigations of quark-gluon interactions. For the description of the normal quark matter we adopt the Massachusetts Institute of Technology (MIT) bag model equation of state, while the color superconducting quark phases are described by a first order approximation of the free energy. By assuming specific forms for the bag and gap functions, several wormhole models are obtained for both normal and superconducting quark matter. The effects of the presence of an electrical charge are also taken into account. | A fundamental property in wormhole physics, in the context of classical general relativity, is that these exotic geometries are supported by ``exotic matter'' \cite{Morris}, which involves a stress-energy tensor $T_{\mu\nu}$ that violates the null energy condition (NEC), i.e., has $T_{\mu\nu}k^{\mu}k^{\nu}<0$ at the wormhole throat and its neighbourhood, where $k^{\mu}$ is {\it any} null vector \cite{Morris,Visser}. A wide variety of solutions have been obtained since the seminal Morris-Thorne paper \cite{Morris}, ranging from dynamic wormhole geometries \cite{dynWH}, rotating solutions \cite{Teo:1998dp}, thin-shell wormholes constructed using the cut-and-paste technique \cite{thinshell}, observational signatures using thin accretion disks \cite{Harko:2008vy}, solutions in conformal symmetry, which presents a more systematic approach in searching for exact wormhole solutions \cite{Boehmer:2007rm}, wormhole geometries in the semi-classical regime \cite{semiclassWH}, and more recently in modified theories of gravity \cite{modgrav,modgrav2}. In the modified gravity context, it was shown that the normal matter threading the wormhole can be constrained to satisfy the null energy condition, and it is the higher order curvature terms, interpreted as a gravitational fluid, that sustain these non-standard wormhole geometries, fundamentally different from their counterparts in general relativity. It has also been argued that wormhole solutions can be supported by several dark energy models responsible for the late-time cosmic acceleration \cite{phantomWH}, by imposing specific equations of state. In this work, we explore the possibility that wormholes could be supported by quark matter at extreme densities. This approach is motivated by theoretical and experimental investigations of baryonic structure showing that strange quark matter, consisting of the $u$ (up), $d$ (down) and $s$ (strange) quarks is the most energetically favorable state of baryon matter. The idea of the existence of stars made of quarks was initially introduced in \cite{It} and \cite{Bod}. 
Two ways of formation of stellar strange matter have been proposed in \cite{1} and \cite{2,2c}: the quark-hadron phase transition in the early universe, and the conversion of neutron stars into strange ones at ultrahigh densities. In theories of strong interactions, the quark bag models suppose that the breaking of the physical vacuum takes place inside hadrons. As a result, the vacuum energy densities inside and outside a hadron become essentially different, and the vacuum pressure $B$ on the bag wall equilibrates the pressure of the quarks, thus stabilizing the system \cite{2,2c}. The structure of a realistic strange star is very complicated, but its basic properties can be described as follows \cite{2,2c}. Beta-equilibrated strange quark-star matter consists of an approximately equal mixture of $u$, $d$ and $s$ quarks, with a slight deficit of the latter. The Fermi gas of $3A$ quarks constitutes a single color-singlet baryon with baryon number $A$. This structure of the quarks leads to a net positive charge inside the star. Since stars in their lowest energy state are supposed to be charge neutral, electrons must balance the net positive quark charge in strange matter stars \cite{2,2c}. However, the electrons, being bound to the quark matter by the electromagnetic interaction only (and not by the strong force), are able to move freely across the quark surface. But they cannot move to infinity because of the electrostatic interaction with the quarks. The electron distribution extends up to $\sim 10^{3}$ fm above the quark surface. The Coulomb barrier at the quark surface of a hot strange star could represent a powerful source of electron-positron ($e^{+}e^{-}$) pairs \cite{Us98}, which are created in the extremely strong electric field of the barrier. At surface temperatures of around $10^{11}$ K, the luminosity of the quark star surface may be of the order of $\sim 10^{51}$ erg\,s$^{-1}$ \cite{elpos}. Moreover, due to both photon emission and $e^{+}e^{-}$ pair production, for about $8.6\times 10^4$ s for normal quark matter and for up to around $3\times 10^9$ s for superconducting quark matter, the thermal luminosity from the quark star surface may be orders of magnitude higher than the Eddington limit \cite{PaUs02}. The existence of a large variety of color superconducting states of quark matter at ultra-high densities has also been suggested and intensively investigated \cite{Al1,Al2,Al3,Al4}. At very high densities, matter is expected to form a degenerate Fermi gas of quarks in which the quark Cooper pairs with very high binding energy condense near the Fermi surface. This phase of the quark matter is called a color superconductor. Such a state is significantly more bound than ordinary quark matter. This implies that at extremely high density the ground state of quark matter is the superconducting Color-Flavor-Locked (CFL) phase, and that this phase of matter rather than nuclear matter may be the ground state of hadronic matter \cite{Al4}. The existence of the CFL phase can enhance the possibility of the existence of a pure stable quark star \cite{Al4}. In this context, the possibility that stellar mass black holes, with masses in the range of $3.8M_{\odot}$ to $6M_{\odot}$, could in fact be quark stars in the CFL phase was considered in \cite{Zoltan}. Depending on the value of the gap parameter, rapidly rotating CFL quark stars can achieve much higher masses than standard neutron stars, thus making them possible stellar mass black hole candidates.
Moreover, quark stars have a very low luminosity and a completely absorbing surface -- the infalling matter on the surface of the quark star is converted into quark matter. It is the purpose of the present paper to investigate the possibility that wormhole geometries can be realized by using quark matter, in both normal and superconducting phases. To describe quark matter we adopt the Massachusetts Institute of Technology (MIT) bag model equation of state, while for the investigation of the superconducting quark matter we consider the equation of state obtained in a first order expansion of the free energy of the system. Generally the equations of state depend on several parameters, of which the most important are the bag and the gap constant. The bag constant forces the quarks to remain confined inside the baryons, while the gap constant describes the superconducting properties of the quark matter. However, in high density systems, which can be achieved, for example, in the interior of neutron stars, both the bag and the gap constants, as well as the quark masses, become effective, density dependent functions. It is exactly this property of strongly interacting systems in dense media we will exploit in order to obtain wormhole solutions of the static, spherically symmetric gravitational field equations in the presence of quark matter. By appropriately choosing the forms of the bag and gap functions several wormhole type solutions of the gravitational field equations are obtained, with the matter source represented by normal and superconducting quark matter, respectively. The present paper is organized as follows. In Section~\ref{eos}, the quark matter equations of state are presented. In Section~\ref{secII}, we explore the conditions under which wormhole geometries may be supported by the equations of state considered in the theoretical investigations of quark-gluon interactions. We discuss and conclude our results in Section \ref{concl} | \label{concl} The quark structure of baryonic matter is the central paradigm of the present-day elementary particle physics. At very high densities, which can be achieved in the interior of neutron stars, a deconfinement transition can break the baryons into their constitutive components, the quarks, thus leading to the formation of the quark-gluon plasma. Moreover, the strange quark matter, consisting of a mixture of $u$, $d$ and $s$ quarks, may be the most energetically favorable state of matter. At high densities quark matter may also undergo a phase transition to a color superconducting state. The thermodynamic properties of the quark matter are well-known from a theoretical point of view, and several equations of state of the dense quark-gluon plasma have been proposed in the framework of a Quantum Chromodynamical approach, such as the MIT bag model equation of state and the equations of state of the superconducting Color-Flavor-Locked phase. Motivated by these theoretical models, in the present paper we have explored the conditions under which wormhole geometries may be supported by the equations of state considered in the theoretical investigations of quark-gluon interactions. Since quark-gluon plasma can exist only at very high densities, the existence of the quark-gluon wormholes requires quark matter at extremely high densities. In these systems the basic physical parameters describing the properties of the QCD quark-gluon plasma (bag constant, gap energy, quark masses) become effective, density and interaction dependent quantities. 
It is this specific property of the strong interactions that we have used to generate specific functional forms of the bag function and of the gap function that could make possible the existence of a wormhole geometry supported by a strongly gravitationally confined normal or superconducting quark-gluon plasma. In the case of the normal quark-gluon plasma, wormhole solutions can be obtained by assuming either a specific dependence of $B$ on the shape function $b$, or some simple functional representations of $B$. In both cases, in the limit of large $r$, the bag function tends to zero, $\lim_{r\rightarrow\infty}B=0$, and in this limit the equation of state of the quark matter becomes the radiation-type equation of state of normal baryonic matter, $p=\varepsilon /3$. Therefore, once the density of the quark matter increases after a deconfinement transition, a density-dependent (i.e., radial-coordinate-dependent) bag function could lead to the violation of the null energy condition, with the subsequent generation of a wormhole supported by the quark-gluon plasma. A high-intensity electric field with a shape-function-dependent charge distribution could also play a significant role in the formation of the wormhole. In the case of superconducting quark matter, the gravitational field equations can be solved by assuming that both the bag function and the gap function depend on the shape function and on the $s$ quark mass. However, in the large-$r$ limit, in order to recover the standard baryonic matter equation of state, the vanishing of the $s$ quark mass, $\lim_{r\rightarrow\infty}m_s=0$, is also required. The assumption of a zero asymptotic $u$, $d$ and $s$ quark mass is also frequently used in the study of quark star models \cite{2}. | 14 | 3 | 1403.0771
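For context, the shape function $b(r)$ referred to above belongs to the standard Morris-Thorne ansatz for static, spherically symmetric wormholes; the conventions below are the usual textbook ones and are quoted here only as a reminder,
\begin{equation*}
ds^{2}=-e^{2\Phi (r)}dt^{2}+\frac{dr^{2}}{1-b(r)/r}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta \,d\varphi ^{2}\right) ,
\end{equation*}
where $\Phi (r)$ is the redshift function. A throat at $r_{0}$ requires $b(r_{0})=r_{0}$ together with the flaring-out condition $b'(r_{0})<1$, which in general relativity entails the violation of the null energy condition, $\varepsilon +p_{r}<0$, at the throat.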
1403 | 1403.6294_arXiv.txt | Blue compact dwarf galaxies (BCDs) form stars at, for their sizes, extraordinarily high rates. In this paper, we study what triggers this starburst and what the fate of the galaxy is once its gas fuel is exhausted. We select four BCDs with smooth outer regions, marking them as possible progenitors of dwarf elliptical galaxies. We have obtained photometric and spectroscopic data with the FORS and ISAAC instruments on the VLT. We analyse their infrared spectra using a full spectrum fitting technique, which yields the kinematics of their stars and ionized gas together with their stellar population characteristics. We find that the \emph{stellar} velocity to velocity dispersion ratio ($(v/\sigma)_\star$) of our BCDs is of the order of 1.5, similar to that of dwarf elliptical galaxies. Thus, these objects do not require significant (if any) loss of angular momentum to fade into early-type dwarfs. This finding is at odds with previous studies, which, however, compared the stellar kinematics of dwarf elliptical galaxies with the gaseous kinematics of star-forming dwarfs. The stellar velocity fields of our objects are very disturbed, and the star-formation regions are often kinematically decoupled from the rest of the galaxy. These regions can be more or less metal rich with respect to the galactic body, and sometimes they are long lived. These characteristics prevent us from pinpointing a unique trigger of the star formation, even within the same galaxy. Gas impacts, mergers, and in-spiraling gas clumps are all possible star-formation triggers for our targets. | One of the key questions occupying astrophysicists today is how star formation proceeds in galaxies: what triggers star formation, and what halts it? The reason for this interest is twofold: firstly, observations of stellar light carry most of the information about the Universe, and secondly, the furthest and thus oldest observable galaxies are actively forming stars. Nearby blue compact galaxies are the closest local approximation to these early days. They have low metallicities and form stars at, for their sizes, extraordinarily high rates. Blue compact dwarf (BCD) galaxies, with sizes of less than 2\,kpc and mostly centrally concentrated star formation \citep{hunter2006}, are also locally abundant and thus ideal for studying star-formation processes in detail. Their metallicities range from half to 1/30 of the solar value \citep{kunth1988}. This class of galaxies can be divided further, depending on the type of galactic body hosting the burst: E-BCDs and I-BCDs, for bursts in smooth dwarf ellipticals or in irregular bodies, respectively; or by the location of the burst: ``n'' for a nuclear, concentrated burst, or ``i'' for pockets of star formation distributed all over the galactic body. For example, an iE-BCD is a BCD with a smooth elliptical body and irregularly distributed pockets of star formation \citep{loose1986}.
\begin{table*}
\addtolength{\tabcolsep}{-4pt} %
\caption{Basic characteristics of the galaxies in our sample. The columns are as follows: name of the galaxy; right ascension and declination (J2000 epoch); heliocentric radial velocity in \kms; distance in Mpc; inclination; size of the galaxy in arcsec and in kpc; total \hone\ mass in solar masses; mass-to-light ratio in $B$; star-formation rate inferred from \hone\ observations; gas metallicity; ellipticity ($\epsilon = 1-b/a$); and the number of galaxies in the group.
Columns 3--11 are from \citet{vanzee2001}; columns 2, 12, and 13 are from the HyperLeda database. }
\begin{tabular}{lcccccccccccc}
\hline \hline
Name & RA DEC & V$_{Helio}$ & Distance & Inclination & \multicolumn{2}{c}{D$_{25} \times$ d$_{25}$} & M$_{HI}$ & M$_{HI}$/L$_B$ & SFR$_{HI}$ & & $\epsilon$ & N$_{gal}$ \\
 & J2000 & (\kms) & (Mpc) & (deg) & (arcsec) & (kpc) & (10$^8$ M$_\odot$) & (M$_\odot$/L$_\odot$) & (M$_\odot$/yr) & 12+log(O/H) & & \\
\hline
Mk324 & J232632.82+181559.0 & 1600 & 24.4 & 38 & 29$\times$23 & 3.4$\times$2.7 & 3.28 & 0.50 & 0.065 & 8.50$\pm$0.20 & 0.09 & 8 \\
Mk900 & J212959.64+022451.5 & 1155 & 18.0 & 43 & 44$\times$36 & 3.8$\times$3.1 & 1.55 & 0.21 & 0.088 & 8.74$\pm$0.20 & 0.38 & 1 \\
UM038 & J002751.56+032922.6 & 1378 & 20.3 & 40 & 32$\times$24 & 3.1$\times$2.4 & 2.90 & 0.72 & 0.038 & 8.15$\pm$0.20 & 0.12 & 3 \\
UM323 & J012646.56-003845.9 & 1915 & 26.7 & 43 & 24$\times$16 & 3.1$\times$2.1 & 4.23 & 1.03 & 0.111 & 7.70$\pm$0.20 & 0.24 & 10 \\
\hline
\label{table:sample}
\end{tabular}
\end{table*}

While star formation in dwarf galaxies (such as dwarf irregulars, dIrrs) is not unusual, it is the intensity and concentration of the bursts in BCDs that make them special. BCDs are usually more compact than dIrrs and experience massive bursts of star formation with high specific star-formation rates (the star formation within the scale length; \citealt{hunter2004}). This, combined with their typical gas content of about 50\,percent of their mass or more \citep{zhao2013}, results in gas exhaustion times of less than 1\,Gyr \citep{gildepaz2003}. Hence, we must be witnessing an extraordinary process, possibly transforming these galaxies into early-type dwarfs.

Dwarf galaxies are not efficient at transforming their gas into stars. The reasons are that, on the one hand, the gas density is lower in dwarfs and, on the other, that their shallow gravitational potentials allow even a few supernova explosions to heat the gas, halting further condensation into stars over almost the entire galactic body. These observationally proven and theoretically backed facts \citep{bigiel2010,hunter2012,schroyen2013} are in contradiction with the observed bursts in BCDs. A possible solution to this problem can be offered by sequential triggering of star formation \citep{gerola1980}, cloud impacts \citep{gordon1981}, mergers \citep{bekki2008}, or tidal effects \citep{vanzee1998}. Based on the disturbed H$\alpha$ velocity fields and irregular optical morphologies of BCDs, many observational studies suggest that the burst of star formation is triggered by dwarf galaxy mergers or gas accretion \citep[e.g.][]{ostlin2004}.

Here, we will try to distinguish between different burst-triggering mechanisms by comparing the stellar, ionized-gas, and neutral-gas kinematics of a sample of blue compact dwarfs. We will also study their stellar population properties and try to anticipate the possible outcome of a BCD when the burst is over. Our paper is organized as follows: in Sect.\,\ref{sect:data} we present the data and describe the analysis tools; in Sect.\,\ref{sect:results} we present our results, which are followed by a discussion (Sect.\,\ref{sect:discussions}) and conclusions (Sect.\,\ref{sect:conclusions}). | 14 | 3 | 1403.6294
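As a quick sanity check on Table 1, the angular and physical sizes are related by the small-angle conversion (1 arcsec at 1 Mpc subtends about 4.85 pc); the snippet below is our own illustration and is not part of the paper:
\begin{verbatim}
import math

def arcsec_to_kpc(theta_arcsec, distance_mpc):
    # small-angle approximation: physical size = theta[rad] * distance
    return theta_arcsec * distance_mpc * math.pi / (180.0 * 3600.0) * 1e3

# Mk324: D25 = 29 arcsec at D = 24.4 Mpc
print(round(arcsec_to_kpc(29.0, 24.4), 1))  # -> 3.4 (kpc), matching the table
\end{verbatim}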
|
1403 | 1403.3825_arXiv.txt | Gamma-ray bursts (GRBs) usually occur in dense star-forming regions with a massive circum-burst medium. The small-angle scattering of the intense prompt X-ray emission off the surrounding dust grains has observable consequences and can sometimes dominate the X-ray afterglow. In most previous studies, only the Rayleigh-Gans (RG) approximation was employed to describe the scattering process; it is accurate for grains of the typical size found in the diffuse interstellar medium (radius $a\leq 0.1\,{\rm \mu m}$). When the grains are significantly larger, as may be the case in the denser regions where GRBs occur, the RG approximation may not be accurate enough for modeling detailed observational data. In order to study the temporal and spectral properties of the scattered X-ray emission more accurately with potentially larger dust grains, we provide a practical approach using series expansions of the anomalous diffraction (AD) approximation to the full Mie theory. We apply our calculations to understanding the puzzling X-ray afterglow of the recently observed GRB~130925A, which showed a significant spectral softening. We find that X-ray scattering scenarios adopting either the AD or the RG approximation can reproduce the temporal and spectral profiles simultaneously. Given the plateau present in the early X-ray light curve, a typical distribution of smaller grains, as in the interstellar medium, is suggested for GRB~130925A. | The nature of interstellar dust grains has been well studied (Draine 2003 and references therein), while the dust grains around GRBs are still poorly known. Nevertheless, the existence of dust grains around GRBs has been well established. The commonly found optically ``dark bursts'' have mainly been diagnosed as the result of dust extinction in the host galaxy (Lazzati et al. 2002; Perley et al. 2009; Greiner et al. 2011). The spectral energy distributions of optical/near-infrared afterglows that deviate from the intrinsic power law are also indicative of dust extinction in the host galaxy (Stratta et al. 2004; Kann et al. 2006; Chen et al. 2006). Another important indication is the intrinsic excess of gas column density in X-ray spectra (Stratta et al. 2004; Campana et al. 2006; Campana et al. 2012).

X-ray scattering by dust grains is an important tool for investigating the physical properties of the interstellar medium along the line of sight (Mathis \& Lee 1991; Predehl \& Klose 1996; Smith \& Dwek 1997; Draine \& Tan 2003). Since gamma-ray bursts (GRBs) usually occur in star-forming regions that are rich in dust grains, several pioneering studies have investigated the idea that the X-ray flux from a GRB could be affected by the circum-burst dust grains (Klose 1998; M\'{e}sz\'{a}ros \& Gruzinov 2000; Sazonov \& Sunyaev 2003). Shao \& Dai (2007) provided the first treatment for evaluating both the temporal and spectral evolution of the delayed emission due to X-ray scattering by circum-burst dust grains in an illustrative model. Interestingly, the rather puzzling light curves of X-ray afterglows, in particular a shallow decay followed by a ``normal'' decay and a further rapid decay, can be well understood with such a simple model (Shao et al. 2008). This model, known as the dust scattering model, predicted a strong spectral softening in the X-ray afterglow and was not supported by the observational data available at the time (Shen et al. 2009).
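As a rough, back-of-the-envelope illustration of the timescales involved in such dust echoes (ours, not taken from the paper), a photon scattered by angle $\theta_{\rm sc}$ off dust at distance $R$ from the burst arrives with an extra delay $\Delta t\approx R\theta_{\rm sc}^{2}/2c$:
\begin{verbatim}
import math

PC_IN_CM = 3.086e18
C_CM_S = 2.998e10

def echo_delay_s(r_pc, theta_sc_rad):
    # extra path length for small-angle scattering: ~ R * theta^2 / 2
    return r_pc * PC_IN_CM * theta_sc_rad**2 / (2.0 * C_CM_S)

# ~10 arcmin, a characteristic scattering angle for ~0.1 um grains at ~1 keV
theta = 10.0 / 60.0 * math.pi / 180.0
print("%.0f s" % echo_delay_s(1.0, theta))  # roughly 4e2 s for dust at R = 1 pc
\end{verbatim}
Delays of hundreds of seconds for parsec-scale dust are thus naturally of the order of X-ray afterglow timescales.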
Even though most X-ray afterglows did not exhibit such spectral softening, it would be a great challenge to standard afterglow models if this feature did show up in some X-ray afterglows, as in GRB~090417B (Holland et al. 2010). We will investigate this spectral feature further in this paper.

The scattering of X-ray photons by interstellar dust was first studied using the Rayleigh-Gans (hereafter RG) approximation for spherical grains (Overbeck 1965). The validity of the RG approximation requires that the phase shift across the grain be small, i.e., $|\rho|=2x|m-1| \approx 6\,(E/1\,{\rm keV})^{-1}(a/ 1\,{\rm \mu m})\ll1$ (see Appendix B for details), where $x=2\pi a/\lambda$, $a$ is the radius of the dust grain, $\lambda$ is the wavelength of the photon, $E$ is the energy of the photon, and $m$ is the complex refractive index of the dust grain (van de Hulst 1957; Alcock and Hatchett 1978). Even though commonly employed, the RG approximation is no longer valid at X-ray energies $E\lesssim 1\,{\rm keV}$ for typical interstellar dust grains with an average radius $\bar{a}\sim 0.1\,{\rm \mu m}$ (Smith and Dwek 1998). This was also cautioned in Shao \& Dai (2007), where the typical interstellar dust model (e.g., Mathis et al. 1977) was considered when using the RG approximation for X-ray scattering around GRBs. Figure 1 shows the regions of photon energy and grain size in which the RG and the anomalous diffraction (hereafter AD) approximations, respectively, are valid.

The size distribution of interstellar dust grains in the Milky Way has been well determined by reproducing the Galactic extinction curves (Weingartner and Draine 2001), while it has not been well constrained around GRBs. Even though the GRB extinction curves have been found to be complex as a whole (Zafar et al. 2011; Kr\"{u}hler et al. 2011; Schady et al. 2012; Covino et al. 2013), a more realistic size distribution favoring larger dust grains has been suggested by the flatter extinction curves derived from optical/near-infrared spectra of some GRBs (Stratta et al. 2004; Chen et al. 2006; Li et al. 2008; Liang \& Li 2009; 2010). This trend might be consistent with the fact that most GRBs probably lie in dense star-forming regions, where small dust grains tend to coagulate onto large ones (Jura 1980; Kim 1996; Maiolino et al. 2001; Weingartner \& Draine 2001), or with the fact that the radiation field around GRBs is strong, so that smaller grains tend to be destroyed more rapidly (e.g., Waxman \& Draine 2000). The ratio of extinction to observed hydrogen column density, $A_V/N_{\rm H}$, is one measure for distinguishing these mechanisms: the coagulation model predicts a reduction in $A_V/N_{\rm H}$, whereas the accretion of gas-phase material predicts an enhancement in $A_V/N_{\rm H}$ (Whittet 2003).

In this paper, to better understand the spectral softening feature predicted by the dust scattering model, we investigate the X-ray scattering scenario with the AD approximation, which relaxes the phase-shift condition violated by the RG approximation. In particular, we compare the temporal and spectral features obtained with these two approximations and apply our results to understanding the X-ray afterglow of GRB~130925A, which also showed spectral softening consistent with the model prediction. The paper is structured as follows. In Section 2, we describe a practical approach for evaluating the differential scattering cross section using series expansions of the AD approximation.
In Section 3, we investigate the temporal and spectral properties of the dust scattering model employing the AD approximation, compared with those obtained with the RG approximation. In Section 4, we apply the model to understanding the X-ray afterglow of GRB~130925A. In Section 5, we summarize our conclusions. | \label{sec:conclusion} In this paper we revisited X-ray scattering around GRBs, adopting the AD approximation for the differential cross section. In most practical cases, according to the Mie theory, the commonly used RG approximation is no longer valid for soft X-ray photons or large grain sizes. Given that the full Mie theory is complex and computationally expensive, we provided a valid and more practical approach, within the AD approximation of the Mie theory, for evaluating the differential cross sections.

We found that in the vicinity of GRBs, where larger grains might prevail, the RG approximation would overestimate the flux of the dust echo emission in the early light curves and also overestimate the lower part of the spectrum. With the AD approximation, instead, a steep rise in the lower part of the spectrum is obtained, especially when the maximum size of the dust grains, the most important parameter of the size distribution, is significantly larger than $1\,{\rm \mu m}$. At later times, both the significantly softening spectrum and the steeply decaying light curve are the same for the AD and RG approximations. We also found that a smaller maximum grain size would be implied by both approximations if an early plateau clearly appears in the X-ray light curve. Detailed modeling of the observational data, of both the light curve and the spectrum, would help determine the size distribution of the dust grains.

As previously shown (Shao \& Dai 2007; Shao et al. 2008; Shen et al. 2009), the significant spectral softening, a unique outcome of this X-ray scattering scenario, is generally expected with either the RG or the AD approximation adopted. Even though most X-ray afterglows did not exhibit such a spectral feature, it would be a great challenge to standard afterglow models if this feature indeed showed up (Holland et al. 2010). We gave an example of how this feature may also have been observed in the recent burst GRB~130925A. The significant spectral softening and the optically dark nature of GRB~130925A strongly favor the X-ray scattering scenario. Given the early plateau present in the X-ray light curve, we found that a typical distribution of smaller grains, as in the interstellar medium, is suggested. Therefore, X-ray scattering scenarios adopting either the AD or the RG approximation can reproduce the late-time temporal and spectral profiles of GRB~130925A simultaneously. | 14 | 3 | 1403.3825
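As a concrete illustration of the RG validity criterion quoted in the introduction (our own sketch; the numerical prefactor is the one given there), the phase shift $|\rho|\approx 6\,(E/1\,{\rm keV})^{-1}(a/1\,{\rm \mu m})$ can be evaluated directly:
\begin{verbatim}
def phase_shift(e_kev, a_um):
    # |rho| ~ 6 (E / 1 keV)^-1 (a / 1 um); RG requires |rho| << 1
    return 6.0 * a_um / e_kev

print(phase_shift(1.0, 0.1))  # 0.6 -- RG only marginal for ISM-like grains
print(phase_shift(1.0, 1.0))  # 6.0 -- RG clearly breaks down for micron grains
\end{verbatim}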