subfolder | filename | abstract | introduction | conclusions | year | month | arxiv_id |
---|---|---|---|---|---|---|---|
1403 | 1403.6111_arXiv.txt | The amount of decaying dark matter accumulated in the central regions of neutron stars, together with the energy deposition rate from decays, may set a limit on the neutron star survival rate against transitions to more compact objects, provided that nuclear matter is not the ultimate stable state of matter and that dark matter is indeed unstable. More generally, this limit sets constraints on the dark matter particle decay time, $\tau_{\chi}$. We find that in the range of uncertainties intrinsic to such a scenario, masses $(m_{\chi}/ \rm TeV) \gtrsim 9 \times 10^{-4}$ or $(m_{\chi}/ \rm TeV) \gtrsim 5 \times 10^{-2}$ and lifetimes ${\tau_{\chi}}\lesssim 10^{55}$ s and ${\tau_{\chi}}\lesssim 10^{53}$ s can be excluded in the bosonic or fermionic decay cases, respectively, in an optimistic estimate, while, more conservatively, the excluded $\tau_{\chi}$ decreases by a factor $\gtrsim10^{20}$. We discuss the conditions under which these results may improve on other current constraints. | | | 14 | 3 | 1403.6111 |
1403 | 1403.6327_arXiv.txt | Recent observations of short gamma-ray bursts (SGRBs) suggest that binary neutron star (NS) mergers can create highly magnetised, millisecond NSs. Sharp cut-offs in $X$-ray afterglow plateaus of some SGRBs hint at the gravitational collapse of these remnant NSs to black holes. The collapse of such `supramassive' NSs also describes the blitzar model, a leading candidate for the progenitors of fast radio bursts (FRBs). The observation of an FRB associated with an SGRB would provide compelling evidence for the blitzar model and the binary NS merger scenario of SGRBs, and lead to interesting constraints on the NS equation of state. We predict the collapse times of supramassive NSs created in binary NS mergers, finding that such stars collapse $\sim10\,{\rm s}$ -- $4.4\times10^{4}\,{\rm s}$ ($95\%$ confidence) after the merger. This directly impacts observations targeting NS remnants of binary NS mergers, providing the optimal window for high time resolution radio and $X$-ray follow-up of SGRBs and gravitational wave bursts. | Fast radio bursts \citep[FRBs; ][]{lorimer07,keane12,thornton13} are among the most exciting astronomical discoveries of the last decade. They are intense, millisecond-duration broadband bursts of radio waves so far detected in the 1.2\,GHz to 1.6\,GHz band, with dispersion measures between $300$\,cm$^{-3}$\,pc and $1200$\,cm$^{-3}$\,pc. FRBs are not associated with any known astrophysical object. Their anomalously large dispersion measures given their high galactic latitudes, combined with the observed effects of scattering consistent with propagation through a large ionised volume \citep{lorimer07,thornton13}, suggest a cosmological origin for FRBs \citep[although, see][]{sarah11,loeb13,bannister14}. Numerous physical mechanisms have been suggested to produce FRBs at cosmological distances, including magnetar hyperflares \citep[e.g.,][]{popov13}, binary white dwarf \citep{kashiyama13} or neutron star \citep[NS;][]{totani13} mergers, and the collapse of supramassive NSs to form black holes \citep{falcke13}. Here, we focus on the latter mechanism, termed the `blitzar' model. A supramassive NS is one that has a mass greater than the non-rotating maximum mass, but is supported from collapse by rotation. As these stars spin down, they lose centrifugal support and eventually collapse to black holes. When magnetic field lines cross the newly-formed horizon, they snap violently, and the resulting outwardly-propagating magnetic shock dissipates as a short, intense radio burst \citep{falcke13,dap13}. The merger of two NSs is one possible formation channel for supramassive NSs. In general, there are three possible outcomes of such a merger, which, for a given NS equation of state (EOS), depend on the nascent NS mass, $M_p$, and angular momentum distribution. These outcomes are as follows: \begin{enumerate} \item If $M_{p}\leq M_{\rm TOV}$, where $M_{\rm TOV}$ is the maximum non-rotating mass, the NS will settle to an equilibrium state that is uniformly rotating and eternally stable \citep[e.g.,][]{giacomazzo13}. \item If $M_{p}>M_{\rm TOV}$, and if magnetic braking has caused the star to rotate uniformly (see below), it will survive for $\gg1$\,s as a supramassive star until centrifugal support is reduced to the point where the star collapses to a black hole \citep[e.g.,][]{duez06}. 
\item If $M_{p}$ is greater than the maximum mass that may be supported by uniform rotation, the NS may either instantly collapse to a black hole or survive for $10-100$\,ms as a hypermassive star supported by differential rotation and thermal pressures \citep[e.g.,][]{baiotti08,kiuchi09b,rezzolla11,hotokezaka13}. \end{enumerate} Gamma-ray bursts (GRBs) with short durations ($\lesssim2$\,s) and hard spectra are associated with the coalescences of compact object binaries, i.e., either NS-NS or NS-black hole mergers \citep{nakar07,lee07,berger13}. However, the emission mechanisms of short GRBs (SGRBs) are not well understood. From a theoretical standpoint, a short-lived, collimated, relativistic jet can be launched from both black hole \citep[e.g.,][]{rezzolla11} and NS \citep[e.g.,][]{metzger08a} remnants of compact binary coalescences, as necessitated by the `relativistic fireball' model of prompt GRB emission \citep[e.g.,][]{piran99,nakar07,lee07}. Jet launching through magnetic acceleration requires a short-lived ($\sim0.1-1$\,s) accretion disk, and toroidal and small-opening-angle poloidal magnetic fields that are both $\sim10^{15}$\,G \citep[e.g.,][]{komissarov09}. Numerical simulations of the merger of two 1.5$M_{\odot}$ NSs with $10^{12}$\,G poloidal magnetic fields \citep{rezzolla11} result in a black hole remnant with the necessary conditions, where the magnetic field is amplified through magnetohydrodynamic instabilities, to launch a jet with an energy output of $\sim1.2\times10^{51}$\,erg. This is consistent with SGRB observations \citep{nakar07,lee07}. Simulations of a lower-mass NS binary coalescence by \citet{giacomazzo13} resulted in a stable millisecond `protomagnetar', although the small-scale instabilities required to amplify the magnetic field to $\sim10^{15}$\,G were unresolved. On the other hand, high-resolution simulations of isolated NSs \citep[e.g.,][]{duez06} show that magnetic field amplification to $\sim10^{15}$\,G and accretion disk formation is possible for nascent protomagnetars, suggesting that protomagnetars can power prompt SGRB emission through magnetically accelerated jets. Baryon-free energy deposition through neutrino-antineutrino annihilation, driven by accretion onto protomagnetars, is another mechanism to power prompt SGRBs \citep{metzger08a}. In this paper, we assume that nascent protomagnetars formed through binary NS coalescences can power prompt SGRB emission, although we note that a deeper understanding is still required. The most attractive characteristic of the protomagnetar model for SGRBs is its ability to explain features of the $X$-ray afterglows. Lasting a few hundred seconds following the initial burst, these features are either extremely bright \citep[up to 30 times the fluence of the prompt emission;][]{perley09} and variable on timescales comparable to prompt emission variability, or smoothly decaying plateaus \citep{rowlinson13}. Both features are difficult to explain through, for example, fall-back accretion onto black hole central engines \citep[][and references therein]{metzger08a,bucciantini12}, although see \citet{siegel2014}. However, the bright $X$-ray afterglows are explained by the effects of surrounding material on outflows from millisecond protomagnetars with strong magnetic fields \citep{metzger08a,bucciantini12}. 
The plateau phases observed in 65\% of SGRB $X$-ray afterglow lightcurves are consistent with electromagnetic spin-down energy losses from protomagnetars with dipolar magnetic field strengths of $B_{p}>10^{15}$\,G and rotation periods at the beginning of the X-ray emitting phase of $p_{0}\sim1$\,ms \citep{zhang01,rowlinson13}. Of this population, 39\% show an abrupt decline in $X$-ray flux within 50$-$1000\,s of the SGRB event, which is interpreted by \citet{rowlinson13} as the gravitational collapse of supramassive protomagnetars to black holes. A similar interpretation is given to abrupt declines in the $X$-ray afterglows of long GRBs \citep{troja07,lyons10}. In this paper, motivated by the possible connection between the protomagnetar model for SGRBs and the blitzar scenario for FRBs \citep[e.g.,][]{zhang14}, we present a robust estimate of the possible lifetimes of supramassive protomagnetars created in binary NS mergers.\footnote{We stress that FRBs associated with SGRBs are likely to represent only a subset of the FRB population \citep{zhang14}.} The observation of an FRB associated with a sharp decline in the plateau phase of an SGRB $X$-ray lightcurve would provide powerful evidence for the protomagnetar model for SGRBs, as well as the blitzar model of FRBs. FRB/SGRB associations may also be used to characterise the intergalactic baryon content of the Universe \citep{deng14} through measurements of FRB dispersion measures and SGRB redshifts. Our results also provide a quantitative guide to interpreting observations of declines in SGRB $X$-ray lightcurves \citep{rowlinson13} in the context of supramassive protomagnetar collapse. In particular, secure measurements of the lifetimes of supramassive NSs produced in binary NS mergers allow for interesting new constraints to be placed on the equation of state (EOS) of nuclear matter \citep{lasky13c}. Mergers of binary NSs are prime candidate sources for ground-based gravitational wave (GW) interferometers \citep{abadie10}. However, the collimated nature of SGRBs \citep[e.g.,][]{burrows06} implies not all GW events will be associated with SGRBs. Some alternative electromagnetic signals of GW bursts \citep{zhang13b,gao13} rely on the births of stable or supramassive NSs acting as central engines, implying the timescales for these signals are sensitively related to the lifetimes of the nascent NSs and therefore the calculations performed herein. We show that, if a binary NS merger remnant does not collapse in the first $\sim4.4\times10^{4}\,{\rm s}$ after formation, it is unlikely to collapse at all (97.5\% confidence). That is, almost all supramassive NSs born from binary NS mergers will collapse to form black holes within $\sim4.4\times10^{4}\,{\rm s}$. In \S2, we summarise our method for calculating the range of collapse times of merger remnants. We check the consistency of this method against a sample of SGRBs with $X$-ray plateaus from \citet{rowlinson13} in \S3. In \S4, we generalise our calculations to the full population of binary NS merger remnants, and we present our conclusions in \S5. | We predict the collapse times of supramassive protomagnetars born from the merger of two neutron stars (NSs). We show that, if the protomagnetar has not collapsed within $4.4\times10^{4}\,{\rm s}$, the probability that it will ever collapse is small; quantitatively, $P(t_{\rm col}>4.4\times10^{4}\,{\rm s})=0.025$. 
We also consider the dependence on the assumed NS equation of state (EOS) and the distribution of initial spin periods of the fractions of binary NS mergers that result in supramassive and eternally stable NSs. Of the scenarios we consider, only EOSs similar to GM1 and $p_{\rm max}>8$\,ms are consistent with the current short gamma-ray burst (SGRB) $X$-ray lightcurve sample \citep{rowlinson13}. Our results strongly impact observations targeting protomagnetars created in binary NS mergers. We consider these observations in turn: \begin{enumerate} \item \textit{Fast radio bursts (FRBs) from blitzars associated with SGRBs.} \citet{falcke13} posit that the collapse of a supramassive NS causes the emission of an intense radio burst. These authors further suggest that $\sim10^{3}$\,yr may be required to sufficiently clear the NS environments to allow the efficient outward propagation of blitzar radio emission. However, \citet{zhang14} found that GRB blast waves and the shocked circum-burst media are likely to have plasma oscillation frequencies significantly lower than FRB emission frequencies, indicating that blitzars occurring shortly after SGRBs will be visible. Our results show that radio follow-up observations of SGRBs to detect FRBs have an optimal detection window between 10\,s and $4.4\times10^{4}$\,s after the initial burst. This is consistent with the calculation of \citet{zhang14}, who suggested radio follow-up observations commencing 100\,s following the initial burst. We note that from the SGRB sample of \citet{rowlinson13}, it is likely that between 15\% and 25\% of SGRBs will result in supramassive stars. \item \textit{$X$-ray SGRB follow-up.} The theoretical framework that we use to calculate the collapse time window provides a guide to interpreting $X$-ray observations of SGRBs designed to be sensitive to abrupt declines in plateau phases of the lightcurves. The observation of an abrupt decline greater than $4.4\times10^{4}$\,s following the initial burst would be inconsistent with this framework. \item \textit{Electromagnetic counterparts to gravitational wave sources.} A large fraction of any non-SGRB signatures of binary NS mergers that rely on a protomagnetar central engine \citep{zhang13b,gao13} may have timescales associated with the distribution of supramassive NS collapse times (Fig.~3). Any such signatures with corresponding timescales, conversely, may be suggestive of binary NS mergers. \end{enumerate} Our calculations have various uncertainties that require further study. The basic electromagnetic spin down formula we apply to calculate $t_{\rm col}$ in Eq.~(2) requires revision to fully model the spin down torques of binary NS merger remnants \citep{lasky13c}. Additional spin-down mechanisms cause $t_{\rm col}$ to reduce in general, and may affect the overall fractions of supramassive NSs created in binary NS mergers. We also do not account for changes in the gravitational masses, radii and moments of inertia of NSs as they spin down, although these effects are not likely to significantly change our results \citep[e.g., see][]{falcke13}. Our assumption of a simple dipole field structure orthogonally oriented to the rotation axis is also unlikely to be fully correct, although we do not account for this uncertainty in our predictions. The biggest uncertainties included in our predictions of $t_{\rm col}$ come from the assumed distributions for $p_0$ and $B_p$ and the assumed EOS. 
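To make this sensitivity concrete, a minimal Monte Carlo sketch of the kind of calculation involved is given below; it assumes standard vacuum magnetic-dipole spin-down of an orthogonal rotator rather than the paper's exact Eq.~(2), and the prior ranges, stellar radius, moment of inertia and critical spin period are illustrative placeholders only.

```python
import numpy as np

# Collapse-time sketch for a supramassive NS, assuming vacuum dipole spin-down
# (dOmega/dt = -k Omega^3).  All numbers below are illustrative placeholders.
rng = np.random.default_rng(0)

c = 3.0e10          # speed of light [cm/s]
I = 1.0e45          # fiducial moment of inertia [g cm^2]
R = 1.2e6           # fiducial stellar radius [cm]
p_crit = 2.0e-3     # placeholder EOS-dependent critical spin period [s]

N = 100_000
p0 = rng.uniform(0.7e-3, 2.0e-3, N)       # initial spin period [s] (placeholder prior)
Bp = 10.0**rng.uniform(15.0, 16.5, N)     # dipole field strength [G] (placeholder prior)

# Time to spin down from p0 to p_crit under dipole braking:
# t_col = 3 c^3 I (p_crit^2 - p0^2) / (4 pi^2 Bp^2 R^6)
t_col = 3.0*c**3*I*(p_crit**2 - p0**2) / (4.0*np.pi**2 * Bp**2 * R**6)
t_col = t_col[p0 < p_crit]                # keep stars born spinning below the critical period

lo, med, hi = np.percentile(t_col, [2.5, 50.0, 97.5])
print(f"t_col: median ~ {med:.1e} s, 95% interval ~ [{lo:.1e}, {hi:.1e}] s")
```

With these placeholder choices the collapse times come out between roughly a second and a few thousand seconds, i.e. of the same order as the observationally motivated window discussed above; the width of the interval is driven almost entirely by the assumed spreads in $B_p$ and $p_0$.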
We have, however, chosen the broadest possible priors on $p_0$ and $B_p$, and consider a variety of EOSs, implying the range of possible $t_{\rm col}$ values we find is conservative. In conclusion, we strongly urge high time resolution radio and $X$-ray follow-up observations of short gamma-ray bursts, and in the future gravitational wave bursts, from binary neutron star mergers. The optimal window for these observations is between 10\,s and $4.4\times10^{4}$\,s after the initial bursts. These observations have the potential to provide strong evidence for the protomagnetar model of short gamma-ray bursts and the blitzar model of fast radio bursts. | 14 | 3 | 1403.6327 |
1403 | 1403.0946.txt | In this paper, we study static vacuum solutions of quantum gravity at a fixed Lifshitz point in (2+1) dimensions, and present all the diagonal solutions in closed form in the infrared limit. The exact solutions represent spacetimes with very rich structures: they can represent generalized BTZ black holes, Lifshitz space-times or Lifshitz solitons, in which the spacetimes are free of any kind of space-time singularities, depending on the choices of the free parameters of the solutions. We also find several classes of exact static non-diagonal solutions, which represent similar space-time structures to those given in the diagonal case. The relevance of these solutions to the non-relativistic Lifshitz-type gauge/gravity duality is discussed. | \renewcommand{\theequation}{1.\arabic{equation}} \setcounter{equation}{0} Anisotropic scaling plays a fundamental role in quantum phase transitions in condensed matter and ultracold atomic gases \cite{CS13}. Recently, such studies have received considerable momentum from the string theory community in the context of gauge/gravity duality \cite{MP}. This is a duality between a quantum field theory (QFT) in D dimensions and a quantum gravity, such as string theory, in (D+1) dimensions. An initial example was found between the supersymmetric Yang-Mills gauge theory with maximal supersymmetry in four dimensions and a string theory on a five-dimensional anti-de Sitter space-time in the low energy limit \cite{MGKPW}. Soon, it was discovered that such a duality is not restricted to the above systems, and can be valid among various theories and in different spacetime backgrounds \cite{MP}. One of the remarkable features of the duality is that it relates a strong coupling QFT to a weak coupling gravitational theory, or vice versa. This is particularly attractive to condensed matter physicists, as it may provide hope of understanding the strong coupling systems encountered in quantum phase transitions by simply studying the dual weakly coupled gravitational theory \cite{Sachdev}. Otherwise, it has been found extremely difficult to study such systems. Such studies were initiated in \cite{KLM}, in which it was shown that nonrelativistic QFTs that describe multicritical points in certain magnetic materials and liquid crystals may be dual to certain nonrelativistic gravitational theories in the Lifshitz space-time background \footnote{Another space-time that is conjectured to be holographically dual to such strongly coupled systems is the Schr\"odinger space-time \cite{Son}, in which the related symmetry algebra is Schr\"odinger, instead of Lifshitz. However, to realize such an algebra, it was found that the space-time needs to be $(D+2)$-dimensional, instead of $(D+1)$-dimensional.}, \bq \lb{1.0} ds^2 = - \left(\frac{r}{\ell}\right)^{2z} dt^2 + \left(\frac{r}{\ell}\right)^{2}dx^i dx^i + \left(\frac{\ell}{r}\right)^{2} dr^2, %ds^2 = - r^{2z} dt^2 + {r}^{2}dx^i dx^i + \frac{dr^2}{r^2}. \eq where $z$ is a dynamical critical exponent, and $\ell$ a dimensional constant. Clearly, the above metric is invariant under the anisotropic scaling, \bq \lb{1.1} t \rightarrow b^{z} t, \;\;\; {\bf x} \rightarrow b {\bf x}, \;\;\; \eq provided that $r$ scales as $r \rightarrow b^{-1}r$. Thus, for $z \not= 1$ the relativistic scaling is broken, and to have the above Lifshitz space-time as a solution of general relativity (GR), it is necessary to introduce gauge fields to create a preferred direction, so that the anisotropic scaling (\ref{1.1}) becomes possible.
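As a quick check of the invariance just stated, the short symbolic sketch below substitutes the anisotropic scaling (1.1), together with $r \rightarrow b^{-1}r$, into the line element (1.0); treating the coordinate differentials as formal symbols is only an illustrative shortcut, not part of the original analysis.

```python
import sympy as sp

t, x, r, ell, z, b = sp.symbols('t x r ell z b', positive=True)
dt, dx, dr = sp.symbols('dt dx dr', positive=True)

# Line element (1.0), with the coordinate differentials kept as formal symbols.
def ds2(r_, dt_, dx_, dr_):
    return (-(r_/ell)**(2*z) * dt_**2
            + (r_/ell)**2 * dx_**2
            + (ell/r_)**2 * dr_**2)

original = ds2(r, dt, dx, dr)

# Anisotropic scaling (1.1): t -> b^z t, x -> b x, together with r -> b^{-1} r,
# so that dt -> b^z dt, dx -> b dx and dr -> b^{-1} dr.
scaled = ds2(r/b, b**z*dt, b*dx, dr/b)

# The difference simplifies to zero: the Lifshitz metric is scale invariant.
print(sp.simplify(sp.powsimp(scaled - original, force=True)))
```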
In \cite{KLM}, this was realized by two p-form gauge fields with $p = 1, 2$, and was soon generalized to different cases \cite{Mann}. It should be noted that the Lifshitz space-time is singular at $r = 0$ \cite{KLM}, and this singularity is generic in the sense that it cannot be eliminated by simply embedding it in higher-dimensional spacetimes, and that test particles/strings become infinitely excited when passing through the singularity \cite{Horowitz}. To resolve this issue, various scenarios have been proposed \cite{HKW}. There have also been attempts to cover the singularity by horizons \cite{LBHs}, and replace it by Lifshitz solitons \cite{LSoliton}. On the other hand, starting with the anisotropic scaling (\ref{1.1}), recently Ho\v{r}ava constructed a theory of quantum gravity, the so-called Ho\v{r}ava-Lifshitz (HL) theory \cite{Horava}, which is power-counting renormalizable, and lately has attracted a great deal of attention, due to its remarkable features when applied to cosmology and astrophysics \cite{reviews}. The HL theory is based on the perspective that Lorentz symmetry should appear as an emergent symmetry at long distances, but can be fundamentally absent at short ones \cite{Pav}. In the ultraviolet (UV), the system exhibits a strong anisotropic scaling between space and time with $z \ge D$, while in the infrared (IR), high-order curvature corrections become negligible, and the lowest order terms $R$ and $\Lambda$ take over, whereby the Lorentz invariance (with $z = 1$) is expected to be ``accidentally restored," where $R$ denotes the D-dimensional Ricci scalar of the leaves $t =$ Constant, and $\Lambda$ the cosmological constant. Since the anisotropic scaling (\ref{1.1}) is built in by construction in the HL gravity, it is natural to expect that the HL gravity provides a minimal holographic dual for non-relativistic Lifshitz-type field theories with the anisotropic scaling and dynamical exponent $z$. Indeed, recently it was shown that the Lifshitz spacetime (\ref{1.0}) is a vacuum solution of the HL gravity in (2+1) dimensions, and that the full structure of the $z=2$ anisotropic Weyl anomaly can be reproduced in dual field theories \cite{GHMT}, while its minimal relativistic gravity counterpart yields only one of two independent central charges in the anomaly. In this paper, we shall provide further evidence to support the above speculations, by constructing various solutions of the HL gravity, and show that these solutions provide all the space-time structures found recently in GR with various matter fields, including the Lifshitz solitons \cite{LSoliton} and generalized BTZ black holes. Some solutions represent incomplete space-time, and extensions beyond certain horizons are needed. After the extension, they may represent Lifshitz black holes \cite{LBHs}. The distinguishing features of these solutions are that: (i) they are exact vacuum solutions of the HL gravity without any matter; and (ii) the corresponding metrics are given explicitly and in closed form, in contrast to the relativistic cases in which most of the solutions were constructed numerically \cite{LBHs,LSoliton}. We expect that this will considerably facilitate the study of the holographic duality between non-relativistic Lifshitz QFTs and theories of quantum gravity.
It should be noted that the definition of black holes in the HL gravity is subtle \cite{HMTb,GLLSW}, because of the inclusion of high-order derivative operators, for which the dispersion relation in general becomes nonlinear, \bq \lb{0.3} E^2 = c_{p}^2 p^2\left(1 + \alpha_1 \left(\frac{p}{M_{*}}\right)^2 + \alpha_2 \left(\frac{p}{M_{*}}\right)^4\right), \eq where $E$ and $p$ denote, respectively, the energy and momentum of the particle, and $c_p$ and $\alpha_i$ are coefficients, depending on the particular species of particle, while $M_{*}$ denotes the suppression energy scale of the higher-dimensional operators. Then, both the phase and group velocities of the particle become unbounded as its momentum increases. As a result, black holes may not exist at all in the HL theory \cite{GLLSW}. However, in the IR the high-order terms of $p$ are negligible, and the first term in Eq.(\ref{0.3}) becomes dominant, so one may still define black holes, following what was done in GR \cite{HE73,Tip77,Hay94,Wangb}. Therefore, in this paper we shall consider the HL gravity in the IR limit. Nevertheless, caution must be taken even in this limit: Because of the Lorentz violation of the theory, spin-0 gravitons generically appear \cite{reviews}, whose velocity in general is different from that of light. To avoid the Cherenkov effects, it is necessary to require it to be no smaller than the speed of light \cite{MS}. As a result, even if they are initially trapped inside the horizons, the spin-0 gravitons can escape from them and make the definition of black holes given in GR invalid \footnote{One might argue that black holes then can be defined in terms of the light cone of these spin-0 gravitons. However, due to the Lorentz violation, other excitations with different speeds might exist, unless a mechanism is invented to prevent this from happening, for example, by assuming that the matter sector satisfies the Lorentz symmetry up to the Planck scale \cite{PS}.}. Fortunately, it was shown recently that universal horizons might exist inside the event horizons of GR, where the preferred time foliation simply ceases to penetrate them within any finite time \cite{BS11}. Universal horizons have already attracted a lot of attention, and various interesting results have been obtained \cite{UHs}. For more details regarding black holes in the HL gravity, we refer readers to \cite{HMTb,GLLSW,BS11,UHs}, and references therein. To simplify the technical issues and be comparable to the studies carried out in \cite{GHMT}, in this paper we shall restrict ourselves only to (2+1)-dimensional spacetimes \footnote{In (2+1)-dimensions, observational constraints from the Cherenkov effects are not applicable, so in principle the speed of the spin-0 gravitons can be smaller than that of light.}, although we find that exact vacuum solutions of the HL gravity in spacetimes of any dimension exist, and have similar space-time structures \cite{LSWW}. Specifically, the paper is organized as follows: In Section II, we give a brief introduction to the non-projectable HL theory in (2+1) dimensions. In Section III, we first present all the static diagonal vacuum solutions of the HL theory, and then study their local and global structures. We find that the Lifshitz space-time (\ref{1.0}) is only one of the whole class of solutions, and the rest of them can represent either Lifshitz solitons, in which space-time is not singular, or generalized BTZ black holes.
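To illustrate the statement about the modified dispersion relation (0.3) quoted earlier in this introduction, the following numerical sketch evaluates the corresponding phase and group velocities in units where $c_p = M_* = 1$; the order-unity coefficients $\alpha_1$ and $\alpha_2$ are illustrative placeholders.

```python
import numpy as np

# Phase and group velocities for E^2 = c_p^2 p^2 [1 + a1 (p/M*)^2 + a2 (p/M*)^4],
# in units where c_p = M* = 1; a1 and a2 are placeholder O(1) coefficients.
a1, a2 = 1.0, 1.0
p = np.logspace(-2, 2, 9)

E = p * np.sqrt(1.0 + a1*p**2 + a2*p**4)
v_phase = E / p
v_group = (1.0 + 2.0*a1*p**2 + 3.0*a2*p**4) / np.sqrt(1.0 + a1*p**2 + a2*p**4)  # dE/dp

for pi, vp, vg in zip(p, v_phase, v_group):
    print(f"p = {pi:8.2e}   v_phase = {vp:10.3e}   v_group = {vg:10.3e}")
# Both velocities grow like p^2 for p >> M*, i.e. they are unbounded, which is
# why the usual notion of a horizon becomes problematic in the UV, while in the
# IR (p << M*) both velocities reduce to c_p.
```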
Some solutions represent incomplete space-time, and extensions beyond certain horizons are needed. After the extension, they may represent Lifshitz black holes \cite{LBHs}. In Section IV, we construct several classes of static non-diagonal ($g_{tr} \not= 0$) vacuum solutions of the HL theory, and find that there exist similar space-time structures to those found in the diagonal case. In Section V, we present our main conclusions. | \renewcommand{\theequation}{5.\arabic{equation}} \setcounter{equation}{0} In this paper, we have studied static vacuum solutions of quantum gravity at a Lifshitz point, proposed recently by Ho\v{r}ava \cite{Horava}, using the anisotropic scaling between time and space (\ref{1.1}). %This is similar to the Lifshitz scalar field \cite{Lifshitz}, well-studied in % condensed matter physics \cite{CS13}, so it is often referred to as the HL theory of gravity. %It is remarkable to note that, in a completely different context, The same scaling was also used in \cite{KLM} to construct the Lifshitz spacetimes (\ref{1.0}) in the context of the non-relativistic gauge/gravity duality. Because of this same scaling, lately it was argued \cite{GHMT} that the HL gravity should provide a minimal holographic dual for non-relativistic Lifshitz-type field theories. In this paper, we have provided further evidence to support such a speculation. In particular, in Section III we have found all the static vacuum diagonal ($g_{tr} = 0$) solutions of the HL gravity, and shown that the corresponding spacetimes have very rich structures. They can represent the generalized BTZ black holes, Lifshitz spacetimes and Lifshitz solitons, depending on the choice of the free parameters involved in the solutions [cf. Figs. \ref{fig1} - \ref{fig5}]. In Section IV, we have generalized our studies presented in Section III to the non-diagonal case where $g_{tr} \not= 0$ (or $N^r \not= 0$), and found several classes of exact solutions. We have shown that there exist similar space-time structures to those found in the diagonal case. Note that some solutions presented in Sections III and IV represent incomplete space-time, and extensions beyond certain horizons are needed. After the extension, they may represent Lifshitz black holes \cite{LBHs}. It would be very interesting to study those spacetimes in terms of the universal horizons \cite{BS11,UHs}. In addition, Penrose's notion of conformal infinity of spacetime was generalized to the case with anisotropic scaling \cite{HMTb}, and one may wonder how black holes can be defined in terms of anisotropic conformal infinities. Furthermore, what is the corresponding thermodynamics of black holes defined in this way? Clearly, such studies are beyond the scope of the current paper, and we would like very much to come back to these important issues soon on another occasion. Finally, we note that, although our studies presented in this paper have been restricted to (2+1)-dimensional spacetimes, we find that static vacuum solutions of the HL gravity in higher dimensional space-times exhibit similar space-time structures \cite{LSWW}. This is not difficult to understand, if we note that the higher dimensional space-time $ds^2_{D+1}$ is simply the superposition of the (2+1)-dimensional space-time given in this paper, and a $(D-2)$-spatial partner, \bqn \lb{HD} ds^{2}_{D+1} &=& ds^{2}_{2+1} \oplus ds^{2}_{D-2}\nb\\ &=& - f^2(r) r^{2z}dt^2 + \frac{g^2(r)} {r^2}\left(dr + N^r(r)dt\right)^2 \nb\\ && + r^2dx^2 + r^2\sum_{i = 1}^{D-2}{dx^idx^i}.
\eqn Therefore, the space-time structures are mainly determined by the sector $ g_{ab}dx^adx^b\; (a, b = t, r)$. With these exact vacuum solutions, it is expected that the studies of the non-relativistic Lifshitz-type gauge/gravity duality will be simplified considerably, and we wish to return to these issues soon. | 14 | 3 | 1403.0946 |
1403 | 1403.4052_arXiv.txt | { Internal gravity waves (hereafter IGWs) are studied for their impact on the angular momentum transport in stellar radiation zones and the information they provide about the structure and dynamics of deep stellar interiors. We present the first 3D nonlinear numerical simulations of IGWs excitation and propagation in a solar-like star. } { The aim is to study the behavior of waves in a realistic 3D nonlinear time-dependent model of the Sun and to characterize their properties. } { We compare our results with theoretical and 1D predictions. It allows us to point out the complementarity between theory and simulation and to highlight the convenience, but also the limits, of the asymptotic and linear theories. } { We show that a rich spectrum of IGWs is excited by the convection, representing about 0.4\% of the total solar luminosity. We study the spatial and temporal properties of this spectrum, the effect of thermal damping, and nonlinear interactions between waves. We give quantitative results for the modes' frequencies, evolution with time and rotational splitting, and we discuss the amplitude of IGWs considering different regimes of parameters. } { This work points out the importance of high-performance simulation for its complementarity with observation and theory. It opens a large field of investigation concerning IGWs propagating nonlinearly in 3D spherical structures. The extension of this work to other types of stars, with different masses, structures, and rotation rates will lead to a deeper and more accurate comprehension of IGWs in stars. } | IGWs are perturbations propagating in stably stratified regions under the influence of gravity. Planetary atmospheres and stellar radiation zones are therefore ideal places to find them. For example, they can be observed in striated cloud structures in Earth's atmosphere where they are known to produce large-scale motions such as the quasi-biennial oscillation (QBO) in the lower stratosphere \citep{1978JAtS...35.1827P,1997JGR...10226053D,2001RvGeo..39..179B,2002GeoRL..29.1245G}. In stars, IGWs propagate in the radiative cores of low-mass stars and the external envelopes of intermediate-mass and massive stars \citep[e.g.,][]{aerts2010asteroseismology}. {High-frequency gravity modes have been observed in solar-like stars \citep[e.g.,][]{1995ApJ...443L..29C} and more massive stars}. IGWs are known for their ability to mix {chemical} species and to transport angular momentum, affecting the evolution of stars. They can be excited by several processes, depending on the type of stars being considered. In single stars, three excitation processes have been invoked. {First}, the $\kappa$-mechanism is due to opacity bumps in ionization regions \citep[e.g.,][]{1989nos..book.....U,Gastine:2010ki}. {Next}, the $\epsilon$-mechanism occurring in massive evolved stars is a modulation of the nuclear reaction rate in the core \citep[e.g.,][]{2012ApJ...749...74M}. {Finally}, for solar-type stars, IGWs are mainly excited by stochastic motions such as the pummeling of convective plumes at the interface with adjacent radiative zones \citep[e.g.,][]{1986ApJ...311..563H,1990ApJ...363..694G,BBT2004,2005ApJ...620..432R,Belkacem:2009cc,Brun:2011bl,Shiode:2013kp,Lecoanet:ws}.\\\newline {The propagation of IGWs in stellar radiative zones can affect their evolution on secular timescales.} They have been subject to intense theoretical studies, invoking them to explain several physical mechanisms. 
Together with the large-scale meridional circulation \citep{1992A&A...265..115Z,2004A&A...425..229M}, the different hydrodynamical shear and baroclinic instabilities \citep{1983apum.conf..253Z}, and the fossil magnetic field \citep{1998Natur.394..755G,BrunZahn2006,2008MNRAS.391.1239G,Duez:2010hg,2011AN....332..891S}, IGWs constitute the fourth main process responsible for the angular momentum redistribution in radiative interiors. Indeed, when they propagate, IGWs are able to transport and deposit a net amount of angular momentum by radiative damping \citep{Press1981,1993A&A...279..431S,ZahnTalonMatias1997} and corotation resonances \citep{Booker:1967wd,2013A&A...553A..86A}. Their action induces important changes in the internal rotation profiles of stars during their evolution \citep{2008A&A...482..597T,Charbonnel:2013df,2013A&A...558A..11M}. In the particular case of the Sun, IGWs are serious candidates to explain the solid body rotation of its radiative interior down to 0.2$R_\odot$ \citep{1999ApJ...520..859K,CharbonnelTalon2005}. They may also provide the extra mixing required to answer the Li depletion question in F stars \citep{1991ApJ...377..268G} and in the Sun \citep{Montalban:1994wx}.\\\newline {By interfering constructively, IGWs form standing modes also known as gravity (g) modes.} Indeed, gravity waves' frequencies must be lower than the Brunt-Väisälä (BV) frequency deduced from the characteristics of the star (gravity, density, and pressure profiles). For this reason, IGWs can propagate only in a limited cavity and can enter into resonance, depending on the geometry of this cavity. {Such modes have become} the object of study of astero- and helioseismology \citep{aerts2010asteroseismology,Christensen-Dalsgaard97lecturenotes}, together with acoustic (p) modes. Detecting and characterizing g-modes is of great interest for obtaining information about the inner structure of different types of stars.\\\newline For white dwarfs, \citet{1968ApJ...153..151L} was the first to observe a rapid timescale oscillation in the single white dwarf now known as HL Tau 76. Four years later, \citet{1972NPhS..239....2W} and \citet{1972NPhS..236...83C} were able to identify these oscillations with nonradial gravity mode pulsations. Today, an abundance of reports of high-frequency variability in white dwarf stars has been found and used to understand the motions and internal composition of these stars \citep{2005EAS....17..133V,Winget:2008dz}. In the case of subdwarf B (sdB) stars, \citet{Green:2002va} observed a new class of sdB pulsators with periods of about an hour corresponding to gravity modes. Other reports have been made of detections of gravity modes in the {upper} main-sequence (for example, in slowly pulsating B (SPB) and Be stars) \citep{1991A&A...246..453W,DeCat:2011gq,2012A&A...546A..47N}. In the past few years, the importance of g-modes has been underlined thanks to the CoRoT and Kepler missions. In particular, the detection of mixed-modes{ that have the character of g-modes in the core region and of p-modes in the envelope} has led to numerous results in red giant seismology \citep[see][for a complete review]{Mosser:2013vu}.
For instance, \citet{Bedding:2011te} have used them as a way to distinguish between hydrogen- and helium-burning red giants, and they also provide good results for the deduction of the core rotation from the measurement of their rotational splitting \citep{2012Natur.481...55B,Mosser:2013uk,2012ApJ...756...19D}.\\\newline However, g-modes remain hardly detectable in the Sun and solar-like stars \citep{Anonymous:0GcaI-aC,TurckChieze:2004vw,2010A&ARv..18..197A}. Indeed, these stars possess outer convective envelopes where IGWs are evanescent. They thus have a low amplitude when they reach the photosphere level where one could have a chance to detect the oscillations. In past years, an intense research effort has been invested in the quest for the detection of g-modes in the Sun. {Both theoretical and numerical works have been undertaken to estimate the solar g-modes' frequencies \citep{1991SoPh..133..127B} and surface amplitudes} \citep{gough1986seismology,1990A&A...227..563B,1992A&A...257..763A,Anonymous:0GcaI-aC,1996A&A...312..610A,2009A&A...494..191B}, concluding that the most powerful modes should have amplitudes of about $10^{-3}$ to $10^{-1}$ cm/s \citep{2010A&ARv..18..197A}. Detection of g-modes at the surface of the Sun was one of the goals of the SOHO mission \citep{1995SSRv...72...81D}. Today, asymptotic signatures of gravity modes have been found \citep{Garcia:2007iq} and used to constrain the rotation of the core \citep{Mathur:2008hs}, but the detection of individual g-modes at the surface of the Sun seems to elude the community. \\\newline In parallel with observational and theoretical works, numerical simulations can help in understanding IGWs' properties and behavior in solar-like stars. In the Sun, the main mechanism for exciting IGWs is convective overshoot. Thus, a series of studies have been performed to determine the extent of the convective penetration zone and the resulting excitation of IGWs in 2D \citep{1984A&A...140....1M,1986ApJ...311..563H,1994ApJ...421..245H,2005ApJ...620..432R,Rogers:2006ks}, and in 3D \citep{2000ApJ...529..402S,Brun:2011bl}. Some authors also compared the spectrum of IGWs excited by convection and the energy flux carried by the waves with simpler parametric models of wave generation \citep{1994SoPh..152..241A,1996A&A...312..610A,2003AcA....53..321K,Kiraga:2005wg,2005A&A...438..365D}. Finally, the transport of angular momentum by waves has been studied with 1D stellar evolution codes \citep{Talon:2005iu} but also in 2D \citep{2006ApJ...653..756R}. {Here}, the use of a realistic stratification in radiation zones is of great importance. Indeed, g-modes are very sensitive to the form of the cavity defined by the BV frequency, particularly for the central region, under 0.2$R_\odot$ \citep{Brun1998,2012sf2a.conf..289A}. For instance, a slight modification of the nuclear reaction rates in the model taken for calculating the BV frequency can induce a frequency shift of up to $2\mu$Hz in the range 50-300$\mu$Hz where solar g-modes are expected to be found. Moreover, as shown by \citet{RogersGlatzmaier2005} and \citet{Rogers:2008bl}, the effects of wave-wave and wave-mean-flow nonlinear interactions have to be taken into account, which puts nonlinear codes in the foreground. \\\newline In the present work, we show results of 3D spherical nonlinear simulations of a full-sphere solar-like star. The computational domain extends from 0 to 0.97$R_\odot$, taking the full radiative cavity into account.
IGWs are naturally excited by penetrative convection at the interface with the inner radiative zone and can propagate and {give birth to standing modes} in the cavity. The paper is organized in four sections. After introducing the equations and notations that define the numerical models, we show in Sect.~\ref{sec:excit-penetr-conv} that a rich spectrum of IGWs is excited by convective penetration. In Sect.~\ref{sec:waves-properties}, we examine the properties of this spectrum precisely, highlighting its richness where both modes and propagating waves are present. We give quantitative results about the group velocity of such waves, and we measure their period spacing, their lifetime, and the splitting induced by the rotation. Lastly, Sect.~\ref{sec:amplitudes} presents our results concerning the waves' amplitude and the effect of the radiative damping affecting their propagation. In particular, we discuss the effect of the diffusivities on the amplitude of waves and the nonlinear wave-wave interactions. | In this paper, we have presented the first study of IGW stochastic excitation and propagation in a 3D spherical Sun using a realistic stratification in the radiative zone and a nonlinear coupling between radiative and convective zones. {This configuration allows a direct comparison with seismic studies}. These results are extremely rich, and we are still at the beginning of their exploration and comprehension. \\\newline Since \citet{BrunZahn2006}, the ASH code has entered a new era because it is no longer dedicated to the study of convective envelopes alone \citep{2000ApJ...533..546E,BrunToomre2002}. The nonlinear coupling with the inner radiative zone opens up a large field of investigation. We presented two recent improvements in the ASH code that have a strong impact on our study of gravity waves. \begin{itemize} \item On the one hand, the implementation of the LBR equations \citep{Brown:2012bd} ensures the correct conservation of energy in the radiative zone and allows IGWs' frequencies and amplitudes to be computed with better accuracy. \item On the other hand, the extension of the computational domain to $r=0$ by imposing special boundary conditions \citep{2007PhRvE..75b6303B} largely improves the treatment of g-modes since we now model the entire radiative cavity without any central cutoff \citep{Brun:2011bl,2012sf2a.conf..289A}. \end{itemize} We then discussed the convective overshoot observed in our models and related this process to the excitation of a large spectrum of IGWs, in agreement with both fluid mechanics and stellar oscillation theory predictions. This spectrum extends from zero to the maximum of the BV frequency ($\sim$0.45 mHz), which implies that both propagative (low-frequency) and standing waves (high-frequency) must be represented. Using our raytracing code \citep[e.g.,][]{goughHouches,Christensen-Dalsgaard97lecturenotes} also contributes to improving our understanding and illustrates the behavior of IGWs as propagative waves, their group and phase velocity, and their location in the 3D sphere. This underlines the complementarity between our simulations of the Sun and linear and asymptotic theories and models. \\ \newline The properties of the spectrum of IGWs presented in this paper are multiple. To understand its structure, we decomposed it into its spatial and temporal parts, and retrieved the results of \citet{Belkacem:2009wl} predicting that the frequency spectrum was better fitted by a Lorentzian-like function than by a Gaussian function.
We also showed the rapid drop of energy with increasing wavenumbers $k_h$. Then, we presented the changes in this spectrum as a function of the depth and proposed a distinction between propagative waves and g-modes. Indeed, this subject is rather hazy in the literature, and it is sometimes difficult to draw the line between the two types. Although they correspond to the same physical process, only g-mode frequencies are described by integers $n$. We then discussed some important properties relative to g-modes. \begin{itemize} \item We applied the same method as \citet{Garcia:2007iq} to detect g-mode signatures at the surface of the Sun and confirmed that the stratification chosen in the model plays an important role in the calculation of g-mode frequencies. \item We also examined the impact of the rotation on g-modes, whose frequencies are split with respect to their azimuthal number $m$. We showed that the precision of the inversion process strongly depends on the radial order of the modes that are considered and that one must include modes at least up to $n$=25 to get a precision of 5\% in the estimation of the rotation rate. \item Finally, we explained that the energy is not equally distributed among values of $m$ but is instead concentrated at high $m$. This shows that the assumption of an equal distribution of the energy, made in several codes, must be treated with caution. Moreover, since high $m$ modes are located close to the equator, these results could guide the search for g-modes at the surface of the Sun. {This last result, in particular, could not have been obtained without taking the three dimensions of the problem into account.} \end{itemize} Finally, we dealt with the energy transferred from the convection to IGWs and then carried by them. \begin{itemize} \item We showed that the different formulae supplied by the literature to estimate this energy give comparable estimates of the percentage of the solar luminosity carried by waves. Indeed, we found that about 0.4\% of the solar luminosity is converted into waves at the interface between radiative and convective zones. \item We pointed out that the radiative damping predicted by the linear theory is much stronger than the one observed and partly explained this difference by considering the impact of the nonlinear processes. \item Finally, concerning the amplitude of g-modes that could be detected at the surface of the Sun, we are not yet able to reach the required domain of parameters, but we showed a promising trend toward a good estimation of these amplitudes. \end{itemize} Our results are of interest for several astrophysical applications. The part concerning g-modes is directly related to helioseismology. The asteroseismology community can {benefit from a better understanding of the waves and} from the fact that other types of stars can be simulated by the ASH code. Concerning low-frequency propagating IGWs, our work provides new information about the radiative damping and the related effect of nonlinearities that must be considered. The spectra presented {and the radiative damping found} can be implemented in stellar evolution codes to provide a more realistic distribution of energy, especially across the $m$ components.\\\newline Finally, several perspectives of this work can be identified and will be the object of future work. We presented in Sect. \ref{sec:rotational-splitting} our first results concerning the effect of the rotation on IGWs.
Following \citet{DintransRieutord2000}, \citet{Ballot:2010jy}, and \citet{Rogers:2013ui}, it should be possible to study the behavior of IGWs in rapidly rotating stars \citep{Mathis:2013wv} and the transport of angular momentum by gravito-inertial waves \citep{Mathis:2008ba,Mathis2009}. Also of great interest could be the addition of a magnetic field to the simulations to characterize its impact on IGWs \citep{1992ApJ...395..307G,2010MNRAS.401..191R,MathisDeBrye2011,Mathis:2012tn}. { Indeed, the presence of a magnetic field will modify the dispersion relation. If its amplitude is high enough, we can anticipate that a large-scale magnetic field trapped in the radiative zone will have a significant impact on the propagation of IGWs, such as wave reflections, filtering, and frequency shifts. In particular, for wave frequencies close to the Alfv\'en frequency, IGWs will be trapped vertically, while for frequencies below the inertial frequency ($2\Omega$) some equatorial trapping will occur. Moreover, we could expect that a time-dependent magnetic field generated by dynamo action would modulate the waves' signal.}\\\newline {This work thus constitutes a first cornerstone where the complementary use of 3D nonlinear simulations and of asymptotic theories allows bringing the study of the excitation, propagation, and damping of gravity waves in stellar interiors to a new level of understanding. Moreover, the potential application to other types of rotating and possibly magnetic stars opens a new window in theoretical asteroseismology in the whole HR diagram.} | 14 | 3 | 1403.4052 |
1403 | 1403.3358_arXiv.txt | We study the mechanisms of the gravitational collapse of the Bose-Einstein condensate dark matter halos, described by the zero temperature time-dependent nonlinear Schr\"odinger equation (the Gross-Pitaevskii equation), with repulsive inter-particle interactions. By using a variational approach, and by choosing an appropriate trial wave function, we reformulate the Gross-Pitaevskii equation with spherical symmetry as Newton's equation of motion for a particle in an effective potential, which is determined by the zero point kinetic energy, the gravitational energy, and the particles interaction energy, respectively. The velocity of the condensate is proportional to the radial distance, with a time dependent proportionality function. The equation of motion of the collapsing dark matter condensate is studied by using both analytical and numerical methods. The collapse of the condensate ends with the formation of a stable configuration, corresponding to the minimum of the effective potential. The radius and the mass of the resulting dark matter object are obtained, as well as the collapse time of the condensate. The numerical values of these global astrophysical quantities, characterizing condensed dark matter systems, strongly depend on the two parameters describing the condensate, the mass of the dark matter particle, and of the scattering length, respectively. The stability of the condensate under small perturbations is also studied, and the oscillations frequency of the halo is obtained. Hence these results show that the gravitational collapse of the condensed dark matter halos can lead to the formation of stable astrophysical systems with both galactic and stellar sizes. | The recently published Planck satellite data \cite{Pl} have generally confirmed the predictions of the standard $\Lambda $CDM ($\Lambda $Cold Dark Matter) cosmological model, as well as the matter composition of the Universe. The $\Lambda $CDM model successfully describes the accelerated expansion of the Universe, the observed temperature fluctuations in the cosmic microwave background radiation, the large scale matter distribution, and the main aspects of the formation and the evolution of virialized cosmological objects. On the other hand the latest Cosmic Microwave Background (CMB) data, as well as the observations of the distant Type IA supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, and the abundance of galaxy clusters, provide compelling evidence that about 95\% of the content of the Universe resides in two unknown forms of matter/energy, called dark matter and dark energy, respectively: the first residing in bound objects as non-luminous matter at the galactic and extragalactic scale \cite{dm}, while the latter is in the form of a zero-point energy that pervades the whole Universe \cite{Rev0, PeRa03}. The dark matter is assumed to be composed of cold neutral weakly interacting massive particles, beyond those existing in the Standard Model of Particle Physics, and not yet detected in accelerators or in dedicated direct and indirect searches. There are many possible candidates for dark matter, the most popular ones being the axions and the weakly interacting massive particles (WIMP) (for a review of the particle physics aspects of dark matter see \cite{OvWe04}). The interaction cross sections of dark matter particles with normal baryonic matter, while extremely small, are expected to be non-zero, and we may expect to detect them directly \cite{AMS}. 
Scalar fields or other long range coherent fields coupled to gravity have also been intensively used to model galactic dark matter \cite{scal}. Alternative theoretical models to explain the galactic rotation curves have also been proposed recently \cite{alt}. Despite its important achievements, at galactic scales of the order of $\sim 10$ kpc, the $\Lambda $CDM model faces major challenges in explaining the observed distribution of the dark matter around the luminous one. In fact, $N$-body simulations, performed in the $\Lambda $CDM scenario, have shown that bound halos surrounding galaxies must have very characteristic density profiles that feature a well pronounced central cusp, $\rho _{NFW}(r)=\rho _{s}/[(r/r_{s})(1+r/r_{s})^{2}]$ \cite{nfw}, where $r_{s}$ is a scale radius and $\rho _{s}$ is a characteristic density. On the observational side, high-resolution rotation curves show, instead, that the actual distribution of dark matter is much shallower than the simulated one, and it presents a nearly constant density core: $\rho _{B}(r)=\rho _{0}r_{0}^{3}/[(r+r_{0})(r^{2}+r_{0}^{2})]$ \cite{bur}, where $r_{0}$ is the core radius and $\rho _{0}$ is the central density. Therefore, to solve this contradiction between observation and theory, new models and a new understanding of dark matter and its properties are required. The observation of the Bose-Einstein condensation in 1995 in dilute alkali gases, such as vapors of rubidium and sodium, confined in a magnetic trap and cooled to very low temperatures \cite{exp}, represented a major breakthrough in condensed matter and statistical physics. At very low temperatures, all particles in a dilute Bose gas condense to the same quantum ground state, forming a Bose-Einstein Condensate (BEC), i.e., a sharp peak over a broader distribution in both coordinate and momentum space. Particles become correlated with each other when their wavelengths overlap, that is, the thermal wavelength $\lambda _{T}$ is greater than the mean inter-particle distance $l$. This happens at a temperature $T<2\pi \hbar ^{2}n^{2/3}/(mk_{B})$, where $m$ is the mass of the particle in the condensate, $n$ is the number density, and $k_{B}$ is Boltzmann's constant \cite{Da99, rev,Pit,Pet,Zar}. A coherent state develops when the particle density is high enough, or the temperature is sufficiently low. From an experimental point of view, the occurrence of the condensation is indicated by a sharp peak in the velocity distribution, observed below a critical temperature. This shows that all the atoms have condensed in the same ground state, with a narrow peak in the momentum and coordinate space \cite{exp}. Quantum degenerate gases have been created by a combination of laser and evaporative cooling techniques, opening several new lines of research, at the border of atomic, statistical and condensed matter physics \cite{Da99,rev,Pit,Pet,Zar}. Since the Bose-Einstein condensation is a phenomenon observed and well studied in the laboratory, the possibility that it may occur on astrophysical or cosmic scales cannot be rejected {\it a priori}. Thus, dark matter, which is required to explain the dynamics of the neutral hydrogen clouds at large distances from the galactic center, and which is a cold, bosonic system, could also be in the form of a Bose-Einstein condensate \cite{Sin}. In these early studies, either a phenomenological approach was used, or the non-relativistic Gross-Pitaevskii equation describing the condensate was investigated numerically.
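The cusp-core contrast between the two density profiles quoted above can be made explicit with a few lines of code; the scale parameters are set to unity here as placeholders, since only the limiting behaviour at small radii matters.

```python
import numpy as np

# NFW (cuspy) versus Burkert (cored) profiles, with all scale parameters set to 1.
rho_s = r_s = rho_0 = r_0 = 1.0

def rho_nfw(r):
    return rho_s / ((r/r_s) * (1.0 + r/r_s)**2)

def rho_burkert(r):
    return rho_0 * r_0**3 / ((r + r_0) * (r**2 + r_0**2))

for r in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"r = {r:6.0e}   NFW = {rho_nfw(r):10.3e}   Burkert = {rho_burkert(r):10.3e}")
# The NFW density diverges as 1/r toward the centre (the simulated cusp), while
# the Burkert density tends to the finite value rho_0 (the observed core).
```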
A systematic study of the condensed galactic dark matter halos and of their properties was performed in \cite{BoHa07}. By introducing the Madelung representation of the wave function, the dynamics of the dark matter halo can be formulated in terms of the continuity equation and of the hydrodynamic Euler equations. Hence condensed dark matter can be described as a non-relativistic, Newtonian gas, whose density and pressure are related by a barotropic equation of state. In the case of a condensate with quartic non-linearity, the equation of state is polytropic with index $n=1$ \cite{BoHa07}. To test the validity of the condensed dark matter model, the Newtonian tangential velocity equation was fitted to a sample of rotation curves of low surface brightness and dwarf galaxies. A very good agreement was found between the theoretical rotation curves and the observational data. Therefore dark matter halos can be described as an assembly of light individual bosons that acquire a repulsive interaction by occupying the same ground energy state. The repulsive interaction prevents gravity from forming the central density cusps. The condensate particle is light enough to naturally form condensates of very small masses that later may coalesce, forming the structures of the Universe in a similar way to the hierarchical clustering of the bottom-up CDM picture. Then, at large scales, a BEC perfectly mimics an ensemble of cold particles, while at small scales quantum mechanics drives the mass distribution. The properties of the Bose-Einstein condensed dark matter halos, as well as their cosmological implications, have been intensively investigated recently. The recently observed size evolution of very massive compact galaxies in the early universe can be explained if dark matter is in a Bose-Einstein Condensate state \cite{Lee0}. The size of the dark matter halos and galaxies depends on the correlation length of dark matter and, hence, on the expansion of the universe. The BEC model predicts that the size of the galaxies increases as the Hubble radius of the universe even without merging, which agrees well with the recent observational data. In \cite{Lee1} it was shown that the finite length scale of the condensate dark matter can explain the recently observed common central mass of the Milky Way satellites ($\sim 10^7M_{\odot}$) independent of their luminosity, if the mass of the dark matter particle is about $10^{-22}$ eV. The validity of the BEC model on the galactic scale was tested in \cite{Har2} using observed rotation curves, by comparing the tangential velocity equation of the model with a sample of eight rotation curves of dwarf galaxies. A good agreement was found between the theoretically predicted rotation curves (without any baryonic component) and the observational data. The mean value of the logarithmic inner slope of the mass density profile of dwarf galaxies was also obtained, and it was shown that the observed value of this parameter is in agreement with the theoretical results. The study of the galactic rotation curves in the BEC model was considered in \cite{Mat1} and \cite{Ger}. The BEC model predicts that all galaxies must be very similar and exist at higher redshifts than in the $\Lambda $CDM model. In \cite{Mat1} the fits of high-resolution rotation curves of a sample of thirteen low surface brightness galaxies were compared with fits obtained using Navarro-Frenk-White and Pseudo-Isothermal (PI) profiles. A better agreement with the BEC model and PI profiles was found.
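The $n=1$ polytropic equation of state mentioned above has a well-known special property: the Lane-Emden equation is solved by $\theta(\xi)=\sin\xi/\xi$, so the first zero sits at $\xi_1=\pi$ and the physical radius is independent of the central density. The short sketch below, which is a standard textbook check rather than anything taken from the paper, recovers this numerically.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden equation for polytropic index n = 1 (the equation of state of a
# condensate with quartic non-linearity, as stated above):
#   theta'' + (2/xi) theta' + theta = 0,   theta(0) = 1, theta'(0) = 0.
def lane_emden_n1(xi, y):
    theta, dtheta = y
    return [dtheta, -theta - 2.0*dtheta/xi]

sol = solve_ivp(lane_emden_n1, (1e-6, 4.0), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

xi = np.linspace(1e-6, 4.0, 400001)
theta = sol.sol(xi)[0]
xi_1 = xi[np.argmax(theta < 0.0)]          # first zero = surface of the polytrope
print(f"first zero at xi_1 = {xi_1:.5f}   (analytic value pi = {np.pi:.5f})")

# For P = K rho^2 the physical radius is R = xi_1 * sqrt(K/(2 pi G)), which does
# not depend on the central density: halos with the same particle mass and
# scattering length all have the same radius in this limit.
```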
The mean value of the logarithmic inner density slopes is $-0.27 \pm 0.18$. A natural way to define the core radius, with the advantage of being model-independent, was also introduced. Using this new definition in the BEC density profile, it was found that the recently observed constant dark matter central surface density can be reproduced. The BEC model gives fits similar to those of the NFW dark matter model for all High Surface Brightness (HSB) and Low Surface Brightness (LSB) galaxies in a sample of 9 galaxies \cite{Ger}. For dark matter dominated dwarf galaxies, the addition of the BEC component improved the purely baryonic fit more than the NFW component did. Thus, despite the sharp cut-off of the halo density, the BEC dark matter candidate is consistent with the rotation curve data of all types of galaxies \cite{Ger}. The dynamics of rotating Bose-Einstein condensed galactic dark matter halos, made of ultralight spinless bosons, and the impact of the halo rotation on the galactic rotation curves were analyzed in \cite{Guz1}. Finite temperature effects on dark matter halos were analyzed in \cite{Har1}, where the condensed dark matter and thermal cloud density and mass profiles at finite temperatures were explicitly obtained. The results show that when the temperatures of the condensate and of the thermal cloud are much smaller than the critical Bose-Einstein transition temperature, the zero temperature density and mass profiles give an excellent description of the dark matter halos. The angular momentum and the vortices in BEC galactic dark matter halos were studied in \cite{Kai}-\cite{Shap2}. In \cite{Pir} it was proposed that the dark matter content of galaxies consists of a cold bosonic fluid, composed of Weakly Interacting Slim Particles (WISPs), represented by spin-0 axion-like particles and spin-1 hidden bosons, thermalized in the Bose-Einstein condensation state and bound by their self-gravitational potential. By comparing this model with data obtained from 42 spiral galaxies and 19 Low Surface Brightness galaxies, the dark matter particle mass was constrained to the range $10^{-6}-10^{-4}$ eV, and the lower bound for the scattering length was found to be of the order of $10^{-14}$ fm. The possibility that, due to their superfluid properties, some compact astrophysical objects may contain a significant part of their matter in the form of a Bose-Einstein condensate was investigated in \cite{Harko3}. To study the condensate, the Gross-Pitaevskii equation with arbitrary non-linearity was used. In this way a large class of stable astrophysical objects was obtained, whose basic astrophysical parameters (mass and radius) sensitively depend on the mass of the condensed particle and on the scattering length. The Bose-Einstein condensation process was studied in a cosmological context in \cite{Har4}, by assuming that this process can be described (at least approximately) as a first-order phase transition. It was shown that the presence of the condensate dark matter and of the Bose-Einstein phase transition could have drastically modified the cosmological evolution of the early universe, as well as the large scale structure formation process. The effects of the finite dark matter temperature on the properties of the Bose-Einstein condensed dark matter halos were analyzed, in a cosmological context, in \cite{Har5}.
The basic equations describing the finite temperature condensate, representing a generalized Gross-Pitaevskii equation that takes into account the presence of the thermal cloud were formulated. The static condensate and thermal cloud in thermodynamic equilibrium was analyzed in detail, by using the Hartree-Fock-Bogoliubov and Thomas-Fermi approximations. It was also shown that finite temperature effects may play an important role in the early stages of the cosmological evolution of the dark matter condensates. The cosmological perturbations in the cosmological models with condensed dark matter were studied in \cite{Har6,Chav,Frei}. The large scale perturbative dynamics of the BEC dark matter in a model where this component coexists with baryonic matter and cosmological constant was investigated in \cite{Wam}. The perturbative dynamics was studied using neo- Newtonian cosmology (where the pressure is dynamically relevant for the homogeneous and isotropic background) which is assumed to be correct for small values of the sound speed. BEC dark matter effects can be seen in the matter power spectrum if the mass of the condensate particle lies in the range $15\; {\rm meV} < m < 700\; {\rm meV}$, leading to a small, but perceptible, excess of power at large scales. Simulation codes that are designed to study the behavior of the dark matter galactic halos in the form of a Bose-Einstein Condensate were developed in \cite{Mad} and \cite{Guz2}. In \cite{Pires1} it was shown that once appropriate choices for the dark matter particle mass and scattering length are made, the galactic dark matter halos composed by axion-like Bose-Einstein Condensed particles, trapped by a self-gravitating potential, may be stable in the Thomas-Fermi approximation. The validity of the Thomas-Fermi approximation for the halo system was also discussed, and it was shown that the kinetic energy contribution is indeed negligible. The Thomas-Fermi approximation for the study of the condensed dark matter halos was also discussed in \cite{Toth}. The Thomas-Fermi approximation is based on the assumption that in the presence of a large number of particles, the kinetic term in the Gross-Pitaevskii energy functional can be neglected. However, this assumption is violated near the condensate surface. It was also shown that the total energy of the self-gravitating condensate in the Thomas-Fermi approximation is positive. A major recent experimental advance in the study of the Bose-Einstein condensation processes was the observation of the collapse and subsequent explosion of the condensates \cite{Don}. A dynamical study of an attractive $^{85}$Rb BEC in an axially symmetric trap was done, where the interatomic interaction was manipulated by changing the external magnetic field, thus exploiting a nearby Feshbach resonance. In the vicinity of a Feshbach resonance the atomic scattering length a can be varied over a huge range, by adjusting an external magnetic field. Consequently, the sign of the scattering length is changed, thus transforming a repulsive condensate of $^{85}$Rb atoms into an attractive one, which naturally evolves into a collapsing and exploding condensate. From a simple physical point of view the collapse of the Bose-Einstein Condensates can be described as follows. When the number of particles becomes sufficiently large, so that $N>N_c$, where $N_c$ is a critical number, the attractive inter-particle energy overcomes the quantum pressure, and the condensate implodes. 
In the course of the implosion stage, the density of particles increases in a small region around the trap center. When it approaches a certain critical value, a fraction of the particles is expelled. Within a time period of the order of a few milliseconds, the condensate stabilizes again. There are two observable components at the final stage of the collapse: remnant and burst particles. The remnant particles are those which remain in the condensate. The burst particles have an energy much larger than that of the condensed particles. There is also a fraction of particles which is not observable. This fraction is referred to as the missing particles \cite{Ryb04}. The study of the Bose-Einstein collapse within a model of a gas of free bosons described by a semi-classical Fokker-Planck equation was performed in \cite{Chavcol} and \cite{Chav2}. A striking similarity between Bose-Einstein condensation in the canonical ensemble and the gravitational collapse of a gas of classical self-gravitating Brownian particles was found. It was also shown that at the Bose-Einstein condensation temperature $T_c$, the chemical potential $\mu (t)$ vanishes exponentially with a universal rate. After $t_{coll}$, the finite time at which $\sqrt{\mu (t)}$ vanishes, the mass of the condensate grows linearly with time, and saturates exponentially to its equilibrium value at late times \cite{Chavcol}. It is the purpose of the present paper to study the dynamics of gravitationally self-bound Bose-Einstein dark matter condensates of collisionless particles, without exterior trapping potentials. In particular, we focus on the description, mechanism and properties of the condensate collapse. In order to study the gravitational collapse, and to solve the Gross-Pitaevskii equation describing the dynamics of the condensate, we employ a variational method. By appropriately choosing a trial wave function, the dynamical evolution of the condensate can be described by an effective time-dependent action, with the equation of motion of the condensate being given by the equation of motion of a single particle in an effective potential. The effective potential contains the contributions of the zero-point kinetic energy, of the gravitational energy, and of the interaction energy. The effective equation of motion of the collapsing dark matter condensate is studied by using both analytical and numerical methods. The collapse of the condensate ends with the formation of a stable astrophysical configuration, corresponding to the minimum of the effective potential. The radius and the mass of the resulting dark matter object are obtained, as well as the collapse time of the condensate, by numerically solving the effective equation of motion. Approximate expressions for the radius of the stable configuration and for the collapse time are also obtained. The numerical values of these global astrophysical quantities, characterizing condensed dark matter systems, strongly depend on the two parameters describing the condensate: the mass of the dark matter particle and the scattering length. The stability of the condensate under small perturbations is also studied, and the oscillation frequency of the halo is obtained. Hence the results obtained in the present paper show that the gravitational collapse of condensed dark matter halos can lead to the formation of stable astrophysical systems on both galactic and stellar scales. The present paper is organized as follows.
The basic physical properties of the static Bose-Einstein condensed dark matter halos are briefly reviewed in Section~\ref{sect2}. The variational formulation of the Gross-Pitaevskii equation, the choice of the trial wave function, and the formulation of the effective dynamics of the condensate as the motion of a single particle in an effective potential are presented in Section~\ref{sect3}. The physical parameters of the time dependent dark matter halos (density, gravitational potential, and the physical parameters of the effective potential) are determined, within the framework of the variational approach, in Section~\ref{sect4}. The gravitational collapse of the dark matter halos is analyzed in Section~\ref{sect5}. The stability properties of the dark matter halos formed after the gravitational collapse are investigated in Section~\ref{sectnn}. We discuss our results and conclude in Section~\ref{sect6}. | \label{sect6} In this paper we have analyzed a simple model for the collapse of BEC dark matter halos, based on the dynamical properties of the Gross-Pitaevskii equation. The present model includes neither damping nor a microscopic mechanism for particle ejection. The rotational effects have also been ignored, as well as the possible presence of vortices in the condensate. We also do not include any decay mechanism of the vortices (or turbulence) during the collapse, since these effects can be neglected because of the rapid increase of the density. Starting from the general description of a time dependent, gravitationally confined Bose-Einstein condensate, we focus our attention on a simple limiting case, in order to obtain some intuitive understanding of the physical properties of the gravitational collapse of the dark matter halos, and of the formation of stable astrophysical systems. In order to obtain a simple mathematical description of the collapse process we have used a variational approach, in which a trial form of the condensate wave function was adopted, with all the physical parameters of the dark matter halos assumed to have an $r/R(t)$ dependence. With the help of the trial wave function, the equation of motion and the dynamic properties of the time dependent condensate can be obtained from an effective time-dependent Lagrangian, which describes the time evolution of the condensate radius $R$. If the condensate wave function depends on one or more parameters, the resulting Lagrangian functional yields approximate Lagrangian equations of motion for these parameters. With the help of the trial wave function one minimizes the action with respect to the free parameter (the Rayleigh-Ritz method). The choice of the trial wave function is not unique, and different choices may lead to different results. The precision of the method depends on the number of free trial parameters, and on how physically realistic the trial function is. The continuity and Poisson equations can be solved exactly, and the density (the square of the wave function) and the gravitational potential of the dark matter halo can be explicitly obtained in an analytical form, thus allowing a complete description of the dynamical evolution of the condensate.
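The qualitative content of this variational description can be conveyed by a toy version of the resulting one-dimensional dynamics, anticipating the quantitative discussion that follows. In the sketch below the effective potential is written with the generic radial scalings implied by the ansatz (zero-point kinetic energy $\propto R^{-2}$, gravitational energy $\propto -R^{-1}$, repulsive interaction energy $\propto R^{-3}$); the coefficients and the effective mass are set to unity, so the numbers are purely illustrative and are not the coefficients derived in the paper.
\begin{verbatim}
# Toy version of the effective dynamics: a fictitious particle of mass m_eff
# moving in U(R) = A/R**2 - B/R + C/R**3.  All coefficients are order-unity
# placeholders; only the qualitative behaviour (relaxation towards, and
# oscillation about, the minimum R_st) is of interest here.
import numpy as np

A, B, C, m_eff = 1.0, 1.0, 1.0, 1.0

def dUdR(R):
    return -2.0 * A / R**3 + B / R**2 - 3.0 * C / R**4

# locate the equilibrium radius (root of dU/dR) by bisection
lo, hi = 0.5, 20.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if dUdR(lo) * dUdR(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(f"equilibrium radius R_st = {0.5*(lo+hi):.3f}  (equals 3 for A=B=C=1)")

# integrate m_eff R'' = -dU/dR from rest at R > R_st (semi-implicit Euler)
R, V, dt = 8.0, 0.0, 1.0e-3
for step in range(200001):
    V += -dUdR(R) / m_eff * dt
    R += V * dt
    if step % 50000 == 0:
        print(f"t = {step*dt:6.1f}   R = {R:6.3f}")
\end{verbatim}
The radius first decreases (the collapse phase) and then oscillates about the equilibrium value, which is the behaviour described quantitatively in the following paragraphs.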
The motion of the collapsing dark matter halos can be described as the motion of a single point particle with mass $m_{eff}$ in the force field generated by the effective potential $U(R)$, which incorporates the effects of the zero-point kinetic energy $E_{zp}$, of the interaction energy of the condensate particles $E_{int}$, and of the gravitational energy $E_{grav}$. The variational procedure allows us to express these energies as functions of the radius $R(t)$ only. However, their explicit functional form also depends on the scattering length $a$, the mass of the dark matter particles, and the total mass of the dark matter halo. The trial wave function depends on a single parameter $R(t)$, the time dependent radius of the condensate. The time-dependent density profile, as well as the gravitational potential, are similar to the first order approximations of the static density and gravitational potential profiles, given by Eqs.~(\ref{app1}) and (\ref{app2}), respectively, with the static radius of the condensate $R_{BE}$ substituted by the time dependent radius $R(t)$, and with a time-dependent central density $\rho _{BE}^{(c)}(t)$. The variational procedure used here can be extended and significantly improved by the choice of a trial wave function depending on several physical parameters. The adopted approach allows a complete, exact analytical treatment of the gravitational collapse of the condensed dark matter halos. Other choices of the trial wave function, or an increase in the number of free parameters, would require the extensive use of numerical methods for the integration of the evolution equations. The expected error in this variational approach may be of the order of a few percent, when compared to the exact numerical solution. The study of the equation of motion of the collapsing condensate shows that the collapse process ends with the formation of a stable configuration, with radius $R_{st}$ and mass $M_{st}$. The resulting configuration can be of stellar or galactic nature, depending on the physical processes and the initial mass of the dark matter halo. During the cosmological evolution such a collapse process could have played an important role in the formation of galactic structure and of the dark matter halos. On the other hand, local perturbations of the condensed dark matter could lead to the formation of smaller mass condensate stars. At the end of the collapse the density distribution of the resulting stable static structure is given by \be\label{89} \rho _{BE}(r)=\frac{15M_{st}}{8\pi}\frac{1}{R_{st}^3}\left(1-\frac{r^2}{R_{st}^2}\right)=\rho _{BE}^{(c)}\left(1-\frac{r^2}{R_{st}^2}\right), \ee where the central density $\rho _{BE}^{(c)}$ of the condensate is given by \bea\label{93} &&\rho _{BE}^{(c)}=\frac{15}{8\pi}\left(\frac{2}{63}\right)^{3/2}\frac{G^{3/2}m_{\chi}^{9/2}}{\hbar ^3a^{3/2}}M_{st}=3.141\times 10^{-27}\times \nonumber\\ &&\left(\frac{m_{\chi}}{10^{-32}\;{\rm g}}\right)^{9/2}\left(\frac{a}{10^{-7}\; {\rm cm}}\right)^{-3/2}\left(\frac{M_{st}}{10^6M_{\odot}}\right)\;{\rm g/cm^3}.\nonumber\\ \eea Eq.~(\ref{93}) gives the mass-central density relation for stable, gravitationally confined, condensed astrophysical objects. This central density is of the same order of magnitude as the central dark matter density of galactic dark matter halos. On the other hand, Eq.~(\ref{89}) is consistent with Eq.~(\ref{app1}), which gives an approximate representation of the static density profile of the condensed Bose-Einstein dark matter halos.
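Since Eq.~(\ref{93}) is given in closed form, its normalisation is easy to check numerically. The short script below evaluates the central density for the fiducial parameter values used in the text, together with the radius $R_{st}=\left(15M_{st}/8\pi\rho_{BE}^{(c)}\right)^{1/3}$ that follows from Eq.~(\ref{89}); the constants are standard cgs values.
\begin{verbatim}
# Evaluate Eq. (93) in cgs units for the fiducial parameters of the text.
import numpy as np

G, hbar, Msun = 6.674e-8, 1.055e-27, 1.989e33   # cgs
m_chi = 1.0e-32          # dark matter particle mass [g]
a     = 1.0e-7           # scattering length [cm]
M_st  = 1.0e6 * Msun     # mass of the stable configuration [g]

rho_c = (15.0 / (8.0 * np.pi)) * (2.0 / 63.0)**1.5 \
        * G**1.5 * m_chi**4.5 * M_st / (hbar**3 * a**1.5)
R_st = (15.0 * M_st / (8.0 * np.pi * rho_c))**(1.0 / 3.0)

print(f"rho_c = {rho_c:.2e} g/cm^3")             # ~3.1e-27 g/cm^3
print(f"R_st  = {R_st:.2e} cm = {R_st/3.086e21:.2f} kpc")
\end{verbatim}
The recovered central density agrees with the normalisation quoted in Eq.~(\ref{93}) to within the rounding of the physical constants, and the associated radius is of kpc order, i.e. a galactic-scale configuration for these parameter values.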
In the first-order approximation, the density profile of the static condensate is $\rho _{BE}(r)\approx \rho _{BE}^{(c)}\left[1 - (\pi ^2/6)(r^2/R_{BE}^2)\right]$. With the start of the collapse, the radius of the halo becomes time-dependent, $R_{BE}\rightarrow R(t)$, while the central density of the initial static halo changes in time as $\rho _{BE}^{(c)}\rightarrow M/R^3(t)$. This shows that the choice of the trial function for the time dependent case is consistent with the static case. An interesting question is the possibility of the formation of dense dark Bose-Einstein condensed stars, having astrophysical properties (mass and radius) similar to those of standard neutron stars. The radius and the mass of the dark star are determined by the mass of the dark matter particle and by the scattering length. The radius of a neutron star is of the order of $R_{NS}\approx 10^{6}$ cm. A scaling of the mass and of the scattering length of the form $a=\alpha \times 10^{-7}$ cm, $m_{\chi }=\beta \times 10^{-32}$ g will give a radius of the same order of magnitude as the radius of a neutron star if the coefficients $\alpha $ and $\beta $ satisfy the condition $\alpha /\beta ^3=10^{-15}$. For $\alpha =10^{-12}$, giving $a=10^{-19}$ cm, we obtain $\beta =10$, which implies a dark matter particle mass of $m_{\chi}\approx 10^{-31}\;{\rm g}\approx 55$ eV. On the other hand, for realistic dark matter densities the corresponding mass of the star will exceed the general relativistic stability limit. Hence a realistic description of the dark stars requires the inclusion of general relativistic effects in the study of their structure \cite{Harko3}. The details of the collapse of the Bose-Einstein condensate, as well as the numerical values of the physical parameters of the resulting stable configuration, are strongly dependent on the numerical values of the two parameters describing the physical properties of the condensate: the dark matter particle mass $m_{\chi}$ and the scattering length $a$. The numerical values of these physical quantities are poorly known. We have discussed a number of observational constraints (galactic radii and Bullet Cluster data) that provide some limits for $m_{\chi}$ and $a$. Within the framework of the Bose-Einstein condensed dark matter model these astrophysical constraints point towards a dark matter particle with mass in the range of meV to a few eV, and a scattering length of the order of $10^{-19}$ cm. However, in the present paper most of the numerical results are normalized to a scattering length of $10^{-7}$ cm, and to a mass of the dark matter particle of the order of $10^{-32}$ g. By a simple scaling, all the numerical values corresponding to other choices of $m_{\chi}$ and $a$ can be obtained easily. We have also considered the stability properties of the stable dark matter halos with respect to small oscillations, and the oscillation frequencies of the halos have also been obtained. These results show that the stable configurations formed from the collapse of the condensed dark matter halos are stable with respect to small perturbations. A large number of astrophysical observations, including flat galactic cores and constant density surfaces, point towards the possibility that dark matter may exist in the Universe in the form of a Bose-Einstein condensate, and this possibility cannot be excluded {\it a priori}.
The confirmation of this hypothesis by further observations on both galactic and cosmological scales would lead to a major change in our understanding of the basic principles of cosmology and astrophysics. In the present paper we have developed some theoretical tools that can help towards a better understanding of structure formation in the presence of condensed dark matter. | 14 | 3 | 1403.3358 |
1403 | 1403.3499_arXiv.txt | Through a direct comparison between numerical simulations in two and three dimensions, we investigate topological effects in reconnection. A simple estimate of a factor of $\sqrt{2}$ increase in the reconnection rate in three dimensions, when compared with the two-dimensional case, is confirmed in our simulations. We also show that both the reconnection rate and the fraction of magnetic energy in the simulations depend linearly on the height of the reconnection region. The degree of structural complexity of the magnetic field and the underlying flow is measured by the current helicity and the cross-helicity. We compare results in simulations with different computational box heights. | Reconnection of a magnetic field is a process in which magnetic field lines change connection with respect to their sources. In this process, magnetic energy is converted into kinetic and thermal energy, which accelerates and heats the plasma. Historically, reconnection was first observed in solar flares and the Earth's magnetosphere, but today it is also investigated in star formation theory and astrophysical dynamo theory. Recently, reconnection has also been invoked in the acceleration of cosmic rays \cite{L05}. In solar flares, oppositely directed magnetic flux is first accumulated, and then reconnection occurs, enabling the transfer of magnetic energy to kinetic energy and plasma heating, accompanied by an ejection of matter. From such an ejection of matter we can observe the onset of reconnection and estimate the energy released in this process. Recent results from measurements by the instruments onboard the Solar Dynamics Observatory \cite{su13} have revealed new, unexpected features and show that even the morphology of solar reconnection is still not completely understood. In the context of accretion disks around protostars, neutron stars and black holes, reconnection is a part of the transport of heat, matter and angular momentum. It enables re-arrangement of the magnetic field, after which angular momentum can be transported from the matter that is infalling from an accretion disk towards the central object. In \v{C}emelji\'{c} et al. \cite{scl1} we performed resistive 2D axisymmetric simulations of star-disk magnetospheric outflows. Ongoing reconnection produces fast, light micro-ejections of matter from the close vicinity of the disk gap. When going to three-dimensional simulations, a more precise model of reconnection is needed, as it will define the topology of the magnetic field. In the cases where flows are less ordered, turbulent reconnection has been invoked \cite{LV99}. The Sweet-Parker model \cite{sw58} was the first proposed model for reconnection. Parker \cite{par57} solved the time-independent, non-ideal MHD equations for two regions of plasma with oppositely directed magnetic fields pushed together. Particles are accelerated by a pressure gradient, making use of the known properties of magnetic field diffusion. Viscosity and compressibility are assumed to be unimportant, so that the magnetic field energy converts completely into heat. This model is robust, but it predicts a reconnection timescale that is too long when compared with observed data for solar flares. Petschek \cite{pet64} proposed another model for fast reconnection. For energy conversion, he added stationary slow-mode shocks between the inflow and outflow regions. This decreased the aspect ratio of the diffusion region to the order of unity, and increased the energy release rate, so that the observed data were now better matched.
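The gap between these two classical estimates is easily quantified. In the sketch below the dimensionless reconnection rate (the inflow Alfv\'en Mach number) is evaluated with the standard order-of-magnitude scalings, $M_{SP}\sim S^{-1/2}$ for the Sweet-Parker model and $M_{P}\sim\pi/(8\ln S)$ for the maximum Petschek rate, where $S$ is the Lundquist number; the chosen values of $S$ are merely representative of astrophysical plasmas.
\begin{verbatim}
# Sweet-Parker versus maximum Petschek reconnection rate as a function of
# the Lundquist number S (textbook order-of-magnitude scalings).
import numpy as np

for S in (1e6, 1e8, 1e10, 1e12, 1e14):
    m_sp  = S**-0.5                    # Sweet-Parker inflow Mach number
    m_pet = np.pi / (8.0 * np.log(S))  # maximum Petschek inflow Mach number
    print(f"S = {S:8.0e}   M_SP = {m_sp:8.1e}   M_Petschek = {m_pet:7.1e}")
\end{verbatim}
For solar-corona-like values of $S$ the Sweet-Parker rate falls many orders of magnitude below the values of order $10^{-2}$--$10^{-1}$ inferred for flares, while the Petschek rate is only logarithmically suppressed, which is the point made above.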
However, his model fails to explain solar flares, because fast reconnection can persist only for a very short time period. Many aspects of the reconnection process have been studied since, but the problem of the speed of reconnection remains unsolved. Because of numerical difficulties, research on reconnection was for a few decades constrained to two-dimensional solutions. In three-dimensional space there are more ways for reconnection to proceed than in two dimensions, and the very nature of reconnection is different \cite{php03}. There is still no full assessment of three-dimensional reconnection -- for a recent review, see Pontin \cite{pont11}. Our 2D setup here is the familiar Harris current sheet, an exact stationary solution to the problem of a current sheet separating regions of oppositely directed magnetic field in a fully ionized plasma \cite{har62}. It is possible to obtain Petschek-like reconnection in resistive-MHD simulations with uniform resistivity, but it demands special care with the setup of the boundary conditions, as described in \cite{bat06}. To avoid this issue, we chose to set a spatially asymmetric profile for the resistivity, as suggested in \cite{bat06}. In 3D simulations, we build a column of matter above such a Harris 2D configuration, with the resistivity dependent on height in the third direction. The resulting modification of the shocks enables Petschek reconnection also in the third dimension. We first investigate differences between the reconnection rates in 2D and 3D numerical simulations, by comparing energies in the computational box. Then we compare the changes of the current helicity and cross-helicity with the increasing height of the matter column in the third direction. | We have presented new results from a direct comparison of numerical simulations of reconnection in two and three dimensions. Reconnection in our simulations is facilitated by an asymmetry in the Ohmic resistivity. Without asymmetry, reconnection does not occur in our setup. Asymmetry in the X-Y plane enables reconnection in that plane, and the dependence of the resistivity on height in the Z-direction changes the shocks in the Z-direction, so that Petschek reconnection starts also in the Y-Z plane. By comparing the integral kinetic energy in 2D and 3D computations, we find that the 3D simulation proceeds with a reconnection rate which is larger by a factor of $\sqrt{2}$ than the rate in the 2D simulation. This finding confirms the simple analytic estimate from Priest \& Schrijver \cite{ps99}. We also show that the fraction of magnetic energy in the total energy increases linearly with the increase in box height. We obtained our results for the case in which reconnection was set up by an asymmetry in resistivity. There are other means of facilitating reconnection. One natural generalization of a 2D simulation of X-point collapse of a magnetic field into a localized current layer to a 3D situation is to consider points in space at which the magnetic field strength is zero -- 3D null points. The topology of such points is characterized by a pair of field lines forming a separatrix surface, which separates portions of the magnetic field that are of different topologies. Yet another way to form a current sheet in 3D is to connect two such null points -- forming a separator line (\cite{pont11} and references therein).
Reconnection in 3D is also possible without null points, in regions in which field lines are non-trivially linked with each other (as for example in braided magnetic fields or as the result of some ideal instability). Among others, there is also a possibility of a current sheet formation by a motion of a magnetic field line footpoint \cite{par72}. Comparison of results in the various approaches mentioned above is not straightforward; this is why we decided for more general measures. By computing current helicity, cross helicity and ``mixed helicity'' in our choice of setup, we find three characteristic time intervals in all our simulations. In two of them, reconnection in the three dimensional simulation increasingly differs from the corresponding reconnection in the two dimensional simulation, and the results also depend on the height of the reconnection region. It remains to be studied if reconnection in three dimensional simulations is well described by energies and helicities in the cases of less ordered, and of turbulent reconnection. In a future study we will also include other resistive terms, and apply the results in models of resistivity in simulations of reconnection in astrophysical outflows. | 14 | 3 | 1403.3499 |
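For reference, the two main diagnostics quoted in the summary above are plain volume integrals: the current helicity $\int\mathbf{J}\cdot\mathbf{B}\,dV$ (with $\mathbf{J}\propto\nabla\times\mathbf{B}$) and the cross helicity $\int\mathbf{v}\cdot\mathbf{B}\,dV$. The sketch below shows the corresponding bookkeeping for fields stored on a uniform Cartesian grid; the synthetic random fields are stand-ins used only to illustrate the computation, not output from the simulations discussed above.
\begin{verbatim}
# Current helicity (J.B) and cross helicity (v.B) summed over a uniform grid.
import numpy as np

n, dx = 64, 1.0 / 64
rng = np.random.default_rng(0)
B = rng.standard_normal((3, n, n, n))   # B[0], B[1], B[2] = Bx, By, Bz
v = rng.standard_normal((3, n, n, n))   # velocity field

def curl(F, dx):
    cx = np.gradient(F[2], dx, axis=1) - np.gradient(F[1], dx, axis=2)
    cy = np.gradient(F[0], dx, axis=2) - np.gradient(F[2], dx, axis=0)
    cz = np.gradient(F[1], dx, axis=0) - np.gradient(F[0], dx, axis=1)
    return np.stack([cx, cy, cz])

J  = curl(B, dx)                  # current density, up to a factor of mu_0
dV = dx**3
print("current helicity:", np.sum(J * B) * dV)
print("cross helicity:  ", np.sum(v * B) * dV)
\end{verbatim}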
1403 | 1403.4046_arXiv.txt | We present a spectroscopic component analysis of 18 candidate young, wide, non-magnetic, double-degenerate binaries identified from a search of the Sloan Digital Sky Survey Data Release 7 (DR7). All but two pairings are likely to be physical systems. We show SDSS\,J084952.47+471247.7 + SDSS\,J084952.87+471249.4 to be a wide DA + DB binary, only the second identified to date. Combining our measurements for the components of 16 new binaries with results for three similar, previously known systems within the DR7, we have constructed a mass distribution for the largest sample to date (38) of white dwarfs in young, wide, non-magnetic, double-degenerate pairings. This is broadly similar in form to that of the isolated field population, with a substantial peak around $M$$\sim$0.6$M_{\odot}$. We identify an excess of ultra-massive white dwarfs and attribute this to the primordial separation distribution of their progenitor systems peaking at relatively larger values and to the greater expansion of their binary orbits during the final stages of stellar evolution. We exploit this mass distribution to probe the origins of unusual types of degenerates, confirming a mild preference for the progenitor systems of high-field-magnetic white dwarfs, at least within these binaries, to be associated with early-type stars. Additionally, we consider the 19 systems in the context of the stellar initial mass-final mass relation. None appear to be strongly discordant with current understanding of this relationship. | \begin{figure} \includegraphics[angle=0,width=\linewidth]{0195fig1.ps} \caption{The cumulative number of pairings that meet our photometric selection criteria, as observed (filled grey circles) and as expected for a random on-sky distribution of objects (black line), as a function of angular separation. A crude estimate of the proportion of physical systems as a function of angular separation is also shown (dashed line). For angular separations of 30 arcsec or less, roughly 90\% of candidates are likely to be physical binaries.} \label{cumulate} \end{figure} A substantial proportion of stars reside in binary or multiple stellar systems \citep[e.g.][]{duquennoy91, fischer92, kouwenhoven05, kouwenhoven07b}. Empirical determinations of the stellar binary fraction as a function of primary mass, and of the binary mass ratio and orbital period distributions, inform theories of the star formation process \citep[e.g.][]{zinnecker84, pinfield03, parker13}. Moreover, studies of close systems, with orbital periods of a few days or less, can yield important dynamical determinations of masses and radii which lend themselves to arguably the most stringent examinations of models of stellar structure \citep[e.g.][]{huang56, maxted04, clausen08}. Wide, spatially resolved binary systems, where the components are separated by 100--10000\,AU and have generally evolved essentially as single stars \citep[e.g.][]{andrews12}, are also of significant interest, since they are, in effect, miniature versions of open clusters, the traditional but often rather distant testbeds for refining our theories of stellar evolution \citep[e.g.][]{barbaro84,nordstrom96, casewell09, kalirai10, casewell12}.
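The random-pairing expectation plotted in Fig.~\ref{cumulate} follows from simple counting: for $N$ primaries and a mean sky surface density $\Sigma$ of potential companions, the expected number of chance alignments within angular separation $\theta$ grows as $N\Sigma\pi\theta^{2}$. The sketch below evaluates this expectation; the adopted values of $N$ and $\Sigma$ are placeholders for illustration and are not the actual numbers of the DR7 selection described below.
\begin{verbatim}
# Expected number of chance (non-physical) pairings within separation theta
# for companions distributed randomly on the sky:
#   N_chance(<theta) = N_primaries * Sigma * pi * theta^2
import math

N_primaries = 15000                   # candidate primaries (placeholder)
sigma_deg2  = 3.0                     # companions per square degree (placeholder)
sigma_as2   = sigma_deg2 / 3600.0**2  # per square arcsecond

for theta in (5.0, 10.0, 20.0, 30.0, 60.0):           # arcsec
    n_chance = N_primaries * sigma_as2 * math.pi * theta**2
    print(f"theta < {theta:4.0f} arcsec : ~{n_chance:6.2f} chance pairs expected")
\end{verbatim}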
By considering observational constraints on the stellar binary fraction across a broad range of primary masses, from the late-F/G stars \citep[57\%, ][]{duquennoy91} to the numerically dominant low mass M dwarfs \citep[26\%, ][]{delfosse04}, \cite{lada06b} has highlighted that the majority of stars reside in single stellar systems. However, since only those stars of the Galactic disk with $M$$\simgreat$1$M$$_{\odot}$ have had sufficient time to evolve beyond the main-sequence, around half or more of the members of the field white dwarf population presumably must have once been part of multiple systems. \cite{miszalski09} determine that at least 12--21\% of planetary nebulae have close binary central stars, while \cite{holberg08b, holberg13} have concluded that at least 25--30\% of the white dwarfs within 20\,pc of the Sun are presently part of binary systems, with around 6\% being double-degenerates. The systems at the short end of the double-degenerate period distribution are of substantial astrophysical relevance since a subset may ultimately evolve to Type Ia supernovae \citep[e.g.][]{yoon07}. Widely separated double-degenerates are also of interest for setting limits on the age of the Galactic disk through the white dwarf luminosity function \citep{oswalt96} and for investigating the late stages of stellar evolution, in particular the heavy mass-loss experienced on the asymptotic giant branch, as manifested through the form of the stellar initial-mass final-mass relation \citep[e.g.][]{finley97}. Additionally, when a wide double-degenerate harbours an unusual or peculiar white dwarf (e.g. a high field magnetic white dwarf), measurements of the parameters of the normal component can be used to investigate its fundamental parameters and origins, either directly or potentially statistically \citep[e.g.][]{girven10,dobbie12a,dobbie13a}. Here we begin to build the foundations for a statistical approach by presenting the mass distribution for what is by far the largest spectroscopically observed sample to date (38) of non-magnetic white dwarfs residing in young, wide, double-degenerate systems. In subsequent sections we describe our photometric identification of 53 candidate young, wide, binary systems from a search of the Sloan Digital Sky Survey (SDSS) data release 7 \citep[DR7, ][]{abazajian09}, discuss our new spectroscopic follow-up and analysis of the components of 18 of these systems, and detail our assessment of their physical reality. We assemble a mass distribution for the components of the systems we find to have a strong likelihood of being binaries and those of three previously identified wide double-degenerates within the DR7 footprint. We compare this to that of isolated field white dwarfs and discuss the similarities and the differences. We demonstrate how this mass distribution can be used to probe the origins of unusual degenerates, in this case high-field-magnetic white dwarfs (HFMWDs). Finally, we explore the 19 non-magnetic binary systems within the context of our current understanding of the form of the stellar initial mass-final mass relation. \begin{table*} \setlength{\tabcolsep}{4pt} \scriptsize \centering \begin{minipage}{180mm} \caption{Survey designation, SDSS designation, photometric data and observed angular separations for the candidate double-degenerate systems (CDDS) we have identified.
Candidates for which we have obtained new resolved spectroscopy (Spec follow-up = Y) and objects for which a spectroscopic analysis exists in the literature (Spec follow-up = L), are labelled. White dwarfs included in {\protect \cite{baxter11}} (B) and those with SDSS DR7 spectroscopy (italics) are also highlighted.} \label{allcands} \begin{tabular}{lcccccccccccc} \hline CDDS ID & SDSS & Spec & $u$ & $g$ & $r$ & $i$ & SDSS & $u$ & $g$ & $r$ & $i$ & Sep. \\ (Desig.) & (Desig.) & follow-up & (/mag.) & (/mag.) & (/mag.) & (/mag.) & (Desig.) & (/mag.) & (/mag.) & (/mag.) & (/mag.) & (/arcsec) \\ \hline CDDS1 & J000142.84+251506.1 & &17.81(0.02) & 17.79(0.02) & 18.16(0.02) & 18.45(0.02) & {\it J000142.79+251504.0} & 19.16(0.19) & 18.70(0.16) & 19.01(0.16) & 19.32(0.17) & 2.16 \\ CDDS2 & J002925.28+001559.7 & &20.02(0.05) & 19.59(0.02) & 19.59(0.02) & 19.68(0.02) & {\it J002925.62+001552.7} & 18.91(0.03) & 18.48(0.01) & 18.53(0.02) & 18.65(0.02) & 8.64 \\ CDDS3$^{B}$ & J005212.26+135302.0 & Y &17.79(0.02) & 17.71(0.03) & 17.98(0.02) & 18.24(0.02) & J005212.73+135301.1 & 19.35(0.03) & 18.89(0.03) & 18.92(0.02) & 19.05(0.02) & 6.78 \\ CDDS4 & {\it J011714.48+244021.5} & &19.94(0.03) & 19.63(0.02) & 19.77(0.02) & 20.06(0.03) & J011714.12+244020.3 & 20.29(0.04) & 19.83(0.02) & 19.96(0.02) & 20.16(0.03) & 5.05 \\ CDDS5 & J012726.89+391503.3 & &19.16(0.03) & 18.70(0.02) & 18.83(0.02) & 19.01(0.02) & J012725.51+391459.2 & 20.35(0.06) & 19.99(0.02) & 19.99(0.02) & 20.07(0.03) & 16.55 \\ CDDS6$^{B}$ & J021131.51+171430.4 & Y &17.36(0.12) & 17.26(0.09) & 17.65(0.07) & 17.87(0.08) & J021131.52+171428.3 & 16.60(0.02) & 16.71(0.02) & 17.08(0.01) & 17.40(0.01) & 2.04 \\ CDDS7 & {\it J022733.09+005200.3} & &20.03(0.05) & 19.62(0.02) & 19.69(0.02) & 19.80(0.03) & {\it J022733.15+005153.6} & 20.30(0.06) & 19.86(0.02) & 19.91(0.02) & 19.99(0.03) & 6.72 \\ CDDS8$^{B}$ & {\it J033236.86-004936.9} & Y &15.32(0.01) & 15.64(0.02) & 16.09(0.02) & 16.41(0.02) & {\it J033236.60-004918.4} & 18.64(0.03) & 18.20(0.02) & 18.30(0.02) & 18.45(0.02) & 18.91 \\ CDDS9 & J054519.81+302754.0 & Y &20.19(0.05) & 19.82(0.02) & 19.96(0.02) & 20.10(0.03) & J054518.98+302749.3 & 20.05(0.05) & 19.64(0.02) & 19.80(0.02) & 19.97(0.03) & 11.72 \\ CDDS10 & {\it J072147.38+322824.1} & &18.18(0.02) & 18.07(0.01) & 18.17(0.01) & 18.28(0.01) & J072147.20+322822.4 & 18.76(0.08) & 18.32(0.06) & 18.34(0.06) & 18.44(0.06) & 2.77 \\ CDDS11$^{D2}$ & {\it J074853.07+302543.5} & Y &17.41(0.07) & 17.59(0.05) & 17.88(0.05) & 18.49(0.12) & J074852.95+302543.4 & 17.57(0.05) & 17.59(0.04) & 17.96(0.04) & 18.24(0.03) & 1.50 \\ CDDS12 & J075410.53+123947.3 & &19.19(0.09) & 18.78(0.06) & 18.99(0.08) & 19.22(0.07) & J075410.58+123945.5 & 19.22(0.04) & 18.86(0.05) & 18.99(0.04) & 19.25(0.04) & 2.00 \\ CDDS13 & {\it J080212.54+242043.6} & &19.87(0.04) & 19.54(0.02) & 19.78(0.02) & 20.01(0.03) & J080213.44+242020.9 & 20.23(0.04) & 19.84(0.02) & 20.00(0.02) & 20.19(0.03) & 25.85 \\ CDDS14 & J080644.09+444503.2 & Y &18.54(0.02) & 18.14(0.02) & 18.32(0.01) & 18.54(0.02) & J080643.64+444501.4 & 19.18(0.02) & 18.74(0.02) & 18.82(0.01) & 18.94(0.02) & 5.09 \\ CDDS15 & J084952.87+471249.4 & Y &16.64(0.02) & 16.77(0.02) & 17.08(0.02) & 17.35(0.02) & J084952.47+471247.7 & 18.14(0.14) & 17.77(0.08) & 17.79(0.06) & 18.04(0.07) & 4.37 \\ CDDS16 & J085915.02+330644.6 & Y &18.27(0.02) & 18.01(0.02) & 18.34(0.02) & 18.59(0.02) & J085915.50+330637.6 & 19.07(0.03) & 18.70(0.02) & 18.87(0.02) & 19.04(0.02) & 9.29 \\ CDDS17 & J085917.36+425031.6 & Y &19.37(0.03) & 18.94(0.05) & 
19.01(0.02) & 19.08(0.02) & J085917.23+425027.4 & 18.83(0.02) & 18.38(0.04) & 18.53(0.02) & 18.70(0.02) & 4.39 \\ CDDS18$^{*}$ & J092513.18+160145.4 & L &17.07(0.09) & 17.12(0.08) & 17.52(0.06) & 17.83(0.06) & J092513.48+160144.1 & 16.07(0.02) & 16.14(0.02) & 16.55(0.01) & 16.88(0.02) & 4.51 \\ CDDS19$^{D1}$ & J092647.00+132138.4 & Y &18.74(0.03) & 18.40(0.03) & 18.46(0.05) & 18.60(0.04) & J092646.88+132134.5 & 18.46(0.02) & 18.34(0.02) & 18.39(0.02) & 18.50(0.02) & 4.35 \\ CDDS20 & J095458.73+390104.6 & &20.31(0.05) & 19.86(0.02) & 19.95(0.03) & 19.95(0.03) & {\it J095459.97+390052.4} & 17.96(0.02) & 17.69(0.02) & 18.02(0.02) & 18.29(0.02) & 18.87 \\ CDDS21 & J100245.86+360653.3 & &19.42(0.03) & 19.04(0.02) & 19.09(0.02) & 19.16(0.03) & {\it J100244.88+360629.6} & 19.32(0.03) & 18.92(0.02) & 19.01(0.02) & 19.09(0.03) & 26.53 \\ CDDS22$^{D3}$ & J105306.13+025052.5 & Y &19.57(0.04) & 19.14(0.02) & 19.28(0.02) & 19.51(0.03) & {\it J105306.82+025027.9} & 19.37(0.03) & 18.98(0.02) & 19.18(0.02) & 19.37(0.03) & 26.60 \\ CDDS23 & J113928.52-001420.9 & &19.84(0.04) & 19.42(0.02) & 19.52(0.02) & 19.71(0.03) & J113928.47-001418.0 & 20.13(0.07) & 19.80(0.06) & 19.85(0.06) & 19.93(0.08) & 2.95 \\ CDDS24 & J115030.12+253210.1 & &20.43(0.05) & 19.95(0.02) & 19.97(0.02) & 20.05(0.04) & J115030.48+253206.0 & 19.30(0.03) & 18.86(0.02) & 19.09(0.02) & 19.29(0.02) & 6.38 \\ CDDS25 & {\it J115305.54+005646.1} & &18.42(0.02) & 18.89(0.02) & 19.38(0.02) & 19.62(0.03) & J115305.47+005645.8 & 18.50(0.02) & 18.91(0.02) & 19.34(0.02) & 19.78(0.03) & 1.22 \\ CDDS26$^{B}$ & {\it J115937.81+134413.9} & Y &18.45(0.03) & 18.07(0.02) & 18.12(0.02) & 18.16(0.02) & J115937.82+134408.7 & 18.42(0.03) & 18.28(0.02) & 18.52(0.02) & 18.75(0.03) & 5.18 \\ CDDS27$^{D3}$ & J122739.16+661224.4 & Y &17.72(0.02) & 17.86(0.02) & 18.13(0.02) & 18.44(0.02) & J122741.05+661224.3 & 18.23(0.02) & 17.99(0.02) & 18.21(0.02) & 18.46(0.02) & 11.43 \\ CDDS28 & {\it J131012.28+444728.3} & &17.88(0.01) & 17.84(0.02) & 18.02(0.01) & 18.23(0.02) & J131013.38+444717.8 & 17.95(0.01) & 17.59(0.02) & 17.85(0.01) & 18.11(0.02) & 15.71 \\ CDDS29$^{B}$ & J131332.14+203039.6 & Y &18.13(0.02) & 17.80(0.02) & 17.98(0.01) & 18.19(0.02) & J131332.56+203039.3 & 17.86(0.02) & 17.48(0.02) & 17.69(0.01) & 17.91(0.02) & 5.93 \\ CDDS30 & J131421.70+305051.4 & Y &18.59(0.10) & 18.20(0.08) & 18.22(0.09) & 18.31(0.09) & {\it J131421.50+305050.5} & 18.23(0.04) & 17.86(0.04) & 17.88(0.05) & 18.01(0.06) & 2.76 \\ CDDS31$^{B}$ & J132814.28+163151.5 & Y &16.34(0.02) & 16.27(0.02) & 16.63(0.02) & 16.99(0.02) & J132814.36+163150.9 & 17.75(0.27) & 17.65(0.23) & 17.74(0.19) & 17.84(0.15) & 1.32 \\ CDDS32 & J135713.14-065913.7 & Y &18.94(0.04) & 19.25(0.02) & 19.76(0.02) & 20.18(0.04) & J135714.50-065856.9 & 18.58(0.04) & 18.16(0.02) & 18.35(0.02) & 18.54(0.02) & 26.29 \\ CDDS33$^{D1}$ & J150746.48+521002.1 & Y &17.14(0.02) & 16.91(0.03) & 17.29(0.01) & 17.55(0.02) & J150746.80+520958.0 & 17.98(0.02) & 17.76(0.03) & 18.06(0.01) & 18.33(0.02) & 5.05 \\ CDDS34 & J151508.30+143640.8 & Y &18.38(0.02) & 18.00(0.02) & 18.20(0.01) & 18.47(0.02) & J151507.90+143635.4 & 19.76(0.03) & 19.63(0.02) & 19.88(0.02) & 20.19(0.03) & 7.90 \\ CDDS35 & J154641.48+615901.7 & &19.07(0.03) & 18.63(0.02) & 18.75(0.02) & 18.93(0.02) & J154641.79+615854.3 & 17.16(0.02) & 16.89(0.02) & 17.17(0.02) & 17.42(0.02) & 7.64 \\ CDDS36 & J155245.19+473129.5 & Y &18.79(0.02) & 18.71(0.02) & 19.06(0.02) & 19.36(0.03) & J155244.41+473124.0 & 19.21(0.04) & 18.99(0.03) & 19.30(0.02) & 19.61(0.03) & 9.65 \\ CDDS37 & 
J162650.11+482827.9 & &19.72(0.03) & 19.62(0.02) & 19.94(0.03) & 20.20(0.04) & J162652.12+482824.7 & 19.14(0.02) & 18.98(0.01) & 19.30(0.02) & 19.59(0.02) & 20.22 \\ CDDS38 & J163647.81+092715.7 & &18.13(0.02) & 17.72(0.01) & 17.93(0.01) & 18.18(0.01) & J163647.33+092708.4 & 19.98(0.04) & 19.54(0.02) & 19.54(0.02) & 19.66(0.03) & 10.12 \\ CDDS39 & {\it J165737.90+620102.1} & &18.72(0.02) & 18.65(0.01) & 18.98(0.03) & 19.23(0.02) & J165734.39+620055.9 & 18.88(0.02) & 18.53(0.01) & 18.76(0.02) & 18.99(0.02) & 25.47 \\ CDDS40$^{B}$ & {\it J170355.91+330438.4} & Y &19.16(0.02) & 18.81(0.01) & 18.86(0.01) & 18.97(0.02) & J170356.77+330435.7 & 18.48(0.02) & 18.16(0.01) & 18.27(0.01) & 18.42(0.02) & 11.16 \\ CDDS41 & J173249.57+563900.0 & &19.35(0.03) & 18.95(0.02) & 19.12(0.02) & 19.35(0.02) & J173249.32+563858.8 & 18.99(0.04) & 19.12(0.05) & 19.27(0.06) & 19.47(0.04) & 2.36 \\ CDDS42 & J175559.57+484359.9 & &19.04(0.03) & 19.21(0.02) & 19.39(0.02) & 19.43(0.02) & J175558.35+484348.8 & 18.02(0.02) & 17.68(0.01) & 17.90(0.01) & 18.17(0.02) & 16.41 \\ CDDS43 & J204318.96+005841.8 & &18.51(0.03) & 18.24(0.02) & 18.42(0.01) & 18.59(0.02) & J204317.93+005830.5 & 18.96(0.03) & 18.59(0.02) & 18.75(0.01) & 18.94(0.02) & 19.13 \\ CDDS44 & J211607.27+004503.1 & &18.60(0.02) & 18.67(0.01) & 18.89(0.01) & 19.11(0.02) & J211607.20+004501.3 & 19.43(0.10) & 18.96(0.07) & 19.05(0.09) & 19.28(0.06) & 2.06 \\ CDDS45 & J213648.79+064320.2 & &18.07(0.02) & 17.94(0.02) & 18.24(0.01) & 18.48(0.02) & J213648.98+064318.2 & 19.72(0.04) & 19.35(0.02) & 19.39(0.03) & 19.50(0.02) & 3.44 \\ CDDS46 & J214456.12+482352.9 & &19.19(0.03) & 18.74(0.01) & 18.83(0.02) & 19.02(0.02) & J214457.39+482345.5 & 19.81(0.05) & 19.49(0.02) & 19.49(0.02) & 19.64(0.03) & 14.67 \\ CDDS47 & J215309.89+461902.7 & &18.15(0.02) & 17.72(0.01) & 17.90(0.01) & 18.05(0.01) & J215308.90+461839.1 & 18.88(0.03) & 19.08(0.01) & 19.36(0.02) & 19.56(0.02) & 25.68 \\ CDDS48$^{B}$ & J222236.30-082808.0 & Y &16.68(0.02) & 16.41(0.02) & 16.67(0.03) & 16.92(0.03) & J222236.56-082806.0 & 17.56(0.03) & 17.11(0.07) & 17.30(0.07) & 17.47(0.06) & 4.29 \\ CDDS49$^{+}$ & J222301.62+220131.3 & L &15.66(0.01) & 15.60(0.01) & 15.91(0.01) & 16.22(0.01) & J222301.72+220124.9 & 16.37(0.01) & 16.01(0.03) & 16.20(0.03) & 16.46(0.01) & 6.56 \\ CDDS50$^{B}$ & J222427.07+231537.4 & Y &17.53(0.02) & 17.15(0.02) & 17.36(0.02) & 17.47(0.02) & J222426.91+231536.0 & 18.22(0.08) & 17.77(0.07) & 17.92(0.07) & 17.94(0.06) & 2.64 \\ CDDS51$^{\dagger}$ & J224231.14+125004.9 & L &16.48(0.01) & 16.23(0.02) & 16.50(0.01) & 16.74(0.02) & J224230.33+125002.3 & 16.83(0.01) & 16.50(0.02) & 16.75(0.01) & 16.97(0.02) & 12.13 \\ CDDS52$^{D3}$ & J225932.74+140444.2 & Y &19.02(0.03) & 18.57(0.02) & 18.68(0.01) & 18.85(0.02) & J225932.21+140439.2 & 16.16(0.02) & 16.36(0.01) & 16.78(0.01) & 17.12(0.01) & 9.14 \\ CDDS53 & J233246.27+491712.0 & &18.76(0.02) & 18.64(0.01) & 18.91(0.01) & 19.16(0.02) & {\it J233246.23+491709.1} & 19.02(0.06) & 18.76(0.04) & 19.04(0.04) & 19.31(0.05) & 2.96 \\ \hline \end{tabular} $^{*}$ PG\,0922+162A+B \citep{finley97} \\ $^{+}$ HS\,2220+2146A+B \citep{koester09}\\ $^{\dagger}$ HS\,2240+1234 \citep{jordan98} \\ $^{B}$ Preliminary analysis presented in \cite{baxter11}\\ $^{D1,D2,D3}$ DA + DAH pairings discussed in \citep{dobbie12a,dobbie13a} and Dobbie et al. (in prep), respectively. 
\normalsize \end{minipage} \end{table*} \begin{table*} \begin{minipage}{175mm} \begin{center} \label{slog1} \caption{Summary of our spectroscopic observations, including telescope/instrument combination and exposure times, of the candidate young, wide, double-degenerates within the SDSS DR7 imaging (RA=0--12h). } \begin{tabular}{lccccccccc} \hline \multicolumn{1}{c}{ID} & SpT & SDSS & Telescope/Instrument & Exposure & N$_{exp}$ \\ \hline CDDS3-A & DA & J005212.73+135301.1 & \multirow{2}{*}{WHT + ISIS} & \multirow{2}{*}{2400s} & \multirow{2}{*}{5}\\ CDDS3-B & DA & J005212.26+135302.0 & & &\\ \\ CDDS6-A & DA & J021131.52+171428.3 & \multirow{2}{*}{GEM-N + GMOS} & \multirow{2}{*}{2000s} & \multirow{2}{*}{3}\\ CDDS6-B & DA & J021131.51+171430.4 & \\ \\ CDDS8-A & DA & J033236.86-004936.9 & \multirow{2}{*}{VLT + FORS} & \multirow{2}{*}{600s} & \multirow{2}{*}{2}\\ CDDS8-B & DA & J033236.60-004918.4 & \\ \\ CDDS9-A & DA & J054519.81+302754.0 & \multirow{2}{*}{GTC + OSIRIS} & \multirow{2}{*}{2400s} & \multirow{2}{*}{3} \\ CDDS9-B & DA & J054518.98+302749.3 & & &\\ \\ CDDS14-A & DA & J080644.09+444503.2 & \multirow{2}{*}{GTC + OSIRIS} & \multirow{2}{*}{600s} & \multirow{2}{*}{3}\\ CDDS14-B & DA & J080643.64+444501.4 & & &\\ \\ CDDS15-A & DB & J084952.87+471249.4 & \multirow{2}{*}{GTC + OSIRIS}& \multirow{2}{*}{240s} & \multirow{2}{*}{3}\\ CDDS15-B & DA & J084952.47+471247.7 & & &\\ \\ CDDS16-A & DA & J085915.50+330637.6 & \multirow{2}{*}{GTC + OSIRIS} & \multirow{2}{*}{600s} & \multirow{2}{*}{3} \\ CDDS16-B & DA & J085915.02+330644.6 & & &\\ \\ CDDS17-A & DA & J085917.36+425031.6 & \multirow{2}{*}{GTC + OSIRIS} & \multirow{2}{*}{900s} & \multirow{2}{*}{3}\\ CDDS17-B & DA & J085917.23+425027.4 & & &\\ \\ CDDS18-A$^{*}$ & DA & J092513.48+160144.1 & \multirow{2}{*}{\cite{koester09}} & &\\ CDDS18-B$^{*}$ & DA & J092513.18+160145.4 & & & \\ \\ CDDS26-A & DA & J115937.82+134408.7 & \multirow{2}{*}{VLT + FORS} & \multirow{2}{*}{600s} & \multirow{2}{*}{2} \\ CDDS26-B & DA & J115937.81+134413.9 & & &\\ \\ \hline \end{tabular} \label{slog1} \end{center} $^{*}$ PG\,0922+162A+B \citep{finley97} \\ \end{minipage} \label{slog1} \end{table*} \addtocounter{table}{-1} \begin{table*} \begin{minipage}{175mm} \begin{center} \label{tab3} \caption{Summary of our spectroscopic observations, including telescope/instrument combination and exposure times, of the candidate young, wide, double-degenerates within the SDSS DR7 imaging (RA=12--24h).} \begin{tabular}{lccccccccc} \hline \multicolumn{1}{c}{ID} & SpT & SDSS & Telescope/Instrument & Exposure & N$_{exp}$ \\ \hline CDDS29-A & DA & J131332.56+203039.3 & \multirow{2}{*}{VLT + FORS} & \multirow{2}{*}{300s} & \multirow{2}{*}{2} \\ CDDS29-B & DA & J131332.14+203039.6 & & & \\ \\ CDDS30-A & DA & J131421.70+305051.4 & \multirow{2}{*}{VLT + FORS} & \multirow{2}{*}{600s} & \multirow{2}{*}{2}\\ CDDS30-B & DA & J131421.50+305050.5 & & &\\ \\ CDDS31-A & DA & J132814.36+163150.9 & \multirow{2}{*}{VLT + FORS} & \multirow{2}{*}{300s} & \multirow{2}{*}{2}\\ CDDS31-B & DA & J132814.28+163151.5 & & &\\ \\ CDDS32-A & DA & J135714.50-065856.9 & \multirow{2}{*}{VLT + FORS} & \multirow{2}{*}{600s} & \multirow{2}{*}{1}\\ CDDS32-B & sdO & J135713.14-065913.7 & & &\\ \\ CDDS34-A & DA & J151508.30+143640.8 & \multirow{2}{*}{GTC + OSIRIS} & \multirow{2}{*}{1800s} & \multirow{2}{*}{3}\\ CDDS34-B & DA & J151507.90+143635.4 & & &\\ \\ CDDS36-A & DA & J155245.19+473129.5 & \multirow{2}{*}{GTC + OSIRIS} & \multirow{2}{*}{900s} & \multirow{2}{*}{3}\\ CDDS36-B & DA & J155244.41+473124.0 & & & \\ \\ 
CDDS40-A & DA & J170356.77+330435.7 & \multirow{2}{*}{WHT + ISIS} & \multirow{2}{*}{1800s} & \multirow{2}{*}{7} \\ CDDS40-B & DA & J170355.91+330438.4 & & &\\ \\ CDDS48-A & DA & J222236.56-082806.0 & \multirow{2}{*}{GEM-S + GMOS} & \multirow{2}{*}{1800s} & \multirow{2}{*}{3}\\ CDDS48-B & DA & J222236.30-082808.0 & & &\\ \\ CDDS49-A$^{+}$ & DA & J222301.72+220124.9 & \multirow{2}{*}{\cite{koester09}} & & \\ CDDS49-B$^{+}$ & DA & J222301.62+220131.3 & & &\\ \\ CDDS50-A & DA & J222427.07+231537.4 & \multirow{2}{*}{WHT + ISIS} & \multirow{2}{*}{1200s} & \multirow{2}{*}{2}\\ CDDS50-B & DA & J222426.91+231536.0 & \\ \\ CDDS51-A$^{\dagger}$ & DA & J224231.14+125004.9 & \multirow{2}{*}{\cite{koester09}} & & \\ CDDS51-B$^{\dagger}$ & DA & J224230.33+125002.3 & & & \\ \\ \hline \end{tabular} \label{tab3} \end{center} $^{+}$ HS\,2220+2146A+B \citep{koester09}\\ $^{\dagger}$ HS\,2240+1234 \citep{jordan98} \end{minipage} \label{tab3} \end{table*} \begin{figure*} \includegraphics[angle=0,width=12.5cm]{0195fig2a.ps} \caption{Low resolution optical spectroscopy for the components of candidate binary systems in the range RA=0--12hr. These data have been normalised by dividing by the median flux in the interval $\lambda$=4180 -- 4220\AA.} \label{specs1} \end{figure*} \addtocounter{figure}{-1} \begin{figure*} \includegraphics[angle=0,width=12.5cm]{0195fig2b.ps} \caption{Low resolution optical spectroscopy for the components of candidate binary systems in the range RA=12--24hr. These data have been normalised by dividing by the median flux in the interval $\lambda$=4180 -- 4220\AA.} \label{specs2} \end{figure*} | We have presented spectroscopy for the components of 18 candidate young, wide, double-degenerates photometrically identified within the footprint of the SDSS DR7. On the basis of our distance estimates and our astrometry we have concluded that 16 candidates probably form physical systems. One of these is a wide DA + DB binary, only the second such system identified to date. We have determined the effective temperatures, surface gravities, masses and cooling times of the components of our 16 binaries. We have combined the sample with three similar systems previously known from the literature to lie within the DR7 footprint to construct a mass distribution for 38 white dwarfs in young, wide double-degenerate binaries. A comparison between this and the mass distribution of the isolated field white dwarf population reveals them to have broadly similar forms, each with a substantial peak around $M$$\sim$0.6$M_{\odot}$. However, there is a slight excess of the most massive white dwarfs in the binary sample which could be related to the primordial separation distribution of the progenitor systems and the expansion of binary orbits during the late stages of stellar evolution. We have shown how our sample can be exploited to probe the origins of unusual white dwarfs and found at marginal significance that the progenitor systems HFMWDs are preferentially associated with early-type stars, at least within these pairings. Finally we have used the 19 young, wide double-degenerate systems to test the stellar IFMR. Within the relatively large uncertainties, no system appears to be strongly discordant with our current understanding of the relation. | 14 | 3 | 1403.4046 |
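The angular separations listed in Table~\ref{allcands} follow directly from the J2000 coordinates encoded in the SDSS designations (Jhhmmss.ss$\pm$ddmmss.s). As a consistency check, the snippet below re-derives the separation of one pair from its designations alone; small offsets with respect to the tabulated values are expected because the designations truncate the coordinates. The helper function is written for this illustration and is not part of any published pipeline.
\begin{verbatim}
# Re-derive an angular separation from a pair of SDSS Jhhmmss.ss+ddmmss.s names.
import math

def radec_deg(name):
    s = name.lstrip("J")
    sign = -1.0 if "-" in s else 1.0
    ra_s, dec_s = s.replace("-", "+").split("+")
    ra = 15.0 * (float(ra_s[:2]) + float(ra_s[2:4]) / 60.0 + float(ra_s[4:]) / 3600.0)
    dec = sign * (float(dec_s[:2]) + float(dec_s[2:4]) / 60.0 + float(dec_s[4:]) / 3600.0)
    return ra, dec

def separation_arcsec(name1, name2):
    ra1, de1 = radec_deg(name1)
    ra2, de2 = radec_deg(name2)
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (de1 + de2)))
    return 3600.0 * math.hypot(dra, de1 - de2)

# CDDS15 (tabulated separation: 4.37 arcsec)
print(separation_arcsec("J084952.87+471249.4", "J084952.47+471247.7"))
\end{verbatim}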
1403 | 1403.1987_arXiv.txt | An excess of gamma rays at GeV energies has been pointed out in the Fermi-LAT data. This signal comes from a narrow region centred around the Galactic center and has been interpreted as possible evidence for light dark matter particles annihilating either into a mixture of leptons-antileptons and $b\bar{b}$ or into $b \bar{b}$ only. Focusing on the prompt gamma-ray emission, previous works found that the best fit to the data corresponds to annihilations proceeding predominantly into $b\bar{b}$. However, here we show that omitting the photon emission originating from primary and secondary electrons produced in dark matter annihilations, and undergoing diffusion through the Galactic magnetic field, can actually lead to the wrong conclusion. Accounting for this emission, we find that not only are annihilations of $\sim 10\ \rm GeV$ particles into a purely leptonic final state allowed, but the democratic scenario actually provides a better fit to the spectrum of the excess than the pure $b \bar{b}$ channel. We conclude our work with a discussion on constraints on these leptophilic scenarios based on the AMS data and the morphology of the excess. | After several decades of remarkable experimental development, evidence for dark matter (DM) particles still remains to be found. One important technique that has made dramatic progress in the last few years is indirect detection, which aims to detect the annihilation or decay products of DM particles in dense environments such as the central region of our Milky Way halo. In particular, the recent gamma-ray data from the Fermi-LAT (Large Area Telescope) experiment has enabled the community to constrain the thermal DM paradigm and set important bounds on the DM self-annihilation cross section as a function of the DM mass, for various final states (see, e.g., Refs.~\cite{Fermi_extragalactic,Fermi_dwarfs}). However, a few years ago, the possibility of a gamma-ray excess at low energies (between 1 and 10 GeV), in a narrow region around the Galactic center (GC)---smaller than $10^{\circ} \times 10^{\circ}$ \cite{Vitale_Fermi}---led several authors to speculate that this could be a manifestation of DM annihilations into either a mixture of $b \bar{b}$ and leptons-antileptons final states, or $b \bar{b}$ final states only \cite{Hooper,Hooper_Goodenough_excess,GordonMacias,Abazajian_GeV_excess,Daylan_GeV_excess}. While this excess could be attributed to astrophysical sources---like the central point source \cite{Ruchayskiy_Fermi_excess}, a burst injection of electrons \cite{electron_burst_excess}, a population of cosmic-ray protons \cite{protons_Profumo_excess}, or unresolved millisecond pulsars (see, e.g., Ref.~\cite{GordonMacias})---a DM interpretation is nevertheless possible. In the case of a pure $b \bar{b}$ final state, a DM mass of 30 GeV would be favored, while the DM mass should be about 23.5 GeV if the final state contains 45$\%$ leptons and 55$\%$ $b$ quarks \cite{GordonMacias}. In Refs.~\cite{Hooper,GordonMacias} it was also found that a DM mass of 10 GeV is required if the final state contains 90$\%$ leptons and 10 $\%$ $b$ quarks but the quality of the fit was better for the $b \bar{b}$ channel, thus leading the authors to prefer a large fraction of $b$ quarks in the final state. 
Note that throughout this paper, the term ``leptons" refers to democratic annihilation into leptons, i.e., a combination of the $e^{+}e^{-}$, $\mu^{+}\mu^{-}$, $\tau^{+}\tau^{-}$ final states, with 1/3 of the annihilations into each of these channels. These conclusions were obtained by only taking into account the prompt gamma-ray emission originating from these channels, namely the final-state radiation (FSR) single-photon emission, and the immediate hadronization and decay of the DM annihilation products into photons. In Refs.~\cite{Abazajian_GeV_excess,Daylan_GeV_excess}, the authors also added the bremsstrahlung contribution from electrons generated by the showering of the $b \bar{b}$ final state, but without taking electron diffusion into account. However, electrons produced in hadronization and decay processes do propagate in the Galaxy and eventually lose energy. The resulting population of electrons has an energy distribution slightly shifted towards the lower energy range but, depending on the energy propagation, is nevertheless expected to also emit photons in the GeV range through the bremsstrahlung process and inverse Compton scattering off the cosmic microwave background (CMB), UV and IR light, and starlight. Here we show that the corresponding gamma-ray emission should not be neglected as it typically induces a signal in the energy range where the excess has been observed. The importance of the contribution from inverse Compton scattering was argued in Ref.~\cite{Fermi_IC} in the general context of setting constraints on DM annihilations from the diffuse gamma-ray emission from the Galaxy. However, here we show that these contributions from diffused electrons do not simply induce corrections to the gamma-ray spectrum, but in fact they drastically change the interpretation of the excess in terms of DM. More specifically, it turns out that one can fit the data very well with leptons in the final state, in particular with a pure leptonic final state. So far, these primary pure leptonic channels have been neglected in the literature because the associated prompt gamma-ray emission does not provide a good fit to the data \cite{GordonMacias}. However, our results show that the diffuse emission component originating from primary and secondary electrons should be considered very seriously, if the excess were indeed of DM origin. In Sec.~\ref{prompt_vs_prop}, we recall the basics of the diffusion of electrons and remind the readers how these particles could contribute to the diffuse emission of gamma rays in our Galaxy. In Sec.~\ref{fits}, we fit the data and show how taking into account primary and secondary electrons can modify the interpretation of the GeV excess when the final state contains a large fraction of leptons. We provide a discussion of constraints from the AMS data and the morphology of the signal for leptophilic final states in Sec.~\ref{tests} and conclude in Sec.~\ref{conclusion}. | \label{conclusion} In this paper, we have demonstrated that taking into account the gamma-ray emission from DM-induced electrons drastically changes the interpretation of the Fermi-LAT excess, since it allows one to obtain an excellent fit to the spectrum of the excess for DM annihilations into leptons only. Therefore, $b\bar{b}$ is not the only viable channel, and we have rehabilitated the pure leptonic channel containing a combination of leptons. 
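The reason the secondary emission falls in the energy range of the excess can be seen from elementary inverse-Compton kinematics: in the Thomson regime an electron of Lorentz factor $\gamma$ upscatters target photons of energy $\epsilon$ to $E_{\gamma}\sim(4/3)\gamma^{2}\epsilon$ on average. The sketch below evaluates this for electrons of a few GeV, typical of the light dark matter candidates considered here; the adopted target photon energies (CMB, infrared, starlight, UV) are representative values only, and Klein-Nishina corrections are ignored.
\begin{verbatim}
# Characteristic inverse-Compton photon energy E_IC ~ (4/3) gamma^2 eps
# for electrons injected by O(10 GeV) dark matter annihilating to leptons.
m_e = 0.511e-3                                            # GeV
targets = {"CMB": 6.3e-13, "IR": 1e-11, "starlight": 1e-9, "UV": 1e-8}  # GeV

for E_e in (2.0, 5.0, 10.0):                              # electron energy, GeV
    gamma = E_e / m_e
    for name, eps in targets.items():
        E_ic = (4.0 / 3.0) * gamma**2 * eps
        print(f"E_e = {E_e:4.1f} GeV  {name:9s}  E_IC ~ {1e3*E_ic:9.2f} MeV")
\end{verbatim}
Scattering off starlight and UV photons by few-GeV electrons thus lands naturally in the $\sim 0.1$ to few GeV window of the excess, while bremsstrahlung on the interstellar gas contributes in a similar range.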
More specifically, we have shown that the contributions of the $e^{+}e^{-}$ and $\mu^{+}\mu^{-}$ channels to IC and bremsstrahlung are very important. The reason for this improved fit to the Fermi excess is the IC and bremsstrahlung contributions, which give a gamma-ray spectrum at slightly lower energies than the prompt emission. The effect is strong for democratic annihilation into leptons, while it gets weaker (but definitely non-negligible) for the scenarios favored by the latest constraints \cite{Bringmann_constraints}, with no electrons and a branching ratio into muons of 0.25. Possible additional constraints on this scenario involve the morphology of the gamma-ray flux at low energy: our model is not in strong tension with the morphology of the excess in the energy range of the data, but looking at lower energies may help to discriminate between the leptonic and $b\bar{b}$ scenarios. Therefore, in the absence of such a strong constraint, and should the excess be of DM origin, one would definitely need to take into account these leptonic final states to determine the DM mass and the value of the self-annihilation cross section, even though models may be harder to build than those with a pure $b \bar{b}$ final state \cite{Boehm:2014hva}. | 14 | 3 | 1403.1987 |
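As a rough check of why the leptonic channels discussed above can populate the energy range of the excess, the short Python sketch below evaluates the Thomson-limit characteristic inverse Compton energy, $E_{\rm IC}\simeq (4/3)\,\gamma^{2}\,\epsilon$, for electrons of a few GeV scattering off the CMB, dust-reprocessed infrared light and starlight. The target photon energies adopted here are representative round numbers chosen for illustration, not the interstellar radiation field model used in the actual spectral fits.
\begin{verbatim}
# Back-of-the-envelope: characteristic inverse-Compton photon energies for
# electrons injected by ~10 GeV dark matter annihilating into leptons.
# Thomson-regime mean upscattered energy: E_IC ~ (4/3) * gamma^2 * eps_target.
ME_C2_EV = 0.511e6          # electron rest energy [eV]

# Representative target photon fields (assumed mean energies, for illustration)
targets_ev = {"CMB": 6.3e-4, "dust IR": 1e-2, "starlight": 1.0}

for e_electron_gev in (2.0, 5.0, 10.0):
    gamma = e_electron_gev * 1e9 / ME_C2_EV
    print(f"E_e = {e_electron_gev:4.1f} GeV  (gamma = {gamma:.0f})")
    for name, eps in targets_ev.items():
        e_ic_gev = (4.0 / 3.0) * gamma**2 * eps / 1e9
        thomson_ok = gamma * eps < 0.1 * ME_C2_EV   # crude Thomson-regime check
        print(f"   IC on {name:9s}: E_IC ~ {e_ic_gev:.2e} GeV"
              f"  (Thomson regime: {thomson_ok})")
\end{verbatim}
For multi-GeV electrons the starlight component alone is upscattered into the $\sim 0.1$--$1$ GeV band, i.e. the energy range of the observed excess, consistent with the importance of the diffuse IC contribution stressed above.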
1403 | 1403.6105_arXiv.txt | Scaling networks of cosmic defects, such as strings and textures, actively generate scalar, vector and tensor metric perturbations throughout the history of the universe. In particular, {\em vector} modes sourced by defects are an efficient source of the CMB B-mode polarization. We use the recently released BICEP2 and POLARBEAR B-mode polarization spectra to constrain properties of a wide range of different types of cosmic strings networks. We find that in order for strings to provide a satisfactory fit on their own, the effective inter-string distance needs to be extremely large -- spectra that fit the data best are more representative of global strings and textures. When a local string contribution is considered together with the inflationary B-mode spectrum, the fit is improved. We discuss implications of these results for theories that predict cosmic defects. | 14 | 3 | 1403.6105 |
1403 | 1403.4100_arXiv.txt | {This paper is the first in a series undertaking a comprehensive correlation analysis between optically selected and X-ray-selected cluster catalogues. The rationale of the project is to develop a holistic picture of galaxy clusters utilising optical and X-ray-cluster-selected catalogues with well-understood selection functions.} {Unlike most of the X-ray/optical cluster correlations to date, the present paper focuses on the non-matching objects in either waveband. We investigate how the differences observed between the optical and X-ray catalogues may stem from (1) a shortcoming of the detection algorithms, (2) dispersion in the X-ray/optical scaling relations, or (3) substantial intrinsic differences between the cluster populations probed in the X-ray and optical bands. The aim is to inventory and elucidate these effects in order to account for selection biases in the further determination of X-ray/optical cluster scaling relations.} {We correlated the X-CLASS serendipitous cluster catalogue extracted from the XMM archive with the redMaPPer optical cluster catalogue derived from the Sloan Digital Sky Survey (DR8). We performed a detailed and, in large part, interactive analysis of the matching output from the correlation. The overlap between the two catalogues has been accurately determined and possible cluster positional errors were manually recovered. The final samples comprise 270 and 355 redMaPPer and X-CLASS clusters, respectively. X-ray cluster matching rates were analysed as a function of optical richness. In the second step, the redMaPPer clusters were correlated with the entire X-ray catalogue, containing point and uncharacterised sources (down to a few $10^{-15}$ \flux\ in the [0.5-2] keV band). A stacking analysis was performed for the remaining undetected optical clusters.} {We find that all rich ($\lambda \geq 80$) clusters are detected in X-rays out to z=0.6. Below this redshift, the richness threshold for X-ray detection steadily decreases with redshift. Likewise, all X-ray bright clusters are detected by \redmapper. After correcting for obvious pipeline shortcomings (about 10\% of the cases both in optical and X-ray), $\sim$ 50\% of the redMaPPer clusters (down to a richness of 20) are found to coincide with an X-CLASS cluster; when considering X-ray sources of any type, this fraction increases to $\sim$ 80\%; for the remaining objects, the stacking analysis finds a weak signal within 0.5 Mpc around the cluster optical centres. The fraction of clusters totally dominated by AGN-type emission appears to be a few percent. Conversely, $\sim$ 40\% of the X-CLASS clusters are identified with a redMaPPer cluster (down to a richness of 20) - part of the non-matches being due to the X-CLASS sample extending further out than redMaPPer ($z<1.5$ vs $z<0.6$), but extending the correlation down to a richness of 5 raises the matching rate to $\sim$ 65\%. } {This state-of-the-art study involving two well-validated cluster catalogues has shown itself to be complex, and it points to a number of issues inherent to blind cross-matching, owing both to pipeline shortcomings and to peculiar cluster properties. These can only be accounted for after a manual check. The combined X-ray and optical scaling relations will be presented in a subsequent article. } | The abundance of galaxy clusters is a powerful cosmological probe (e.g. \cite{henry09}, \cite{vikhlinin09}, \cite{mantz10a}, \cite{rozo10}, \cite{pierre11}, \cite{clerc12a}).
Indeed, galaxy clusters have provided the first line of evidence for dark matter (\cite{zwicky33}) and evidence that the matter density of the universe was sub-critical ($\Omega_m < 1$, \cite{gott74}). Historically, galaxy clusters were first identified in the optical (\cite{abell58}). Early optical cluster catalogues were constructed utilising single-band photometric data and were therefore extremely susceptible to selection effects. With the advent of the ROSAT All Sky Survey (RASS, \cite{voges99}), cluster detection was primarily pursued in the X-ray, because the detection of X-ray photons provided unambiguous evidence of a deep potential well and therefore of the reality of the detected galaxy clusters. This led to generating a plethora of RASS X-ray catalogues (e.g. \cite{ebeling00}, \cite{bohringer00}, \cite{reiprich02}, and many others), which have since been complemented both by targeted \citet{pacaud07} and serendipitous (\cite{barkhouse06,lloyd-davies11,clerc12b,takey13}) cluster searches with the XMM-Newton or Chandra observatory. At the same time, the advent of multi-band photometric data has led to dramatic improvements in optical cluster finding and an explosion of algorithms (e.g. \cite{gladders05}, \cite{koester07}, \cite{wen13}, \cite{hao10}, \cite{szabo11}, and many others).\\ To date, cluster searches in the X-ray, optical, and now in infrared wave band for the z>1 range are still conducted independently, although simultaneous multi-band approaches are being proposed (e.g. \cite{cohn09}, \cite{bellagambda11} - assuming basic relations between the cluster observables). These catalogues are subsequently correlated, possibly with the goals of searching for extreme objects (e.g. \cite{andreon11}) but, more generally, for establishing a correspondence (i.e. a scaling relation) between mass proxies, such as X-ray gas temperature and optical richness (e.g. \cite{popesso04, popesso05, rykoff08, gal09, wen12, takey13, rozo14}). These cross-correlations can involve up to a few thousand objects and are performed in a so-called blind way with little attention to the objects left out by the procedure. This occurs in a general astrophysical context, where the use of clusters of galaxies as cosmological probes has again come under scrutiny. Most of the criticisms concern our actual ability to perform cluster mass measurements suitable for cosmological studies - i.e. to an accuracy that matches today's precision cosmology requirements (e.g. \cite{vondenlinden14}, \cite{israel14}). The main arguments invoked are: instrumental calibration issues (\cite{rozo14b, planck13}), biases in hydrostatic mass estimates, reliability of the mass proxy used (e.g. is the gas mass fraction truly universal?), biases introduced by galaxy-colour selections, uncontrolled projection effects in the optical or infrared cluster searches. \\ In parallel, recent analyses have insisted on the inability of cluster-based cosmology to be disconnected from determination of the cluster scaling relations and from a detailed account of the selection biases affecting the samples (\cite{pacaud07,mantz10b,allen11}). The three aspects are intricately related and must be handled in a self-consistent way. Even at a simpler level, for a fixed cosmology, the determination of the scaling relations must include modelling of the selection, unless the objects of interest lie well above the survey detection limits. Furthermore, one of the key parameters entering the analysis is the intrinsic scatter of the scaling relations. 
This quantity has a critical effect on the predicted number of detected clusters and how the samples are biased towards, for instance, more luminous objects with respect to the mean (given the steepness of the mass function). Scatter values are poorly known in the local universe because they require large samples to be determined and, consequently, should be left as supplementary free parameters in the cosmological analyses.\\ In this context, we have undertaken an extensive correlation study between an X-ray catalogue and an optical one, namely X-CLASS extracted from the XMM all-sky archives and redMaPPer based on the SDSS data set. The two catalogues were independently constructed, both aiming at very low false-detection rates. By comparing the two catalogues against each other, the present paper investigates a number of practical issues critical for cluster studies and, therefore, goes well beyond the blind correlation analyses. In particular, we performed an interactive screening of the clusters found NOT to have either an X-ray or an optical counterpart, in order to disentangle possible technical detection problems from astrophysical biases and thus better understand the selection functions of the two samples. Among the questions we address, we cite: What fraction of the non-matches can be ascribed to detection pipeline failures? Do the X-ray and optical detection pipelines miss any massive cluster? Do we find any optically rich cluster without X-ray gas beyond what is expected given scaling relations with log-normal scatter? To what extent is the optical sample contaminated by projection effects? How many X-ray clusters are missed because of the presence of a bright central AGN? \\ The paper is organised as follows. The next section summarises the properties of the X-CLASS and redMaPPer catalogues; Section 3 describes the adopted correlation procedures; Sections 4 and 5 scrutinise the correlation statistics for the optical to X-ray and X-ray-to-optical directions, respectively; the results are discussed in Section 6 and the last section draws the conclusions. Throughout the article we assume the WMAP7 cosmology (\cite{komatsu11,larson11}). | We have undertaken a non-blind generic comparison between two cluster samples defined in the X-ray and optical wavebands, concentrating on the objects left out by the matching. The overlap samples involve some 270 (optical) and 355 (X-ray) objects and have well-defined selection functions, which does not a priori imply a one-to-one correspondence: the C1 clusters constitute a high X-ray surface brightness sample out to a redshift of $z<1.5$, while the redMaPPer objects are red-sequence clusters limited to $z \sim 0.5-0.6$. The analysis of the non-matched objects has benefited from extensive human inspection. The main conclusion is that we found no evidence for any optically rich cluster to be devoid of X-ray emitting gas and vice versa. For SDSS imaging, and given the observational depth of the XCLASS catalogue, we find that all $\lambda > 80$ galaxy clusters in the redshift range z<0.6 are detected by both algorithms. This corresponds roughly to M200c $\sim 4\times 10^{14} h^{-1} M_{\odot}$. This is a reasonable match to the X-ray luminosity redMaPPer detection threshold of $\sim 2\times 10^{44}$ ergs/s derived in \citet{rozo14b}.
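The mass-proxy scaling relations and intrinsic log-normal scatter that are central to this comparison can be illustrated with a mock fit; the Python sketch below generates richness--temperature pairs and fits a power law in log-log space. The slope, normalisation and scatter values are invented for illustration only, and a plain least-squares fit of this kind deliberately ignores the selection effects whose importance is stressed above.
\begin{verbatim}
import numpy as np

# Mock illustration of a mass-proxy scaling relation with log-normal
# intrinsic scatter (all input values below are invented for illustration).
rng = np.random.default_rng(3)
richness = 10 ** rng.uniform(np.log10(20), np.log10(200), 300)   # lambda
slope_true, norm_true, scatter_dex = 0.6, -0.3, 0.15
log_T = (norm_true + slope_true * np.log10(richness)
         + rng.normal(0.0, scatter_dex, richness.size))          # log10 kT [keV]

# Least-squares fit in log-log space and residual scatter estimate;
# note that no selection function is modelled here.
coeffs = np.polyfit(np.log10(richness), log_T, 1)
resid = log_T - np.polyval(coeffs, np.log10(richness))
print(f"fitted slope = {coeffs[0]:.2f}  (input {slope_true})")
print(f"residual scatter = {resid.std():.2f} dex  (input {scatter_dex})")
\end{verbatim}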
Mass detection limits will be discussed in Paper III, which will present the X-ray/optical scaling relations.\\ The comparison has not only usefully highlighted a few shortcomings of both detection methods but also, most importantly, enabled us to pinpoint key issues for future cluster science. It is difficult to define a unique matching radius that takes all specificities of the two samples into account, both from the instrumental and from the cluster-physics points of view, hence the need for an interactive approach. Moreover, the limited XMM field of view, vignetting, and PSF clearly set practical limits to the stacking analysis. Similar to optical cluster catalogues, X-ray serendipitous catalogues show significant differences from one another. In any case, the selection functions have to be explicitly involved in the process. All these aspects have a critical impact on any X-ray--optical scaling-relation work. This leads us to stress again that cluster evolution, selection effects, and cosmology cannot be worked out independently. In Paper III, we shall present the joint X-CLASS and redMaPPer catalogue along with scaling relations.\\ This very instructive approach has only provided an overview of the difficulties and promises of dedicated X-ray/optical cluster studies involving hundreds of objects and could be easily extended to other pairings such as X-ray/X-ray, optical/optical, or optical/S-Z (e.g. \cite{rozoetal14}) comparisons. The current lack of redshifts for a large number of the southern X-CLASS clusters is being addressed by systematic multi-band observations with the GROND instrument on the MPG/2.2m telescope at La Silla (\cite{greiner08}), to obtain images and reliable photo-z for a large portion of the catalogue in the southern sky (Clerc et al. in prep). The next steps are obviously to extend the comparison to optical catalogues based on other detection methods and to go deeper in the optical and IR wavebands, as well as to use ancillary, deeper XMM observations when available. It is nevertheless anticipated that projection effects and cluster evolution issues will get more severe with increasing redshift, hence the need for a truly multi-wavelength approach. It is also obvious that numerical simulations will play a growing role in cluster detection and subsequent matching studies. In this respect, the XXL project provides a unique data set (Pierre et al. in prep). | 14 | 3 | 1403.4100
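The positional correlation underlying the matching statistics discussed above can be illustrated with a minimal nearest-neighbour sketch in Python. The fixed matching radius and the toy coordinates below are assumptions for illustration only; the actual procedure relied on survey-specific positional uncertainties and interactive screening rather than a single blind radius.
\begin{verbatim}
import numpy as np

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation (deg) between points, haversine formula."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra, ddec = ra2 - ra1, dec2 - dec1
    a = np.sin(ddec / 2)**2 + np.cos(dec1) * np.cos(dec2) * np.sin(dra / 2)**2
    return np.degrees(2 * np.arcsin(np.sqrt(a)))

def match_catalogues(ra_a, dec_a, ra_b, dec_b, radius_deg):
    """Index of the nearest B source within radius for each A source (-1 if none)."""
    matches = np.full(len(ra_a), -1)
    for i, (ra, dec) in enumerate(zip(ra_a, dec_a)):
        sep = ang_sep_deg(ra, dec, ra_b, dec_b)
        j = int(np.argmin(sep))
        if sep[j] <= radius_deg:
            matches[i] = j
    return matches

# Toy example: 5 'optical' clusters vs 4 'X-ray' detections, 1 arcmin radius
rng = np.random.default_rng(1)
ra_o, dec_o = rng.uniform(150, 151, 5), rng.uniform(1, 2, 5)
ra_x = np.append(ra_o[:3] + 5e-4, 150.7)    # first three are genuine counterparts
dec_x = np.append(dec_o[:3] - 5e-4, 1.5)
idx = match_catalogues(ra_o, dec_o, ra_x, dec_x, radius_deg=1.0 / 60.0)
print("matched fraction:", np.mean(idx >= 0))
\end{verbatim}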
1403 | 1403.6934_arXiv.txt | {We investigate the effect of a braneworld expansion era on the relic density of asymmetric dark matter. We find that the enhanced expansion rate in the early universe predicted by the Randall-Sundrum II (RSII) model leads to earlier particle freeze-out and an enhanced relic density. This effect has been observed previously by Okada and Seto (2004) for symmetric dark matter models and here we extend their results to the case of asymmetric dark matter. We also discuss the enhanced asymmetric annihilation rate in the braneworld scenario and its implications for indirect detection experiments.} \begin{document} | \label{sec:intro} Despite the overwhelming astrophysical and cosmological evidence for the existence of Dark Matter (DM)~\cite{Hooper1}, very little is known about its particle nature. Many particle candidates have been proposed which are capable of explaining the observational data yet none have been conclusively verified. The data favour cold (non-relativistic) DM for which the most popular theoretical candidates are WIMPs (Weakly Interacting Massive Particles) with mass $m_{\chi} \sim \mathcal{O}(10 - 1000)$ GeV. Supersymmetric extensions of the Standard Model (SM) in which $R$-parity is conserved provide a viable DM candidate, the neutralino, which is the lightest supersymmetric particle formed from higgsinos and weak gauginos and is stable against decay into SM particles. A popular framework for the origin of dark matter is provided by the thermal relic scenario; at early times, when the temperature of the universe is high, frequent interactions keep the dark matter (anti) particles, $(\bar{\chi})\chi$, in equilibrium with the background cosmic bath. As the universe expands and cools the dark matter interaction rate drops below the expansion rate and the particles fall out of equilibrium. Eventually both creation and annihilation processes cease, and the number density redshifts with the expansion. This is known as particle freeze-out, and the remaining 'relic' particles constitute the dark matter density we observe today. A combined analysis of the \textit{Planck} satellite + WP + highL + BAO measurements gives the present dark matter density as ($68\%$ C.L.)~\cite{Lahav} \begin{equation} \Omega_{DM}h^2 = 0.1187\pm 0.0017,\label{eq:dm_abun} \end{equation} where $h = 0.678\pm 0.008$ and is defined by the value of the Hubble constant, $H_0 = 100\,h$ km/s/Mpc. Due to the Boltzmann suppression factor in the equilibrium number density, the longer the DM (anti) particles remain in equilibrium the lower their number densities are at freeze-out. Thus species with larger interaction cross sections which maintain thermal contact longer, freeze out with diminished abundances. Thermal relic WIMPS are excellent DM candidates as their weak scale cross section $\sigma \sim G_{\mathrm{F}}^{2}m_{\chi}^{2}$ gives the correct order of magnitude for $\Omega_{DM}h^{2}$ for a standard radiation-dominated early universe. However, if the universe experiences a non-standard expansion law during the epoch of dark matter decoupling, freeze-out may be accelerated and the relic abundance enhanced~\cite{Catena,Barrow}. The physics of the early universe, prior to the era of Big Bang Nucleosynthesis (BBN), is relatively unconstrained by current observational datasets. Dark matter particles, which decouple from the background thermal bath at early times, carry the signature of these earliest moments and therefore provide an excellent observational probe. 
If the properties of the dark matter particles are ever discovered, either through direct/indirect detection experiments or via particle creation in particle accelerators~\cite{Bauer}, then relic abundance calculations could provide a valuable insight into the conditions of the universe prior to BBN. The majority of dark matter models assume symmetric dark matter for which the particles are Majorana fermions with $\chi = \bar{\chi}$, i.e. they are self-conjugate. Given that most known particles are not Majorana, it is natural to consider asymmetric dark matter models in which the particle $\chi$ and antiparticle $\bar{\chi}$ are distinct, i.e. $\chi\neq\bar{\chi}$, and to assume an asymmetry between the number densities of the DM particles and antiparticles. Indeed, a similar asymmetry exists in the baryonic matter sector between the number of observed baryons $n_{b}$ and antibaryons $n_{\bar{b}}$. This baryonic asymmetry is \begin{equation} \eta_{b} = \frac{n_{B}}{n_{\gamma}} = \frac{n_b - n_{\bar{b}}}{n_\gamma} \approx 6\times 10^{-10}, \end{equation} where $n_{\gamma}$ is the number density of photons. Several models have been proposed~\cite{Kumar} that relate the asymmetries in the baryonic and dark sectors. These models typically assume~\cite{Graesser} either a primordial asymmetry in one sector which is transferred to the other sector, or that both asymmetries are generated by the same physical process such as the decay of a heavy particle. Kaplan et al.~\cite{Kaplan} consider a baryonic $B-L$ asymmetry generated by baryogenesis at high temperatures that is transferred to the DM sector by interactions arising from higher dimension operators which then decouple at a temperature above the DM mass and freeze in the asymmetry. If the asymmetries in the dark and baryonic matter sectors share a common origin then their number densities will be related $n_{DM}\sim n_b$, as will their densities $\Omega_{DM}\sim (m_{\chi}/m_b)\Omega_b$~\cite{Kaplan}. This could explain the approximate equality of the observed dark and baryonic abundances ($\Omega_{DM}/\Omega_b \sim 5$) and suggests a WIMP mass in the range $m_\chi \sim 5 - 15$ GeV. Interestingly, this mass range is favored by a number of observational datasets~\cite{DAMA,CoGeNT,Hooper2} providing further motivation for asymmetric DM. Cosmological, astrophysical and collider constraints on light thermal DM ($m_{\chi} \sim $1 MeV - 10 GeV) have been examined by~\cite{Lin} for both symmetric and asymmetric models of DM and~\cite{Kim} have considered flavour constraints on, and collider signatures of, asymmetric DM produced by decays of supersymmetric particles in the minimal supersymmetric Standard Model. The relic abundance of asymmetric DM has been studied~\cite{Gelmini,Iminniyaz1,Iminniyaz2} in the standard cosmological scenario and for the non-standard quintessence scenario in which a non-interacting scalar field is present in its kination phase. Gelmini et al.~\cite{Gelmini} also considered a simple scalar-tensor cosmology parameterized as a multiplicatively modified Hubble expansion. The enhanced expansion rate predicted by each non-standard scenario led to earlier particle freeze-out and an enhanced relic abundance. As a result, the asymmetry between the particles and antiparticles was essentially 'washed out'. In this study we consider the effect of an early time braneworld expansion era on the present density of asymmetric dark matter. 
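The $\sim$5--15 GeV mass range quoted above follows from simple arithmetic once the dark and baryonic number densities are taken to be comparable, as the following back-of-the-envelope Python check shows; the $n_b/n_{DM}$ ratios assumed below are illustrative.
\begin{verbatim}
# Order-of-magnitude check of the asymmetric dark matter mass scale.
# If n_DM ~ n_b, then m_chi ~ (Omega_DM/Omega_b) * (n_b/n_DM) * m_p.
m_proton_gev = 0.938
omega_ratio = 5.0                        # Omega_DM / Omega_b ~ 5
for n_b_over_n_dm in (1.0, 2.0, 3.0):    # assumed number-density ratios
    m_chi = omega_ratio * n_b_over_n_dm * m_proton_gev
    print(f"n_b/n_DM = {n_b_over_n_dm:.0f}  ->  m_chi ~ {m_chi:.1f} GeV")
\end{verbatim}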
Braneworld models are toy models arising from string theory where additional spacetime dimensions are incorporated in an attempt to unify the fundamental forces of nature. In the braneworld scenario, our universe is modeled as a $3(+1)$ dimensional surface (the brane) embedded in a higher dimensional spacetime known as the bulk. The standard model particles are confined to the surface of the brane whilst gravity resides in the higher dimensional bulk. This offers an explanation for the apparent weakness of gravity with respect to the other fundamental forces~\cite{Langlois}. The effect of a braneworld expansion era on the relic abundance of symmetric DM has been studied by~\cite{Okada1,Nihei1,Nihei2,Dahib, Guo}. They found that the modified expansion rate in braneworld models led to earlier particle freeze-out and an enhanced relic abundance (provided the five-dimensional Planck mass $M_{5}$ is low enough), similar features to those of the quintessence and scalar-tensor scenarios. We report here an extension of these studies to the case of asymmetric DM. In the next section we introduce the Randall-Sundrum type II braneworld model and its relevant parameters. Then in section~\ref{sec:ADM} we present the Boltzmann equations which describe the time evolution of the asymmetric DM number densities and give both numerical and analytical solutions for the braneworld case. We constrain the possible parameter combinations using the observed DM density~\eqref{eq:dm_abun} in section~\ref{sec:parbound} before discussing the asymmetric DM annihilation rate and prospects for indirect detection of asymmetric DM in sections~\ref{sec:annrate} and~\ref{sec:Fermi} respectively. Finally we summarize our findings in section~\ref{sec:concl}. | \label{sec:concl} In this article we have investigated the relic density and observational signatures of asymmetric dark matter models in the braneworld cosmology. We have found that the decoupling of asymmetric dark matter and the evolution of the particle/antiparticle number densities is modified in the RSII braneworld scenario. In particular, the enhanced braneworld expansion rate leads to earlier particle freeze-out and an enhanced relic density. Additionally, the asymmetry between the DM particles and antiparticles is 'washed' out by the modified expansion rate and the relic abundance is determined by the annihilation cross section $\langle\sigma v \rangle$. In this respect the nominally asymmetric model behaves like symmetric dark matter. This effect was demonstrated analytically in section~\ref{sec:analsol} where we derived approximate expressions for the densities of the majority and minority DM components in the braneworld scenario and is consistent with other investigations of asymmetric dark matter in non-standard cosmologies which predict an enhanced expansion rate during the epoch of dark matter decoupling~\cite{Gelmini,Iminniyaz2}. Importantly, we have also accounted for the variation in the number of relativistic degrees of freedom as a function of temperature, $g_\star(T)$, in our numerical solution of the Boltzmann equation. This variation is particularly pertinent for the RSII braneworld model in which asymptotic behaviour of the number densities is not achieved until well after decoupling. Fixing $g_\star(T) =$ const. in the numerical integration can result in relic density predictions which are out by a factor of $\sim 2$ depending on the magnitude of $M_5$. 
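A minimal sketch of the kind of calculation described above is given below in Python. It integrates the particle Boltzmann equation (with the antiparticle abundance eliminated through the conserved asymmetry) in the Maxwell--Boltzmann approximation with a constant $g_\star$ -- precisely the simplification cautioned against above -- an $s$-wave cross section, and the RSII correction parametrised as $H \rightarrow H\sqrt{1+(x_t/x)^4}$, where $x_t=m_\chi/T_t$ marks the transition temperature below which the expansion becomes standard. The numerical values of the mass, cross section and primordial asymmetry are illustrative rather than fits to the observed relic density.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M_PL, GSTAR = 1.22e19, 80.0      # Planck mass [GeV]; assumed constant g_*
M_CHI = 10.0                     # DM mass [GeV] (illustrative)
SIGV = 3e-25 * 8.55e16           # <sigma v>: cm^3/s -> GeV^-2 (1 GeV^-2 ~ 1.17e-17 cm^3/s)
C_ASYM = 2e-11                   # assumed conserved asymmetry Y_+ - Y_-

# lambda = s(m) <sigma v> / H_std(m), with g_*s = g_* assumed
LAM = (2 * np.pi**2 / 45) * GSTAR * M_CHI * M_PL * SIGV / (1.66 * np.sqrt(GSTAR))

def y_eq(x):
    return 0.145 * (2.0 / GSTAR) * x**1.5 * np.exp(-x)   # g = 2 internal dof

def rhs(x, y, x_t):
    y_plus = y[0]
    y_minus = y_plus - C_ASYM
    boost = np.sqrt(1.0 + (x_t / x)**4)    # RSII enhancement of H
    return [-(LAM / (x**2 * boost)) * (y_plus * y_minus - y_eq(x)**2)]

def relic(x_t):
    x0 = 1.0
    y0 = 0.5 * C_ASYM + np.sqrt(0.25 * C_ASYM**2 + y_eq(x0)**2)  # chem. equilibrium
    sol = solve_ivp(rhs, (x0, 1e3), [y0], args=(x_t,), method="Radau",
                    rtol=1e-6, atol=1e-16)
    y_plus = sol.y[0, -1]
    y_minus = y_plus - C_ASYM
    return y_minus / y_plus, 2.74e8 * M_CHI * (y_plus + y_minus)

for label, x_t in [("standard-like (x_t ~ 0)", 1e-3),
                   ("transition at x_t = 20 ", 20.0),
                   ("transition at x_t = 50 ", 50.0)]:
    ratio, oh2 = relic(x_t)
    print(f"{label}:  Ybar/Y = {ratio:.3f},  Omega h^2 = {oh2:.3f}")
\end{verbatim}
The sketch illustrates the qualitative trend discussed in the text: the later the transition to standard expansion (larger $x_t$), the larger the relic density and the closer the antiparticle abundance comes to the particle abundance, i.e. the asymmetry is effectively washed out.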
Finally, in order to provide the observed relic density~\eqref{eq:dm_abun} the annihilation cross section must be boosted to compensate for the enhanced expansion rate. In this instance we find that the annihilation rate, $\Gamma_{\chi(\bar{\chi})}\propto \langle\sigma v\rangle$, of asymmetric DM in the braneworld cosmology is larger than the symmetric signal in the standard scenario despite the suppressed abundance of the minority DM component. This effect is contrary to the usual expectation which presumes a weaker asymmetric detection signal. We have also shown that it is possible to produce an amplified asymmetric detection signal which satisfies the observational constraints from Fermi-LAT~\cite{Fermi}. Additional data from Fermi-LAT and other indirect detection probes~\cite{AMS,PAMELA} may further constrain the dark matter particle properties and in the process shed light on the cosmological conditions prior to BBN. Very recently new information about the very early universe has been provided by the BICEP2 experiment~\cite{BICEP2} which detected $B$-mode polarization of the CMB, indicating a large tensor-to-scalar ratio $r=0.20^{+0.07}_{-0.05}$. These primordial tensor perturbations are strong evidence of gravitational waves and are a generic prediction of inflationary cosmological models with large energy density~ \cite{Ellis2014a,Ellis2014b}. Whereas the BICEP2 result of $n_{s} \simeq 0.96$ for the infrared tilt of the scalar perturbations $n_{s}$ is in agreement with Planck~\cite{Planck22} and WMAP and supports slow roll models of inflation, the BICEP2 result for $r$ is in some tension with the Planck observations \cite{Planck14} that give $r < 0.11$ and which favour Starobinsky $R+R^{2}$ inflation~ \cite{Starobinsky80,Starobinsky83} that predicts $r=0.003$. The BICEP2 value for $r$ is consistent with simple single scalar field $\phi $ models of chaotic inflation with a quadratic self-interaction $V(\phi) =m \phi^{2}/2$~\cite{Linde1985} that predict $r \simeq 0.16$. Although aspects of the BICEP2 analysis require further investigation, if the BICEP2 result for $r$ is confirmed then it is of interest to note that such a value follows naturally from RSII braneworld cosmology. Slow roll chaotic inflation in RSII braneworld cosmologies for a single field with a monomial potential $V(\phi )=V_{0}\phi^{p}$ has been the subject of several investigations. Although early studies~ \cite{Maartens2000,Langois2000} found that the ratio of tensor-to-scalar modes was suppressed for inflation at high energy scales $\rho \gg \sigma$, subsequent investigations~\cite{Liddle2003,Tsujikawa2004a,Tsujikawa2004b,Zarrouki2011,Calcagni2013} found instead an enhancement. Specifically, the spectral index, $n_s$, and tensor-to-scalar ratio, $r$, in the RSII braneworld scenario are given respectively by \begin{align} n_s - 1 &= -\frac{2(2p + 1)}{N(p + 2)},\nonumber\\ r &= \frac{24p}{N(p+2)}, \quad \rho \gg \sigma ,\nonumber \end{align} where $N$ is the number of $e$-folds\footnote{Slightly different forms of the denominator in the expression for $r$ are given in the literature. For example, \cite{Liddle2003} give $N(p+2)+p-1$ whereas~\cite{Zarrouki2011,Calcagni2013} give $N(p+2)+p$.}. For a quadratic potential and $N \approx 55-60$, the values of $n_s$ and $r$ agree closely with the BICEP2 result. | 14 | 3 | 1403.6934 |
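The numbers quoted in the preceding paragraph follow directly from the expressions given above for the high-energy RSII regime (using the simple $N(p+2)$ form of the denominator quoted in the main text); a short Python check for a quadratic potential:
\begin{verbatim}
# Slow-roll observables for RSII chaotic inflation with V ~ phi^p, rho >> sigma.
def ns_r(p, N):
    ns = 1.0 - 2.0 * (2 * p + 1) / (N * (p + 2))
    r = 24.0 * p / (N * (p + 2))
    return ns, r

for N in (55, 60):
    ns, r = ns_r(p=2, N=N)
    print(f"p = 2, N = {N}:  n_s = {ns:.4f},  r = {r:.3f}")
\end{verbatim}
For $N=55$--60 this gives $n_s \simeq 0.954$--0.958 and $r \simeq 0.20$--0.22, close to the BICEP2 values quoted above.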
1403 | 1403.7045_arXiv.txt | Regions of rapid variation in the internal structure of a star are often referred to as acoustic glitches since they create a characteristic periodic signature in the frequencies of p modes. Here we examine the localized disturbance arising from the helium second ionization zone in red giant branch and clump stars. More specifically, we determine how accurately and precisely the parameters of the ionization zone can be obtained from the oscillation frequencies of stellar models. We use models produced by three different generation codes that not only cover a wide range of stages of evolution along the red giant phase but also incorporate different initial helium abundances. To study the acoustic glitch caused by the second ionization zone of helium we have determined the second differences in frequencies of modes with the same angular degree, $l$, and then we fit the periodic function described by Houdek \& Gough to the second differences. We discuss the conditions under which such fits robustly and accurately determine the acoustic radius of the second ionization zone of helium. When the frequency of maximum amplitude of the p-mode oscillations was greater than $40\,\rm\mu Hz$ a robust value for the radius of the ionization zone was recovered for the majority of models. The determined radii of the ionization zones as inferred from the mode frequencies were found to be coincident with the local maximum in the first adiabatic exponent described by the models, which is associated with the outer edge of the second ionization zone of helium. Finally, we consider whether this method can be used to distinguish stars with different helium abundances. Although a definite trend in the amplitude of the signal is observed any distinction would be difficult unless the stars come from populations with vastly different helium abundances or the uncertainties associated with the fitted parameters can be reduced. However, application of our methodology could be useful for distinguishing between different populations of red giant stars in globular clusters, where distinct populations with very different helium abundances have been observed. | Asteroseismology uses the natural resonant oscillations of stars to study their interiors. With the launch of the \textit{Convection, Rotation and planetary Transits (CoRoT)} and \textit{Kepler} satellites, asteroseismology can now be carried out on a vast and diverse range of stars. Here we study red giant stars which are cool, highly luminous stars that are more evolved than our Sun. However, like our Sun, they have a convective envelope that can stochastically excite acoustic oscillations, which are known as p modes since the main restoring force of these oscillations is a gradient of pressure. Regions of rapid variation in the internal structure (or sound speed) of a star create a characteristic periodic signature in the frequencies of p modes \citep[e.g.][]{Vorontsov1988, Gough1990, Basu1994, Roxburgh1994, Basu2004, Houdek2007}. By analysing this signal, information on the localized disturbance can be obtained. For example, the periodicity of the signal is related to the sound travel time from that region to the surface \citep[e.g.][]{Vorontsov1988, Gough1990}. Regions of rapid variation, which are known as acoustic glitches, can occur in zones of rapidly changing chemical composition, ionization zones of major chemical elements, and regions where energy transport switches from radiative to convective. 
Studies of periodic signatures in the frequencies of solar p modes have allowed the depths of both the convective envelope and the second ionization zone of helium to be determined, the extent of the overshoot at the base of the convection zone to be ascertained and, additionally, the abundance of helium in the envelope to be estimated \citep[e.g.][]{JCD1991, JCD1995, Basu1994, Monteiro1994, Basu1995, Basu1997, Basu1997a, Basu2001, Monteiro2005, JCD2011}. Furthermore, it has been proposed that these techniques can be used to study acoustic glitches in stars other than the Sun \citep[e.g.][]{Monteiro1998, Perez1998, Monteiro2000, Lopes2001, Ballot2004, Basu2004, Verner2004, Verner2006, Houdek2007, Hekker2011} and the first studies of this kind have been conducted \citep[e.g.][]{miglio2010, Mazumdar2012, Mazumdar2014}. Here we examine the localized disturbance arising from the second ionization zone of helium, which causes a distinct bump in the first adiabatic exponent, $\gamma_1$ \citep[e.g.][and references therein]{Basu2004}. The first adiabatic exponent is defined as the logarithm of the derivative of the pressure ($P$) with respect to the density $(\rho)$ evaluated at constant entropy $(s)$, i.e. \begin{equation}\label{equation[gamma 1]} \gamma_1=\left(\frac{\textrm{d}\ln P}{\textrm{d}\ln\rho}\right)_s. \end{equation} $\gamma_1$ can be related to the adiabatic sound speed, which is assumed to vary only with depth, by \begin{equation}\label{equation[c gamma]} c_\textrm{\scriptsize{s}}^2=\frac{\gamma_1P}{\rho}. \end{equation} Therefore, the distinct bump in $\gamma_1$ caused by the second ionization zone of helium has a corresponding effect on the sound speed and it is this localized perturbation to $c_\textrm{\scriptsize{s}}$ that causes the oscillatory signature in the p-mode frequencies. To investigate the acoustic glitch caused by the second ionization zone of helium we have studied the second differences in the p-mode frequencies, which were defined by \citet{Gough1990} as \begin{equation}\label{equation[second diffs]} \Delta_2\nu_{n,l}\equiv\nu_{n-1, l}-2\nu_{n,l}+\nu_{n+1,l}. \end{equation} Here $\nu_{n,l}$ is the frequency of the $n\rm th$ overtone of the p mode with spherical harmonic degree $l$. Any localized region of rapid variation of the sound speed will cause an oscillatory component in $\Delta_2\nu_{n,l}$ with a cyclic frequency of approximately twice the acoustic depth of the variation. In this paper we characterize the second ionization zone of helium by examining the periodic variations observed in the second differences. The $\Delta_2\nu_{n,l}$ were used because the first differences are subject to smoothly varying components which introduce additional parameters to be determined from the fitting process: The mode frequencies are susceptible to near-surface effects that are smoothly varying with frequency, such as the ionization of hydrogen and non-adiabatic processes. These effects are largely reduced by taking the second differences, and this is particularly true close to the frequency at which the amplitude of the oscillations is a maximum ($\nu_{\textrm{\scriptsize{max}}}$), where the trend is approximately flat with frequency. In this paper we have restricted our analysis to modes with frequencies close to $\nu_{\textrm{\scriptsize{max}}}$ and therefore satisfactory results can be obtained simply by fitting a constant offset to the second differences in addition to the oscillatory term caused by the glitch. 
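In practice, the procedure just outlined amounts to forming $\Delta_2\nu_{n,l}$ from consecutive overtones and fitting an oscillatory function of frequency whose period encodes the acoustic location of the glitch. The Python sketch below does this for synthetic, noise-free radial-mode frequencies, using a constant offset plus a constant-amplitude sinusoid as a simplified stand-in for the Houdek \& Gough fitting function adopted in this paper; the large separation, glitch parameters and starting guesses are arbitrary illustrative values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Synthetic radial-mode frequencies (muHz) for a red-giant-like star:
# a smooth asymptotic part plus a small glitch perturbation whose
# periodicity in frequency is set by the acoustic parameter tau_g (s).
n = np.arange(6, 17)                       # radial orders around nu_max ~ 100 muHz
dnu, eps = 9.0, 1.3                        # large separation (muHz), phase offset
tau_true, a_true, phi_true = 1.1e4, 0.05, 0.7
nu_smooth = dnu * (n + eps)
nu = nu_smooth + a_true * np.sin(4 * np.pi * tau_true * 1e-6 * nu_smooth + phi_true)

d2nu = np.diff(nu, 2)                      # second differences defined above
nu_mid = nu[1:-1]

def glitch_model(nu_muhz, K, A, tau_s, phi):
    # constant offset + sinusoid: a simplified stand-in for the
    # Houdek & Gough periodic function fitted in the paper
    return K + A * np.sin(4 * np.pi * tau_s * 1e-6 * nu_muhz + phi)

p0 = [0.0, 0.1, 1.05e4, 0.0]               # starting guesses near the input values
popt, _ = curve_fit(glitch_model, nu_mid, d2nu, p0=p0, maxfev=20000)
print(f"input tau_g = {tau_true:.0f} s,  fitted tau_g = {popt[2]:.0f} s")
\end{verbatim}
Note that differencing rescales the amplitude and shifts the phase of the glitch signal by known factors, but leaves its periodicity -- and hence the inferred acoustic location -- unchanged, which is why the second differences are a convenient diagnostic.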
Furthermore, the second differences are less susceptible to the propagation of uncertainties in mode frequencies than higher order differences. When determining the second-order differences we require the frequencies of three consecutive overtones. Higher order differences require the frequencies of an increasing number of overtones. In real data the number of overtones for which we can accurately and precisely obtain frequencies will be limited (by the signal-to-noise ratio of the power in the frequency spectrum). Therefore, higher order differences run the risk of there not being enough observable overtones to allow the signature of the second ionization zone of helium to be characterized, and so higher order differences are not suitable for this study. In stars like the Sun the periodicities caused by the second ionization zone of helium and by abrupt variations in the derivatives of the sound speed at the base of the convection zone (BCZ) are similar enough that both components must be studied simultaneously. However, in red giant stars, which are the main focus of this study, the base of the convection zone is located deep within the stellar interior (typically at a radius of $0.1R_\star$). Therefore, the acoustic radius of the base of the convection zone and the acoustic radius of the second ionization zone of helium are very different. Furthermore, if the glitch due to BCZ is near the inner turning point of the oscillation, as is the case with red giant branch (RGB) stars, its effect on the oscillation frequencies may not be represented as a periodic component. As a result one can examine the effect of the second ionization zone of helium in isolation, without contamination from the signature of the BCZ. The main aim of this paper is to determine the effectiveness and robustness of our methodology at characterizing the second ionization zone of helium. We concentrate on two main parameters: the acoustic radius of the second ionization zone of helium and the amplitude of the oscillatory signal at $\nu_\textrm{\scriptsize{max}}$. The location of the ionization zone is important for understanding a star's internal stratification and can potentially be used to help constrain the mass and radius of the star \citep{Mazumdar2005, miglio2010} . The amplitude of the signal at $\nu_\textrm{\scriptsize{max}}$ is a proxy for the helium abundance of the star. With this in mind we have used a wide selection of models to test the limits of our analysis and these models are described in Section \ref{section[models]}. The method by which we examine the periodicity is then described in detail in Section \ref{section[method]}. In Section \ref{section[glitch location]} we discuss the complications that arise when comparing parameters obtained from the p-mode frequencies with those obtained directly from the models. In Section \ref{section[results]} we discuss the reliability of the method at obtaining the acoustic radius of the second ionization zone of helium accurately and robustly. We also discuss the amplitude of the signal at $\nu_\textrm{\scriptsize{max}}$. In Section \ref{section[model comparison]} we compare estimated values of the acoustic depth of the second ionization zone of helium from different models. Finally, concluding summary and discussion is provided in Section \ref{section[discussion]}. | \label{section[discussion]} We have successfully used second differences of p-mode frequencies to gain information about the second ionization zone of helium for a wide range of model stars. 
Our methodology, particularly when $\nu_{\textrm{\scriptsize{max}}}>40\,\rm\mu Hz$, has been shown to be both robust and accurate. However, this study has highlighted the fact that comparing results obtained from the model-produced frequencies with those obtained directly from the models is not straightforward. Comparisons between results obtained from actual observed data and models would be even more uncertain. Comparisons of the acoustic depth of the ionization zone appear to be inconsistent raising questions over the treatment of the near-surface effects and even the definition of the acoustic surface. To avoid these uncertainties we have instead compared estimates of the acoustic radius of the second ionization zone of helium, $t_{\textrm{\scriptsize{HeII}}}$. However, even this is not straightforward: our results indicate that the acoustic radius of the glitch obtained by fitting frequencies is consistent with that of a local maximum in $\gamma_1$, rather than the local depression. We note here that other authors have also found discrepancies between $\tau_{\textrm{\scriptsize{HeII}}}$ defined in the models as the local minimum in $\gamma_1$ and those obtained from the fit \citep{Houdek2007, Mazumdar2014}. The signature of the glitch in $\Delta_2\nu_{n,l}$ was difficult to fit at low $\nu_{\textrm{\scriptsize{max}}}$ ($<40\,\rm\mu Hz$) because the number of overtones was low and the periodicity of the glitch was similar in magnitude to the resolution of the second differences. Including higher order modes, particularly $l=1$ modes, did, in general, aid the fitting process but only when the modes were not mixed. Mixed modes are influenced by the glitch in a different manner to p modes and so the signature observed in the $\Delta_2\nu_{n,l}$ is different. Importantly, though, we found that for RGB stars the $l=1$ modes did behave like p modes at low $\nu_{\textrm{\scriptsize{max}}}$, where our methodology struggled to produce robust fits using the $l=0$ modes alone. However, we note that this is unlikely to be true for clump stars with the same $\nu_{\textrm{\scriptsize{max}}}$. One might naively think that simply including more $l=0$ modes in the analysis, particularly at low frequencies where the amplitude of the signal is largest, might improve the quality of the fitted results. However, care must be taken not to stray outside the asymptotic regime as then the fitted function described by equation (\ref{equation[fitted function]}) becomes inappropriate: A higher order background function is required instead of the simple constant, $K$. The acoustic radius is not the only parameter that can be obtained by fitting the signature of the acoustic glitch. The amplitude of the envelope of the signature at $\nu_{\textrm{\scriptsize{max}}}$ is correlated with the initial helium abundance of the star, $Y$. We note, however, that the amplitude at $\nu_{\textrm{\scriptsize{max}}}$ is not a straight measure of $Y$. Whether the amplitude at $\nu_{\textrm{\scriptsize{max}}}$ can be used to discriminate between stars of different $Y$ depends on the uncertainties associated with the p-mode frequencies. For the majority of this paper we have used uncertainties typical of 1460\,d of data (or $0.02\,\rm\mu Hz$ on mode frequencies). In this case it is possible to discriminate between stars with well-separated $Y$, such as 0.250 and 0.400, but it is not possible to discriminate between stars with a difference in $Y$ of 0.040. 
In order to differentiate between stars whose $Y$ differ by 0.040 the size of the uncertainties on the mode frequencies must be less than $0.005\,\rm\mu Hz$. A simple scaling implies this would require more than 60\,yr of continuous high-quality observations. However, we must remember that this is only a rough estimate and the true size of the errors will also depend on factors such as the signal-to-noise ratio of the data and the lifetimes of the oscillations. Furthermore, these factors also mean that the size of the errors is not uniform across the range of modes considered. The above estimate for the length of time required to discriminate between stars with $Y$ that differ by 0.040 is, therefore, a worst case scenario. \begin{figure*} \includegraphics[width=0.42\textwidth, clip]{clump_Y_results_l0_numax_vs_HeIIradius_range_err_1460d.eps} \includegraphics[width=0.42\textwidth, clip]{clump_Z_results_l0_numax_vs_HeIIradius_range_err_1460d.eps}\\ \hspace{0.4cm}\includegraphics[width=0.4\textwidth, clip]{clump_Y_results_l0_numax_vs_amplitude_range_err_1460d.eps}\hspace{0.4cm} \includegraphics[width=0.4\textwidth, clip]{clump_Z_results_l0_numax_vs_amplitude_range_err_1460d.eps}\\ \caption{Top left: acoustic radius of the helium second ionization zone, $t_{\textrm{\scriptsize{HeII}}}$, as a function of $\nu_{\textrm{\scriptsize{max}}}$ for clump stars with two different initial helium abundances, $Y$ (see legend). M1 models with $Z=0.020$ were used. Top right: acoustic radius of the helium second ionization zone, $t_{\textrm{\scriptsize{HeII}}}$, as a function of $\nu_{\textrm{\scriptsize{max}}}$ for clump stars with two different initial heavy element abundances, $Z$ (see legend). M1 models with $Y=0.278$ were used. Bottom left: amplitude of the acoustic glitch at $\nu_{\textrm{\scriptsize{max}}}$ for the same models as plotted in the top left-hand panel of this figure. Bottom right: amplitude of the acoustic glitch at $\nu_{\textrm{\scriptsize{max}}}$ for the same models as plotted in the top right-hand panel of this figure. }\label{figure[clump]} \end{figure*} Although dependent on the helium abundance, the location of the acoustic glitch cannot be used (with the uncertainties assumed here) to discriminate between populations of stars with different $Y$. However, the results do imply that, using the amplitude of the signal at $\nu_{\textrm{\scriptsize{max}}}$, we will be able to discriminate between models with $Y=0.250$ and $0.400$ using 1460\,d of $\nu_\textrm{\scriptsize{max}}>50\,\rm\mu Hz$. Such a comparison may be important for testing scenarios of high helium enrichment, such as the enrichment that may occur from the ejecta of massive asymptotic giant branch stars \citep[see][and references therein]{Gratton2012}. Examples of split populations within globular clusters that have very different $Y$ are becoming more frequent. One prominent example is the globular cluster $\omega$ Centauri, the most massive globular cluster in the Milky Way, which is believed to contain at least two distinct stellar populations, one of which is assumed to have the primordial helium abundance ($Y=0.25$), another population within the cluster is believed to have $Y=0.38$ \citep{Piotto2005} and there is even the possibility of another metal-rich component that may have $Y$ as high as 0.40 \citep{Lee2005, Sollima2005}. There is also evidence for similarly split populations in, for example, NGC 2808 \citep{DAntona2004, Piotto2007} and NGC 6441 \citep[][and references therein]{Caloi2007}. 
In fact, although not always with such widely separated $Y$ as in the above examples \citep[e.g][and references therein]{Milone2009} multiple stellar populations have been observed in numerous globular clusters. At present no seismic data are available for clusters with distinct, well-separated helium abundance populations. However, if, in the future, such data do become available use of seismic techniques to distinguish between red giants with different helium abundances would be particularly useful given that spectroscopic determinations of $Y$ are not possible due to their low effective temperatures. One cluster that may be observed in the near future by \textit{Kepler's} K2 mission \citep{Chaplin2013} is the M4 globular cluster. The helium abundance of stars in this cluster appears to be enhanced by approximately 0.04 compared with the primordial helium content of the Universe \citep{Villanova2012}. Although this enhancement is small with respect to the differences in $Y$ we can reliably detect here it will be interesting to verify whether the differences in $Y$ estimated from the morphology of the colour-magnitude diagram and from the spectroscopic data of horizontal branch stars are at least compatible with the asteroseismic data. Finally, we note that the $3\sigma$ difference required here is both reasonable and yet stringent. However, one need not restrict oneself to definitively stating that stars do or do not have the same initial helium abundance. Instead, it would be possible to extend the work done here in a statistically rigorous manner to determine the likelihood that two stars have the same initial helium abundance. Application of our methodology could, therefore, be instrumental in discriminating between RGB stars of populations with different $Y$ within globular clusters. \appendix | 14 | 3 | 1403.7045 |
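For reference, the $\sim$60 yr baseline quoted earlier in this section follows from a simple $\sigma_{\nu}\propto 1/\sqrt{T}$ scaling of the frequency uncertainties, as the short Python check below shows; as noted above, the true uncertainties also depend on the mode lifetimes and on the signal-to-noise ratio, so this is a worst-case scaling.
\begin{verbatim}
# Observing-time scaling: going from 0.02 muHz (1460 d) to 0.005 muHz
# requires (0.02/0.005)^2 = 16 times more data if sigma_nu ~ 1/sqrt(T).
t_base_days, sigma_base, sigma_target = 1460.0, 0.020, 0.005
t_needed_days = t_base_days * (sigma_base / sigma_target) ** 2
print(f"required baseline ~ {t_needed_days:.0f} d ~ {t_needed_days / 365.25:.0f} yr")
\end{verbatim}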
1403 | 1403.5795_arXiv.txt | Only a few percent of cool, old white dwarfs (WDs) have infrared excesses interpreted as originating in small hot disks due to the infall and destruction of single asteroids that come within the star's Roche limit. Infrared excesses at 24 \micron~were also found to derive from the immediate vicinity of younger, hot WDs, most of which are still central stars of planetary nebulae (CSPN). The incidence of CSPN with this excess is 18\%. The Helix CSPN, with a 24 $\mu$m excess, has been suggested to have a disk formed from collisions of Kuiper belt-like objects (KBOs). In this paper, we have analyzed an additional sample of CSPN to look for similar infrared excesses. These CSPN are all members of the PG 1159 class and were chosen because their immediate progenitors are known to often have dusty environments consistent with large dusty disks. We find that, overall, PG 1159 stars do not present such disks more often than other CSPN, although the statistics (5 objects) are poor. We then consider the entire sample of CSPN with infrared excesses, and compare it to the infrared properties of old WDs, as well as cooler post-AGB stars. We conclude with the suggestion that the infrared properties of CSPN more plausibly derive from AGB-formed disks rather than disks formed via the collision of KBOs, although the latter scenario cannot be ruled out. We finally remark that there seems to be an association between CSPN with a 24-$\mu$m excess and confirmed or possible binarity of the central star. % | There are three indicators of planetary debris around white dwarfs (WDs): (1) metal pollution, discovered decades ago \citep[e.g.,][]{1960ApJ...131..638W}, but originally thought to be accretion from the ISM \citep[e.g.,][]{1993AJ....105.1033A, 1993ApJS...84...73D}, (2) an infrared (IR) excess due to warm dust and first detected around G29-38 \citep{1987Natur.330..138Z}, (3) gaseous disks \citep[e.g.,][]{2006Sci...314.1908G}, which unambiguously showed that the debris material is an accretion disk lying within the tidal disruption radius of the WD \citep{2005ApJ...635L.161R}. The debris material comes from the disruption of asteroids \citep{1990ApJ...357..216G,2003ApJ...584L..91J}. These three indicators of debris accretion are hierarchical. All WDs that have gaseous disks also have dust/IR excesses, and all WDs that have dust also have metal-polluted atmospheres \citep{2009ApJ...696.1402B, 2012ApJ...750...86B}. However, this does not work in the opposite direction, i.e., not all metal-polluted WDs have dust, and not all dusty WDs have gas. The fraction of cool WDs with IR excesses is $\sim$1--3\% \citep{2009ApJ...694..805F,2011MNRAS.416.2768S,2011MNRAS.417.1210G,2012ApJ...760...26B}. These disks are typically found very close ($<$1 R$_{\sun}$) to cool ($<$25,000 K) WDs. The dust in these disks is quite warm ($\sim$1000 K), and so the dust emission shows up strongly in the Spitzer IRAC bands \citep{2012ApJ...745...88X}. Many questions still remain about the structure and longevity of these disks and what they are telling us about the planetary systems that once orbited these stars, but other than that, the debris nature of these disks seems to be a reasonable interpretation. In 2007, a dust disk was detected around the central star of the Helix planetary nebula (PN), also known as NGC~7293 \citep{2007ApJ...657L..41S}. This disk differed greatly from those previously found around WDs. In the Helix system, the star is much hotter (110,000 K vs. 
$<$25,000 K of the typical WD with debris disks); the dust is much colder ($\sim$100 K vs. 1000 K for the WD disks), and lies much farther from the star \citep[$\sim$50 AU vs. $<$0.01 AU for the WD disks;][]{2007ApJ...657L..41S,2012ApJS..200....3B}. Nonetheless, this object was also interpreted as having a debris disk, but the suggested origin was dust production by collisions among Kuiper belt-like objects (KBOs), rather than individual asteroids pulverized by entering the Roche limit. Further surveys have found cold dust disks around a number of hot WDs and central stars of PN (CSPN). \citet{2011AJ....142...75C} looked at seventy-one stars and found nine disks. Seven of these nine stars with disks were CSPN. The other two are hot WDs likely to have been PN in the very recent past \citep{1994ApJ...433L..93T,1997A&A...327..721W}. Out of the 35 CSPN analyzed by \citet{2011AJ....142...75C}, 7 detections represent a detection rate of 20\%. \citet{2012ApJS..200....3B} searched an additional sample of CSPN. Out of 56 viable candidates, they detected disks in 17-20\% of their sample. For the combined samples the incidence of disks is 18\%. The much higher frequency of disks around CSPN than around cool WDs raised questions as to whether their nature is also that of debris disks or whether these disks are formed by mass loss from the stars during the asymptotic giant branch (AGB) phase. Dusty outflows, disks, and shells are, of course, known and expected to be associated with cool AGB and post-AGB stars, but one might expect them to be destroyed by the heating of their central stars. The presence and longevity of disks around CSPN and young, hot WDs can inform us about the recent past of these stars, including the presence of a binary companion that may have influenced the AGB mass loss process, and also about dust formation and survival. These questions led us to look for more disks around CSPN. We selected the PG 1159 stars that constitute about 10-20\% of the entire CSPN group. These stars are intermediate between the Wolf-Rayet central stars of PN (also known as the [WC] stars) and WDs. The reason for targeting these stars is that some of their immediate progenitors, the [WC] stars, are known to have spectacular, dusty environments, with large silicate disks as well as carbon-rich outflows \citep[e.g., CPD -56\degree 8032,][]{1999ApJ...513L.135C}\footnote{All known examples of oxygen-and-carbon chemistry in [WC] stars are in the [WCL] (`L' for late) class, in the immediate post-AGB phase, with temperatures of 30-50kK. Although there have been claims that at least one of the hotter, earlier [WC] objects also shows the dual-dust signature of a disk, we note that the only claim of dual-dust for a [WCE] central star (NGC 5315) is by \citet{2002PASP..114..602D}, who cited a private communication. A similar claim was made by \citet{2009A&A...495L...5P} with no citation. We examined the ISO SWS spectrum of NGC 5315 and there is no sign of silicate features. The ISO spectrum was also classified as 4/5.Eu: by \citet{2004ApJS..151..299H}, which indicates atomic and PAH emission but no silicate features.}. Abundance measurements of debris around cool metal-polluted WDs imply that the material is very carbon-poor \citep[e.g.,][]{2012MNRAS.424..333G,2012ApJ...750...69J}.
PG 1159 stars follow the [WC] central stars by no more than 10$^3$--10$^4$ yr (using a stellar evolutionary track for a 0.60-0.63~M$_\odot$ star from \citet{1993ApJ...413..641V} and known PG 1159 stellar parameters from \citet{1998A&A...334..618D}), while the oldest PG 1159 stars, those with no PNe, are another 10$^4$--10$^5$ yr older than that. Since the dust around Wolf-Rayet central stars may be distributed in long lived, Keplerian disks, it is possible that these disks have survived the hottest phases of the star and are present in the PG 1159 stage. We have therefore obtained \emph{Spitzer}/IRS spectra of a sample of nine PG 1159 stars, all associated with PNe to look for evidence of dusty disks, increase the statistics of disks around CSPN, compare them further with debris disks around old WDs as well as with that around the Helix CSPN and study the disk longevity around CSPN. | Dusty disks have been detected around about 18\% of CSPN. One out of five PG 1159 stars with PNe \citep{2011AJ....142...75C,2012ApJS..200....3B} show evidence of a disk, in line with the rest of the PN group. This may indicate that PG~1159 stars do not show disks more often than non-PG 1159 CSPN. On the other hand, the statistics of PG1195 CSPN are extremely weak. It is interesting that none of the PG1159 stars, which have no PN and are therefore 10$^4$--10$^5$ yr older, shows evidence for a dusty disk. The characteristics of the disks, such as mass, radius, composition, and temperature will yield their origin. Currently, however, we have only partial information on disks around a range of object classes, which makes it difficult to draw accurate conclusions. While the rare dust disks found around old DZ WDs are certainly of a different nature from those around CSPN, it is still unclear what the nature of the CSPN disks is: do they derive from the pulverization of KBOs, as is plausible for the Helix PN, or are they dust formed in the ejecta of AGB stars? Based on the colors and brightness of the CSPN disks relative to those of stars that have just left the AGB, one may argue that the origin of CSPN disks is from AGB ejecta. One may also argue that CSPN disks may themselves have more than one origin. Disks around hydrogen deficient [WC] and PG 1159 CSPN could derive from a different evolution from those around hydrogen-normal CSPN and cooler post-AGB stars (with or without PN). On the other hand, on the color-magnitude diagram, the Helix CSPN disk resides close to other hot CSPN, possibly indicating a common origin. Finally, it came as a surprise that 8 out of 13 CSPN with disks detected because of 8 or 24-\micron~excess are either binaries or likely/possible binaries. This connection, which should be further explored, may argue for an AGB origin. | 14 | 3 | 1403.5795 |
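The different wavelengths at which the two families of disks described in this section are detected follow directly from their dust temperatures; a quick Python check using Wien's displacement law (a blackbody approximation, so only indicative of where the emission peaks):
\begin{verbatim}
# Wien's displacement law: lambda_peak [micron] ~ 2898 / T [K].
WIEN_UM_K = 2898.0
disks = {"~1000 K debris dust (cool WDs)": 1000.0,
         "~100 K dust (Helix-like CSPN)": 100.0}
for label, t_dust in disks.items():
    print(f"{label}: emission peaks near {WIEN_UM_K / t_dust:.1f} micron")
\end{verbatim}
The $\sim$1000 K debris dust peaks near 3 $\mu$m, i.e. in the Spitzer IRAC bands, while $\sim$100 K dust peaks near 30 $\mu$m and is therefore picked up at 24 $\mu$m, consistent with the detections discussed above.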
1403 | 1403.5276_arXiv.txt | We study the $\gamma$-ray variability of 13 blazars observed with the Fermi Large Area Telescope (LAT). These blazars have the most complete light curves collected during the first 4 years of the Fermi sky survey. We model them with the Ornstein-Uhlenbeck (OU) process or a mixture of the OU processes. The OU process has power spectral density (PSD) proportional to $1/f^{\alpha}$ with $\alpha$ changing at a characteristic time scale, $\tau_{\rm 0}$, from 0 ($\tau \gg \tau_{\rm 0}$) to 2 ($\tau \ll \tau_{\rm 0}$). The PSD of the mixed OU process has two characteristic time scales and an additional intermediate region with $0<\alpha<2$. We show that the OU model provides a good description of the Fermi/LAT light curves of three blazars in our sample. For the first time we constrain characteristic $\gamma$-ray time scale of variability in two BL Lac sources, 3C 66A and PKS 2155-304 ($\tau_{\rm 0} \simeq 25$\,day and $\tau_{\rm 0} \simeq 43$\,day, respectively, in the observer's frame), which are longer than the soft X-ray time scales detected in blazars and Seyfert galaxies. We find that the mixed OU process approximates the light curves of the remaining 10 blazars better than the OU process. We derive limits on their long and short characteristic time scales, and infer that their Fermi/LAT PSD resemble power-law functions. We constrain the PSD slopes for all but one source in the sample. We find hints for sub-hour Fermi/LAT variability in four flat spectrum radio quasars. We discuss the implications of our results for theoretical models of blazar variability. | A significant fraction of active galactic nuclei (AGN) produce powerful relativistic jets, which are prominent sources of non-thermal radiation. In blazars, where one of the jets is closely aligned with our line of sight, this non-thermal radiation component is relativistically boosted to the point that it easily outshines the entire host galaxy. Spectral energy distributions of many blazars peak in the gamma-ray band, making data from the Fermi Large Area Telescope (Fermi/LAT) essential for studying the jet physics. The gamma-ray emission of blazars is well known for its strong and incessant variability, indicating the complex structure of the underlying dissipation and particle acceleration sites. It proved to be very difficult to provide a satisfactory statistical description of these variations that would be useful for constraining such basic parameters as, e.g., the location and size of the dissipation sites along the jet. Before the launch of Fermi, thorough blazar variability studies were impaired mainly due to the lack of $\gamma$-ray blazar monitoring data, and robust statistical methods to model the variable blazar emission. The Fermi/LAT instrument has been performing continuous observations of the $\gamma$-ray sky since 2008, which provided good quality light curves of a sample of bright blazars, and boosted the blazar $\gamma$-ray variability studies. Recently, it has been demonstrated that the $\gamma$-ray power spectral densities (PSD) of blazars appear to be in the form of a power-law function (e.g. Abdo et al. 2010; Nakagawa \& Mori 2013, hereafter NM13), indicating a stochastic nature of the high energy blazar variability. On the other hand, the highest flux states of blazars are commonly described as flares, defined using the concept of the flux doubling and halving time scales (e.g. Nalewajko 2013, Saito et al. 2013). 
Such an approach suggests that flares have an origin that is distinct from that of the bulk of the $\gamma$-ray blazar variability. An important characteristic of a variability process is a frequency or a time scale at which the properties of its PSD (e.g. the slope) change. This time scale may be related to, e.g., the size of the emitting region, and be used to constrain the process triggering the variability. However, the featureless, power-law-like Fermi/LAT blazar PSDs have so far prevented this kind of inference, with 3C 454.3 being the only blazar with a $\gamma$-ray PSD break reported in the literature. Ackermann et al. (2010) performed PSD and structure-function analyses of a 120\,day flaring section of the Fermi/LAT light curve of this source. They revealed a break at a frequency corresponding to a specific time scale $t\sim6.5$\,day. Subsequently, NM13 found a break frequency corresponding to $t\sim7.9$\,days in the first 4-year Fermi/LAT light curve of 3C 454.3. Ackermann et al. (2010) cautioned that their PSD break may not indicate a characteristic time scale but a frequency at which two PSD components become equally strong. NM13 interpreted their characteristic time scale in terms of the internal shock model for the $\gamma$-ray blazar emission, in which blob ejecta collide in the internal shock of the blazar jet (Kataoka et al. 2001). This assumption allowed them to estimate the black hole mass in 3C 454.3 to be in the $10^8$--$10^{10}$\,M$_{\odot}$ range. The PSD slopes below and above the breaks estimated by Ackermann et al. (2010) and NM13, as well as the location of the PSD breaks, were inconsistent with each other, which may suggest either a non-stationarity of the variability process in this source, or indeed distinct variability properties of the $\gamma$-ray flares. The methods relying on the PSD extraction require that the light curves be uniformly sampled; otherwise a number of biases are introduced and have to be accounted for, which is not trivial. Models of variability applied directly to the light curves avoid these biases. Kelly et al. (2009, 2011) developed and advocated for stochastic models for luminosity fluctuations of accreting black holes, motivated by PSDs proportional to $1/f^{\alpha}$ with one or two breaks that have been commonly observed in X-rays in black hole binaries (e.g. Pottschmidt et al. 2003, Axelsson et al. 2005, Belloni et al. 2005, Reig et al. 2013), and in optical and X-ray bands in AGN (e.g. Markowitz et al. 2003, Kelly et al. 2009). The models of Kelly et al. are based on the Ornstein-Uhlenbeck process or a linear superposition of the OU processes. Thus, they explicitly assume a power-law PSD with one or two characteristic time scales. Kelly et al. derive the likelihood function for their statistical models and perform statistical inference within a Bayesian framework. This allows them to obtain the probability distributions of the model parameters, such as the characteristic time scales and the slopes of the intermediate part of the PSD, given the data. They fully account for the measurement errors, irregular sampling, red noise leak, and aliasing. In addition, direct modeling of the light curves allows one to easily combine different sampling time scales. All these advantages make the Kelly et al. models particularly attractive for constraining the PSD of AGN in all energy bands. In this paper we apply the stochastic models of Kelly et al. to the first 4-year Fermi/LAT light curves of 13 bright blazars.
This is the first systematic analysis of $\gamma$-ray blazar light curves in the time domain using parametric methods that are not sensitive to the observational biases. The paper is organized as follows. In Section~\ref{sec:sample} we describe the blazar sample and data reduction procedure. The models are summarized in Section~\ref{sec:model}. In Section~\ref{sec:results} we present our results on the derived constraints on the PSD parameters for the sources in our sample. In Section~\ref{sec:discussion} we discuss our findings on the $\gamma$-ray blazar variability and perform a comparison with the X-ray variability properties of blazars and non-blazar AGN. We formulate our conclusions in Section~\ref{sec:conclusions}. | We have applied the stochastic models of luminosity fluctuations developed in Kelly et al. (2009, 2011) to the Fermi/LAT light curves of 13 well-observed blazars in order to study their time variability properties. The light curves of 3 blazars were consistent with the OU process, which is characterized by a PSD $\propto 1/f^{\alpha}$ featuring a bend at a characteristic time scale, $\tau_{\rm 0}$, where the slope, $\alpha$, changes from 0 ($\tau \gg \tau_{\rm 0}$) to 2 ($\tau \ll \tau_{\rm 0}$). We constrained $\tau_{\rm 0}$ in two BL Lac type blazars, 3C 66A ($\tau_{\rm 0}\simeq25$\,day, or $\simeq17$\,day in the rest frame) and PKS 2155-304 ($\tau_{\rm 0}\simeq43$\,day, or $\simeq38$\,day in the rest frame). Thus, the inferred $\gamma$-ray BL Lac characteristic time scales were longer than those observed in the soft X-ray blazar and Seyfert light curves. In addition, the low- and high-frequency PSD slopes in the BL Lacs fitted well with the OU process were flatter than their counterparts in the soft X-rays. These discrepancies indicate either the energy dependence of the PSD or different origins of the BL Lac X-ray and $\gamma$-ray variability. In 10 of 13 sources a better agreement with the data was obtained with the mixed OU process characterized by a PSD featuring two bends and an intermediate part of the PSD where $0<\alpha<2$. For these sources we derived respective limits on the long and short characteristic time scales. We concluded that their underlying PSD likely has the form of a power-law function over the sampled range of temporal frequencies. The upper limit on the short characteristic time scale indicates sub-hour variability in 4 FSRQ sources. This finding needs to be addressed by present and future theoretical models of blazar $\gamma$-ray variability. We constrained the PSD slopes, $\alpha$, for all sources except 3C 454.3. Our stochastic approach is particularly well suited for the variability studies of various classes of AGN because it accounts self-consistently for irregular sampling, measurement errors, red noise leak, and aliasing. Thus, it provides an alternative to the variability methods based on the PSD construction. | 14 | 3 | 1403.5276
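As a minimal illustration of the stochastic model summarized in the conclusions above (a sketch written for this excerpt, not the authors' code), the snippet below draws an Ornstein-Uhlenbeck light curve on an arbitrary time grid using the exact conditional update. Its PSD is flat below $f_0=1/(2\pi\tau_0)$ and falls as $1/f^2$ above it, which is the bending shape described in the text; the mixed OU model adds a second component with its own time scale.
\begin{verbatim}
import numpy as np

def simulate_ou(times, mu, sigma, tau, rng=None):
    """Exact simulation of an Ornstein-Uhlenbeck process at arbitrary
    (possibly irregular) sampling times: mu is the mean level, sigma the
    stationary standard deviation, tau the relaxation (bend) time scale."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(len(times))
    x[0] = rng.normal(mu, sigma)             # start in the stationary state
    for i in range(1, len(times)):
        decay = np.exp(-(times[i] - times[i - 1]) / tau)
        x[i] = rng.normal(mu + (x[i - 1] - mu) * decay,
                          sigma * np.sqrt(1.0 - decay**2))
    return x

# Weekly sampling over about four years, with a bend time scale of the
# order of the ~25 d found for 3C 66A (an illustrative setup, not a fit).
t = np.cumsum(np.full(200, 7.0))
flux = simulate_ou(t, mu=1.0, sigma=0.1, tau=25.0)
\end{verbatim}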
1403 | 1403.1796_arXiv.txt | We show that the low ratios of $\alpha$ elements (Mg, Si, and Ca) to Fe recently found for a small fraction of extremely metal-poor stars can be naturally explained with the nucleosynthesis yields of core-collapse supernovae, i.e., $13-25M_\odot$ supernovae, or hypernovae. For the case without carbon enhancement, the ejected iron mass is normal, consistent with observed light curves and spectra of nearby supernovae. On the other hand, the carbon enhancement requires much smaller iron production, and the low [$\alpha$/Fe] of carbon enhanced metal-poor stars can also be reproduced with $13-25M_\odot$ faint supernovae or faint hypernovae. Iron-peak element abundances, in particular Zn abundances, are important to put further constraints on the enrichment sources from galactic archaeology surveys. | The observed elemental abundances of metal-poor stars can be used to constrain the physics of supernovae \citep[see][for a review]{nom13}. Although the nature of the first stars is not well understood, if they explode as supernovae the first chemical enrichment is imprinted in the elemental abundances of the second generation of stars. During the early stages of galaxy formation, the interstellar medium (ISM) is highly inhomogeneous, and it is likely that these elemental abundance patterns are determined only by a few supernovae \citep{aud95}. In the inhomogeneous enrichment, metallicity is not a time indicator anymore but merely reflects the metallicity of the cloud in which the second generation of stars formed. This metallicity is often estimated with an analytic formula \citep{tom07}, but detailed hydrodynamical simulations with radiative cooling are necessary to predict the metallicity distribution function of the second generation of stars. Alternatively, we assume that metal-poor stars with [Fe/H] $\ltsim -3$ are enriched by a single supernova, and study the properties of the supernova by comparing observed elemental abundances and nucleosynthesis yields (abundance profiling/fitting). Thanks to large-scale surveys and follow-up high-resolution spectroscopy, intensive observations of metal-poor stars have revealed the existence of extremely-, ultra-, and hyper- metal-poor (EMP, UMP, and HMP) stars with [Fe/H] $=(-4,-3),(-5,-4),(-6,-5)$, respectively \citep{bee05}. The elemental abundance patterns have a distinct signature: Including two HMP stars, $10-25$\% of stars with [Fe/H] $\ltsim -2$ show carbon enhancement relative to iron ([C/Fe] $\gtsim1$, \citealt{aok10}). Such carbon-enhanced metal-poor (CEMP) stars often show enhancement of $\alpha$ elements (O, Mg, Si, S, and Ca), although one star at [Fe/H]$=-4.99$ does not show enhancement of C, N, and Mg \citep{caf11}. There are various scenarios to explain the carbon enhancement, including rotating massive stars \citep{mey06}, asymptotic giant branch stars in binary systems \citep{sud04,lugaro08}, and black-hole-forming core-collapse supernovae (faint supernovae; \citealt{ume02,iwa05,tom07}). \citet{caf13} recently presented different types of EMP stars that show lower [$\alpha$/Fe] ratios with/without carbon enhancement than other EMP stars. The variation of [$\alpha$/Fe] ratios of EMP stars has been known. As the quality of data improved, \citet{cay04} concluded that without CEMP, the scatter of elemental abundances is so small that the ISM is well mixed at the early stages of galaxy formation.
However, a significant scatter is seen in other observational data \citep[e.g.,][]{hon04,yon13,coh13} and a small fraction of stars show lower [$\alpha$/Fe] ratios than $\sim$ 0.2. An intrinsic variation of [$\alpha$/Fe] ratios can be caused by the following enrichment sources: (i) The most popular source is Type Ia Supernovae (SNe Ia), which produce more Fe than $\alpha$ elements. Depending on the fractional contribution of core-collapse supernovae from previous populations, [$\alpha$/Fe] can vary between $\sim 0.5$ and $\sim -0.6$. However, there is a time-delay of the enrichment in the case of SNe Ia, which depends on the progenitor systems \citep{kob09}, and is $\sim 34$ Myr at the shortest for $8 M_\odot$ primary stars. Hence, it is unlikely that many EMP stars are affected by SNe Ia. (ii) $\sim 10-20 M_\odot$ supernovae have a smaller $\alpha$-element-bearing mantle mass than more massive stars, and thus give lower [$\alpha$/Fe] ratios than the initial mass function (IMF) weighted values of core-collapse supernova yields, i.e., the plateau values of [$\alpha$/Fe]-[Fe/H] relations \citep{kob06,kob11agb}. These supernovae will leave a neutron star behind, and should be very common for the standard IMF weighted for the low-mass end \citep[e.g.,][]{kro08}. (iii) Hypernovae ($E_{51} \equiv E/10^{51} {\rm erg}\gtsim10$ for $\gtsim25M_\odot$) are observationally known to produce more iron than normal supernovae ($E_{51}\sim1$ for $\gtsim10M_\odot$) \citep[e.g.,][]{nom03,sma09}. Therefore, hypernovae can give lower [$\alpha$/Fe] ratios than supernovae at a given progenitor mass. The hypernova rate is not very high at present, but can be high for low-metallicity stars because of small angular momentum loss. (iv) Faint supernovae are proposed to explain the elemental abundance patterns of CEMP stars from carbon to zinc. The central parts of supernova ejecta that contain most of the iron fall back onto the black hole, while the stellar envelopes that contain carbon are ejected as in normal supernovae. Therefore, the [C/Fe] ratio of faint supernovae is as large as that of CEMP stars. Among $\alpha$ elements, O and Mg are synthesized during hydrostatic burning and located in the outskirts of ejecta. Therefore, faint supernovae often have high [(O, Mg)/Fe] ratios, depending on mixing-fallback processes. The faint supernova scenario is also the best explanation of the observed carbon-enhanced damped Lyman $\alpha$ (DLA) system \citep{kob11dla}. (v) Primordial stars with initial masses of $\sim 140-270 M_\odot$ enter into the electron-positron pair-instability region during the central oxygen-burning stages, where most of O and Mg are transformed into Si, S, and Fe. Pair-instability supernovae produce a much larger amount of iron and higher [(Si,S)/(O,Mg)] ratios than core-collapse supernovae. Such abundance patterns have been found neither in EMP stars \citep[e.g.,][]{cay04} nor in DLA systems \citep{kob11dla}. In this Letter, we explore our supernova and hypernova models with/without mixing-fallback over a wide range of progenitor mass. We then perform abundance fitting to observed low-$\alpha$ stars and discuss the enrichment sources (\S2). In \S3 we give a more general discussion on supernovae and chemical enrichment, and summarize our main conclusions in \S4.
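A minimal sketch of the abundance-fitting step mentioned above, using the usual bracket notation $[{\rm X/Y}]=\log_{10}(N_{\rm X}/N_{\rm Y})_\star-\log_{10}(N_{\rm X}/N_{\rm Y})_\odot$. The $\chi^2$ definition below is an assumption made for illustration (the paper's actual procedure is described in its \S2, which is not reproduced in this excerpt); only the observed [Mg/Fe] and [Ca/Fe] of the first star are taken from the table that follows, while the uncertainties and model values are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def chi2_per_element(obs, err, model):
    """Reduced chi-square between observed and model abundance ratios
    [X/Fe] (in dex).  Assumed definition, for illustration only."""
    elements = sorted(set(obs) & set(model))
    resid = np.array([(obs[el] - model[el]) / err[el] for el in elements])
    return np.sum(resid**2) / len(elements)

# Observed [Mg/Fe], [Ca/Fe] of J144256-001542 from the table below;
# the error bars and the model column are hypothetical.
obs = {"Mg": 0.27, "Ca": 0.29}
err = {"Mg": 0.15, "Ca": 0.15}
model = {"Mg": 0.30, "Ca": 0.25}
print(chi2_per_element(obs, err, model))
\end{verbatim}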
\begin{table*}[tbh] \begin{center} \begin{tabular}{lccccccccccc} \hline Name & [Fe/H] & [Mg/Fe] & [Ca/Fe] & $M$ & $E_{51}$& $M_{\rm cut}$ & $M_{\rm mix}$ & $\log f$& $M_{\rm rem}$ & $M(^{56}{\rm Ni})$ & $\chi^{2}/N$\\ & (dex) & (dex) & (dex) & ($M_{\odot}$)&($10^{51}$ erg) & ($M_{\odot}$) & ($M_{\odot}$) & & ($M_{\odot}$) & ($M_{\odot}$) &\\ \hline J144256$-$001542 & $-4.09\pm0.21$ & $0.27$ & $0.29$ & 15 & 1 & 1.5 & - & $0.0$ & 1.5 & 0.06 &0.12\\ & & & & 15 & 1 & 1.4 & 1.5 & $-1.2$ & 1.5 & 0.06 &0.12\\ & & & & 25 & 1 & 1.7 & 2.9 & $-0.4$ & 2.4 & 0.10 &$<$0.01\\ & & & & 40 & 30 & 2.2 & 5.9 & $-0.8$ & 5.3 & 0.33 &$<$0.01\\ J153346$+$155701 & $-3.34\pm0.26$ & $0.06$ & $0.08$ & 15 & 1 & 1.4 & - & $0.0$ & 1.4 & 0.14 &0.38\\ & & & & 15 & 1 & 1.4 & 1.5 & $-0.3$ & 1.4 & 0.09 &0.09\\ & & & & 25 & 1 & 1.7 & 3.6 & $-0.3$ & 2.6 & 0.13 &0.19\\ & & & & 25 & 10 & 1.8 & 3.5 & $-0.4$ & 2.8 & 0.24 &$<$0.01\\ J161956$+$170539 & $-3.57\pm0.25$ & $0.04$ &$-0.35$ & 15 & 1 & 1.4 & 3.2 & $-4.0$ & 3.2 & 1.4$\times 10^{-5}$ &0.30\\ & & & & 25 & 10 & 1.7 & 6.5 & $-4.0$ & 6.5 & 6.7$\times 10^{-5}$ &0.71\\ \hline HE0305-5442 & $-3.30\pm0.20$ & $0.22$ & $-0.04$ & 15 & 1 & 1.4 & 1.6 & $-0.5$ & 1.5 & 0.04 &1.37\\ & & & & 25 & 10 & 1.7 & 6.2& $-1.6$ & 6.1 & 0.02 &1.14\\ HE1416-1032 & $-3.20\pm0.16$ & $0.18$ & $0.03$ & 15 & 1 & 1.4 & 4.3 & $-2.4$ & 4.3 & 5.4$\times 10^{-4}$ &3.10\\ & & & & 25 & 10 & 1.9 & 9.1 & $-3.6$ & 9.1 & 1.3$\times 10^{-4}$ &3.67\\ HE2356-0410 & $-3.06\pm0.20$ & $0.11$ &$0.16$ & 15 & 1 & 1.4 & 3.1 & $-3.7$ & 3.1 & 2.7$\times 10^{-5}$ &8.64\\ & & & & 25 & 10 & 1.9 & 7.1 & $-4.7$ & 7.1 & 1.0$\times 10^{-5}$ &6.01\\ \hline \end{tabular} \caption{\rm Observed abundances, the parameters of nucleosynthesis models (progenitor mass $M$, explosion energy $E_{51}$, inner boundary $M_{\rm cut}$, outer boundary $M_{\rm mix}$, and ejection fraction $f$ of the mixing region), the outputs (remnant mass $M_{\rm rem}$, and ejected iron mass M$(^{56}{\rm Ni})$), and the $\chi^{2}$ of the abundance fitting.} \end{center} \end{table*} | In the early stages of chemical enrichment, the interstellar medium is supposed to be highly inhomogeneous, so that the properties of the first supernovae can be directly extracted from the comparison between the observed elemental abundances and nucleosynthesis yields. We show that the low [$\alpha$/Fe] ratios recently found for a small fraction of extremely metal-poor stars can be naturally explained with the nucleosynthesis yields of core-collapse supernovae, i.e., (1) $13-25M_\odot$ supernovae, or (2) hypernovae. If we allow an enhanced mixing and a large fallback, $40M_\odot$ supernova models (3) could be consistent with the observed low [$\alpha$/Fe] ratios. For the case without carbon enhancement, the ejected iron masses of these favored models (1-3) are normal, $M({\rm Fe})=0.05-0.15M_\odot$ for supernovae and $M({\rm Fe})=0.1-1.4M_\odot$ for hypernovae, consistent with observed light curves and spectra of nearby supernovae. The first source (1) has been included in the standard set of nucleosynthesis yields that have been applied to galactic chemical evolution models, while the other sources are different from those in galactic chemical evolution models and rarer. On the other hand, the carbon enhancement requires much smaller iron production, and the low [$\alpha$/Fe] of carbon enhanced metal-poor stars can also be reproduced with faint supernovae or faint hypernovae. The ejected iron mass is $M({\rm Fe})<0.001M_\odot$, much smaller than normal supernovae. 
These enrichment sources are similar to those proposed for typical carbon-enhanced EMP stars and DLAs with [$\alpha$/Fe]$\gtsim0.5$, but the progenitor mass is as low as $13-25 M_\odot$, or more extended mixing and larger fallback occur in $25-40M_\odot$ stars. The former case implies that $\ltsim 25M_\odot$ stars may form black holes. Iron-peak element abundances, in particular Zn abundances, are important to put further constraints on the enrichment sources. $25-40M_\odot$ supernova (not hypernova) models may disagree with the observed high Co abundances of low [$\alpha$/Fe] stars. The frequency of these sources should be examined with future galactic archaeology surveys. | 14 | 3 | 1403.1796 |
1403 | 1403.1275_arXiv.txt | {Extrasolar-planet searches that target very low-mass stars and brown dwarfs are hampered by intrinsic or instrumental limitations. Time series of astrometric measurements with precisions better than one milli-arcsecond can yield new evidence on the planet occurrence around these objects.} {We present first results of an astrometric search for planets around 20 nearby dwarf stars with spectral types M8--L2.} {Over a time-span of two years, we obtained $I$-band images of the target fields with the FORS2 camera at the Very Large Telescope. Using background stars as references, we monitored the targets' astrometric trajectories, which allowed us to measure parallax and proper motions, set limits on the presence of planets, and to discover the orbital motions of two binary systems.} {We determined trigonometric parallaxes with an average accuracy of 0.09 mas ($\simeq$\,0.2\,\%), which resulted in a reference sample for the study of ultracool dwarfs at the M/L transition, whose members are located at distances of 9.5--40 pc. This sample contains two newly discovered tight binaries (\dwtwo\ and \dwnine) and one previously known wide binary (\dwfift). Only one target shows $I$-band variability $>$5 mmag r.m.s. We derived planet exclusion limits that set an upper limit of 9 \% on the occurrence of giant planets with masses $\gtrsim$\,5\,$M_J$ in intermediate-separation (0.01--0.8 AU) orbits around M8--L2 dwarfs.} {We demonstrate that astrometric observations with an accuracy of 120 $\mu$as over two years are feasible from the ground and can be used for a planet-search survey. The detection of two tight very low-mass binaries shows that our search strategy is efficient and may lead to the detection of planetary-mass companions through follow-up observations.} | Extrasolar planets are common around stars in the solar neighbourhood \citep{Mayor:1995lr, Mayor:2011fj}, but little is known about their existence around very low-mass stars and brown dwarfs, also known as ultracool dwarfs (UCDs) with spectral types M7 and later \citep{Martin:1999yf}, because of their low luminosities and the associated observational limitations. The presence of planets is expected because UCDs provide the necessary ingredients for planet formation and are commonly surrounded by disks in which grain growth and dust settling has been observed \citep{Apai:2005kx, Riaz:2012ys, Ricci:2012fk, Luhman:2012vn}. The potential planet mass depends on the amount of material available in the disk, which is generally lower than for main-sequence stars. Extended disks with masses higher than Jupiter-mass are observed, but not common \citep{Scholz:2006vl, Harvey:2012vn}, and smaller disk masses are found frequently, which provides the material for the formation of sub-Jupiter-mass planets \citep{Payne:2007ad}. The discovery of giant planets around UCDs can on one hand be used to probe the predictions of planet formation theories. {According to the core-accretion theory, giant-planet occurrence scales with central star mass and is expected be low around M dwarfs \citep{Laughlin:2004uq}, hence especially low around UCDs. 
Disk instability may be able to form giant planets around UCDs if their disks are \emph{suitably unstable} \citep{Boss:2006kx}.} On the other hand, the search for planets with Neptune-mass and lighter is a first step towards characterising the population of small and terrestrial planets around UCDs, some of which may reside in the habitable zones and therefore become prime targets for future attempts to detect the constituents of their atmospheres \citep{Belu:2013aa,Bolmont:2011lr}. % Radial-velocity measurements of UCDs were used to exclude a large population of giant planets $\gtrsim$2\,{Jupiter-mass} ($M_J$) on very tight orbits $<0.05$ AU \citep{Blake:2010lr, Rodler:2012uq}. At wider separations $\gtrsim$2\,AU, direct-imaging searches equally excluded a large population of giant planets \citep{Stumpf:2010lr}. Two very low-mass stars were found to host Earth-mass (\citealt{Kubas:2010fk}, using microlensing) and Mars-sized (\citealt{Muirhead:2012fk}, using \textit{Kepler}) planets. Recently, a $\sim$$2\,M_J$ planetary mass object\footnote{The assignment of planet status to individual objects in the literature is debated because of their unknown formation paths and the observed overlapping mass range of planets and brown dwarfs. For UCDs, we propose to use mass ratio {and separation thresholds} of 0.1 and 10 AU, respectively, below which companions may be called planets.} was discovered at 0.87 AU around a 0.022 $M_\sun$ brown dwarf using gravitational microlensing \citep{Han:2013aa}. \subsection{Very low-mass binaries} Ultracool dwarfs are thought to form like stars, but this view is challenged by the apparent properties of UCD binaries that show significant differences to stellar binaries \citep{Bouy:2006aa, Burgasser:2007ix, Duchene:2013aa}. In particular, the secondary-to-primary mass ratio ($q=M_2/M_1$) distribution is strongly skewed towards unity in contrast to Sun-like stars and M dwarfs that show a nearly uniform $q$-distribution, which may suggest different formation mechanisms \citep{Goodwin:2013fk}. By mapping for example the $q$- and orbital-eccentricity distribution, the discovery and characterisation of UCD binaries {yields new observational results that can help to examine very low-mass binary formation.} {In this work, we consider binaries to be \emph{tight} if their relative semi-major axis is $\lesssim 1$\,AU.} \subsection{Astrometric planet search} Astrometry consists of measuring the apparent sky-position of stars and is a powerful method for the discovery and characterisation of extrasolar planets, provided that the achieved accuracy is better than 1 milli-arcsecond (mas), {a threshold that} corresponds to the reflex motion amplitude induced on a Sun-like star at 10 pc by a 5 $M_J$ giant planet on a three-year orbit \citep{Sozzetti:2005qy, Sahlmann:2012fk2}. Ultracool dwarfs have been targeted by several astrometric planet searches \citep{Pravdo:1996fk, Boss:2009ff, Forbrich:2013aa}, but have not yet yielded new exoplanet discoveries. So far, the importance of astrometry for UCD research stemmed therefore primarily from its ability to yield precise trigonometric distances, which are central to determine the luminosity, mass, and age relationships for UCDs and required to understand the physics of these objects \citep{Dahn:2002zr, Andrei:2011lr, Dupuy:2012fk, Dupuy:2013aa, Smart:2013aa}. 
The currently most precise ground-based instrument for astrometry of faint ($\gtrsim$10th mag) optical sources is {\small FORS2} at the Very Large Telescope, achieving accuracies of 50--100 micro-arcsecond ($\mu$as) \citep{Lazorenko:2009ph}. At this level, astrometry opens a new observational window to low-mass companions of UCDs at small-to-intermediate separations ($\sim$\,0.05\,--\,2\,AU). {For instance, the reflex motion amplitude induced on a 0.08 $M_\sun$ object at 10 pc by a Neptune-mass planet on a three-year orbit is 60 $\mu$as.} We therefore began an astrometric survey of UCDs using {\small FORS2} in 2010 ({also known as the PALTA project: planets around L-dwarfs with astrometry.}) and announced its first discovery, a low-mass companion to an L\,dwarf, in \cite{Sahlmann:2013kk}, hereafter \citetalias{Sahlmann:2013kk}. Here, we report first results of the survey covering a time-span of two years, which allowed us to screen the target sample, measure parallaxes, and discover new binaries. The paper is structured as follows: Sections \ref{sec:targetsel} and \ref{sec:obs} describe the target selection and the observations. The astrometric data analysis is detailed in Sect.~\ref{sec:analysis} and the results are presented in Sect.~\ref{sec:results}. We conclude in Sect.~\ref{sec:concl}. In an accompanying paper, we describe the data reduction procedures in detail and present a new deep astrometric catalogue of reference stars in the target fields. | \label{sec:concl} We presented the first results of an astrometric survey targeting 20 ultracool dwarfs at the M/L transition obtained after two years. The project's primary goal is to detect planetary companions, but the {\small FORS2} observations provide us with a rich dataset that covers a variety of science cases. We determined trigonometric parallaxes of 20 nearby ultracool dwarfs at the M/L transition with unprecedented accuracy of 0.09 mas ($\sim$0.2 \%) on average. Most targets are located at distances of 15--25 pc, and the closest member is at 9.5 pc. In the future, this sample can serve as a reference for the study of ultracool dwarfs at the M/L transition, in particular for the refinement of theoretical models and the search for small transiting planets. Applying the planet-search strategy and dedicated tools for the detection and adjustment of astrometric orbits, we discovered two new tight ultracool binary systems and fully characterised their orbital motions. In particular, the low-mass companion of \dwnine\ indicated that tight binary systems with low mass-ratios may not be as rare as previously thought \citepalias{Sahlmann:2013kk}. The overall binary fraction of $15^{+11}_{-5}$\,\% that we found in our sample is compatible with previous surveys using different observing techniques. The astrometry data collected during the two-year initial phase of the project yielded limits on the occurrence of giant planets around M/L dwarfs in a previously unexplored separation range of $\sim$0.1--0.8 AU and thus closed a gap in detection space left by radial-velocity and direct-imaging planet searches. For the first time, we showed that the upper limit for the occurrence of giant planets $\gtrsim$$5\, M_J$ in this separation range is 9 \%. {This is consistent with the theoretical expectations of planet formation through core accretion that predicts a low occurrence rate of giant planets around M/L-transition dwarfs. 
If giant planets form via gravitational instability, our results indicate that the occurrence rate of UCD disks that are massive enough to become unstable is low.} Constraining the planet population around UCDs and obtaining their high-precision distances is relevant for future searches for small, close-in planets that transit their ultracool hosts (e.g.\ \citealt{Triaud:2013aa}). In this context, we also found that optical variability at the M/L transition may not be as widespread as previous studies have indicated: only $5^{+10}_{-2}$\,\% of the M8--L2 dwarfs in our sample of field objects show an $I$-band variability higher than 5 mmag r.m.s. over time-scales of minutes to $\sim$500 days. Finally, we demonstrated that astrometric trajectories of faint optical sources can be determined with an accuracy of 120--150~$\mu$as using ground-based observations with an 8 m telescope. The photocentre measurement precision corresponds to 1/1000 of the {\small FORS2} CCD pixel size and is similar to the precision of the spectrum position determination with radial-velocity spectrographs \citep{Pepe:2008kx}. In \citetalias{Lazorenko:2013kk}, we show that the discrepancy between the above value and the 50 $\mu$as demonstrated by \cite{Lazorenko:2009ph} is due to compromises we had to make to implement the survey. Our observations are executed in queue-scheduling service mode to guarantee good seeing conditions. The exposure times are set to avoid saturation even in the best seeing conditions, consequently, the S/N during an epoch of normal seeing is sub-optimal. Therefore, the performance demonstrated here is not the limit for this type of ground-based astrometry work. In the future, we will expand this planet-search survey towards lower detectable planet masses and longer periods by continuing the astrometric monitoring and increasing the number of measurements and their time-span. The advent of the \emph{Gaia} mission will not supersede our project. On the contrary, the \emph{Gaia} survey will be complementary in the astrometric search for exoplanets around ultracool dwarfs. | 14 | 3 | 1403.1275 |
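The reflex-motion amplitudes quoted in the introduction of this paper (about 1 mas for a 5 $M_J$ planet in a three-year orbit around a Sun-like star at 10 pc, and about 60 $\mu$as for a Neptune-mass planet around a 0.08 $M_\odot$ dwarf at the same distance) follow from Kepler's third law and the barycentric wobble of the host. The sketch below reproduces both numbers; it is an illustration added to this excerpt, not part of the survey's reduction software.
\begin{verbatim}
def astrometric_signature_mas(m_planet_msun, m_star_msun, period_yr, dist_pc):
    """Semi-amplitude of the host star's astrometric wobble, in mas.
    Kepler III in AU/yr/Msun units gives the relative semi-major axis;
    the star's barycentric orbit is scaled by m_p/(M_*+m_p); 1 AU at
    1 pc subtends 1 arcsec."""
    m_tot = m_star_msun + m_planet_msun
    a_rel_au = (period_yr**2 * m_tot) ** (1.0 / 3.0)
    a_star_au = a_rel_au * m_planet_msun / m_tot
    return 1e3 * a_star_au / dist_pc   # arcsec -> mas

M_JUP = 9.55e-4   # Jupiter mass in solar masses
M_NEP = 5.15e-5   # Neptune mass in solar masses
print(astrometric_signature_mas(5 * M_JUP, 1.00, 3.0, 10.0))  # ~1 mas
print(astrometric_signature_mas(M_NEP, 0.08, 3.0, 10.0))      # ~0.06 mas = 60 muas
\end{verbatim}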
1403 | 1403.1872_arXiv.txt | Fundamental stellar properties, such as mass, radius, and age, can be inferred using asteroseismology. Cool stars with convective envelopes have turbulent motions that can stochastically drive and damp pulsations. The properties of the oscillation frequency power spectrum can be tied to mass and radius through solar-scaled asteroseismic relations. Stellar properties derived using these scaling relations need verification over a range of metallicities. Because the age and mass of halo stars are well-constrained by astrophysical priors, they provide an independent, empirical check on asteroseismic mass estimates in the low-metallicity regime. We identify nine metal-poor red giants (including six stars that are kinematically associated with the halo) from a sample observed by both the \textit{Kepler} space telescope and the Sloan Digital Sky Survey-III APOGEE spectroscopic survey. We compare masses inferred using asteroseismology to those expected for halo and thick-disk stars. Although our sample is small, standard scaling relations, combined with asteroseismic parameters from the APOKASC Catalog, produce masses that are systematically higher ($\left<\Delta \mathrm{M}\right>=0.17\pm0.05$ M$_\odot$) than astrophysical expectations. The magnitude of the mass discrepancy is reduced by known theoretical corrections to the measured large frequency separation scaling relationship. Using alternative methods for measuring asteroseismic parameters induces systematic shifts at the 0.04 M$_\odot$ level. We also compare published asteroseismic analyses with scaling relationship masses to examine the impact of using the frequency of maximum power as a constraint. Upcoming APOKASC observations will provide a larger sample of $\sim100$ metal-poor stars, important for detailed asteroseismic characterization of Galactic stellar populations. | Accurate determinations of fundamental stellar properties are required to improve our understanding of stellar populations and Galactic formation. Inferring these properties is notoriously difficult, unless stars are members of clusters or eclipsing binary systems. However, we can probe stellar interiors through global oscillations. After several ground-based studies \citep{Bedding2011_Observational_Perspective} and serendipitous space-based observations (HST and WIRE; e.g.\ \citealt{Stello2009} and references therein), the space-based telescopes CoRoT \citep{Michel2008} and \textit{Kepler} \citep{Borucki2010} made asteroseismic characterization possible for thousands of stars. In an asteroseismic analysis, the average spacing between consecutive overtones of the same angular degree (average large frequency separation, $\Delta\nu$) and the peak in the Gaussian-like envelope of mode amplitudes (frequency of maximum oscillation power, $\nu_\mathrm{max}$) is derived from the frequency power spectrum. 
For oscillations driven by surface convection, empirical scaling relations \citep[hereafter SRs;][and references therein]{Kjeldsen1995} connect these asteroseismic observables to mass, radius, and effective temperature: \begin{eqnarray} \frac{\Delta \nu}{\Delta \nu_{\odot}} \simeq& \left(\frac{\mathrm{M}}{\mathrm{M}_{\odot}}\right)^{1/2} \left(\frac{\mathrm{R}}{\mathrm{R}_{\odot}}\right)^{-3/2}\label{eq:delta nu scaling relation} \\ \frac{\nu_\mathrm{max}}{\nu_{\mathrm{max},\odot}} \simeq& \left(\frac{\mathrm{M}}{\mathrm{M}_{\odot}}\right)\left(\frac{\mathrm{R}}{\mathrm{R}_\odot}\right)^{-2}\left(\frac{\mathrm{T}_\mathrm{eff}}{\mathrm{T}_{\mathrm{eff,\odot}}}\right)^{-1/2}, \label{eq:nu max scaling relation} \end{eqnarray} where $\Delta \nu_\odot=135.0 \pm 0.1$ $\mu$Hz, $\nu_{\mathrm{max},\odot}=3140 \pm 30$ $\mu$Hz, and T$_{\mathrm{eff},\odot}=5777$ K (Pinsonneault et al., \textit{in prep.}). Solving for mass and radius yields \begin{eqnarray} \frac{\mathrm{M}}{\mathrm{M}_{\odot}}\simeq& \left(\frac{\nu_\mathrm{max}}{\nu_{\mathrm{max},\odot}}\right)^3 \left(\frac{\Delta \nu}{\Delta \nu_{\odot}}\right)^{-4} \left(\frac{\mathrm{T}_\mathrm{eff}}{\mathrm{T}_{\mathrm{eff},\odot}}\right)^{3/2} \label{eq:mass scaling relation} \\ \frac{\mathrm{R}}{\mathrm{R}_{\odot}}\simeq& \left(\frac{\nu_\mathrm{max}}{\nu_{\mathrm{max},\odot}}\right) \left(\frac{\Delta \nu}{\Delta \nu_{\odot}}\right)^{-2} \left(\frac{\mathrm{T}_\mathrm{eff}}{\mathrm{T}_{\mathrm{eff},\odot}}\right)^{1/2}. \label{eq:radius scaling relation} \end{eqnarray} The SRs take no account of metallicity dependence and they were developed for stars like the Sun, so it is not obvious that they should work for red giant branch (RGB) stars, which have a different internal structure. There are observational and theoretical problems with defining and measuring $\nu_\mathrm{max}$ and $\Delta\nu$. Empirical tests of the radius and mass from SRs have been restricted to metallicities near solar ($-0.5\lesssim\mathrm{[Fe/H]}\lesssim +0.4$). Asteroseismic radii agree within $<$5\% when compared with interferometry \citep{Huber2012}, \textit{Hipparcos} parallaxes \citep{SilvaAguirre2012}, and RGB stars in the open cluster NGC6791 \citep{Miglio2012}. SR masses are less precise than SR radii and fundamental mass calibration is also intrinsically more difficult. \citet{Brogaard2012} anchored the mass scale of the super-solar cluster NGC6791 to measurements of eclipsing binaries at the main-sequence turn-off (MSTO) and inferred M$_\mathrm{RGB} = 1.15 \pm 0.02$ M$_\odot$, lower than masses derived from standard SRs (M$_\mathrm{RGB}=1.20 \pm 0.01$ M$_\odot$ and $1.23 \pm 0.02$ M$_\odot$ from \citealt{Basu2011} and \citealt{Miglio2012}, respectively). This is not conclusive evidence that the SR are in error because the mass estimates are sensitive to temperature scale and bolometric corrections. Even using a new less-temperature sensitive SR, \citet{Wu2014} found M$_\mathrm{RGB}=1.24\pm0.03$ M$_\odot$ in NGC6791. The \textit{Kepler} Asteroseismic Science Consortium (KASC) detected solar-like oscillations in 13,000+ red giants \citep[e.g.,][]{Stello2013}. As part of the Sloan Digital Sky Survey III \citep[SDSS-III;][]{Eisenstein2011}, the Apache Point Observatory Galaxy Evolution Experiment (APOGEE; Majewski et al., \textit{in prep.}) is obtaining follow-up spectra of these asteroseismic targets. 
APOGEE uses a high-resolution ($R\sim 22,500$), $H$-band, multi-object spectrograph whose seven square-degree field-of-view \citep{Gunn2006} is well-matched to the size of one of \kepler's 21 CCD modules. The APOKASC Catalog (Pinsonneault et al., \textit{in prep.}) reports asteroseismic and spectroscopic results for stars in the \kepler\ field observed in APOGEE's first year of operations. Pinsonneault et al.\ (\textit{in prep.}) describe the asteroseismic analysis, including the preparation of raw \kepler\ light curves \citep{Garcia2011}, measurement of $\Delta \nu$ and $\nu_\mathrm{max}$, and outlier rejection procedures. We used up to five methods to extract $\Delta\nu$ and $\nu_\mathrm{max}$ from the frequency-power spectrum (\citealt{Huber2009,Hekker2010}, OCT; \citealt{Kallinger2010,Mathur2010,Mosser2011}, COR). Because the OCT method had the highest overall completion fraction, the APOKASC Catalog reports $\Delta \nu$ and $\nu_\mathrm{max}$ from OCT with uncertainties that combine, in quadrature, the formal OCT uncertainty, the standard deviation of results from all methods, and an allowance for known issues with the SR \citep[e.g.][]{Miglio2012}. With the APOKASC sample, we perform the first test of asteroseismic SR mass estimates in the low-metallicity regime, where strong priors on stellar ages and masses exist. For this, we identify rare halo stars, explicitly targeting high--proper motion stars and low-metallicity candidates selected using Washington photometry (Harding et al., \textit{in prep.}) and low-resolution spectroscopy. | \label{sec:Conclusions} We identified six halo and three metal-poor thick-disk giants in the \textit{Kepler} field. Using independent constraints on the mass of halo and thick-disk stars, we performed the first test of asteroseismic SR masses in the metal-poor regime. We find that SR masses calculated with APOKASC Catalog parameters are $\left<\Delta \mathrm{M}\right>=0.17\pm0.05$ M$_\odot$ higher than expected for metal-poor stars. Published modifications of the $\Delta\nu$ SR reduce inferred masses by as much as 5\%. Additionally, masses derived for RGB stars from $\nu_\mathrm{max}$-independent methods are systematically lower than those from SR. This motivates future detailed frequency analyses of APOKASC metal-poor stars. Furthermore, theoretical models from \citet{White2011_Temperature_Correction} suggest a metallicity-dependence in Equation \ref{eq:delta nu scaling relation} for RGB stars over the range [Fe/H$]=-0.2$ to $+0.2$. These theoretical predictions should be extended to [Fe/H$]<-1$ and lower $T_\mathrm{eff}$. Similarly, the reliability of the $\nu_\mathrm{max}$ determination and the impact of the $\nu_\mathrm{max}$-scaling on mass estimates require investigation. We will use a larger sample of halo stars from additional APOGEE observations to better understand this mass offset. | 14 | 3 | 1403.1872
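A short numerical sketch of the scaling relations quoted in the introduction of this paper (Eqs. 3--4, with the solar reference values $\Delta\nu_\odot=135.0\,\mu$Hz, $\nu_{\mathrm{max},\odot}=3140\,\mu$Hz, $T_{\mathrm{eff},\odot}=5777$ K). The red-giant observables used below are round illustrative numbers, not APOKASC measurements; the point is the steep sensitivity of the inferred mass to the seismic inputs, which is relevant to the $\left<\Delta \mathrm{M}\right>\simeq0.17$ M$_\odot$ offset discussed above.
\begin{verbatim}
DNU_SUN, NUMAX_SUN, TEFF_SUN = 135.0, 3140.0, 5777.0  # muHz, muHz, K

def scaling_mass_radius(dnu, numax, teff):
    """Mass and radius in solar units from the standard scaling relations:
    M ~ numax^3 dnu^-4 Teff^1.5 and R ~ numax dnu^-2 Teff^0.5 (Eqs. 3-4)."""
    m = (numax / NUMAX_SUN)**3 * (dnu / DNU_SUN)**-4 * (teff / TEFF_SUN)**1.5
    r = (numax / NUMAX_SUN) * (dnu / DNU_SUN)**-2 * (teff / TEFF_SUN)**0.5
    return m, r

print(scaling_mass_radius(135.0, 3140.0, 5777.0))  # (1.0, 1.0) by construction
# Illustrative red-giant-like inputs (not from the APOKASC sample):
print(scaling_mass_radius(4.0, 35.0, 4800.0))      # roughly 1.4 Msun, 12 Rsun
# Since M scales as dnu^-4, a 2% shift in dnu alone moves the mass by ~8%,
# of the same order as the offset reported for the halo giants.
\end{verbatim}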
1403 | 1403.6725_arXiv.txt | A large fraction of the smallest transiting planet candidates discovered by the \kepler\ and \emph{CoRoT} space missions cannot be confirmed by a dynamical measurement of the mass using currently available observing facilities. To establish their planetary nature, the concept of planet validation has been advanced. This technique compares the probability of the planetary hypothesis against that of all reasonably conceivable alternative false-positive (FP) hypotheses. The candidate is considered as validated if the posterior probability of the planetary hypothesis is sufficiently larger than the sum of the probabilities of all FP scenarios. In this paper, we present PASTIS, the Planet Analysis and Small Transit Investigation Software, a tool designed to perform a rigorous model comparison of the hypotheses involved in the problem of planet validation, and to fully exploit the information available in the candidate light curves. PASTIS self-consistently models the transit light curves and follow-up observations. Its object-oriented structure offers a large flexibility for defining the scenarios to be compared. The performance is explored using artificial transit light curves of planets and FPs with a realistic error distribution obtained from a \kepler\ light curve. We find that data support for the correct hypothesis is strong only when the signal is high enough (transit signal-to-noise ratio above 50 for the planet case) and remains inconclusive otherwise. PLATO shall provide transits with high enough signal-to-noise ratio, but to establish the true nature of the vast majority of \kepler\ and \emph{CoRoT} transit candidates additional data or strong reliance on hypotheses priors is needed. | Transiting extrasolar planets have provided a wealth of information about planetary interiors and atmospheres, planetary formation and orbital evolution. The most successful method to find them has proven to be the wide-field surveys carried out from the ground \citep[e.g.][]{pollacco2006, bakos2004} and from space-based observatories like \emph{CoRoT} \citep{auvergne2009} and \kepler\ \citep{koch2010}. These surveys monitor thousands of stars in search for periodic small dips in the stellar fluxes that could be produced by the passage of a planet in front of the disk of its star. The detailed strategy varies from survey to survey, but in general, since a large number of stars has to be observed to overcome the low probability of observing well-aligned planetary systems, these surveys target stars that are typically fainter than 10th magnitude. The direct outcome of transiting planet surveys are thousands of transit light curves with depth, duration and shape compatible with a planetary transit \citep[e.g.][]{batalha2012}. However, only a fraction of these are produced by actual transiting planets. Indeed, a planetary transit light curve can be reproduced to a high level of similarity by a number of stellar systems involving binary or triple stellar systems. From isolated low-mass-ratio binary systems to complex hierarchical triple systems, these "false positives" are able to reproduce not only the transit light curve, but also, in some instances, even the radial velocity curve of a planetary-mass object \citep[e.g.][]{mandushev2005}. 
Radial-velocity observations have been traditionally used to establish the planetary nature of the transiting object by a direct measurement of its mass\footnote{As far as the mass of the host star can be estimated, the actual mass of a transiting object can be measured without the inclination degeneracy inherent to radial-velocity measurements, since the light curve provides a measurement of the orbital inclination.}. A series of diagnostics such as the study of the bisector velocity span \citep{queloz2001}, or the comparison of the radial velocity signatures obtained using different correlation masks \citep{bouchy2009, diaz2012} is used to guarantee that the observed radial-velocity signal is not produced by an intricate blended stellar system. In addition, these observations allow measuring the eccentricity of the planetary orbit, a key parameter for constraining formation and evolution models \citep[e.g.][]{ida2013}. Most of the transiting extrasolar planets known to date have been confirmed by means of radial velocity measurements. However, this technique has its limitations: the radial-velocity signatures of the smallest transiting companions are beyond the reach of the existing instrumentation. This is particularly true for candidates detected by \emph{CoRoT} or \kepler, whose photometric precision and practically uninterrupted observations have permitted the detection of objects of size comparable to the Earth and in longer periods than those accessible from the ground\footnote{The detection efficiency of ground-based surveys quickly falls for orbital periods longer than around 5 days \citep[e.g.][]{vonBraun2009, charbonneau2006}.}. Together with the faintness of the typical target of transiting surveys, these facts produce a delicate situation, in which transiting planets are detected, but cannot be confirmed by means of radial velocity measurements. Radial velocity measurements are nevertheless still useful in these cases to discard undiluted binary systems posing as giant planets \citep[e.g.][]{santerne2012}. Confirmation techniques other than radial velocity measurements can sometimes be used. In multiple transiting systems, the variation in the timing of transits due to the mutual gravitational influence of the planets can be used to measure their masses \citep[for some successful examples of the application of this technique, see][]{holman2010, lissauer2011, ford2012, steffen2012, fabricky2012}. Although greatly successful, only planets in multiple systems can be confirmed this way and only mutually-resonant orbits produce large enough timing variations \citep[e.g.][]{agol2005}. Additionally, the obtained constraints on the mass of the transiting objects are usually weak. A more generally-applicable technique is "planet validation". The basic idea behind this technique is that a planetary candidate can be established as a \emph{bona fide} planet if the Bayesian posterior probability (i.e. after taking into account the available data) of this hypothesis is significantly higher than that of all conceivable false positive scenarios \citep[for an exhaustive list of possible false positives see][]{santerne2013}. Planet validation is coming of age in the era of the \kepler\ space mission, which delivered thousands of small-size candidates whose confirmation by "classical" means is unfeasible. In this paper, we present the Planet Analysis and Small Transit Investigation Software (PASTIS), a software package to validate transiting planet candidates rigorously and efficiently.
This is the first paper of a series. We describe here the general framework of PASTIS, the modeling of planetary and false-positive scenarios, and test its performance using synthetic data. Upcoming articles will present in detail the modeling and contribution of the radial velocity data (Santerne et al., in preparation), and the study of real transiting planet candidates (Almenara et al., in preparation). The rest of the article is organized as follows. In Section~\ref{sect.planetvalidation} we describe in some detail the technique of planet validation, present previous approaches to this problem and the main characteristics of PASTIS. In Section~\ref{sect.bayesian} we introduce the Bayesian framework in which this work is inscribed and the method employed to estimate the Bayes factor. In Section~\ref{sect.mcmc} we present the details of the MCMC algorithm used to obtain samples from the posterior distribution. In Section~\ref{sect.priors} we briefly describe the computation of the hypotheses priors, and in Section~\ref{sect.models} we describe the models of the blended stellar systems and planetary objects. We apply our technique to synthetic signals to test its performance and limitations in Sect.~\ref{sect.application}, we discuss the results in Section~\ref{sect.discussion}, and we finally draw our conclusions and outline future work in Sect.~\ref{sect.conclusions}. | At present, planet validation is the only technique capable of establishing the planetary nature of the smallest transiting candidates detected by the \emph{CoRoT} and \kepler\ space missions. The planetary hypothesis is compared with all possible false positives, and the planet is considered validated if it is found to be much more probable than all the others. Unless one of the competing hypotheses can be rejected as a possible explanation for the data, which is rarely the case, a rigorous comparison of the different hypotheses has to be made in a Bayesian framework. We have presented a method to self-consistently model most of the data usually available on a given candidate --the discovery light curve, the radial velocity follow-up observations, light curves obtained in different photometric filters, absolute photometric observations of the target star-- under different competing hypotheses relevant to the problem of planet validation. Using these models, we compute the Bayesian odds ratio via the importance sampling technique. This procedure has been implemented in a \emph{python} package named PASTIS (Planet Analysis and Small Transit Investigation Software). The posteriors of the model parameters are sampled with an MCMC algorithm. MCMC algorithms are much more efficient in sampling the posterior distribution of multidimensional problems than other more straightforward methods, such as grid evaluation. Therefore, we can use models with an arbitrary number of parameters. This allows us to add complexity to our models (such as limb-darkening parameters, or planetary and stellar albedos) at virtually no cost. Furthermore, the samples obtained with the MCMC algorithm are used to estimate the Bayesian evidence via importance sampling. The MCMC algorithm implemented in PASTIS deals with parameter correlations by regularly performing a Principal Component Analysis, and takes into account the correlated nature of MCMC samples by thinning the chains using the measured correlation length. This method was shown to produce satisfactory results by comparing it to another existing MCMC code \citep[\emph{emcee};][]{emcee}.
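The evidence entering the odds ratio mentioned above is the integral $Z=\int p(D|\theta)\,p(\theta)\,d\theta$, and the generic importance-sampling estimator is $\hat Z=N^{-1}\sum_i p(D|\theta_i)\,p(\theta_i)/q(\theta_i)$ with $\theta_i$ drawn from a proposal density $q$. The toy one-dimensional example below, written for this excerpt and not taken from the PASTIS package, uses a Gaussian likelihood and prior so that the exact evidence is known and the estimator can be checked.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy problem: one datum D ~ N(theta, sigma^2), prior theta ~ N(0, tau^2);
# the exact evidence is then the Gaussian N(D; 0, sigma^2 + tau^2).
D, sigma, tau = 0.8, 0.5, 2.0
exact = norm.pdf(D, loc=0.0, scale=np.sqrt(sigma**2 + tau**2))

# Importance sampling with a broad Gaussian proposal q centred on the datum.
theta = rng.normal(D, 1.0, size=100_000)
weights = (norm.pdf(D, loc=theta, scale=sigma)       # likelihood p(D|theta)
           * norm.pdf(theta, loc=0.0, scale=tau)     # prior p(theta)
           / norm.pdf(theta, loc=D, scale=1.0))      # proposal q(theta)
print(weights.mean(), exact)   # the two estimates agree closely
# A Bayes factor is the ratio of two such evidences, one per hypothesis.
\end{verbatim}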
The entire PASTIS planet-validation procedure was tested using synthetic light curves of transiting planets and background eclipsing binaries (BEBs) whose eclipses are diluted by a brighter star. We separated the analysis into two parts, naturally present in Bayesian model comparison: a) the computation of the Bayes factor, which contains all the support the data give to one model over the other, and b) the computation of the odds ratio, which includes the prior odds, independent of the data. For part a), we have found that the light curves of BEBs posing as transiting planets strongly support the BEB model if the dilution level is such that the secondary eclipse has S/N above about 5. Light curves with secondary eclipse S/N of 2, on the other hand, give marginal support for the correct model (see Fig.~\ref{fig.resultsBEB}). The dependence on the mass ratio $q$ seen in Fig.~\ref{fig.resultsBEB} is dominated by the S/N of the primary eclipse for the cases with low dilution (secondary S/N of 5 and 7). The curves with secondary S/N = 2 show the opposite trend --i.e. data support increasing for larger $q$-- because the primary eclipse S/N varies proportionally less in this case. The light curves of planetary transits give a varying level of support for the PLANET model over the BEB model, depending on the radius of the planet, the impact parameter and the transit S/N (Fig.~\ref{fig.resultsPLANET}). The Bayes factor conclusively supports the PLANET model if the transit is (close to) central and the transit signal-to-noise ratio is higher than about 50 - 100. For a given S/N and impact parameter, the Bayes factor of smaller planets is larger because the short ingress/egress times of the transit are difficult to reproduce by the BEB model. A systematic effect in the light curve, located close to the transit ingress for $b = 0.5$, provokes a strong decrease in the support for the PLANET hypothesis for $b = 0.5$, and hinders the interpretation of the dependence of the Bayes factor on impact parameter. For $b = 0.75$, only Earth-sized or Neptune-sized planets with high-S/N transits are supported strongly by the data. For Earth-size planets, we further computed the Bayes factor between the PLANET model and models representing other false-positive hypotheses. We found that triple hierarchical systems are discarded by the data, as already noted, for example, by \citet{torres2011}. On the other hand, the scenario consisting of a background star hosting a transiting (giant) planet whose light is diluted by the target star received equal, or slightly stronger, support than the correct PLANET model. Similarly, the Bayes factor between the PLANET model and the model including a transiting planet orbiting the secondary component of a wide-orbit binary is too close to unity to allow preference for one model over the other. This is true even for transit light curves with S/N = 150.
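The split into parts a) and b) used above corresponds to the standard decomposition of Bayesian model comparison. Written out explicitly (a restatement of textbook material rather than an equation transcribed from the paper, with $H_1$ the PLANET hypothesis and $H_2$ a false-positive scenario such as the BEB):
\[
O_{12} = \frac{\Pr(H_1 \mid D, I)}{\Pr(H_2 \mid D, I)}
= \underbrace{\frac{\Pr(H_1 \mid I)}{\Pr(H_2 \mid I)}}_{\text{prior odds (part b)}}
\times
\underbrace{\frac{p(D \mid H_1, I)}{p(D \mid H_2, I)}}_{\text{Bayes factor (part a)}},
\qquad
p(D \mid H_k, I) = \int p(D \mid \theta_k, H_k, I)\, p(\theta_k \mid H_k, I)\, \mathrm{d}\theta_k .
\]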
On the contrary, the odds ratio in favour of the BEB model based on BEB light curves is consequently reduced. BEB scenarios with low and intermediate dilution level can still be correctly identified, but for scenarios with secondary S/N = 2 the odds ratio does not conclusively support one model over the other (Fig.~\ref{fig.oddsratioBEB}). Given this result, one might wonder if it is possible for an actual BEB to be identified as a transiting planet because the prior odds $\frac{\prob{PLANET}{I}}{\prob{BEB}{I}}$ is large enough. For example, if field around the target was less crowded that the one assumed here, or if the AO contrast curve was stronger, this might occur. However, one might argue that actual BEBs could not produce in this case the follow-up observations (in particular the AO contrast curve) used to compute the prior odds. In any case, our simulations coupled with the computation of the hypotheses ratios show the relative weight given by the Bayesian model comparison to data and priors. Furthermore, we have shown that radial velocity follow-up observations can lead to the correct identification of the cases where the light curve alone does not suffice to conclude, even if a measurement of the mass of the transiting object is unattainable. Indeed, many of these false positive scenarios exhibit radial-velocity and bisector signals that could be detected with a relatively small number of RV observations, compared to those that would be needed to detect the reflex motion of the star \citep[e.g.][]{queloz2009}. Because the RV and bisector signal produced by a false positive can occur at any moment of the orbit, depending on the relative velocities of the target star and the blended system, the scheduling constraints usually associated with the follow-up of transiting candidates become less stringent. The background transiting planet scenario exhibits RV signals less frequently than other scenarios. This kind of false positive will certainly prove among the hardest to discard. In any case, our results underline the importance of intensive follow-up observations of transiting candidates, in particular of ground-based velocimetry measurements. PASTIS has already been employed for analyzing datasets of transiting planets and brown dwarfs \citep[e.g.][]{hebrard2013, diaz2013}, and to validate transiting candidates for which no reflex motion of the parent star is detected (such as CoRoT-22 b, Moutou et al. submitted). It is currently being used to study real unresolved candidates from the \emph{CoRoT} space mission. A thorough comparison with BLENDER and the \citetalias{morton2012} procedure, based on the analysis of already validated candidates, will be presented in a forthcoming paper. We have already identified a few features of PASTIS that will be improved in the future. The treatment of systematic errors in the data might be an important issue to deal with in order to improve the method presented here. For the moment, systematic effects are treated as a source of additional gaussian noise whose amplitude is a model parameter. However, the example of the systematic feature affecting planetary light curves with $b = 0.5$ shows that this simple model is not sufficient. It also highlights the interest of performing these simulations using real data. 
A more realistic modeling of the error distribution, using techniques such as the autoregressive-moving-average (ARMA) model \citep[e.g.][]{tuomi2013}, would permit the detection of more subtle effects in the available data and lead to a more robust determination of the Bayesian odds ratio. Another example of the need for a more sophisticated noise model is the case of Kepler-68 c \citep{gilliland2013}, whose validation is conditional on the nature of a small eclipse exactly in opposite phase to the transits. The authors claim that similar features are present in the light curve, which would render that particular ``eclipse'' non-significant. An adequate noise model would make it possible to quantify this statement. Some observations that are usually available for transiting candidates, such as high-angular resolution imaging, or centroid motion --as provided, for example, by the \kepler\ pipeline-- are currently not modeled by PASTIS as is done for the light curves, radial velocities, etc. For the time being, PASTIS includes the information provided by these datasets in the prior odds computation, but self-consistently modeling these data is envisaged. This should increase the robustness of our determination of the evidences used for model comparison. Future space missions such as PLATO will provide transits of small-size planet candidates at very high signal-to-noise ratio, due to the brightness of their target stars. Fully exploiting these data for statistical validation will require detailed physical modeling of the light curve. We have shown that PASTIS should be able to validate these candidates. Subsequent ground-based radial-velocity observations, focused on already validated candidates, would provide a measurement of their mass. Combined with the precise measurement of the radius from the space-based discovery light curve, the bulk density of Earth-size objects would be known with unprecedented precision. | 14 | 3 | 1403.6725 |
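As a concrete illustration of the ARMA-type noise modeling mentioned at the start of this paragraph, the sketch below (hypothetical, not PASTIS code; all parameter values are placeholders) evaluates the exact Gaussian log-likelihood of photometric residuals under an AR(1) model, the simplest member of the ARMA family; setting the autoregressive coefficient to zero recovers the white-noise case.
\begin{verbatim}
import numpy as np

def ar1_loglike(residuals, sigma, phi):
    """Exact Gaussian log-likelihood of an AR(1) process with |phi| < 1."""
    r = np.asarray(residuals, dtype=float)
    var0 = sigma**2 / (1.0 - phi**2)              # stationary variance of first point
    ll = -0.5 * (np.log(2.0 * np.pi * var0) + r[0]**2 / var0)
    innov = r[1:] - phi * r[:-1]                  # one-step prediction errors
    ll += -0.5 * np.sum(np.log(2.0 * np.pi * sigma**2) + innov**2 / sigma**2)
    return ll

rng = np.random.default_rng(1)
resid = rng.normal(0.0, 1e-4, size=500)           # fake flux residuals (placeholder)
print(ar1_loglike(resid, sigma=1e-4, phi=0.0))    # white-noise limit
print(ar1_loglike(resid, sigma=1e-4, phi=0.3))    # correlated-noise alternative
\end{verbatim}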
1403 | 1403.2276_arXiv.txt | \par We constructed a 6-degrees of freedom rotational model of Titan as a 3-layer body consisting of a rigid core, a fluid global ocean, and a floating ice shell. The ice shell exhibits partially-compensated lateral thickness variations in order to simultaneously match the observed degree-two gravity and shape coefficients. The rotational dynamics are affected by the gravitational torque of Saturn, the gravitational coupling between the inner core and the shell, and the pressure coupling at the fluid-solid boundaries. Between $10$ and $13\%$ of our model Titans have an obliquity (due to a resonance with the $29.5$-year periodic annual forcing) that is consistent with the observed value. \par The shells of the successful models have a mean thickness of $130$ to $140$ km, and an ocean of $\approx$250 km thickness. Our simulations of the obliquity evolution show that the {\em Cassini} obliquity measurement is an instantaneous one, and does not represent a mean value. Future measurements of the time derivative of the obliquity would help to refine the interior models. We expect in particular a variation of roughly 7 arcmin over the duration of the {\em Cassini} mission. | } \par The {\em Cassini} spacecraft, in orbit around Saturn since July 2004, has allowed huge progress on modelling of the internal structure and the rotational dynamics of Titan. An internal ocean is consistent with the measurements of the tidal Love number $k_2\approx 0.6$ \citep{ijdslaarrt2012} and was theoretically predicted by \citet{ls1987}, this prediction being supported by several following studies, e.g. \citep{gs1996,gsd2000,tglms2005,fgtv2007}. A comparison between the shape of Titan \citep{zshlkl2009} (Tab.\ref{tab:titanshape}) and its gravity field \citep{irjrstaa2010} (Tab.\ref{tab:titangravity}) suggests either variations in the thickness of a floating ice shell \citep{nb2010,hnzi2013} or lateral variations in the shell's density \citep{cs2012}. \begin{table}[ht] \centering \caption[The shape of Titan.]{The shape of Titan, from \citep{zshlkl2009}.\label{tab:titanshape}} \begin{tabular}{lr} \hline Parameter & Value \\ \hline Subplanetary equatorial radius $a$ & $2575.15\pm0.02$ km \\ Along orbit equatorial radius $b$ & $2574.78\pm0.06$ km \\ Polar radius $c$ & $2574.47\pm0.06$ km \\ Mean radius $R$ & $2574.73\pm0.09$ km \\ \hline \end{tabular} \end{table} \placetable{tab:titanshape} \begin{table}[ht] \centering \caption[The 2 solutions for the gravity field of Titan.]{The 2 solutions for the gravity field of Titan \citep{irjrstaa2010}. SOL1 is a single multiarc solution obtained from 4 flybys of {\em Cassini} dedicated to the determination of the gravity field, while SOL2 is a more general approach, in which all available radiometric tracking and optical navigation imaging data from the Pioneer and Voyager Saturn encounters and astronomical observations of Saturn and its satellites are considered. The uncertainties correspond to $1\sigma$. The global solution SOL2 could be consistent with the hydrostatic equilibrium ($J_2/C_{22}\approx10/3$), but the shape is not \citep{zshlkl2009}. 
In our study of a triaxial Titan, the only coefficients we use are $\mathcal{G}M_6$, $J_2$ and $C_{22}$.\label{tab:titangravity}} \begin{tabular}{lrr} \hline & SOL1 & SOL2 \\ \hline $\mathcal{G}M_6$ & -- & $8978.1394$ $km^3.s^{-2}$ \\ $J_2$ & $(3.1808\pm0.0404)\times10^{-5}$ & $(3.3462\pm0.0632)\times10^{-5}$ \\ $C_{21}$ & $(3.38\pm3.50)\times10^{-7}$ & $(4.8\pm11.5)\times10^{-8}$ \\ $S_{21}$ & $(-3.52\pm4.38)\times10^{-7}$ & $(6.20\pm4.96)\times10^{-7}$ \\ $C_{22}$ & $(9.983\pm0.039)\times10^{-6}$ & $(1.0022\pm0.0071)\times10^{-5}$ \\ $S_{22}$ & $(2.17\pm0.41)\times10^{-7}$ & $(2.56\pm0.72)\times10^{-7}$ \\ $J_3$ & $(-1.879\pm1.019)\times10^{-6}$ & $(-7.4\pm105.1)\times10^{-8}$ \\ $C_{31}$ & $(1.058\pm0.260)\times10^{-6}$ & $(1.805\pm0.297)\times10^{-6}$ \\ $S_{31}$ & $(5.09\pm2.02)\times10^{-7}$ & $(2.83\pm3.54)\times10^{-7}$ \\ $C_{32}$ & $(3.64\pm1.13)\times10^{-7}$ & $(1.36\pm1.58)\times10^{-7}$ \\ $S_{32}$ & $(3.47\pm0.80)\times10^{-7}$ & $(1.59\pm1.05)\times10^{-7}$ \\ $C_{33}$ & $(-1.99\pm0.09)\times10^{-7}$ & $(-1.85\pm0.12)\times10^{-7}$ \\ $S_{33}$ & $(-1.71\pm0.15)\times10^{-7}$ & $(-1.49\pm0.16)\times10^{-7}$ \\ $J_2/C_{22}$ & $3.186\pm0.042$ & $3.339\pm0.067$ \\ \hline \end{tabular} \end{table} \placetable{tab:titangravity} \par {\em Cassini} observed Titan's rotation as well. The most recent measurements suggest the expected synchronous rotation \citep{mi2012} and a pretty high obliquity of $\approx0.3^{\circ}$ at the mean date March $11^{th}$ 2007, already detected by \citep{sklhloacgiphjw2008}. If we assume that the rotation of Titan has reached its most probable dynamical equilibrium state, i.e. Cassini State 1, then this obliquity is not consistent with a rigid Titan \citep{nlv2008,bn2008,bn2011}. However, the presence of an internal ocean can lead to a resonant process raising the obliquity of Titan \citep{bvyk2011}, making the high obliquity a possible signature of a global subsurface ocean. \par In this paper, we simulate the rotation of Titan, considering both the internal structure and all the dynamical degrees of freedom. Our Titan is a 3-layer body composed of a rigid inner core, a global ocean and rigid shell with a variable thickness. For each of the 2 rigid layers, we simulate at the same time the longitudinal motion, the orientation of the angular momentum, and of the figure polar axis. The dynamics of these 2 layers will be affected by the gravitational pull of Saturn, the pressure coupling at the interface with the ocean and the gravitational coupling between them. The pressure coupling is modelled after \citet{bvyk2011} and the gravitational coupling after \citet{sx1997}. In calculating the torques, we take into account variations in the thickness of the ice shell \citep{nb2010} consistent with the gravity and topography constraints. We then identify interior structures for which the predicted rotation state is consistent with the observations, before simulating the expected behavior of the obliquity of Titan. Our model confirms the conclusion of \citet{bvyk2011} that the unexpectedly high obliquity of Titan could be due to a resonance with the periodic annual forcing. We go further, however, in showing that the obliquity is predicted to be time-variable (Fig~\ref{fig:leftrightcassini}): a prediction which analysis of {\em Cassini} radar observations \citep{bsk2013} should be able to test. | \par The goal of this study was to investigate the constraint that the rotation of Titan could provide on its interior. 
Supporting the suggestion originally made by \citet{bvyk2011}, we find that between 10 and 13$\%$ of our realistic Titans fall into a resonance with the annual forcing, raising the obliquity of the shell. These Titans have a shell of 130 to 140 km mean thickness overlying a $\approx$250 km thick ocean, and include shell thickness variations (bottom loading) that are from $80\%$ to $92\%$ compensated, consistent with the gravity and topography constraints. A better determination of the gravity field would help to refine these numbers. \par The quasi-resonant behavior results in two solutions that explain the observed obliquity of Titan, which could be discriminated by measuring the time derivative of the obliquity. A detection by {\em Cassini} of a time-variable obliquity would thus provide strong evidence for the analysis presented here. | 14 | 3 | 1403.2276 |
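A generic driven-oscillator analogy (an illustrative sketch with placeholder numbers, not the Cassini-state equations solved in the paper) conveys why a free-precession frequency close to the 29.5-year annual forcing amplifies the shell obliquity: the steady-state response of an undamped oscillator grows as the driving frequency approaches the free frequency.
\begin{verbatim}
import math

def driven_amplitude(forcing_amp, w_free, w_drive):
    """Steady-state amplitude of x'' + w_free^2 x = forcing_amp cos(w_drive t)."""
    return forcing_amp / abs(w_free**2 - w_drive**2)

w_annual = 2.0 * math.pi / 29.5     # rad/yr, the 29.5-yr annual forcing frequency
forcing = 1.0e-4                    # placeholder forcing strength (arbitrary units)

for ratio in (0.5, 0.9, 0.99):      # free frequency as a fraction of the forcing
    amp = driven_amplitude(forcing, ratio * w_annual, w_annual)
    print(f"free/annual = {ratio:.2f}: relative response = {amp:.3g}")
\end{verbatim}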
1403 | 1403.0155_arXiv.txt | The Herschel Space Observatory's recent detections of water vapor in the cold, dense cloud L1544 allow a direct comparison between observations and chemical models for oxygen species in conditions just before star formation. We explain a chemical model for gas phase water, simplified for the limited number of reactions or processes that are active in extreme cold ($<$ 15 K). In this model, water is removed from the gas phase by freezing onto grains and by photodissociation. Water is formed as ice on the surface of dust grains from O and OH and released into the gas phase by photodesorption. The reactions are fast enough with respect to the slow dynamical evolution of L1544 that the gas phase water is in equilibrium for the local conditions thoughout the cloud. We explain the paradoxical radiative transfer of the H$_2$O ($1_{10}-1_{01}$) line. Despite discouragingly high optical depth caused by the large Einstein A coefficient, the subcritical excitation in the cold, rarefied H$_2$ causes the line brightness to scale linearly with column density. Thus the water line can provide information on the chemical and dynamical processes in the darkest region in the center of a cold, dense cloud. The inverse P-Cygni profile of the observed water line generally indicates a contracting cloud. This profile is reproduced with a dynamical model of slow contraction from unstable quasi-static hydrodynamic equilibrium (an unstable Bonnor-Ebert sphere). | Observations of water vapor in the interstellar medium (ISM) by the Infrared Space Observatory \citep{vD1999} and the Submillimeter Wave Astronomy Satellite (SWAS) \citep{Bergin2000} show general agreement with chemical models for warm ($> 300$ K) conditions in the ISM \citep{Melnick2000,Neufeld2000}. However, in cold conditions, most of the water is frozen onto dust grains \citep{Viti2001,vanDishoeck2013}, and the production of water occurs mainly on the grain surfaces. In order to test chemical models that include grain-surface chemistry we used the Heterodyne Instrument for the Far-Infrared (HIFI) \citep{deGraauw2010} on the Herschel Space Observatory to observe the H$_2$O ($1_{10}-1_{01}$) line in the cold, dense cloud L1544 \citep{Caselli2010, Caselli2012}. The first of these two Herschel observations was made with the wide-band spectrometer (WBS) and detected water vapor in absorption against the weak continuum radiation of dust in the cloud. Follow-up observations with higher spectral resolution and sensitivity, made with the high resolution spectrometer (HRS), confirmed the absorption and detected a blue-shifted emission line that was predicted by theoretical modeling \citep{Caselli2010}, but too narrow to be seen by the WBS in the first observation. With the better constraints provided by the second observation, we improved the chemical and radiative transfer modeling in our previous papers. We modified the radiative transfer code MOLLIE to calculate the line emission in the approximation that the molecule is sub-critically excited. This assumes that the collision rate is so slow that every excitation leads immediately to a radiative de-excitation and the production of one photon which escapes the cloud, possibly after many absorptions and re-emissions, before another excitation. The emission behaves as if the line were optically thin with the line brightness proportional to the column density. 
This approximation can be correct even at very high optical depth as long as the excitation rate is slow enough, C $<$ A/$\tau$, where C is the collision rate, A is the spontaneous emission rate and $\tau$ the optical depth \citep{Linke1977}. \citet{Caselli2012} presented the observations and the results of this modeling. In this paper, we discuss in detail the theory behind the modeling. A comparison of the spectral line observation with theory requires three models. First, we require a hydrodynamical model to describe the density, velocity, and temperature across the cloud. We use a model of slow contraction in quasi-static unstable equilibrium that we developed in our previous research \citep{KF05,KC10}. Second, we require a chemical model to predict the molecular abundance across the varying conditions in the cloud. Following the philosophy for simplified chemical networks in \citet{KC08} or \citet{BethellBergin2009}, we extract from a general chemical model for photo-dissociation regions \citep{Hollenbach2009} a subset of reactions expected to be active in cold conditions, principally grain-surface reactions as well as freeze-out and photodissociation. Third, we require a radiative transfer model to generate a simulated molecular line. We modify our non-LTE radiative transfer code MOLLIE to use the escape probability approximation. This allows better control of the solution in extreme optical depth. The three models are described in more detail in three sections below. The relevant equations are included in the appendices. | A simplified chemical model for cold oxygen chemistry primarily by grain surface reactions is verified by comparing the simulated spectrum of the H$_2$O ($1_{10}-1_{01}$) line against an observation of water vapor in L1544 made with HIFI spectrometer on the Herschel Space Observatory. This model reproduces the observed spectrum of H$_2$O, and also approximates the abundances calculated by a more complete model that includes gas-phase neutral-neutral and ion-neutral reactions. The gas phase water is released from ice grains by ultraviolet (UV) photodesorption. The UV radiation derives from two sources: external starlight and collisions of cosmic rays with molecular hydrogen. The latter may be important deep inside the cloud where the visual extinction is high enough ($>50$ mag) to block out the external UV radiation. Water is removed from the gas phase by photodissociation and freeze-out onto grains. The former is important at the boundary where the UV from external starlight is intense enough to create a photodissociation region. Here, atomic oxygen replaces water as the most abundant oxygen species. In the center where the external UV radiation is completely attenuated, freeze-out is the significant loss mechanism. Time dependent chemistry is not required to match the observations because the time scale for the chemical processes is short compared to the dynamical time scale. The molecular cloud L1544 is bounded by a photodissociation region. The water emission derives only from the central few thousand AU where the gas density approaches the critical density for collisional de-excitation of the water line. In the model of hydrostatic equilibrium, the gas density in the center is rising with decreasing radius more steeply than the abundance of water is decreasing by freeze-out. Thus the water spectrum provides unique information on the dynamics in the very center. 
The large Einstein A coefficient ($3\times 10^{-3}$ s$^{-1}$) of the 557 GHz H$_2$O ($1_{10}-1_{01}$) line results in extremely high optical depth, several hundred to a thousand. However, the density ($< 10^7$ cm$^{-3}$) and temperature ($<15$ K) are low enough that the line is subcritically excited. The result is that the line brightness under these conditions is directly proportional to the column density. | 14 | 3 | 1403.0155 |
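The effectively-thin condition quoted above, C < A/tau, can be checked with order-of-magnitude numbers. In the sketch below the Einstein coefficient and optical depth are taken from the text, while the collisional rate coefficient and the sample densities are assumed round values (the temperature-dependent coefficients and the full non-LTE transfer are what the modified MOLLIE calculation handles).
\begin{verbatim}
A = 3.0e-3      # s^-1, Einstein A of the 557 GHz line (from the text)
tau = 500.0     # optical depth of order several hundred (from the text)
q = 1.0e-11     # cm^3 s^-1, assumed low-temperature collisional rate coefficient

print(f"A/tau = {A/tau:.1e} s^-1")
for n_h2 in (1.0e4, 1.0e5):                  # assumed sample H2 densities, cm^-3
    C = q * n_h2                             # collision rate per molecule, s^-1
    print(f"n(H2) = {n_h2:.0e} cm^-3: C = {C:.1e} s^-1 -> "
          f"effectively thin: {C < A / tau}")
\end{verbatim}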
1403 | 1403.2961_arXiv.txt | We solve a general equation describing the lowest order corrections arising from quantum gravitational effects to the spectrum of cosmological fluctuations. The spectra of scalar and tensor perturbations are calculated to first order in the slow roll approximation and the results are compared with the most recent observations. The slow roll approximation gives qualitatively new quantum gravitational effects with respect to the pure de Sitter case. | The latest Planck mission results \cite{cmb} provide the most accurate constraints to date on inflationary dynamics \cite{inflation}. So far the slow roll (SR) mechanism has been confirmed to be a paradigm capable of reproducing the observed spectrum of cosmological fluctuations and the correct tensor to scalar ratio \cite{Stewart:1993bc}. In spite of the increased precision of the observations, no evident signals of quantum gravity can be extracted from the Planck data. The inflationary period is the cosmological era describing the transition from the quantum gravitational scale down to the hot big bang scale and should, somewhere, exhibit related peculiar features. During such a transition the cosmological perturbations with the longest wavelength are expected to be affected more by quantum gravitational effects since they exit the horizon at the early stages of inflation and are exposed to high energy and curvature effects for a longer period of time. Quite interestingly, a loss of power with respect to the expected flatness for the spectrum of cosmological perturbations can be extrapolated from the data at large scales. Unfortunately, such a feature (evident already in the WMAP results) exhibits large errors due to cosmic variance and, so far, its relevance seems to have been overlooked. \\ In this paper we estimate the effects of quantum gravity using the Wheeler-DeWitt equation \cite{DeWitt}. We calculate, for a realistic inflationary model, the spectrum of scalar and tensor perturbations to first order in the SR approximation. Our approach is formally analogous to that introduced in a previous paper \cite{K} where the quantum effects on scalar perturbations evolving on a de Sitter background were estimated (similar results for the de Sitter background were also obtained in \cite{Kiefer} using a different approach). Finally the results are compared with observations. Let us emphasize that we consider a canonical quantization of Einstein gravity leading to the WDW equation; this is what we mean by quantum gravity. This is quite distinct from the introduction of so-called trans-Planckian effects through ad hoc modifications of the dispersion relation \cite{Martin} and/or the initial conditions \cite{inicond}.\\ The article is organized as follows: in section 2 we review the main equations describing the dynamics of cosmological perturbations and introduce the master equation governing the dynamics of such perturbations in the presence of quantum gravitational effects. In section 3 we introduce the slow-roll (SR) formalism. In section 4 we evaluate the quantum gravitational corrections to the master equation for scalar perturbations and obtain a general approximate solution to this equation. Subsequently some particular solutions associated with different initial conditions (vacuum choices) are discussed. In section 5 the case of tensor perturbations is addressed. In section 6 our general results are compared with observations and the effects of the quantum gravitational corrections are estimated.
Finally, in section 7 we present our conclusions. | In this paper we solved the general master equation ({\ref{qfx}}) describing the lowest order corrections coming from quantum gravitational contributions to the spectrum of cosmological fluctuations, assuming an inflationary evolution generically described by SR dynamics. This letter is a generalization of the previous article \cite{K}, where such an equation was obtained through a Born-Oppenheimer decomposition of the inflaton-gravity system and solved exactly for the two point function of the scalar fluctuation for the case of a de Sitter evolution. The more realistic case of an inflationary SR dynamics has been addressed here. The quantum gravitational corrections for the SR case have peculiar features and are very different from those of the de Sitter case. In particular, for the case of the scalar fluctuations, their form is not simply a deformation of the de Sitter result proportional to the SR parameters. New contributions arise due to SR and their effect dominates over the de Sitter-like contributions for very small and very large wavelengths. The small wavelength region is that which affects the initial state (vacuum) of each mode of the perturbations. The long wavelength region is that associated with the observations of the spectrum of perturbations. The new contributions are proportional to $\ep{SR}-\eta_{SR}$ and are zero for the de Sitter and power-law cases. They can lead to a power-loss term for low $k$ in the spectrum of the scalar curvature perturbations at the end of inflation, provided the difference $\ep{SR}-\eta_{SR}>0$. Furthermore, the evolution of the primordial gravitational waves has also been addressed. The quantum gravitational corrections also affect the dynamics of tensor perturbations and determine a deviation from the standard results in the low multipole region. Finally, our analytical results are compared with observations. The quantum gravitational corrections give rise to a power loss in the scalar spectrum compatible with the Planck constraint on $\ep{SR}-\eta_{SR}$. Another possible source of the power loss is related to the perturbed vacuum choice. An accurate analysis of the possible outcome of some non-standard choices of the vacuum is beyond the scope of this paper and is not addressed here. The amplitude of the quantum gravitational effects depends on the product $A\cdot k$. Unfortunately, within the present approach, an estimate of this amplitude during inflation leads to a tiny result. Such an estimate has been performed in a conservative manner \cite{Calcagni} by introducing a length scale $\bar k^{-1}$ associated with the size of the observable universe today. Should such corrections freeze at the end of inflation, they are probably invisible to present day experiments. Different choices of $\bar k$ should, however, lead to very different estimates. Of course the choice of a smaller length scale ($\bar k^{-1}$) will lead to stronger quantum gravitational effects.\\ \\ | 14 | 3 | 1403.2961 |
1403 | 1403.7405_arXiv.txt | We present {\it UBVRI} photometry of the supernova 2014J in M82, obtained in the period from January 24 until March 3, 2014, as well as two spectra, taken on February 4 and March 5. We derive dates and magnitudes of maximum light in the {\it UBVRI} bands, the light curve parameter $\Delta m_{15}$ and expansion velocities of the prominent absorption lines. We discuss the colour evolution, extinction and maximum luminosity of SN 2014J. | Supernova (SN) 2014J, located at $\alpha=9^{\rm h}55^{\rm m}42^{\rm s}.14, \delta=+69^{\circ}40'26''.0$ (2000.0) in the galaxy M82, was discovered by Steve J. Fossey on UT 2014 January 21.8. A description of the discovery and early observations was presented by Goobar {\it et al.} (2014). The prediscovery observations and early spectra were also reported by Zheng {\it et al.} (2014). These sets of data show that SN 2014J is a spectroscopically normal Type Ia SN, although it exhibited high-velocity features in the spectrum and was heavily reddened by the dust in the host galaxy. At a distance of 3.5 Mpc (Karachentsev and Kashibadze, 2006), SN 2014J is the nearest SN Ia since SN 1972E, and it offers a unique opportunity to study a thermonuclear SN over a wide range of the electromagnetic spectrum. | We present the light and colour curves of SN 2014J starting 9 days before the $B$-band maximum and continuing until day 29 past maximum. The spectra were obtained at phases 2 days and 30 days after the $B$-band maximum. The light and colour curves for SN 2014J show that it belongs to the ``normal'' subset of type Ia SNe, but is heavily reddened by the dust in the host galaxy. We estimate the decline rate parameter $\Delta m_{15}(B)=1.01$, which is close to the mean value for SNe Ia. The comparison of the colour excess with the luminosity expected from the Pskovskiy-Phillips relation results in a low value of the ratio of selective to total extinction, similar to the values found for other highly reddened type Ia SNe. The spectral evolution is typical for this class of SNe, with expansion velocities higher than the mean values. We continue the observations of SN 2014J; the results and a more detailed analysis of the data will be presented in a subsequent paper. | 14 | 3 | 1403.7405 |
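For orientation, the conversion from apparent to absolute magnitude at the 3.5 Mpc distance quoted above is a one-line calculation; in the sketch below the peak apparent magnitude and the total extinction are placeholders, not the measured values of this paper.
\begin{verbatim}
import math

d_pc = 3.5e6                        # distance to M82 in parsecs (from the text)
mu = 5.0 * math.log10(d_pc / 10.0)  # distance modulus

m_B_peak = 11.7                     # assumed apparent B-band peak magnitude (placeholder)
A_B = 2.0                           # assumed total B-band extinction (placeholder)

M_B = m_B_peak - mu - A_B
print(f"mu = {mu:.2f} mag, M_B = {M_B:.2f} mag (for the assumed m_B and A_B)")
\end{verbatim}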
1403 | 1403.1709_arXiv.txt | Cosmic ray radiation is mostly composed, at sea level, by high energy muons, which are highly penetrating particles capable of crossing kilometers of rock. Cosmic ray radiation constituted the first source of projectiles used to investigate the intimate structure of matter and is currently and largely used for particle detector test and calibration. The ubiquitous and steady presence at the Earth's surface and the high penetration capability has motivated the use of cosmic ray radiation also in fields beyond particle physics, from geological and archaeological studies to industrial applications and civil security. In the present paper, cosmic ray muon detection techniques are assessed for stability monitoring applications in the field of civil engineering, in particular for static monitoring of historical buildings, where conservation constraints are more severe and the time evolution of the deformation phenomena under study may be of the order of months or years. As a significant case study, the monitoring of the wooden vaulted roof of the ``Palazzo della Loggia" in the town of Brescia, in Italy, has been considered. The feasibility as well as the performances and limitations of a monitoring system based on cosmic ray tracking, in the considered case, have been studied by Monte Carlo simulation and discussed in comparison with more traditional monitoring systems. Requirements for muon detectors suitable for this particular application, as well as the results of some preliminary tests on a muon detector prototype based on scintillating fibers and silicon photomultipliers SiPM are presented. | When primary cosmic rays, mainly composed of high energy protons coming from the sun and from the outer Galaxy, strike the Earth's atmosphere, a cascade of many types of subatomic particles is created~\cite{Beringer2012}. Hadronic particles produced in the shower either interact or decay, and electrons and photons lose energy rapidly through pair production and Bremsstrahlung, so that, by the time the charged component of this particle shower reaches the Earth surface, it comprises primarily positive and negative muons. The flux reaching the surface of the Earth is about 10,000~$\mu$/(min m$^2$) and the mean muon energy is 3-4~GeV. Since muons are heavy particles and do not undergo nuclear interactions, they are highly penetrating in matter and their average energy is sufficient to penetrate tens of meters of rock. Cosmic radiation has been known since the first decades of the 20$^{th}$ century and, until the construction of the first particle accelerators, it constituted the best source of projectiles to investigate the fundamental structure of matter and the fundamental interactions between elementary particles. Nowadays, cosmic rays are largely exploited in nuclear and elementary particle physics for detector testing and calibration and for the alignment of detectors in the very complex apparatuses used in this field~\cite{ALICE2010}. Making practical use of this natural flux of highly penetrating particles, continuous, free and ubiquitously present on the entire Earth surface, has always been an attractive idea. As the spectrum of cosmic ray muons is continuous and the average range is long, differential attenuation can be used to produce radiographies of large and dense objects. E.P.~George~\cite{George1955} measured, in 1955, the depth of rock above an underground tunnel by making use of the attenuation of cosmic ray muons. 
With the same technique, L.W.~Alvarez performed the radiography of the Second Pyramid of Giza~\cite{Alvarez1970} seeking for the possible presence of hidden chambers. Over the following years, muon radiography has been used to perform inspection of large inaccessible systems or even of geographic structures. Several groups are actively working in the imaging of the interior of volcanoes and in the prediction of volcanic eruptions~\cite{Nagamine1995,Tanaka2003,Tanaka2005,Tanaka2007a,Tanaka2007b,Shinohara2012}, \cite{Ambrosi2011,Anastasio2013a,Anastasio2013b}, \cite{Gibert2010,Marteau2012}. Proposals have been presented to obtain radiographic images of the interior of large vessels with dimensions over many tens of meters, where storage or long-term structural integrity is an important issue~\cite{Jenneson2004,Jenneson2007}. Potential uses of cosmic ray muon radiography in industrial applications have been explored~\cite{Gilboy2007a,Gilboy2007b, Tanaka2008, Grabski2008}, including the inspection of nuclear waste containers~\cite{Stanley2008} and of the inner structure of a blast furnace~\cite{Nagamine2005,Shinotake2009,Sauerwald2012}. In 2003 a new method has been proposed~\cite{Borozdin2003,Priedhorsky2003,Schultz2004,Schultz2007}, the muon tomography, in which the angular scattering that every muon undergoes when crossing matter is exploited. The scattering angles have a Gaussian distribution, with variance proportional to the traversed thickness and to the average ``scattering density" of the material crossed by the muons. The scattering density is roughly proportional to the product of the material mass density times its atomic number. This technique needs a more complex apparatus. While the absorption technique requires the measurement of the muon position and direction only downstream of the object to be inspected, the technique based on muon scattering requires the measurement of muon position and direction both upstream and downstream, to measure the single muon angular deviation. This technique has been proposed for the detection of radioactive ``orphan" sources hidden in scrap metal containers~\cite{Pesente2009,Musteel2010,Furlan2013,Benettoni2013}, to inspect commercial cargoes in ports seeking for hidden ``special nuclear materials"~\cite{Riggi2010,Riggi2013a,Riggi2013b,Armitage2013}, to inspect legacy nuclear waste containers~\cite{Mahon2013,Clarkson2013} and to obtain tomographic images of the interior of blast furnaces~\cite{Mublast2013}. In a recent study, the method has been proposed to perform a diagnosis of the damaged cores of the Fukushima reactors~\cite{Borozdin2012}. In 2007 cosmic ray muon detection techniques were assessed~\cite{Bodini2007} for measurement application in civil and industrial engineering for the monitoring of alignment and stability of large civil and mechanical structures. Situations where environmental conditions are weakly controlled and/or where the pieces whose relative positions are to be monitored are hardly accessible were specifically addressed. A Monte Carlo analysis was developed concerning the case of the alignment of an industrial press. Expected measurement uncertainty and its dependence on the geometry of the set-up, on the presence of materials interposed between the muon detectors and on elapsed time available for the measurement were obtained. 
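The Gaussian scattering mentioned above can be quantified with the standard Highland parametrisation of multiple Coulomb scattering, theta_0 = (13.6 MeV / beta c p) sqrt(x/X0) [1 + 0.038 ln(x/X0)]. The sketch below applies it to a ~3 GeV muon; the thicknesses and the radiation length are assumed round values, not numbers taken from the cited studies.
\begin{verbatim}
import math

def highland_theta0(p_mev, x_over_x0, beta=1.0):
    """RMS projected multiple-scattering angle (rad) for a singly charged particle."""
    return (13.6 / (beta * p_mev)) * math.sqrt(x_over_x0) * \
           (1.0 + 0.038 * math.log(x_over_x0))

p_mu = 3000.0            # MeV/c, typical sea-level muon momentum (~3 GeV)
x0_concrete = 11.6       # cm, assumed radiation length of standard concrete

for thickness_cm in (15.0, 30.0, 60.0):
    theta0 = highland_theta0(p_mu, thickness_cm / x0_concrete)
    print(f"{thickness_cm:5.0f} cm of concrete: theta_0 ~ {theta0*1e3:.1f} mrad")
\end{verbatim}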
In the present paper, the same general idea of exploiting the cosmic ray natural source of radiation for monitoring of alignment and stability of large structures (muon alignment and stability monitoring) is applied, with an improved detector scheme, to the case of civil buildings. The study is devoted especially to historical buildings, whose cultural and artistic value puts often severe constraints of non-invasiveness to the monitoring techniques that may be employed. In particular, the ability of cosmic ray radiation to penetrate large thicknesses of material suffering only small trajectory deviations offers the possibility to overcome the problem of monitoring the relative positions of different points that are physically and optically separated by solid structures as walls or floors. Monitoring systems widely employed, as laser scanner and theodolites, make use of visible light and need complete optical transparency between different reference points. In addition, considering that the entire volume of the building is continuously crossed by the flux of the cosmic ray radiation, with directions spanning several tens of degrees from the zenith, a more complex system of muon detectors distributed in different positions of the building may allow a global and simultaneous stability monitoring of the building to be performed. Limiting features of the cosmic ray radiation are the stochastic nature of the deviations of the muon trajectories, due to angular scattering suffered in crossing materials, and the rather low rate of events when detectors of reasonable size are employed. The former feature imposes the use of statistical distributions to extract the measured quantities by statistical inference and, consequently, needs the collection of sufficiently large number of events to obtain adequate measurement precisions. The latter implies the need of rather long data taking times. In the case of the monitoring of historical buildings, these negative features might not constitute a severe limitation for the use of the proposed method. Indeed, historical building structures are, in general, characterized by slow processes of deformation, which need to be monitored with fair precision over long periods of time, of the order of months or even years. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{figure1.jpg} \caption{\label{Fig1}The ``Palazzo della Loggia" of the town of Brescia (1574).} \end{center} \end{figure} The study of the application of the muon stability monitoring method to historical buildings was performed with a Monte Carlo technique and was applied to a realistic situation, the exemplary case of the ``Palazzo della Loggia", seat of the Mayor, in the town of Brescia, Italy~\cite{Donzella2014}. The ``Palazzo della Loggia" (see~\fref{Fig1}) was built in 1574 by the Venetian Government of the town and suffered, since the first years after the construction, of several structural problems. In recent years, from 1990 to 2001, a campaign of measurements was performed to monitor the stability and progressive deformation of its wooden vaulted roof, completely reconstructed in 1914, by means of a mechanical monitoring system based on the elongation of metallic wires~\cite{Giuriani1993,Giuriani2000}. In~\Sref{Loggia}, the methodology adopted in the diagnostic phase to understand the static anomalies of the wooden vaulted roof of the ``Palazzo della Loggia" and the results of the analysis are shortly illustrated. 
In~\Sref{Muon}, the application of the method of muon stability monitoring to the case of the wooden vaulted roof of the ``Palazzo della Loggia" is described. The measurement system is composed of a number of muon position detectors of given size and given precision that are located in the points of the building structure whose relative positions must be monitored. By means of the GEANT4 toolkit for the simulation of the passage of particles through matter~\cite{Agostinelli2003}, the structure of the building (as far as needed) the cosmic ray muon flux, the muon detectors located in appropriate positions inside the building have been modeled. With a campaign of simulations in different conditions, the performances of the monitoring system in terms of precision of the measurement of relative displacements and time needed to perform the measurement have been evaluated. In~\Sref{Other}, these performances are compared with the performances of the mechanical methods adopted in~\cite{Giuriani2000}. The possibility of constructing a monitoring system as the one described in~\Sref{Muon} strongly depends on the availability of muon detectors with the needed characteristics and performances. In \Sref{Detector}, these requested features are illustrated and a project for the construction of a detector system featuring the requirements is described. Preliminary results of experimental tests of the detector elements are presented. In~\Sref{Summary}, summary and conclusions are drawn. | \label{Summary} Cosmic ray radiation, known since the first decades of the 20$^{th}$ century, has been widely used in the field of nuclear and particle physics as a source of high energy projectiles for the investigations of the fundamental laws of nature and as a tool, naturally available at the Earth surface, for particle detector testing and calibration and for detector mechanical alignment in complex apparatus. Thanks to the high penetrability of cosmic ray radiation, it has been also applied in fields different from nuclear and particle physics as geological research, archaeological studies, industrial and civil security applications for the inspection and imaging of the content of large, dense and inaccessible volumes. The principal techniques utilized are muon radiography and muon tomography, the latter particularly in the search for high-Z materials in cargoes and containers. Recently, it has been suggested that, due to its capability of crossing very thick layers of material suffering only small deviations, cosmic ray radiation may be used for measurement applications in mechanical and civil engineering, with specific reference to situations where environmental conditions are weakly controlled and/or when the parts to be monitored are not mutually visible. In the present paper, the application of the muon stability monitoring method to historical buildings has been studied, by Monte Carlo technique. A realistic situation was considered: the exemplary case of the wooden vaulted roof of the ``Palazzo della Loggia" in the town of Brescia, for which a stability monitoring campaign was performed, for more than ten years, by means of traditional mechanical methods. A measurement system formed of a ``muon telescope", to be located on a fixed part of the building (the reference system) and by a ``muon target", to be positioned in the part of the building whose position has to be monitored, has been designed. 
The ``muon telescope" is composed of three muon detector modules, axially aligned 50~cm apart from each other, whose sensitive volume is a square 40~cm $\times$ 36~cm side and 6.0~mm thickness, made of two orthogonal layers of square scintillating fibers 3.0~mm~$\times$~3.0~mm cross section. The ``muon target" is composed of one detector module of the same size. In a Monte Carlo calculation program based on GEANT4 toolkit, the proposed muon stability monitoring system, the geometry and relevant structure of the building and a realistic cosmic ray muon generator have been modeled. The procedure of measurement of the positions of three different points in the wooden vaulted roof, relative to the fixed reference system separated by a bulky wooden ceiling 15~cm thick, has been simulated. Position measurement uncertainties of the designed muon monitoring system have been calculated as a function of the data taking time. The calculations demonstrated that the designed system may perform measurement precisions consistent with the amount of displacements under observation, with data taking times compatible with the time scale characteristic of the deformation phenomenon. Both cyclic and systematic displacements observed in the ``Palazzo della Loggia" during several years of observation could have been observed, with suitable precision of less than 1.0~mm, by a monitoring system based on the tracking of cosmic ray muons. In addition, it was pointed out that the efficiency of the designed system, and consequently data taking times, may easily be improved of a factor 2 to 3 by improving the data analysis. Consistent improvements in performances can also be obtained by modifying some geometrical parameters of the proposed measurement system. In conclusion, cosmic ray muon detection techniques are assessed for measurement applications in the field of civil engineering and may be particularly suitable for static monitoring of historical buildings, where the evolution of the deformation phenomena under study is of the order of months or years. Appealing features of the proposed monitoring system are: (i)~the use of a natural and ubiquitous source of radiation; (ii)~the applicability also in presence of horizontal and/or vertical building structures interposed between the reference system and the parts to be monitored; (iii)~the limited invasiveness, and the flexibility and easiness of installation of the monitoring system devices; (iv)~the possibility to design a global monitoring system, where the position of different points of the building may be simultaneously monitored relative to the same reference system; (v)~the use of well known physical principles and established technologies in the field of nuclear and particle physics. Limiting features are the intrinsic stochastic nature of the behavior of the radiation utilized, which requests to cumulate statistical distributions to be treated by statistical inference methods, and the low rate of cosmic ray radiation, which makes this technique generally unfit for applications where promptness of response is requested. The performances of such measurement system strongly depend on the particular application under study, geometries and interposed materials. However, the system performances in the specific situation may be easily evaluated by Monte Carlo calculations and the system can be designed accordingly. 
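A back-of-the-envelope counting estimate (not the GEANT4 simulation described above; the acceptance fraction and the effective per-track spread are assumed placeholders) shows both the statistics available through a 40 cm x 36 cm module, using the sea-level flux quoted in the introduction, and the generic 1/sqrt(N) improvement of a statistically inferred position with data-taking time.
\begin{verbatim}
import math

flux = 10000.0 / 60.0      # muons m^-2 s^-1 at sea level (value quoted in the text)
area = 0.40 * 0.36         # m^2, sensitive area of one module (from the text)

muons_per_day = flux * area * 86400.0
print(f"muons crossing one module: ~{muons_per_day:.1e} per day")

accepted = 0.01            # assumed fraction of tracks also crossing the distant target
sigma_track_mm = 20.0      # assumed effective per-track spread at the target, mm

for days in (1, 10, 30, 90):
    n = muons_per_day * accepted * days
    print(f"{days:3d} d: N ~ {n:.1e}, statistical precision ~ "
          f"{sigma_track_mm / math.sqrt(n):.3f} mm")
\end{verbatim}
In practice the achievable precision is limited by the geometry, by the scattering in the interposed structures and by systematic effects, which is why the full Monte Carlo evaluation described above is needed.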
The availability of muon detector modules with characteristics that satisfy the specific application requirements is essential for the described technique to be realistically proposed. Scintillating fibers of square cross section, a few millimeters on a side, read by silicon photomultipliers (SiPM) appear to be very promising candidates, meeting well the requirements of robustness, efficiency, stability and reliability, absence of any hazard, and low cost that such a system must satisfy to be proposed for practical applications. \ack The authors gratefully acknowledge Prof. Ezio Giuriani and Prof. Alessandra Marini of the Department of Civil, Architectural, Land and Environmental Engineering and Mathematics of the University of Brescia for the information provided on the long-lasting study performed on the ``Palazzo della Loggia'' and for the invaluable advice concerning the problem of historical building monitoring. This work has been done thanks to special funding from the Department of Mechanical and Industrial Engineering of the University of Brescia. | 14 | 3 | 1403.1709 |
1403 | 1403.6403_arXiv.txt | {We provide an update on five relatively well motivated inflationary models in which the inflaton is a Standard Model singlet scalar field. These include i) the textbook quadratic and quartic potential models but with additional couplings of the inflaton to fermions and bosons, which enable reheating and also modify the naive predictions for the scalar spectral index $n_s$ and $r$, ii) models with Higgs and Coleman-Weinberg potentials, and finally iii) a quartic potential model with non-minimal coupling of the inflaton to gravity. For $n_s$ values close to 0.96, as determined by the WMAP9 and Planck experiments, most of the considered models predict $r\gtrsim0.02$. The running of the scalar spectral index, quantified by $|\ud n_s/\ud\ln k|$, is predicted in these models to be of order $10^{-4}$--$10^{-3}$.} \begin{document} | \label{intro} The dramatic announcement of a B-mode polarization signal possibly due to inflationary gravitational waves by the BICEP2 experiment \cite{Ade:2014xna} brought new attention to a class of inflationary models in which the energy scale during inflation is on the order of $10^{16}$ GeV. Subsequent results by the Planck experiment \cite{Adam:2014bub,Planck:2015xua} and the joint Planck -- BICEP analysis \cite{Ade:2015tva} indicate that most (if not all) of the signal observed by the BICEP experiment was caused by galactic dust. However, a significant contribution from inflationary gravitational waves is not ruled out. The joint Planck -- BICEP analysis provides a best fit value around $0.05$ for the tensor to scalar ratio $r$. Although this result is not statistically significant as it stands, it will soon be tested by forthcoming data. Motivated by these rapid developments in the observational front, in this paper we briefly review and update the results of five closely related, well motivated and previously studied inflationary models which are consistent with values of $r$ around 0.05, a signal level which will soon be probed. The first two models employ the very well known quadratic ($\phi^2$) and quartic ($\phi^4$) potentials \cite{Linde:1983gd}, supplemented in our case by additional couplings of the inflaton $\phi$ to fermions and/or scalars, so that reheating becomes possible. These new interactions have previously been shown \cite{NeferSenoguz:2008nn,Rehman:2010es,Martin:2014vha} to significantly modify the predictions for the scalar spectral index $n_s$ and $r$ in the absence of these new interactions. The next two models exploit respectively the Higgs potential \cite{Rehman:2010es,Martin:2014vha,Destri:2007pv,Smith:2008pf,Okada:2013vxa} and Coleman-Weinberg potential \cite{Smith:2008pf,Rehman:2008qs,Shafi:1983bd,Shafi:2006cs}. With the SM electroweak symmetry presumably broken by a Higgs potential, it seems natural to think that nature may have utilized the latter (or the closely related Coleman-Weinberg potential) to also implement inflation, albeit with a SM singlet scalar field. Finally, we consider a class of models \cite{Okada:2010jf,Okada:2011en} which invokes a quartic potential for the inflaton field, supplemented by an additional non-minimal coupling of the inflaton field to gravity \cite{Martin:2014vha,Salopek:1988qh}. Our results show that the predictions for $n_s$ and $r$ from these models are generally in good agreement with the BICEP2, Planck and WMAP9 measurements, except the radiatively corrected quartic potential which is ruled out by the current data. 
We display the range of $r$ values allowed in these models that are consistent with $n_s$ being close to 0.96. Finally, we present the predictions for $|\ud n_s/\ud\ln k|$ which turn out to be of order $10^{-4}$--$10^{-3}$. Before we discuss the models, let's recall the basic equations used to calculate the inflationary parameters. The slow-roll parameters may be defined as (see \ocite{Lyth:2009zz} for a review and references): \begin{equation} \epsilon =\frac{1}{2}\left( \frac{V^{\prime} }{V}\right) ^{2}\,, \quad \eta = \frac{V^{\prime \prime} }{V} \,, \quad \zeta ^{2} = \frac{V^{\prime} V^{\prime \prime\prime} }{V^{2}}\,. \end{equation} Here and below we use units $m_P=2.4\times10^{18}\rm{~GeV}=1$, and primes denote derivatives with respect to the inflaton field $\phi$. The spectral index $n_s$, the tensor to scalar ratio $r$ and the running of the spectral index $\alpha\equiv\mathrm{d} n_s/\mathrm{d} \ln k$ are given in the slow-roll approximation by \begin{equation} n_s = 1 - 6 \epsilon + 2 \eta \,,\quad r = 16 \epsilon \,,\quad \alpha = 16 \epsilon \eta - 24 \epsilon^2 - 2 \zeta^2\,. \end{equation} The amplitude of the curvature perturbation $\Delta_\mathcal{R}$ is given by \begin{equation} \label{perturb} \Delta_\mathcal{R}=\frac{1}{2\sqrt{3}\pi}\frac{V^{3/2}}{|V^{\prime}|}\,, \end{equation} which should satisfy $\Delta_\mathcal{R}^2= 2.215\times10^{-9}$ from the Planck measurement \cite{Ade:2013zuv} with the pivot scale chosen at $k_0 = 0.05$ Mpc$^{-1}$. The number of e-folds is given by \begin{equation} \label{efold1} N=\int^{\phi_0}_{\phi_e}\frac{V\rm{d}\phi}{V^{\prime}}\,, \end{equation} where $\phi_0$ is the inflaton value at horizon exit of the scale corresponding to $k_0$, and $\phi_e$ is the inflaton value at the end of inflation, defined by max$(\epsilon(\phi_e) , |\eta(\phi_e)|,|\zeta^2(\phi_e)|) = 1$. The value of $N$ depends logarithmically on the energy scale during inflation as well as the reheating temperature, and is typically around 50--60. | We have restricted our attention in this paper to models based on relatively simple non-supersymmetric inflationary potentials involving a SM (or even GUT) singlet scalar field. In the framework of slow-roll inflation, a tensor to scalar ratio $r \sim 0.02$--0.1 for spectral index $n_s\simeq0.96$ is readily obtained in these well motivated models. This range of $r$ is of great interest as it is experimentally accessible in the very near future. The running of the spectral index in all these models is predicted to be fairly small, $|\alpha|$ being of order few$\times 10^{-4}$--$10^{-3}$. For the Higgs and Coleman-Weinberg potentials, a more precise measurement of $r$ should enable one to ascertain whether the inflaton field was larger or smaller than its VEV during the last 60 or so e-folds (the current data favors the latter). For the quadratic and quartic inflationary potentials we have emphasized, following earlier work, that the well-known predictions for $n_s$ and $r$ can be significantly altered if the inflaton couplings to additional fields, necessarily required for reheating, are taken into account. Despite these radiative corrections, the predictions for the quartic potential are not compatible with the current data. A more precise determination of $n_s$ and $r$ should enable one to also test the radiatively corrected quadratic model. We also explored inflation driven by a quartic potential with an additional non-minimal coupling of the inflaton field to gravity. 
With plausible values for the new dimensionless parameter $\xi$ associated with this coupling, the predictions for $n_s$ and $r$ are in good agreement with the observations. | 14 | 3 | 1403.6403 |
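As an illustrative cross-check of the slow-roll machinery quoted above (an independent sketch, not a calculation from this paper), the textbook quadratic potential V = m^2 phi^2 / 2 gives eps = eta = 2/phi^2 and N ~ (phi_0^2 - phi_e^2)/4 with phi_e^2 = 2 in units m_P = 1, from which n_s and r follow directly.
\begin{verbatim}
def quadratic_predictions(N):
    """Slow-roll n_s and r for V = m^2 phi^2 / 2 (units m_P = 1)."""
    phi0_sq = 4.0 * N + 2.0           # from N = (phi0^2 - phi_e^2)/4 with phi_e^2 = 2
    eps = 2.0 / phi0_sq
    eta = 2.0 / phi0_sq
    ns = 1.0 - 6.0 * eps + 2.0 * eta  # = 1 - 8/phi0^2
    r = 16.0 * eps
    return ns, r

for N in (50, 60):
    ns, r = quadratic_predictions(N)
    print(f"N = {N}: n_s = {ns:.4f}, r = {r:.4f}")
\end{verbatim}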
1403 | 1403.4406_arXiv.txt | We present a new formula which models the rate of decline of supernovae (SN) as given by the light curve in various bands. The physical basis is the conversion of the flux of kinetic energy into radiation. The two main components of the model are a power law dependence for the radius--time relation and a decreasing density with increasing distance from the central point. The new formula is applied to SN 1993J, SN 2005cf, SN 1999ac, and SN 2001el in different bands. | The light curve (LC) of supernovae (SN) at a given wavelength $\lambda$ denotes the luminosity--time relation. The astronomers work in terms of apparent/absolute magnitude and therefore the LC in SN is usually presented as a magnitude versus time relation. We have two great astronomical classifications for the LC: type I SN and type II SN. The type I has a fast decrease in magnitude followed by a nearly linear increase. In luminosity terms, the SN has a fast increase followed by a nearly exponential decay. The type II has a fast decrease in magnitude followed by oscillations, type IIb, or a plateau, type IIp; a decay follows the plateau. In this complex morphology, we will always specify the type of SN under consideration. The luminosity is usually modeled by the formula \begin{equation} L = L_{\lambda,0} \exp (- \frac{t}{\tau}) \quad , \end{equation} where $L$ and $L_{\lambda,0}$ are the luminosity at time $t$ and at $t=0$ respectively, and $\tau$ is the typical lifetime, see \cite{deeming}. As an example, the radioactive isotope $^{56}$Ni has $\tau$ = 8.767 days. On introducing the apparent magnitude $m_{\lambda}$, the previous formula becomes \begin{equation} m_{\lambda} = k^{\prime}_{\lambda} +1.0857 (\frac{t}{\tau}) \quad , \label{mstandard} \end{equation} where $k^{\prime}_{\lambda}$ is a constant. The absolute magnitude $M_{\lambda}$ scales in the same way: \begin{equation} M_{\lambda} = k^{\prime\prime}_{\lambda} +1.0857 (\frac{t}{\tau}) \quad , \label{mgreatstandard} \end{equation} where $k^{\prime\prime}_{\lambda}$ is another constant. The observational fact that, as an example, in IC 4182 the LC has a half-life of 56 days, requires the production of $^{56}$Co, see \cite{vanHise1974}. The previous formula is an empirical relation which is based solely on observations rather than theory. The theory for SNII LCs was first developed by \cite{Grasberg1971} and later analytically and numerically explored by \cite{Falk1973,Arnett1980,Arnett1989}. A model for the luminosity in $H\alpha$ of supernovae as a function of time can be found in Figure 7 of \cite{Chevalier1994}. The LCs of type Ia SN have been explained (including the secondary maximum) by a time-dependent multigroup radiative transfer calculation, see \cite{Kasen2006}. A model for type II supernovae explosions has been built including progenitor mass, explosion energy, and radioactive nucleosynthesis, see \cite{Kasen2009}. The model atmosphere code PHOENIX was used to calculate type Ia supernovae, see \cite{Jack2011}. The previous works leave a series of questions unanswered or merely partially answered. \begin{itemize} \item Given the observational fact that the radius--time relation in young SNRs follows a power law, is it possible to find a theoretical law of motion which fits the observations? \item Can a model of an expansion in the framework of the thin layer approximation produce the observed radius--time relation? 
\item Can we express the flux of kinetic energy in the framework of an approximate law of motion and a medium characterized by a decreasing density? \item Can we parametrize the conversion of the flux of kinetic energy into total observed luminosity? \item Can we parametrize the fraction of conversion of the total luminosity into the optical bands? \end{itemize} In order to answer these questions, in Section \ref{motion} we analyze the existing equations of motion for \snr as well as a new adjustable equation. Section \ref{syncro} reviews the basic formulas of synchrotron emission and reports the conversion of the flux of kinetic energy into an observed band. Section \ref{application} reports the application of the new formulas to different SNs in various bands. | SNe are classified as spherical, for example \snr, or aspherical, see for example \cite{Racusin2009} for \sn1987a. The theory developed here treats the spherical SN using classical dimensional arguments. The conversion of the flux of kinetic energy into luminosity after the maximum in the LC explains the curve of SNs in a direct form, see Equations (\ref{kineticflux}) and (\ref{kineticfluxastro}), as well as in a logarithmic version, see Equation (\ref{defmagnostra}). The overall LC before and after the maximum can be built by introducing two different physical regimes, see Equation (\ref{piecewise}). The initial rise in intensity in the V-band is characterized by a typical time scale of $t_a \approx 5$ days and the decrease can be theoretically fitted for $t \approx 3500$ days. This large range in time is also the great advantage of our model: the existing nuclear models cover $\approx$ 100 days, see Figure 2 in \cite{Leibundgut2003}. The standard approach of formula (\ref{mstandard}), which predicts a linear increase in the apparent magnitude with time, does not correspond to the observations, because the observed and theoretical magnitudes scale as $m = a + b\, \ln(t)$, where $a$ and $b$ are two constants. As an example, Figure (\ref{2001elmagvnuclear}) reports the two commonly accepted sources, the radioactive isotopes $^{56}$Co, see \cite{Georgii2000,Pluschke2001,Georgii2002}, and $^{56}$Ni, see \cite{Truran2012,Dessart2012}: the radioactive fit is acceptable only for the first few days. The application of the new formulas to three SNs in different bands gives acceptable results. As an example, Figure \ref{1993magtime} reports the LC in the R-band for \snr and Figure \ref{1993halfatime} reports the LC for the $H\alpha$ of \snr. An example of the two-phase model as given by Equation (\ref{piecewise}) is reported in Figure (\ref{2005magvtutto}) for \snrcinque in the V-band. A careful analysis of the previous figures shows that the theoretical and observed curves present different concavities in the transition from small to large times. Similar results can be obtained assuming that all $\gamma$-rays produced by the decay of $^{56}$Ni and $^{56}$Co are converted into optical emission, see Figure 2 in \cite{Leibundgut2003}. The observational fact that the initial velocity can be $\approx$ 30000 km s$^{-1}$ requires a relativistic treatment, which is a necessary step for future progress. The analysis performed here treats the SN as a single object and therefore is not connected with various types of recent cosmologies, see \cite{Astier2012,Chavanis2013,ElNabulsi2013}.
We conclude with a list of open problems: \begin{itemize} \item The observational fact that the initial velocity can be $\approx 30000$ km s$^{-1}$ requires a relativistic treatment of the flux of kinetic energy, which is left for future research; \item The connection between cosmic-ray production and the $\gamma$-rays in SNRs, see \cite{Dermer2013}, requires an analysis of the temporal behavior of the magnetic field. \end{itemize} | 14 | 3 | 1403.4406
1403 | 1403.4295_arXiv.txt | The remarkable HST datasets from the CANDELS, HUDF09, HUDF12, ERS, and BoRG/HIPPIES programs have allowed us to map the evolution of the rest-frame UV luminosity function from $z\sim 10$ to $z\sim 4$. We develop new color criteria that more optimally utilize the full wavelength coverage from the optical, near-IR, and mid-IR observations over our search fields, while simultaneously minimizing the incompleteness and eliminating redshift gaps. We have identified 5859, 3001, 857, 481, 217, and 6 galaxy candidates at $z\sim 4$, $z\sim 5$, $z\sim 6$, $z\sim 7$, $z\sim 8$, and $z\sim 10$, respectively from the $\sim$1000 arcmin$^{2}$ area covered by these datasets. This sample of $>$10000 galaxy candidates at $z\geq4$ is by far the largest assembled to date with HST. The selection of $z\sim4$-8 candidates over the five CANDELS fields allows us to assess the cosmic variance; the largest variations are at $z\geq7$. Our new LF determinations at $z\sim4$ and $z\sim5$ span a 6-mag baseline and reach to $-$16 AB mag. These determinations agree well with previous estimates, but the larger samples and volumes probed here result in a more reliable sampling of $>L^*$ galaxies and allow us to re-assess the form of the UV LFs. Our new LF results strengthen our earlier findings to $3.4\sigma$ significance for a steeper faint-end slope of the $UV$ LF at $z>4$, with $\alpha$ evolving from $\alpha=-1.64\pm0.04$ at $z\sim4$ to $\alpha= -2.06\pm0.13$ at $z\sim7$ (and $\alpha=-2.02\pm0.23$ at $z\sim8$), consistent with that expected from the evolution of the halo mass function. We find less evolution in the characteristic magnitude $M^*$ from $z\sim7$ to $z\sim4$; the observed evolution in the LF is now largely represented by changes in $\phi^*$. No evidence for a non-Schechter-like form to the $z\sim4$-8 LFs is found. A simple conditional luminosity function model based on halo growth and evolution in the M/L ratio $(\propto (1+z)^{-1.5})$ of halos provides a good representation of the observed evolution. | Arguably the most fundamental and important observable for galaxy studies in the early universe is the luminosity function. The luminosity function (LF) gives us the volume density of galaxies as a function of their luminosity. By comparing the luminosity function with the halo mass function -- both in shape and normalization -- we can gain insight into the efficiency of star formation as a function of halo mass and cosmic time (e.g., van den Bosch et al.\ 2003; Vale \& Ostriker 2004; Moster et al.\ 2010; Behroozi et al.\ 2013; Birrer et al.\ 2014). These comparisons then provide us with insight into the halo mass scales where gas cooling is most efficient, where feedback from AGN or SNe starts to become important, and how these processes vary with cosmic time. In the rest-frame $UV$, the luminosity of galaxies strongly correlates with the star formation rates for all but the most dust-obscured galaxies (e.g., Wang \& Heckman 1996; Adelberger \& Steidel 2000; Martin et al.\ 2005). Establishing the $UV$ LF at high redshift is also essential for assessing the impact of galaxies on the reionization of the universe (e.g., Bunker et al.\ 2004; Yan \& Windhorst 2004; Oesch et al.\ 2009; Bouwens et al.\ 2012; Kuhlen \& Faucher-Gigu{\'e}re 2012; Robertson et al.\ 2013). 
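Throughout the analysis below, the LFs are characterized with Schechter parameters ($\phi^*$, $M^*$, $\alpha$). For reference, a minimal sketch of the Schechter function in its magnitude form is given here; the faint-end slope is the $z\sim7$ value quoted in the abstract, while $M^*$ and $\phi^*$ are merely illustrative round numbers, not the fitted values derived later in this paper.
\begin{verbatim}
import numpy as np

def schechter_mag(M, M_star, phi_star, alpha):
    """Schechter LF in magnitudes, phi(M) in Mpc^-3 mag^-1."""
    x = 10.0 ** (0.4 * (M_star - M))          # L / L*
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# alpha is the z~7 value quoted in the abstract; M* and phi* are
# illustrative round numbers only, not the fitted values of this paper.
alpha, M_star, phi_star = -2.06, -21.0, 4.0e-4     # phi* in Mpc^-3

for M in np.arange(-22.5, -15.9, 1.0):
    print(f"M_UV = {M:6.1f}   phi(M) = {schechter_mag(M, M_star, phi_star, alpha):.2e}")
\end{verbatim}
With $\alpha$ close to $-2$ the counts rise steeply toward faint magnitudes, while the exponential term suppresses sources much brighter than $M^*$.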
\begin{deluxetable*}{ccccccccccccccc} \tablewidth{0cm} \tabletypesize{\footnotesize} \tablecaption{Observational Data Utilized in Deriving the $z\sim4$-10 LFs.\tablenotemark{*}\label{tab:obsdata}} \tablehead{ \colhead{} & \colhead{Area} & \colhead{Redshift} & \multicolumn{8}{c}{$5\sigma$ Depth (\# of orbits for HST, \# of hours for IRAC)\tablenotemark{a}}\\ \colhead{Field} & \colhead{(arcmin$^2$)} & \colhead{Sel. Range} & \colhead{u\tablenotemark{b}} & \colhead{B\tablenotemark{b}} & \colhead{$B_{435}$} & \colhead{g\tablenotemark{b}} & \colhead{V\tablenotemark{b}} & \colhead{$V_{606}$} & \colhead{r\tablenotemark{b}} & \colhead{$i_{775}$} & \colhead{i\tablenotemark{b}} & \colhead{$I_{814}$} } \startdata XDF\tablenotemark{d} & 4.7 & 4-10 & --- & --- & 29.6\tablenotemark{e} & --- & --- & 30.0\tablenotemark{e} & --- & 29.8\tablenotemark{e} & --- & 28.7 \\ & & & & & (56) & & & (56) & & (144) & & (16) \\ HUDF09-1 & 4.7 & 4-10 & --- & --- & --- & --- & --- & 28.6 & --- & 28.5 & --- & --- \\ & & & & & & & & (10) & & (23) & & \\ HUDF09-2 & 4.7 & 4-10 & --- & --- & 28.3 & --- & --- & 29.3 & --- & 28.8 & --- & 28.3 \\ & & & & & (10) & & & (32) & & (46) & & (144) \\ CANDELS-GS/ & 64.5 & 4-10 & --- & --- & 27.7 & --- & --- & 28.0 & --- & 27.5 & --- & 28.0 \\ $~~$DEEP & & & & & (3) & & & (3) & & (3.5) & & ($>$12) \\ CANDELS-GS/ & 34.2 & 4-10 & --- & --- & 27.7 & --- & --- & 28.0 & --- & 27.5 & --- & 27.0 \\ $~~$WIDE & & & & & (3) & & & (3) & & (3.5) & & ($\sim$2)\\ ERS & 40.5 & 4-10 & --- & --- & 27.5 & --- & --- & 27.7 & --- & 27.2 & --- & 27.6 \\ & & & & & (3) & & & (3) & & (3.5) & & ($\sim$4)\\ CANDELS-GN/ & 62.9 & 4-10 & --- & --- & 27.5 & --- & --- & 27.7 & --- & 27.3 & --- & 27.9 \\ $~~$DEEP & & & & & (3) & & & (3) & & (3.5) & & ($>$12) \\ CANDELS-GN/ & 60.9 & 4-10 & --- & --- & 27.5 & --- & --- & 27.7 & --- & 27.2 & --- & 27.0 \\ $~~$WIDE & & & & & (3) & & & (3) & & (3.5) & & ($\sim$2)\\ CANDELS- & 151.2 & 5-10 & 25.5 & 28.0 & --- & --- & 27.7 & 27.2 & 27.5 & --- & 27.4 & 27.2 \\ $~~$UDS & & & & & & & & ($\sim$1.5) & & & & ($\sim$3) \\ CANDELS- & 151.9 & 5-10 & 27.8 & 28.0 & --- & 28.0 & 27.0 & 27.2 & 27.9 & --- & 27.8 & 27.2 \\ $~~$COSMOS & & & & & & & & ($\sim$1.5) & & & & ($\sim$4) \\ CANDELS- & 150.7 & 5-10 & 27.4 & --- & --- & 27.9 & --- & 27.6 & 27.6 & --- & 27.5 & 27.6 \\ $~~$EGS & & & & & & & & ($\sim$2.5) & & & & ($\sim$4) \\ BoRG/$~$ & 218.3 & 8 & --- & --- & --- & --- & --- & 27.0-$~$ & --- & --- & --- & --- \\ $~$HIPPIES\tablenotemark{g} & & & & & & & & $~$28.7 & & & \\ \\ & z\tablenotemark{b} & $z_{850}$ & Y\tablenotemark{b} & $Y_{098}/Y_{105}$ & J\tablenotemark{b} & $J_{125}$ & $JH_{140}$ & H\tablenotemark{b} & $H_{160}$ & $K_s$\tablenotemark{b} & $3.6\mu$m\tablenotemark{c} & $4.5\mu$m\tablenotemark{c}\\ \tableline XDF\tablenotemark{b} & --- & 29.2\tablenotemark{c} & --- & 29.7 & --- & 29.3 & 29.3 & --- & 29.4 & --- & 26.5 & 26.5\\ & & (170) & & (100) & & (40) & (30) & & (85) & & (130) & (130) \\ HUDF09-1 & --- & 28.4 & --- & 28.3 & --- & 28.5 & 26.3\tablenotemark{f} & --- & 28.3 & --- & 26.4 & 26.4\\ & & (71) & & (8) & & (12) & (0.3) & & (13) & & (80) & (80)\\ HUDF09-2 & --- & 28.8 & --- & 28.6 & --- & 28.9 & 26.3\tablenotemark{f} & --- & 28.7 & --- & 26.5 & 26.5\\ & & (89) & & (11) & & (18) & (0.3) & & (19) & & (130) & (130)\\ CANDELS-GS/ & --- & 27.3 & --- & 27.5 & --- & 27.8 & 26.3\tablenotemark{f} & --- & 27.5 & --- & 26.1 & 25.9\\ $~~$Deep & & ($\sim$15) & & (3) & & (4) & (0.3) & & (4) & & (50) & (50) \\ CANDELS-GS/ & --- & 27.1 & --- & 27.0 & --- & 27.1 & 26.3\tablenotemark{f} 
& --- & 26.8 & --- & 26.1 & 25.9\\ $~~$Wide & & ($\sim$15) & & (1) & & (0.7) & (0.3) & & (1.3) & & (50) & (50)\\ ERS & --- & 27.1 & --- & 27.0 & --- & 27.6 & 26.4\tablenotemark{f} & --- & 27.4 & --- & 26.1 & 25.9\\ & & ($\sim$15) & & (2) & & (2) & (0.3) & & (2) & & (50) & (50)\\ CANDELS-GN/ & --- & 27.3 & --- & 27.3 & --- & 27.7 & 26.3\tablenotemark{f} & --- & 27.5 & --- & 26.1 & 25.9\\ $~~$Deep & & ($\sim$15) & & (3) & & (4) & (0.3) & & (4) & & (50) & (50)\\ CANDELS-GN/ & --- & 27.2 & --- & 26.7 & --- & 26.8 & 26.2\tablenotemark{f} & --- & 26.7 & --- & 26.1 & 25.9\\ $~~$Wide & & ($\sim$15) & & (1) & & (0.7) & (0.3) & & (1.3) & & (50) & (50)\\ CANDELS- & 26.2 & --- & 26.0 & --- & --- & 26.6 & 26.3\tablenotemark{f} & --- & 26.8 & 25.5 & 25.5 & 25.3 \\ $~~$UDS & & & & & & (0.6) & (0.3) & & (1.3) & & (12) & (12)\\ CANDELS- & 26.5 & --- & 26.1 & --- & 25.4 & 26.6 & 26.3\tablenotemark{f} & 25.0 & 26.8 & 25.3 & 25.4 & 25.2\\ $~~$COSMOS & & & & & & (0.6) & (0.3) & & (1.3) & & (12) & (12)\\ CANDELS- & 26.1 & --- & --- & --- & --- & 26.6 & 26.3\tablenotemark{f} & --- & 26.9 & 24.1 & 25.5 & 25.3\\ $~~$EGS & & & & & & (0.6) & (0.3) & & (1.3) & & (12) & (12)\\ BoRG/$~$ & --- & --- & --- & 26.5-$~$ & --- & 26.5-$~$ & --- & --- & 26.3-$~$ & --- & --- & --- \\ $~$HIPPIES\tablenotemark{g} & & & & $~$28.2 & & $~$28.4 & & & $~$28.1 \enddata \tablenotetext{*}{More details on the observational data we use for each of these search fields is provided in Appendix A.} \tablenotetext{a}{The $5\sigma$ depths for the HST observations are computed based on the median flux uncertainties (after correction to total) for the faintest 20\% of sources in our fields. While these depths are shallower than one computes from the noise in $0.35''$-diameter apertures (and not extrapolating to the total flux), the depths we quote here are reflective of that achieved for real sources.} \tablenotetext{b}{Indicates ground-based observations from Subaru/Suprime-Cam, CFHT/Megacam, CFHT/Megacam, HAWK-I, VISTA, and CFHT/WIRCam in the $BgVriz$, $ugriyz$, $u$, $YK_s$, $YJHK_s$, and $K_s$ bands, respectively. The $5\sigma$ depths for the ground-based observations are derived from the noise fluctuations in 1.2$''$-diameter apertures (after correction to total). These apertures are almost identical in size to those chosen by Skelton et al.\ (2014) to perform photometry on sources over the CANDELS fields.} \tablenotetext{c}{The $5\sigma$ depths for the Spitzer/IRAC observations are derived in 2.0$''$-diameter apertures (after correction to total).} \tablenotetext{d}{The XDF refers to the 4.7 arcmin$^2$ region over the HUDF with ultra-deep near-IR observations from the HUDF09 and HUDF12 programs (Illingworth et al.\ 2013). It includes all ACS and WFC3/IR observations acquired over this region for the 10-year period 2002 to 2012.} \tablenotetext{e}{The present XDF reduction (Illingworth et al.\ 2013) is typically $\sim$0.2 mag deeper than the original reduction of the HUDF ACS data provided by Beckwith et al.\ (2006).} \tablenotetext{f}{The $JH_{140}$ observations are from the 3D-HST and GO-11600 (PI: Weiner) programs.} \tablenotetext{g}{Only the highest quality (longer exposure) BoRG/HIPPIES fields (and similar programs) are considered in our analysis (see Appendix A.2). 
For inclusion, we require search fields to have an average exposure time in the $J_{125}$ and $H_{160}$ bands of at least 1200 seconds and longer exposure times in the optical $V_{606}+V_{600}$ bands than the average exposure time in the near-infrared $J_{125}+H_{160}$ observations.} \end{deluxetable*} Attempts to map out the evolution of the luminosity function of galaxies in the high-redshift universe have a long history, beginning with the discovery of Lyman-break galaxies at $z\sim3$ (Steidel et al.\ 1996) and work on the Hubble Deep Field North (e.g., Madau et al.\ 1996; Sawicki et al.\ 1997). Among the most important early results on the LF at high redshift were the $z\sim3$ and $z\sim4$ determinations by Steidel et al.\ (1999), based on a wide-area (0.23 degree$^2$) photometric selection and spectroscopic follow-up campaign. Steidel et al.\ (1999) derived essentially identical LFs for galaxies at both $z\sim3$ and $z\sim4$, pointing towards a broader peak in the star formation history extending out to $z\sim4$, finding no evidence for the large decline that Madau et al.\ (1996) had reported between $z\sim3$ and $z\sim4$. Following up on these early results, there was a push to measure the $UV$ LF to $z\sim5$ and higher (e.g., Dickinson 2000; Ouchi et al.\ 2004; Lehnert \& Bremer 2003). However, it was not until the installation of the Advanced Camera for Surveys (Ford et al.\ 2003) on the Hubble Space Telescope in 2002 that the first substantial explorations of the $UV$ LF at $z\sim6$ began. Importantly, the HST ACS instrument enabled astronomers to obtain deep, wide-area imaging in the $z_{850}$ band, allowing for the efficient selection of galaxies at $z\sim6$ (Stanway et al.\ 2003; Bouwens et al.\ 2003b; Dickinson et al.\ 2004). Based on $z\sim6$ searches in the large HST data sets from the wide-area GOODS and ultra-deep HUDF programs, the overall evolution of the $UV$ LF was quantified to $z\sim6$ (Bouwens et al.\ 2004a; Bunker et al.\ 2004; Yan \& Windhorst 2004; Bouwens et al.\ 2006; Beckwith et al.\ 2006). The first quantification of the evolution of the $UV$ LF with fits to all three Schechter parameters was by Bouwens et al.\ (2006) and suggested a brightening of the characteristic luminosity with cosmic time. Most follow-up studies supported this conclusion (Bouwens et al.\ 2007; McLure et al.\ 2009; Su et al.\ 2011: though Beckwith et al.\ 2006 favored a simple $\phi^*$ evolution model with no evolution in $\alpha$ or $M^*$). The next significant advance in our knowledge of the $UV$ LF at high redshift came with the installation of the Wide Field Camera 3 (WFC3) and its near-IR camera WFC3/IR on the Hubble Space Telescope. The excellent sensitivity, field of view, and spatial resolution of this camera allowed us to survey the sky $\sim$40$\times$ more efficiently in the near-IR than with the earlier-generation IR instrument NICMOS. The high efficiency of WFC3/IR enabled the identification of $\sim$200-500 galaxies at $z\sim7$-8 (e.g., Wilkins et al.\ 2010; Bouwens et al.\ 2011; Oesch et al.\ 2012; Grazian et al.\ 2012; Finkelstein et al.\ 2012; Yan et al.\ 2012; McLure et al.\ 2013; Schenker et al.\ 2013; Lorenzoni et al.\ 2013; Schmidt et al.\ 2014), whereas only $\sim$20 were known before (Bouwens et al.\ 2008, 2010b; Oesch et al.\ 2009; Ouchi et al.\ 2009b).
While initial determinations of the $UV$ LF at $z\sim7$-8 appeared consistent with a continued evolution in the characteristic luminosity to fainter values (e.g., Bouwens et al.\ 2010a; Lorenzoni et al.\ 2011), the inclusion of wider-area data in these determinations quickly made it clear that some of the evolution in the LF was in the volume density $\phi^*$ (e.g., Ouchi et al.\ 2009b; Castellano et al.\ 2010; Bouwens et al.\ 2011b; Bradley et al.\ 2012; McLure et al.\ 2013) and in the faint-end slope $\alpha$ (Bouwens et al.\ 2011b; Bradley et al.\ 2012; Schenker et al.\ 2013; McLure et al.\ 2013). With the recent completion of the wide-area CANDELS program (Grogin et al.\ 2011; Koekemoer et al.\ 2011) and availability of even deeper optical+near-IR observations over the HUDF from the XDF/UDF12 data set (Illingworth et al.\ 2013; Ellis et al.\ 2013), there are several reasons to revisit determinations of the $UV$ LF not just at $z\sim7$-10, but over the entire range $z\sim10$ to $z\sim4$ to more precisely study the evolution. First, the addition of especially deep WFC3/IR observations to legacy fields with deep ACS observations allows for an improved determination of the $UV$ LF at $z\sim5$-6 due to the $\sim$1-mag greater depths of the $UV$ LF probed at $z\sim5$-6 by the WFC3/IR near-IR observations relative to the original $z_{850}$-band observations. The gains at $z\sim6$ are even more significant, as the new WFC3/IR data make it possible (1) to perform a standard two-color selection of $z\sim6$ galaxies and (2) to measure their $UV$ luminosities at the same rest-frame wavelengths as with other samples. Bouwens et al.\ (2012a) already made use of the initial observations over the CANDELS GOODS-South to provide such a determination of the $z\sim6$ LF, but the depth and area of the current data sets allow us to significantly improve upon this early analysis. Second, the availability of WFC3/IR observations over legacy fields like GOODS or the HUDF can also significantly improve the redshift completeness of Lyman-break-like selections at $z\sim4$, $z\sim5$, and $z\sim6$, while keeping the overall contamination levels to a minimum (as we will illustrate in \S3 of this paper). Improving the overall completeness and redshift coverage of Lyman-break-like selections is important, since it will allow us to leverage the full search volume, thereby reducing the sensitivity of the high-redshift results to large-scale structure variations and shot noise (from small number statistics). Finally, the current area covered by the wide-area CANDELS program now is in excess of 750 arcmin$^2$ in total area, or $\sim$0.2 square degrees, over 5 independent pointings on the sky. The total area available at present goes significantly beyond the CANDELS-GS, CANDELS-UDS, ERS, and BoRG fields that have been used for many previous LF determinations at $z\sim7$-10 (e.g., Bouwens et al.\ 2011; Oesch et al.\ 2012; Bradley et al.\ 2012; Yan et al.\ 2012; Grazian et al.\ 2012; Lorenzoni et al.\ 2013; McLure et al.\ 2013; Schenker et al.\ 2013). While use of the full CANDELS area can be more challenging due to a lack of deep HST data at $\sim$0.9-1.1$\mu$m over the UDS, COSMOS, and EGS areas, the effective selection of $z\sim5$-10 galaxies is nevertheless possible, leveraging the available ground-based observations, as we demonstrate in \S3 and \S4 (albeit with some intercontamination between the CANDELS-EGS $z\sim7$ and $z\sim8$ samples due to the lack of deep $Y$-band data). 
Of course, there have been a significant number of studies on the $UV$ LF at $z\sim4$-7 over even wider survey areas than are available over CANDELS, e.g., van der Burg et al.\ (2010) and Willott et al.\ (2013) at $z\sim3$-5 and $z\sim6$ from the $\sim$4 deg$^2$ Canada France Hawaii Telescope (CFHT) Legacy Survey deep field observations, Ouchi et al.\ (2009b) at $z\sim7$ from Subaru observations of the Subaru Deep Field (Kashikawa et al.\ 2004) and GOODS North (Giavalisco et al.\ 2004a), and Bowler et al.\ (2014) at $z\sim7$ from the UltraVISTA and UDS programs. While each of these surveys also provides constraints on the volume density of bright, rare sources, these programs generally lack high-spatial-resolution data on their candidates, making the rejection of low-mass stars from these survey fields more difficult. In addition, integration of the results from wide-area fields with deeper, narrower fields can be particularly challenging, as any systematic differences in the procedure for measuring magnitudes or estimating volume densities can result in significant errors on the measured shape of the LF (e.g., see Figure~\ref{fig:oldi} from Appendix F.2 for an illustration of the impact that small systematics can have). Controlling for cosmic variance is especially important given the substantial variations in the volume density of luminous sources observed field to field. The use of independent sightlines -- as implemented in the CANDELS program -- is remarkably effective in reducing the impact of cosmic variance on our results. In fact, we would expect the results from the 0.2 degree$^2$ search area available over the 5 CANDELS fields to be reasonably competitive with the 1.5 deg$^2$ UltraVISTA field (McCracken et al.\ 2012), as far as large-scale structure uncertainties are concerned. While the uncertainties on the 5 CANDELS fields are formally expected to be $\sim$1.6$\times$ larger,\footnote{Using the Trenti \& Stiavelli (2008) ``cosmic variance calculator,'' a $z=5.8\pm0.5$ redshift selection window for each sample, galaxies with an intrinsic volume density of $4\times10^{-4}$ Mpc$^{-3}$, and 5 independent 20$'$$\times$7.5$'$ CANDELS survey fields, we estimate a total uncertainty of 10\% on the volume density of galaxies over the entire CANDELS program from ``cosmic variance.'' Repeating this calculation over the 90$'$$\times$60$'$ survey area from UltraVISTA yields $\sim$7\%.} CANDELS usefully allows for a measurement of the field-to-field variations and hence of the uncertainties due to large-scale structure (which is especially valuable if factor-of-$\sim$1.8 variations in the volume density of bright $z\gtrsim6$ galaxies are present on square-degree scales: Bowler et al.\ 2015). Of course, very wide-area ground-based surveys can also make use of multiple search fields, both to estimate the uncertainties arising from large-scale structure and as a further control on cosmic variance (e.g., Ouchi et al.\ 2009; Willott et al.\ 2013; Bowler et al.\ 2014, 2015), and can also benefit from smaller shot noise uncertainties (if the goal is the extreme bright end of the LF). The purpose of the present work is to provide a comprehensive and self-consistent determination of the $UV$ LFs at $z\sim4$, $z\sim5$, $z\sim6$, $z\sim7$, $z\sim8$, and $z\sim10$ using essentially all of the deep, wide-area observations available from HST over five independent lines of sight on the sky and including the full data sets from the CANDELS, ERS, and HUDF09+12/XDF programs.
The deepest, highest-quality regions within the BoRG/HIPPIES program (relevant for selecting $z\sim8$ galaxies) are also considered. In deriving the present LFs, we use essentially the same procedures as previously utilized in Bouwens et al.\ (2007) and Bouwens et al.\ (2011). Great care is taken to minimize the impact of systematic biases on our results. Where possible, extensive use of deep ground-based observations over our search fields is made to ensure the best possible constraints on the redshifts of the sources. A full consideration of the available Spitzer/IRAC SEDS (Ashby et al.\ 2013), Spitzer/IRAC GOODS (Dickinson et al.\ 2004), and IRAC Ultra Deep Field 2010 (IUDF10: Labb{\'e} et al.\ 2013) observations over our fields is made in setting constraints on the LF at $z\sim10$ (see Oesch et al.\ 2014). For consistency with previous work, we find it convenient to quote results in terms of the luminosity $L_{z=3}^{*}$ that Steidel et al.\ (1999) derived at $z\sim3$, i.e., $M_{1700,AB}=-21.07$. We refer to the HST F435W, F606W, F600LP, F775W, F814W, F850LP, F098M, F105W, F125W, F140W, and F160W bands as $B_{435}$, $V_{606}$, $V_{600}$, $i_{775}$, $I_{814}$, $z_{850}$, $Y_{098}$, $Y_{105}$, $J_{125}$, $JH_{140}$, and $H_{160}$, respectively, for simplicity. Where necessary, we assume $\Omega_0 = 0.3$, $\Omega_{\Lambda} = 0.7$, and $H_0 = 70\,\textrm{km/s/Mpc}$. All magnitudes are in the AB system (Oke \& Gunn 1983). \begin{figure*} \epsscale{1.12} \plotone{zdist.eps} \caption{(\textit{left}) The expected redshift distributions for our $z\sim4$, $z\sim5$, $z\sim6$, $z\sim7$, $z\sim8$, and $z\sim10$ samples from the XDF using the Monte-Carlo simulations described in \S4.1. The mean redshifts for these samples are 3.8, 4.9, 5.9, 6.8, 7.9, and 10.4, respectively. These simulations demonstrate the effectiveness of our selection criteria in isolating galaxies within fixed redshift ranges. Each selection window is smoothed by a normal distribution with scatter $\sigma_z \sim 0.2$. (\textit{right}) The redshift distribution we recover for sources in our $z\sim4$, $z\sim5$, $z\sim6$, $z\sim7$, $z\sim8$, and $z\sim10$ samples using the EAZY photometric redshift code (with smoothing similar to that in the left panel). Our color-color selections segregate sources by redshift in a very similar manner to what one would find selecting sources according to their best-fit photometric redshift estimate (e.g., McLure et al.\ 2010; Finkelstein et al.\ 2012; Bradley et al.\ 2014).\label{fig:zdist}} \end{figure*} | \begin{figure} \epsscale{1.05} \plotone{abmagz2.eps} \caption{Current determinations of the faint-end slope to the $UV$ LF (\textit{solid red squares}) versus redshift. Also shown are the faint-end slope determinations from Treyer et al.\ (1998: \textit{black open circle}) at $z\sim0$, from Arnouts et al.\ (2005) at $z\sim0$-2 (\textit{blue crosses}), and from Reddy et al.\ (2009) at $z\sim2$-3 (\textit{green squares}). The solid line is a fit of the $z\sim4$-8 faint-end slope determinations to a line, with the 1$\sigma$ errors (gray area: calculated by marginalizing over the likelihood for all slopes and intercepts). The light gray region gives the range of expected faint-end slopes at $z>8.5$ assuming a linear dependence of $\alpha$ on redshift. The best-fit trend with redshift is $d\alpha/dz=-0.10\pm0.03$ (\S5.1). If we keep $M^*$ fixed, the trend is a more tightly constrained $d\alpha/dz=-0.10\pm0.02$ (\S5.1).
The overplotted arrows indicate the predicted change in the slope of the LF per unit redshift, $d\alpha/dz$, from the evolution of the halo mass function based on the conditional LF model from \S5.5 and from the Tacchella et al.\ (2013) model (see \S5.5.1). We observe strong evidence for a steepening of the $UV$ LF from $z\sim8$ to $z\sim4$ (\S5.1).\label{fig:slopeevol}} \end{figure} \begin{figure} \epsscale{1.0} \plotone{fixednumdens.ps} \caption{(\textit{upper}) The $UV$ luminosities we estimate for galaxies from our derived LFs taking galaxies at a fixed cumulative number density, i.e., $n(>L_{UV}) = 2\times10^{-4}$ Mpc$^{-3}$ (identical to the criterion employed by Papovich et al.\ 2011 and Smit et al.\ 2012: \S5.3). Interestingly enough, the best-fit evolution in $UV$ luminosity we estimate at a fixed cumulative number density (\textit{solid red line}) is quite similar to what Bouwens et al.\ (2011) estimated for the evolution in the characteristic magnitude $M^*$ (\textit{dotted black line}), before strong constraints were available on the bright end of the $UV$ LF at $z\gtrsim6$. (\textit{lower}) The star formation rate we estimate for galaxies from our derived LFs to the same cumulative number density as in the upper panel. Results from the literature are corrected to the same Salpeter IMF assumed for our own determinations. The $z\sim2$ results are based on the mid-IR and H$\alpha$ LF results (Reddy et al.\ 2008; Magnelli et al.\ 2011; Sobral et al.\ 2013). The best-fit $\textrm{SFR}$ versus redshift relation is shown with the black line and can be described as $(15.8 M_{\odot}\textrm{/yr})10^{-0.24(z-6)}$. By selecting galaxies that lie at a fixed cumulative number density at many distinct points in cosmic time, we can plausibly trace the evolution in the SFRs of individual galaxies with cosmic time.\label{fig:sfrevol}} \end{figure} \subsection{Empirical Fitting Formula for Interpolating and Extrapolating our LF Results to $z>8$} As in previous work (e.g., Bouwens et al.\ 2008), it is useful to take the present constraints on the $UV$ LF and condense them into a fitting formula for describing the evolution of the $UV$ LF with cosmic time. This enterprise has utility not only for extrapolating the present results to $z>8$, but also for interpolating between the present LF determinations at $z\sim4$, $z\sim5$, $z\sim6$, $z\sim7$, and $z\sim8$ when making use of a semi-empirical model. We will assume that each of the three Schechter parameters ($M^*$, $\alpha$, $\log_{10} \phi^*$) depends linearly on redshift when deriving this formula. The resultant fitting formula is as follows: \begin{eqnarray*} M_{UV} ^{*} =& (-20.95\pm0.10) + (0.01\pm0.06) (z - 6)\\ \phi^* =& (0.47_{-0.10}^{+0.11}) 10^{(-0.27\pm0.05)(z-6)}10^{-3} \textrm{Mpc}^{-3}\\ \alpha =& (-1.87\pm0.05) + (-0.10\pm0.03)(z-6) \label{eq:empfit} \end{eqnarray*} Constraints from Reddy \& Steidel (2009) on the faint-end slope of the LF at $z\sim3$ were included in deriving the above best-fit relations. As is evident from these relations, the evolution in the faint-end slope $\alpha$ is significant at $3.4\sigma$. The evolution in the normalization $\phi^*$ of the LF is significant at $5.4\sigma$. We find no significant evolution in the value of $M^*$.
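As a numerical illustration, the fitting formula above can be evaluated directly. The short sketch below computes the Schechter parameters at $z\sim4$-8 (central values only) and integrates the implied LF down to $-17$ mag; the magnitude-to-luminosity conversion assumes only the standard AB zero point of 48.60, which also gives $L_{z=3}^{*}\simeq1.2\times10^{29}\,{\rm erg\,s^{-1}\,Hz^{-1}}$ for $M_{1700,AB}=-21.07$. The resulting luminosity densities agree with those tabulated below (Table~\ref{tab:sfrdens}) to within $\sim$0.1 dex.
\begin{verbatim}
import numpy as np

def schechter_params(z):
    """Central values of the fitting formula above (uncertainties omitted)."""
    M_star   = -20.95 + 0.01 * (z - 6.0)
    phi_star = 0.47e-3 * 10.0 ** (-0.27 * (z - 6.0))     # Mpc^-3
    alpha    = -1.87 - 0.10 * (z - 6.0)
    return M_star, phi_star, alpha

def schechter_mag(M, M_star, phi_star, alpha):
    x = 10.0 ** (0.4 * (M_star - M))                     # L / L*
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def L_nu(M_AB):
    """Absolute AB magnitude -> luminosity in erg s^-1 Hz^-1 (zero point 48.60)."""
    d10 = 10.0 * 3.0857e18                               # 10 pc in cm
    return 4.0 * np.pi * d10**2 * 10.0 ** (-0.4 * (M_AB + 48.60))

# UV luminosity density integrated down to -17 mag (the limit used in S5.4)
M_grid = np.linspace(-24.0, -17.0, 4000)
dM = M_grid[1] - M_grid[0]
for z in (4, 5, 6, 7, 8):
    M_star, phi_star, alpha = schechter_params(z)
    rho_UV = np.sum(schechter_mag(M_grid, M_star, phi_star, alpha) * L_nu(M_grid)) * dM
    print(f"z = {z}:  M* = {M_star:6.2f}  phi* = {phi_star:.2e}  alpha = {alpha:5.2f}"
          f"  log10 rho_UV = {np.log10(rho_UV):5.2f}")
\end{verbatim}
The output makes the character of the evolution explicit: $\phi^*$ declines and $\alpha$ steepens steadily with redshift, while $M^*$ is essentially constant.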
Given the considerable degeneracies that exist between the Schechter parameters, it is also useful to derive the best-fit model if we fix the characteristic magnitude $M^*$ to some constant value and assume that all of the evolution in the effective shape of the $UV$ LF is due to evolution in the faint-end slope $\alpha$. For these assumptions, the resultant fitting formula is as follows: \begin{eqnarray*} M_{UV} ^{*} =& (-20.97\pm0.06) ~~~~ \textrm{(fixed)} \\ \phi^* =& (0.44\pm0.06) 10^{(-0.28\pm0.02)(z-6)}10^{-3} \textrm{Mpc}^{-3}\\ \alpha =& (-1.87\pm0.04) + (-0.100\pm0.018)(z-6) \label{eq:empfit2} \end{eqnarray*} From this fitting formula, we can see that the steepening in the effective shape of the $UV$ LF (as seen in Figure~\ref{fig:shapelf}) appears to be significant at 5.7$\sigma$. The apparent evolution in the faint-end slope $\alpha$ is quite significant. Even if we allow for large factor-of-2 errors in the contamination rate or sizeable ($\sim10$\%) uncertainties in the selection volume (as we consider in \S4.2), the formal evolution is still significant at $2.9\sigma$, while the apparent steepening of the $UV$ LF presented in Figure~\ref{fig:shapelf} remains significant at $5\sigma$ (instead of $5.7\sigma$). \subsection{Faint-End Slope Evolution} The best-fit faint-end slopes $\alpha$ we find in the present analysis are presented in Figure~\ref{fig:slopeevol}. The faint-end slope $\alpha$ we determine is equal to $-1.87\pm0.10$, $-2.06\pm0.13$, and $-2.02\pm0.23$ at $z\sim6$, $z\sim7$, and $z\sim8$, respectively. Faint-end slopes $\alpha$ of $\sim -2$ are very steep, and the integral flux from low luminosity sources can be very large since the luminosity density in this case is formally divergent. While clearly the $UV$ LF must cut off at some luminosity, the $UV$ light from galaxies fainter than $-$16 should dominate the overall luminosity density (Bouwens et al.\ 2012a). In combination with the results at somewhat lower redshifts, the present results strongly argue for increasingly steep faint-end slopes $\alpha$ at higher redshifts. Results from \S5.1 suggest that this evolution is significant at $3.1\sigma$ if we consider just the formal evolution in the faint-end slope $\alpha$ itself. The evolution is significant at $5.7\sigma$ if we consider the evolution in the shape of the $UV$ LF (Figure~\ref{fig:shapelf}). \begin{deluxetable*}{ccccc} \tablewidth{13cm} \tabletypesize{\footnotesize} \tablecaption{$UV$ Luminosity Densities and Star Formation Rate Densities to $-17.0$ AB mag (0.03 $L_{z=3} ^{*}$: see \S5.4).\tablenotemark{a}\label{tab:sfrdens}} \tablehead{ \colhead{} & \colhead{} & \colhead{$\textrm{log}_{10} \mathcal{L}$} & \multicolumn{2}{c}{$\textrm{log}_{10}$ SFR density} \\ \colhead{Dropout} & \colhead{} & \colhead{(ergs s$^{-1}$} & \multicolumn{2}{c}{($M_{\odot}$ Mpc$^{-3}$ yr$^{-1}$)} \\ \colhead{Sample} & \colhead{$<z>$} & \colhead{Hz$^{-1}$ Mpc$^{-3}$)} & \colhead{Dust Uncorrected} & \colhead{Dust Corrected}} \startdata $B$ & 3.8 & 26.52$\pm$0.06 & $-1.38\pm$0.06 & $-1.00\pm0.06$ \\ $V$ & 4.9 & 26.30$\pm$0.06 & $-1.60\pm$0.06 & $-1.26\pm0.06$ \\ $i$ & 5.9 & 26.10$\pm$0.06 & $-1.80\pm$0.06 & $-1.55\pm0.06$ \\ $z$ & 6.8 & 25.98$\pm$0.06 & $-1.92\pm$0.06 & $-1.69\pm0.06$ \\ $Y$ & 7.9 & 25.67$\pm$0.06 & $-2.23\pm$0.07 & $-2.08\pm0.07$ \\ $J$ & 10.4 & 24.62$_{-0.45}^{+0.36}$ & $-3.28$$_{-0.45}^{+0.36}$ & $-3.13$$_{-0.45}^{+0.36}$ \enddata \tablenotetext{a}{Integrated down to 0.05 $L_{z=3}^{*}$. 
Based upon LF parameters in Table 2 of Bouwens et al.\ (2011b: see also Bouwens et al.\ 2007) (see \S5.4). The SFR density estimates assume $\gtrsim100$ Myr constant SFR and a Salpeter IMF (e.g., Madau et al.\ 1998). Conversion to a Chabrier (2003) IMF would result in a factor of $\sim$1.8 (0.25 dex) decrease in the SFR density estimates given here.} \end{deluxetable*} \begin{figure*} \epsscale{1.05} \plotone{sfz.eps} \caption{Updated determinations of the derived SFR (\textit{left axis}) and $UV$ luminosity (\textit{right axis}) densities versus redshift (\S5.4). The left axis gives the SFR densities we would infer from the measured luminosity densities, assuming the Madau et al.\ (1998) conversion factor relevant for star-forming galaxies with ages of $\gtrsim10^8$ yr (see also Kennicutt 1998). The right axis gives the $UV$ luminosities we infer integrating the present and published LFs to a faint-end limit of $-17$ mag (0.03 $L_{z=3}^{*}$) -- which is the approximate limit we can probe to $z\sim8$ in our deepest data set. The upper and lower set of points (\textit{red and blue circles, respectively}) and shaded regions show the SFR and $UV$ luminosity densities corrected and uncorrected for the effects of dust extinction using the observed $UV$ slopes $\beta$ (from Bouwens et al.\ 2014a) and the IRX-$\beta$ relationship (Meurer et al.\ 1999). Also shown are the SFR densities at $z\sim2-3$ from Reddy et al.\ (2009: \textit{green crosses}), at $z\sim0$-2 from Schiminovich et al.\ (2005: \textit{black hexagons}), at $z\sim7$-8 from McLure et al.\ (2013: \textit{cyan circles}), and at $z\sim9$-10 from Ellis et al.\ (2013: \textit{cyan circles}), from CLASH (Zheng et al.\ 2012; Coe et al.\ 2013; Bouwens et al.\ 2014b: \textit{light blue circles}), and Oesch et al.\ (2013b, 2014: \textit{blue open circles}), as well as the likely contribution from IR bright sources at $z\sim0.5$-2 (Magnelli et al.\ 2009, 2011; Daddi et al.\ 2009: \textit{dark red shaded region}). The $z\sim9$-11 constraints on the $UV$ luminosity density have been adjusted upwards to a limiting magnitude of $-17.0$ mag assuming a faint-end slope $\alpha$ of $-2.0$ (consistent with our constraints on $\alpha$ at both $z\sim7$ and at $z\sim8$).\label{fig:sfz}} \end{figure*} While consistent with previous results, the present results suggest slightly steeper faint-end slopes $\alpha$ than reported in Bouwens et al.\ (2011), McLure et al.\ (2013), and Schenker et al.\ (2013) at $z\sim7$. These steeper faint-end slopes are a direct consequence of the somewhat brighter values for $M^*$ that we find in the current study and the trade-off between fainter values for $M^*$ and steeper faint-end slopes $\alpha$. These results only serve to strengthen earlier findings suggesting that the faint-end slope $\alpha$ is steeper at $z\sim7$ (and likely $z\sim8$) than it is at $z\sim3$. Similar conclusions have been drawn from follow-up work on gamma-ray burst hosts (Robertson et al.\ 2012; Trenti et al.\ 2012b; Tanvir et al.\ 2012; Trenti et al.\ 2013). \subsection{SFR Evolution in Individual Galaxies} Given the apparent evolution of the $UV$ LF, one might ask how rapidly the $UV$ luminosity or SFR of an individual galaxy likely increases with cosmic time. Fortunately, we can make progress on this question using a number density-matching procedure,\footnote{Cumulative number-density matching can be a powerful way of following the evolution of individual galaxies with cosmic time.
This is due to the fact that galaxies within a given volume of the universe largely grow in a self-similar fashion, so that $n$th brightest or most massive galaxy at some point in cosmic time generally maintains its ranking in terms of brightness or mass at some later point in cosmic time (van Dokkum et al.\ 2010; Papovich et al.\ 2011; Lundgren et al.\ 2014).} by ordering galaxies in terms of their observed $UV$ luminosities and following the evolution of those sources with a fixed cumulative number density. For convenience, we adopt the same integrated number density $2\times10^{-4}$ Mpc$^{-3}$ (the approximate cumulative number density for $L^*$ galaxies) for this question as Papovich et al.\ (2011: see also Lundgren et al.\ 2014) had previously considered in quantifying the growth in the SFR of an individual galaxy with cosmic time. Dust corrections are performed using the measured $\beta$'s for galaxies at $z\sim4$-8 (Bouwens et al.\ 2014a) and the well-known IRX-$\beta$ relationship from Meurer et al.\ (1999). The results are presented in Figure~\ref{fig:sfrevol}. The $UV$ luminosity at a fixed cumulative number density evolves as $M_{UV}(z) = -20.40 + 0.37(z-6)$. Interestingly enough, the evolution in the $UV$ luminosity we infer for galaxies at some fixed cumulative number density is almost identical to what Bouwens et al.\ (2011) had previously inferred for the evolution in the characteristic magnitude $M^*$ with redshift (i.e., $-20.29 + 0.33(z-6)$: \textit{dotted black line}). Upon reflection, it is clear why this must be so. For pure luminosity evolution, one would expect both the characteristic magnitude $M^*$ of the $UV$ LF and the $UV$ luminosity of individual galaxies to evolve in exactly the same manner. Even though we now see that such a scenario does not work for the brightest, rarest galaxies, one can nevertheless roughly parameterize the evolution of fainter galaxies assuming pure luminosity evolution. For these galaxies, the Bouwens et al.\ (2008, 2011) fitting formula for $M^*$ evolution works remarkably well in describing their steadily-increasing $UV$ luminosities. In this way, the modeling of the evolution of the LF using $M^*$ evolution by Bouwens et al.\ (2008, 2011) -- a treatment built on by Stark et al.\ (2009) -- effectively foreshadowed later work using a sophisticated cumulative number density-matching formalism to trace the star-formation history of individual systems at $z>2$ (Papovich et al.\ 2011; Lundgren et al.\ 2014). The SFR for a galaxy in this number density-matched scenario evolves as $\textrm{SFR} = (16.2 M_{\odot}\textrm{/yr})10^{-0.24(z-6)}$. The evolution in the SFR is remarkably similar to the relations found by Papovich et al.\ (2011) and Smit et al.\ (2012). Not surprisingly, the best-fit trends for galaxies with $L^*$-like volume densities (i.e., at $\sim$$2\times10^{-4}$ Mpc$^{-3}$) show little dependence on the parameterization of the Schechter function and whether one fits the evolution through a change in $M^*$ or a change in $\phi^*$ and $\alpha$. \subsection{Luminosity and Star Formation Rate Densities} We will take advantage of our new LF determinations at $z\sim4$-10 to provide updated measurements of the $UV$ luminosity density at $z\sim4$-10. As in previous work (Bouwens et al.\ 2007, 2008, 2011; Oesch et al.\ 2012), we only derive the $UV$ luminosity density to the limiting luminosity probed by the current study at $z\sim8$, i.e., $-17$ mag (0.03 $L_{z=3}^{*}$), to keep these determinations as empirical as possible. 
Since this is slightly fainter than what one can probe in searches for galaxies at $z\sim10$, we make a slight correction to our $z\sim9$ and $z\sim 10$ results. The best-fit faint-end slope $\alpha=-2$ we find at $z\sim8$ is assumed in this correction. The use of even steeper faint-end slopes (i.e., $-2.3$) as implied by our LF fitting formula in \S5.1 would yield similar results, only increasing the luminosity density by $\sim$0.015 dex. \begin{figure*} \epsscale{0.8} \plotone{theorylf.ps} \caption{Comparison of the observed $UV$ LFs with the simulation results from Jaacks et al.\ (2012: \textit{left panel}) and the predictions of a simple conditional luminosity function (CLF) model based on halo growth (Bouwens et al.\ 2008: \textit{right panel}). The Jaacks et al.\ (2012) curves are for $z\sim8$, $z\sim7$, and $z\sim6$. As described in \S5.5, the Jaacks et al.\ (2012) results show the predictions of a sophisticated cosmological hydrodynamical simulation for the LF, while the CLF model shows the predicted evolution based on the expected evolution of the halo mass function and a mass-to-light ratio that evolves as $(1+z)^{-1.5}$ (see Appendix I). While the Jaacks et al.\ (2012) model overpredicts the observed steepening of the $UV$ LF towards high redshift ($d\alpha/dz\sim-0.17$ vs. $d\alpha/dz=-0.10 \pm 0.02$), the simple conditional LF model considered here predicts the observed steepening quite well ($d\alpha/dz\sim-0.12$ vs. $d\alpha/dz=-0.10 \pm 0.02$). The luminosity per unit halo mass for lower-mass galaxies may increase more rapidly towards high redshift than for higher-mass galaxies. Our CLF model predicts a cut-off in the $UV$ LF at $z>6$ brightward of $-23$ mag, in apparent agreement with the observations.\label{fig:theorylf}} \end{figure*} In combination with our estimates of the luminosity density, we also take this opportunity to provide updated measurements of the star formation rate density at $z\sim4$-10. In making these estimates of the SFR density at $z\sim4$-10, we correct for dust extinction using the well-known IRX-$\beta$ relationship (Meurer et al.\ 1999) combined with the latest measurements of $\beta$ from Bouwens et al.\ (2014a). As before, we assume that the extinction $A_{UV}$ at rest-frame $UV$ wavelengths is $4.43+1.99\beta$, with an intrinsic scatter of 0.35 in the $\beta$ distribution. This is consistent with what has been found for bright galaxies at $z\sim4$-5 (Bouwens et al.\ 2012b; Castellano et al.\ 2012). The new $\beta$ determinations from Bouwens et al.\ (2014a) utilize large $>$4000-source samples constructed from the XDF, HUDF09-1, HUDF09-2, ERS, CANDELS-GN, and CANDELS-GS data sets and were constructed to provide much more accurate and robust measurements of the $\beta$ distribution than has been provided in the past. The mean dust extinction we estimate based on the Meurer et al.\ (1999) law for the observed $\beta$ distribution is 2.4, 2.2, 1.8, 1.66, 1.4, and 1.4 (in units of $L_{IR}/L_{UV}+1$ where $L_{IR}$ and $L_{UV}$ are the bolometric and UV luminosities of a galaxy, respectively) for the observed galaxies at $z\sim4$, $z\sim5$, $z\sim6$, $z\sim7$, $z\sim8$, and $z\sim10$, respectively. 
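To make the origin of these attenuation factors explicit, the short sketch below averages the Meurer et al.\ (1999) attenuation $A_{UV} = 4.43+1.99\beta$ over a Gaussian $\beta$ distribution with the adopted 0.35 intrinsic scatter, setting negative attenuations to zero. The mean $\beta$ values used are illustrative only; the measured $\beta$ distributions of Bouwens et al.\ (2014a) are not reproduced here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def mean_attenuation_factor(beta_mean, scatter=0.35, n=200000):
    """Mean of 10^(0.4 A_UV) for A_UV = 4.43 + 1.99 beta (Meurer et al. 1999),
    averaging over a Gaussian beta distribution; negative A_UV is set to zero."""
    beta = rng.normal(beta_mean, scatter, n)
    A_UV = np.clip(4.43 + 1.99 * beta, 0.0, None)
    return np.mean(10.0 ** (0.4 * A_UV))

# The mean beta values below are illustrative only (the measured
# distributions of Bouwens et al. 2014a are not reproduced here).
for beta_mean in (-1.9, -2.0, -2.2):
    print(f"<beta> = {beta_mean:5.2f}  ->  <10^(0.4 A_UV)> = "
          f"{mean_attenuation_factor(beta_mean):4.2f}")
\end{verbatim}
Mean $\beta$ values in the range $-1.9$ to $-2.2$ yield average attenuation factors of roughly $1.4$-$2.3$, i.e., of the same order as the values quoted above.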
The dust-corrected $UV$ luminosity densities are then converted into SFR densities using the canonical Madau et al.\ (1998) and Kennicutt et al.\ (1998) relation: \begin{equation} L_{UV} = \left( \frac{\textrm{SFR}}{M_{\odot} \textrm{yr}^{-1}} \right) 8.0 \times 10^{27} \textrm{ergs}\, \textrm{s}^{-1}\, \textrm{Hz}^{-1}\label{eq:mad} \end{equation} where a $0.1$-$125\,M_{\odot}$ Salpeter IMF and a constant star formation rate for ages of $\gtrsim100$ Myr are assumed. In light of the very high EWs of the $H\alpha$ and [OIII] emission lines in $z\sim4$-8 galaxies (Schaerer \& de Barros 2009; Shim et al.\ 2011; Stark et al.\ 2013; Schenker et al.\ 2013; Gonz{\'a}lez et al.\ 2012; Labb{\'e} et al.\ 2013; Smit et al.\ 2014; Gonz{\'a}lez et al.\ 2014), it is probable that the adopted conversion factors underestimate the actual SFRs (perhaps by as much as a factor of 2: Castellano et al.\ 2014). Our updated results on both the luminosity density and star-formation rate density are presented in Table~\ref{tab:sfrdens} and Figure~\ref{fig:sfz}. As before, we have included select results from the literature (Schiminovich et al.\ 2005; Reddy \& Steidel 2009) to show the trends at $z<4$, as well as presenting recent determinations of the star formation rate density at $z\sim0.5$-2.0 from IR bright sources (Daddi et al.\ 2009; Magnelli et al.\ 2009, 2011). We also include select $z\geq6$ results from the literature for comparison with previous results (Zheng et al.\ 2012; Coe et al.\ 2013; McLure et al.\ 2013; Ellis et al.\ 2013; Bouwens et al.\ 2014b). We observe very good agreement with previous results over the full range in redshift $z\sim4$-10. The most noteworthy changes occur at $z\sim5$ where the volume density we find is higher than estimated previously (Bouwens et al.\ 2007) and better matches the evolutionary trend connecting the $z\sim4$ and $z\sim6$ results. The improved robustness of the present $z\sim5$ results is likely a direct consequence of the significantly broader wavelength baseline available to select $z\sim5$ galaxies over the $z\sim4.5$-5.5 volume than was available in the earlier purely optical/ACS data set (e.g., see discussion in Duncan et al.\ 2014). \subsection{Comparison with Theoretical Models} It is interesting to compare the current observational results with what is found from large hydrodynamical simulations and also from simple theoretical models. Such comparisons are useful for interpreting the present results and also for ascertaining whether any of our observational results are unexpected or challenge the current paradigm in any way. We first describe the models and then in the following subsections we discuss comparisons with our new LF results. The first set of cosmological hydrodynamical simulations we consider are those from Jaacks et al.\ (2012). These results provide a very detailed investigation as to how the shape of the $UV$ LF might evolve with cosmic time. Jaacks et al.\ (2012) make use of some large simulations done on a modified version of the GADGET-3 code (Springel et al.\ 2005) that includes cooling by H+He+metal line cooling, heating by a modified Haardt \& Madau (1996) spectrum (Katz et al.\ 1996), an Eisenstein \& Hu (1999) initial power spectrum, ``Pressure model'' star formation (Schaye \& Dalla Vecchia 2008), supernovae feedback, and multiple-component variable velocity wind model (Choi \& Nagamine 2011). Simulations are done with a range of box sizes from 10 $h^{-1}$ Mpc to 100 $h^{-1}$ Mpc ($2\times 600^3$ or $3\times 400^3$ particles). 
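Returning briefly to Eq.~(\ref{eq:mad}), the conversion to SFR densities amounts to subtracting $\log_{10}(8.0\times10^{27})\simeq27.90$ from the logarithmic luminosity densities; a minimal check against the dust-uncorrected column of Table~\ref{tab:sfrdens} is given by the following sketch.
\begin{verbatim}
import numpy as np

K_UV = 8.0e27     # erg s^-1 Hz^-1 per (Msun/yr); Salpeter IMF, ages >~ 100 Myr

# log10 UV luminosity densities (dust-uncorrected) from Table tab:sfrdens
log_rho_UV = {3.8: 26.52, 4.9: 26.30, 5.9: 26.10, 6.8: 25.98, 7.9: 25.67}

for z, lrho in log_rho_UV.items():
    log_sfrd = lrho - np.log10(K_UV)        # Msun yr^-1 Mpc^-3
    print(f"<z> = {z:4.1f}:  log10 SFRD (uncorrected) = {log_sfrd:6.2f}")
\end{verbatim}
The same conversion applied to the dust-corrected luminosity densities gives the corrected SFR densities, and adopting a Chabrier IMF would lower all values by $\sim$0.25 dex, as noted in the table.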
As an alternative to the results from large hydrodynamical simulations, we make use of a much more simple-minded theoretical model using a conditional luminosity function (CLF: Yang et al.\ 2003; Cooray \& Milosavljevi{\'c} 2005) formalism where one derives the LF from the halo mass function using some mass-to-light kernel. We adopt the same CLF model as Bouwens et al.\ (2008) had previously used in their analysis of the $UV$ LF, but have modified the model to include a faster evolution in the M/L of halos, i.e., $\propto (1+z)^{-1.5}$. This evolution better reproduces changes in the observed $UV$ LF from $z\sim8$ to $z\sim4$. The $(1+z)^{-1.5}$ factor also matches the expected evolution of the dynamical time scale. A detailed description of this model is provided in Appendix I. The advantage of this approach is that it can give us insight into the extent to which the evolution in the $UV$ LF is driven by the growth of dark matter halos themselves and to what extent the evolution arises from changes in the mass-to-light ratio of those halos and hence gas dynamical processes (e.g., gas cooling or SFR time scales). Finally, we consider the predictions by Tacchella et al.\ (2013), which are based on a minimal model that also links the evolution of the UV galaxy luminosity function to that of the dark-matter halo mass function. The model is constructed by assuming that a halo of mass $M_h$ at redshift $z$ has a stellar mass $M_*=\epsilon(M_h)\,M_h$, of which a small fraction (10\%) is formed at the halo assembly time $z_a$, while the remainder is formed at a constant rate from $z_a$ to $z$. Since halos have shorter assembly times as redshift increases, the UV light to halo mass ratio increases with redshift. $\epsilon(M_h)$ describes the efficiency of accretion in forming stars; Tacchella et al.\ (2013) calibrate it at $z=4$ via abundance matching. Before conducting detailed comparisons of the observational results with the above theoretical models, we first present a comparison of the binned LF results with the first two theoretical models to illustrate the broad overall agreement between the two sets of results (Figure~\ref{fig:theorylf}). \subsubsection{Expected Evolution of the Faint-End Slope} The present observational results provide compelling evidence for significant evolution in the effective slope of the $UV$ LF (Figure~\ref{fig:shapelf}). While some of the evolution in the effective slope of the $UV$ LF may be due to a change in the characteristic magnitude $M^*$, most of the evolution appears to result from an evolving faint-end slope $\alpha$. In comparing the present observational results with theory, let us assume that we can effectively parameterize the entire shape evolution of the LF using the faint-end slope $\alpha$ (and because we do not find convincing evidence for evolution in $M^*$). This assumption is useful, since it distills the shape information present in the moderately-degenerate $M^*$+$\alpha$ combination into a single parameter, resulting in a smaller formal error on the evolution. As shown in \S5.1, we derive $d\alpha/dz=-0.10 \pm 0.02$ from the observations, if we force $M^*$ to be constant in our fits. Remarkably enough, our simple-minded conditional LF model (Appendix I) is in excellent agreement with our observational results, predicting that the faint-end slope $\alpha$ of the LF evolves as $d\alpha/dz \sim -0.12$. This compares with $d\alpha/dz \sim -0.17$ predicted from the Jaacks et al.\ (2012) simulation results.
Finally, the Tacchella et al.\ (2013) model predicts an evolutionary trend $d\alpha/dz$ of $-0.08$. Each of these predictions is very similar to the observed evolution (see Figure~\ref{fig:slopeevol}) of $d\alpha/dz = -0.10 \pm 0.02$. \subsubsection{Expected Evolution in the Characteristic Luminosity?} Our discovery of modest numbers of highly luminous galaxies in each of our high redshift samples, even at $z\sim10$, provides strong evidence against a rapid evolution in the luminosity where the $UV$ LF cuts off. Over the redshift range $z\sim4$ to $z\sim7$, we find no significant evolution in $M^*$ (see Table~\ref{tab:lfparm}). Over the slightly wider redshift range $z\sim4$ to $z\sim8$, our best-fit estimate for the evolution in the characteristic magnitude $M^*$ is just $dM^*/dz\sim 0.01\pm0.06$ (see the fitting formula in \S5.1) or just $dM^*\sim0.25 \pm0.37$ from $z\sim8$ to $z\sim4$. Given the observed luminosity of the brightest $z\sim10$ candidates found over the CANDELS fields (Oesch et al.\ 2014), i.e., $-21.4$ mag, it seems unlikely that the bright-end cut-off $M^*$ is much fainter than $M^*\sim-20$ (limiting the evolution in $M^*$ to $\lesssim\,$1 mag over the redshift range $z\sim4$-10). \begin{figure} \epsscale{0.8} \plotone{abmagz.ps} \caption{Comparison of the observed evolution in the characteristic magnitude $M^*$ with that expected from a simple CLF model based on the growth in the halo mass function (Bouwens et al.\ 2008: Appendix I). Shown separately (and horizontally offset for clarity) are our characteristic magnitude $M^*$ determinations (Table~\ref{tab:lfparm}) for all of the fields in our analysis (\textit{solid red circles}), all of the fields in our analysis but CANDELS-EGS (\textit{open red circles}), and the XDF+HUDF09-Ps+ERS+CANDELS-GN+GS fields (\textit{open red squares}). The green cross is the characteristic magnitude determination at $z\sim3$ from Reddy \& Steidel (2009). The gray dashed line shows the expected evolution in $M^*$ for simple-minded CLF models that do not include a cut-off at the bright end of the $UV$ LF (renormalizing the mass-to-light ratio to match $M^*$ at $z\sim4$). The black dotted and blue solid lines show the expected evolution in $M^*$ for CLF models where the mass-to-light ratio of halos is constant in time or evolves as the dynamical time scale, i.e., as $(1+z)^{-3/2}$ (\textit{blue line}). At sufficiently high redshift, it seems clear that we would expect the characteristic magnitude $M^*$ to be fainter due to evolution in the halo mass function. In practice, the evolution in the characteristic magnitude $M^*$ may be more limited (1) if the bright-end cut-off to the $UV$ LF (above some mass threshold) is instead set by a physical process (e.g., dust obscuration or quenching) and (2) if halos at higher redshifts have systematically lower mass-to-light ratios.\label{fig:theoryms}} \end{figure} This implies that whatever physical mechanism imposes a cut-off at the bright end of $z\gtrsim 4$ $UV$ LFs, this cut-off luminosity does not vary dramatically with redshift, at least out to $z\sim7$.
Indeed, for the three mechanisms discussed by Bouwens et al.\ (2008) to impose a cut-off at the bright end of the $UV$ LF, i.e., heating from an AGN (Croton et al.\ 2005), the inefficiency of gas cooling for high-mass halos (e.g., Binney 1977; Rees \& Ostriker 1977; Silk 1977), and the increasing importance of dust attenuation for the most luminous and likely most massive galaxies (Bouwens et al.\ 2009; Pannella et al.\ 2009; Reddy et al.\ 2010; Bouwens et al.\ 2012; Finkelstein et al.\ 2012; Bouwens et al.\ 2014a), there is no obvious reason any of these mechanisms should depend significantly on redshift or cosmic time. Indeed, the results of the simulations or theoretical models bear out these expectations. The best-fit characteristic magnitudes $M^*$ derived from the Jaacks et al.\ (2012) simulations show very little evolution with cosmic time. Jaacks et al.\ (2012) derive $-21.15$, $-20.85$, and $-21.0$ for their $z\sim6$, $z\sim7$, and $z\sim8$ LFs, respectively. Simple fits to our CLF results also show only limited evolution in the characteristic magnitude $M^*$ with redshift, even out to $z\sim10$. The characteristic magnitudes we derive from fitting the model LFs at $z\sim4$-10 (minimizing the square of the logarithmic residuals) are presented in Figure~\ref{fig:theoryms} for comparison with our observational determinations of this same quantity (Table~\ref{tab:lfparm}). Both a model assuming fixed mass-to-light ratios for the halos (\textit{black line}) and a model with mass-to-light ratios evolving as the dynamical time ($(1+z)^{-3/2}$: \textit{blue line}) are considered. It is useful to contrast these results with a CLF model where no cut-off is imposed at the bright-end of the $UV$ LF and where there is no evolution in the mass-to-light ratio of halos. For the model described in Appendix I, this could be achieved by replacing the $(1+(M/m_c))$ term in Eq.~\ref{eq:kernel} by unity and renormalizing the mass-to-light ratio so that $M^*$ for the model LF is equal to $-21$ at $z\sim4$. The evolution in the characteristic magnitude $M^*$ we would predict for this model is shown with the dashed gray line in Figure~\ref{fig:theoryms}. At sufficiently high redshift, it seems clear from the gray line that we would expect the characteristic magnitude $M^*$ to be fainter due to evolution in the halo mass function. In practice, however, the evolution in the characteristic magnitude $M^*$ may be more limited if the bright-end cut-off to the $UV$ LF is instead set by a physical process that becomes dominant at some mass threshold (e.g., dust obscuration or quenching), as the dotted black line in Figure~\ref{fig:theoryms} illustrates. Even less evolution would be expected in the characteristic magnitude $M^*$ with cosmic time if halos at higher redshifts had systematically lower mass-to-light ratios, as illustrated by the blue line in this same figure. In reality, of course, we should emphasize that almost all LFs predicted by simulations or CLF models can only be approximately modelled using a Schechter-function-like parameterization, and therefore there can be considerable ambiguity in actually extracting the Schechter parameters from the model results and hence representing their evolution with cosmic time. | 14 | 3 | 1403.4295 |
1403 | 1403.4589_arXiv.txt | We revisit large field inflation models with modulations in light of the recent discovery of the primordial B-mode polarization by the BICEP2 experiment, which, when combined with the {\it Planck} + WP + highL data, gives a strong hint for additional suppression of the CMB temperature fluctuations at small scales. Such a suppression can be explained by a running spectral index. In fact, it was pointed out by two of the present authors (TK and FT) that the existence of both tensor mode perturbations and a sizable running of the spectral index is a natural outcome of large field inflation models with modulations such as axion monodromy inflation. We find that this holds also in the recently proposed multi-natural inflation, in which the inflaton potential consists of multiple sinusoidal functions and therefore the modulations are a built-in feature. | The BICEP2 experiment detected the primordial B-mode polarization of the cosmic microwave background (CMB) with very high significance~\cite{BICEP2}, giving a very strong case for inflation~\cite{Guth:1980zm,Linde:1981mu}. The inflation scale is determined to be \bea \label{B-Hr} H_{\rm inf} &\simeq& 1.0 \times \GEV{14} \lrfp{r}{0.16}{\frac{1}{2}},\\ r &=& 0.20^{+0.07}_{-0.05} ~~(68\%{\rm CL}), \eea where $H_{\rm inf}$ is the Hubble parameter during inflation, and $r$ denotes the tensor-to-scalar ratio. The preferred range of $r$ is modified to $r = 0.16^{+0.06}_{-0.05}$, after subtracting the best available estimate for foreground dust. The BICEP2 result strongly suggests that large-field inflation occurred, and by far the simplest model is quadratic chaotic inflation~\cite{Linde:1983gd}.\footnote{ For various large-field inflation models and their concrete realization in the standard model as well as supergravity and superstring theory, see e.g.~\cite{Freese:1990ni,Murayama:1992ua,Kawasaki:2000yn,Kallosh:2007ig,Silverstein:2008sg,McAllister:2008hb,Kaloper:2008fb,Takahashi:2010ky, Nakayama:2010kt,Nakayama:2010sk,Harigaya:2012pg,Croon:2013ana,Nakayama:2013jka,Nakayama:2013nya,Czerny:2014wza,Czerny:2014xja,Nakayama:2014-HCI,Murayama:2014saa}} The discovery of the tensor mode perturbations is of significant importance not only for cosmology but also for particle physics, because the suggested inflation energy scale is close to the GUT scale. If the primordial B-mode polarization is measured with better accuracy by the Planck satellite and ground-based experiments, it will pin down the underlying inflation model, providing invaluable information on UV physics such as string theory. The BICEP2 data, when combined with the {\it Planck}+WP+highL data, gives a strong hint for some additional suppression of the CMB temperature fluctuations at small scales~\cite{BICEP2}. This is because the large tensor mode perturbations also contribute to the CMB temperature fluctuations at large scales, which causes a tension in the relative size of the scalar density perturbations at large and small scales. The suppression of the density perturbations at small scales can be realized by e.g. (negative) running of the spectral index, hot dark matter, etc.\footnote{ Before the BICEP2 results, there was a hint for the presence of hot dark matter, such as sterile neutrinos~\cite{Wyman:2013lza,Hamann:2013iba, Battye:2013xqa}. Non-thermally produced axions are also an interesting candidate~\cite{Jeong:2013oza,Higaki:2014zua}. } In this letter we focus on the running spectral index as a solution to this tension.
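Before turning to the running, note that the numerical content of Eq.~(\ref{B-Hr}) follows from the tensor power spectrum $\mathcal{P}_T = 2H_{\rm inf}^2/(\pi^2 M_{\rm pl}^2)$ together with $r = \mathcal{P}_T/\mathcal{P}_{\cal R}$. A minimal sketch is given below; it assumes a reduced Planck mass $M_{\rm pl}\simeq2.4\times10^{18}$ GeV and the Planck scalar amplitude $\mathcal{P}_{\cal R}\simeq2.2\times10^{-9}$ at the pivot scale, inputs not quoted explicitly in the text above.
\begin{verbatim}
import numpy as np

M_PL = 2.435e18    # reduced Planck mass in GeV
A_S  = 2.2e-9      # scalar amplitude P_R at the pivot scale (assumed input)

def H_inf(r):
    """H during inflation from P_T = 2 H^2/(pi^2 M_pl^2) and r = P_T/P_R."""
    return np.pi * M_PL * np.sqrt(r * A_S / 2.0)

for r in (0.16, 0.20):
    print(f"r = {r:.2f}  ->  H_inf ~ {H_inf(r):.2e} GeV")
\end{verbatim}
For $r=0.16$ this reproduces $H_{\rm inf}\simeq1.0\times10^{14}$ GeV, as in Eq.~(\ref{B-Hr}).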
The spectral index of the curvature power spectrum ${\cal P_R}$ is defined by \bea n_s(k)-1 &=& \frac{d \ln {\cal P_R}(k)}{d \ln k}, \eea and the running of the spectral index is obtained as the differentiation of $n_s$ with respect to $\ln k$. The preferred range of the running spectral index and its statistical significance are not given in \cite{BICEP2}. Since the combination of the {\it Planck}+WP+highL data constrains the running as~\cite{Ade:2013zuv} $d n_s/d \ln k = - 0.022 \pm 0.010~(68\% {\rm CL})$, we expect that, once the BICEP2 data is combined, non-zero values of $d n_s/d\ln k \approx -0.02 \sim -0.03$ will be suggested with strong significance. As a reference value, we will assume that the running is approximately given by $d n_s/d\ln k \sim -0.025$ over the observed cosmological scales, but the precise value is not relevant for our purpose.\footnote{Note that both the spectral index and its running are usually evaluated at a pivot scale, and the running is assumed to be scale-independent in the MCMC analysis of the CMB data~\cite{Ade:2013zuv}. On the other hand, there is no firm ground to assume that they are completely scale-independent, and in fact, they do depend on scales in many scenarios. Therefore, the comparison between theory and observation must be done carefully, and a dedicated analysis to each theoretical model would be necessary to deduce some definite conclusions. We also note that the joint analyses of the $Planck$ and BICEP2 datasets have been performed in recent works such as~\cite{Abazajian:2014tqa}, see Eq.~(\ref{eq22}).} In a single-field slow-roll inflation model with a featureless potential, the running of the spectral index is of second order in the slow-roll parameters, and therefore of order $10^{-3}$. Thus, it is a challenge to explain a running as large as $d n_s/d\ln k \sim - 0.025$. For various proposals on this topic, see e.g.~Refs.~\cite{Chung:2003iu,Cline:2006db,Espinosa:2006pb,Joy:2007na,Joy:2008qd,Kawasaki:2003zv,Yamaguchi:2003fp,Easther:2006tv}. In particular, \cite{Easther:2006tv} pointed out that a large negative running that is more or less constant over the observed cosmological scales would quickly terminate inflation within $N \lesssim 30$ in terms of the e-folding number. However, it should be noted that such a discussion is based on the assumption that the inflaton potential is expanded in the Taylor series of the inflaton field with finite truncation. In fact, it is possible to realize the running spectral index in simple single-field inflation models. In Ref.~\cite{Kobayashi:2010pz}, two of the present authors (TK and FT) showed that a sizable running spectral index can be realized without significant impact on the overall behavior of the inflaton if there are small modulations on the inflaton potential. (See also~\cite{Feng:2003mk} for related work.) Here, in order for the inflaton dynamics to be locally affected by the modulations, the inflaton field excursion must be relatively large as in the large-field inflation. Therefore, both the tensor mode and the running spectral index are a natural outcome of the large field inflation with modulations. Examples such as monomial inflaton potentials ($V = \lambda \phi^n$) with superimposed periodic oscillations were studied in~\cite{Kobayashi:2010pz}. In this letter we revisit the large-field inflation with modulations in light of the recent discovery of the primordial B-mode polarization by the BICEP2. 
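Before moving on, the definitions at the start of this passage can be made concrete with a small numerical example. The toy spectrum below carries a weak log-periodic modulation whose amplitude and frequency are arbitrary choices (not fitted values of any model discussed here); the point is only that a modulation of half a percent in ${\cal P_R}$ can produce $|dn_s/d\ln k|\approx 0.02$ while shifting $n_s$ by only about $0.01$.
\begin{verbatim}
import numpy as np

k0  = 0.05                                    # pivot scale [Mpc^-1]
lnk = np.linspace(np.log(1e-4), np.log(1.0), 4001)
x   = lnk - np.log(k0)

# toy curvature spectrum: smooth tilt plus a small log-periodic modulation
P = 2.2e-9 * np.exp((0.96 - 1.0) * x) * (1.0 + 0.005 * np.cos(2.0 * x))

ns      = 1.0 + np.gradient(np.log(P), lnk)   # n_s(k) - 1 = dlnP/dlnk
running = np.gradient(ns, lnk)                # dn_s/dlnk

i = np.argmin(np.abs(x))                      # index of the pivot scale
print("n_s(k0) = %.4f   dn_s/dlnk(k0) = %.4f" % (ns[i], running[i]))
print("running varies between %.4f and %.4f" % (running.min(), running.max()))
\end{verbatim}
The running obtained in this way is scale dependent, which is why the comparison with constant-running fits of the CMB data has to be done with care, as stressed in the footnote above.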
Along the lines of Ref.~\cite{Kobayashi:2010pz}, we study the recently proposed multi-natural inflation~\cite{Czerny:2014wza,Czerny:2014xja,Czerny:2014qqa} as an example. Interestingly, the existence of the periodic oscillations is a built-in feature of multi-natural inflation. We show that the negative running spectral index can be realized without significant impact on the overall inflation dynamics, similar to the case studied before. We will also show that the predicted values of $(n_s, r)$ for quadratic chaotic inflation and natural inflation can also be realized in multi-natural inflation. | As we have pointed out in~\cite{Kobayashi:2010pz}, large field inflation with substructures in the inflaton potential entails large tensor perturbations as well as a running spectral index. In this letter, we revisited large field models with modulations in the context of multi-natural inflation, where multiple effects breaking the shift symmetry give rise to a superposition of sinusoidal functions to the inflaton potential. We focused on the interesting case where a hierarchy exists among the periodicities of the sinusoidal oscillations, so that the model is a large field model with superimposed periodic oscillations. While the large field nature of the model produces large tensor mode perturbations, the oscillations on the potential source a running spectral index for the density perturbation spectrum. We have seen that multi-natural inflation possesses rich phenomenology, in particular, it produces a wide variety of values for $(r, \, n_s, \, dn_s/d\ln k)$ depending on the relative size of the sinusoidal functions. We also remark that large field inflation with modulations not only sources the running spectral index for the density perturbations, but also for the tensor perturbation spectrum. The tensor spectral index and its running are given in terms of the slow-roll parameters as \begin{equation} n_T = \frac{d\ln \mathcal{P}_T}{d \ln k }\simeq -2 \epsilon, \qquad \frac{dn_T}{d\ln k} \simeq -8 \epsilon^2 + 4 \epsilon \eta . \end{equation} Unlike~$n_s$, the tensor tilt depends only on~$\epsilon$ and thus the tensor running is set by $\epsilon$ and $\eta$. Therefore, one sees from the conditions (\ref{con1}) - (\ref{con4}) that the running of the tensor tilt is smaller than that for the density perturbations. Nonetheless, it is worth noting that a non-negligible $d n_s / d \ln k$ entails some amount of $d n_T / d \ln k$ as well. This will become especially important when measuring the tensor tilt by combining tensor observations at different scales, such as when combining CMB experiments with direct observations of gravity waves. A simple extrapolation between widely different scales without considering the possibility of the tensor running could lead to a misinterpretation of the observational results; In particular, such a naive extrapolation would give rise to an apparent violation of the slow-roll consistency relation~$r = -8 n_T$~\cite{Liddle:1993fq}, which holds locally at each scale in our case. Upcoming experimental data are expected to verify whether there actually are sizable tensor mode perturbations and a running of the spectral index. This will shed light on the substructure of the inflaton potential, which should be directly tied to the underlying microphysics. | 14 | 3 | 1403.4589 |
1403 | 1403.4226_arXiv.txt | We propose a general principle that leads to a renormalizable and predictive theory of quantum gravity where all scales are generated dynamically, where a small weak scale coexists with the Planck scale, where inflation is a natural phenomenon. The price to pay is a ghost-like anti-graviton state. The general principle is: {\em nature does not possess any scale}. We start presenting how this principle is suggested by two recent experimental results, and next discuss its implementation and consequences. \subsubsection*{1) Naturalness} In the past decades theorists assumed that Lagrangian terms with positive mass dimension (the Higgs mass $M_h$ and the vacuum energy) receive big power-divergent quantum corrections, as suggested by Wilsonian computation techniques that attribute physical meaning to momentum shells of loop integrals~\cite{Wilson}. According to this point of view, a modification of the SM at the weak scale is needed to make quadratically divergent corrections to $M_h^2$ naturally small. Supersymmetry seems the most successful possibility, but naturalness got increasingly challenged by the non-observation of any new physics that keeps the weak scale naturally small~\cite{anat}. The naturalness problem can be more generically formulated as a problem of the effective theory ideology, according to which nature is described by a non-renormalizable Lagrangian of the form \beq \Lag \sim \Lambda^4 + \Lambda^2 |H|^2 +\lambda |H|^4 + \frac{|H|^6}{\Lambda^2}+\cdots \eeq where, for simplicity, we wrote only the Higgs potential terms. The assumption that $\Lambda\gg M_h$ explains why at low energy $E\sim M_h$ we only observe those terms not suppressed by $\Lambda$: the renormalizable interactions. Conservation of baryon number, lepton number, and other successful features of the Standard Model indicate a large $\Lambda\circa{>}10^{16}\GeV$. In this context, gravity can be seen as a non-renormalizable interaction suppressed by $\Lambda \sim M_{\rm Pl}=1.22~10^{19}\GeV$. \smallskip However, this scenario also leads to the expectation that particles cannot be light unless protected by a symmetry. The Higgs mass should be $M_h^2\sim \Lambda^2$ and the vacuum energy should be $V\sim \Lambda^4$. In nature, they are many orders of magnitude smaller, and no protection mechanism is observed so far. We assume that this will remain the final experimental verdict and try to derive the theoretical implications. Nature is maybe telling us that both super-renormalizable terms and non-renormalizable terms vanish and that only adimensional interactions exists. \subsubsection*{2) Inflation} Cosmological observations suggest inflation with a small amount of anisotropies. However, this is a quite unusual outcome of quantum field theory: it requires special models with flat potentials, and often field values above the Planck scale. Let us discuss this issue in the context of Starobinsky-like inflation models~\cite{xii}: a class of inflation models favoured by Planck data~\cite{Planck}. Such models can be described in terms of one scalar $S$ (possibly identified with the Higgs $H$) with a potential $V(S)$ and a coupling to gravity $-\frac12 f(S) R$. Going to the Einstein frame (i.e.\ making field redefinitions such that the graviton kinetic term $R$ gets its canonical coefficient) the potential gets rescaled into $V_E=\bar M_{\rm Pl}^4 V/f^2$, where $\bar M_{\rm Pl}=M_{\rm Pl}/\sqrt{8\pi} =2.4~10^{18}\GeV$ is the reduced Planck mass. 
Special assumptions such as $V(S) \propto f(S)^2$ make the Einstein-frame potential $V_E$ flat at $S\gg M_{\rm Pl}$, with predictions compatible with present observations~\cite{xii}. However this flattening is the result of a fine-tuning: in presence of generic Planck-suppressed operators $V$ and $f$ and thereby $V_E$ are generic functions of $S/M_{\rm Pl}$. Nature is maybe telling us that $V_E$ becomes flat at $S\gg M_{\rm Pl}$ because only adimensional terms exists. \subsubsection*{The principle} These observations vaguely indicate that nature prefers adimensional terms, so that ideas along these lines are being discussed in the literature~\cite{FNmodels}. \bigskip We propose a simple concrete principle: {\em the fundamental theory of nature does not possess any mass or length scale} and thereby only contains `renormalizable' interactions --- i.e.\ interactions with dimensionless couplings. This simple assumption solves the two issues above and has strong consequences. First, a quasi-flat inflationary potential is obtained because the only adimensional potential is a quartic term $V(S) = \lambda_S |S|^4$ and the only adimensional scalar/gravity coupling is $-\xi_S |S|^2 R$, so that $V_E = \bar M_{\rm Pl}^4 ( \lambda_S|S|^4)/(\xi_S|S|^2)^2 = \bar M_{\rm Pl}^4 \lambda_S/\xi_S^2$ is flat at tree level. At quantum level the parameters $\lambda_S$ and $\xi_S$ run, such that the slow-roll parameters are the beta-functions of the theory, as discussed in section~\ref{infl}. Second, power divergences vanish just because of dimensional reasons: they would have mass dimension, but there are no masses. Vanishing of quadratic divergences leads to a modified version of naturalness, where the weak scale can be naturally small even in absence of new physics at the weak scale~\cite{FN} designed to protect the Higgs mass, such as supersymmetry or technicolor. In this context scale invariance is just an accidental symmetry, present at tree level because there are no masses. Just like baryon number (a well known accidental symmetry of the Standard Model), scale invariance is broken by quantum corrections.\footnote{Other attempts along similar lines assume that scale or conformal invariance are exact symmetries at quantum level. However, computable theories do not behave in this way.} Then, the logarithmic running of adimensional couplings can generate exponentially different scales via dimensional transmutation. This is how the QCD scale arises. The goal of this paper is exploring if the Planck scale and the electro-weak scale can arise in this context. \subsubsection*{The theory} The adimensional principle leads us to consider renormalizable theories of quantum gravity described by actions of the form: \beq \label{eq:Ladim} S = \int d^4x \sqrt{|\det g|} \,\bigg[ \frac{R^2}{6\gs^2} + \frac{\frac13 R^2 - R_{\mu\nu}^2}{\gt^2} + \Lag^{\rm adim}_{\rm SM}+ \Lag^{\rm adim}_{\rm BSM} \bigg]. \eeq The first two terms, suppressed by the adimensional gravitational couplings $\gs$ and $\gt$, are the graviton kinetic terms.\footnote{The second term is also known as `conformal gravity'.} The third term, $\Lag^{\rm adim}_{\rm SM}$, is the adimensional part of the usual Standard Model (SM) Lagrangian: \beq\label{eq:LadimSM} \Lag^{\rm adim}_{\rm SM} = -\frac{F_{\mu\nu}^2}{4g^2} + \bar \psi i \slashed{D} \psi + |D_\mu H|^2 - (y H \psi\psi + \hbox{h.c.}) - \lambda_H |H|^4 - \xi_H |H|^2 R \eeq where $H$ is the Higgs doublet. The last term, $\Lag^{\rm adim}_{\rm BSM}$, describes possible new physics Beyond the SM (BSM). 
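Before turning to an explicit example of $\Lag^{\rm adim}_{\rm BSM}$, note that the tree-level flatness invoked above (the only adimensional potential $\lambda_S|S|^4$ and non-minimal coupling $-\xi_S|S|^2R$ give an $S$-independent Einstein-frame potential) can be verified symbolically. The snippet below is just this one-line check; it does not include the quantum running of $\lambda_S$ and $\xi_S$ that the text goes on to discuss.
\begin{verbatim}
import sympy as sp

S, lam, xi, Mpl = sp.symbols('S lambda_S xi_S Mbar_Pl', positive=True)

V   = lam * S**4          # only adimensional potential term
f   = xi * S**2           # only adimensional scalar/gravity coupling
V_E = Mpl**4 * V / f**2   # Einstein-frame potential

print(sp.simplify(V_E))   # -> Mbar_Pl**4*lambda_S/xi_S**2, independent of S
print(sp.diff(V_E, S))    # -> 0: flat direction at tree level
\end{verbatim}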
For example adding a scalar singlet $S$ one would have \beq\label{eq:LadimBSM} \Lag^{\rm adim}_{\rm BSM} = |D_\mu S|^2 - \lambda_S |S|^4 + \lambda_{HS} |S|^2|H|^2 -\xi_S |S|^2 R. \eeq We ignore topological terms. Non renormalizable terms, the Higgs mass term $\frac12 M_h^2 |H|^2$ and the Einstein-Hilbert term $ -M_{\rm Pl}^2 R/16\pi$ are not present in the agravity Lagrangian, because they need dimensionful parameters. The Planck mass can be generated dynamically if, at quantum level, $S$ gets a vacuum expectation value such that $\xi_S \langle S\rangle^2 = M_{\rm Pl}^2/16\pi $~\cite{cosmon}. The adimensional parameters of a generic agravity in 3+1 dimensions theory are: \begin{enumerate} \item the two gravitational couplings $\gs$ and $\gt$; \item quartic scalar couplings $\lambda$; \item scalar/scalar/graviton couplings $\xi$; \item gauge couplings $g$; \item Yukawa couplings $y$.\footnote{The list would be much shorter for $d\neq 4$. Gauge couplings are adimensional only at $d=4$. Adimensional scalar self-interactions exist at $d=\{3,4,6\}$. Adimensional interactions between fermions and scalars exist at $d=\{3,4\}$. Adimensional fermion interactions exist at $d=2$. } \end{enumerate} The graviton $g_{\mu\nu}$ has dimension zero, and eq.\eq{Ladim} is the most generic adimensional action compatible with general relativistic invariance. The purely gravitational action just contains two terms: the squared curvature $R^2$ and the Weyl term $\frac13 R^2 - R_{\mu\nu}^2$. They are suppressed by two constants, $\gs^2$ and $\gt^2$, that are the true adimensional gravitational couplings, in analogy to the gauge couplings $g$ that suppress the kinetic terms for vectors, $-\frac14 F_{\mu\nu}^2/g^2$. Thereby, the gravitational kinetic terms contain 4 derivatives, and the graviton propagator is proportional to $1/p^4$. Technically, this is how gravity becomes renormalizable. In presence of an induced Planck mass, the graviton propagator becomes \beq \frac{1}{M^2_2 p^2 - p^4} = \frac{1}{M^2_2} \bigg[\frac{1}{p^2} - \frac{1}{p^2 - M^2_2}\bigg] \eeq giving rise to a massless graviton with couplings suppressed by the Planck scale, and to a spin-2 state with mass $M_2^2 = \frac12 \gt^2 \bar M_{\rm Pl}^2$ and negative norm. Effectively, it behaves as an anti-gravity Pauli-Villars regulator for gravity~\cite{Stelle}. The Lagrangian can be rewritten in a convoluted form where this is explicit~\cite{Ovrut} (any field with quartic derivatives can be rewritten in terms of two fields with two derivatives). The $\gs$ coupling gives rise to a spin-0 graviton with positive norm and mass $M_0^2 = \frac12 \gs^2 \bar M_{\rm Pl}^2 +\cdots$. Experimental bounds are satisfied as long as $M_{0,2}\circa{>}\eV$. At classical level, theories with higher derivative suffer the Ostrogradski instability: the Hamiltonian is not bounded from below~\cite{Ostro}. At quantum level, creation of negative energy can be reinterpreted as destruction of positive energy: the Hamiltonian becomes positive, but some states have negative norm and are called `ghosts'~\cite{Pais}. This quantization choice amounts to adopt the same $i\epsilon$ prescription for the graviton and for the anti-graviton, such that the cancellation that leads to renormalizability takes place. We do not address the potential problem of a negative contribution to the cross-section for producing an odd number of anti-gravitons with mass $M_2$ above their kinematical threshold. Claims in the literature are controversial~\cite{Mannheim}. 
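The pole structure quoted above can likewise be verified with elementary algebra. Writing p2 for $p^2$ and M22 for $M_2^2$, the check below confirms that the quartic propagator splits into a massless graviton pole and a massive pole of opposite sign, the ghost (or anti-graviton) of the text.
\begin{verbatim}
import sympy as sp

p2, M22 = sp.symbols('p2 M22', positive=True)   # p2 = p^2, M22 = M_2^2

lhs = 1 / (M22 * p2 - p2**2)                    # propagator with induced Planck mass
rhs = (1 / M22) * (1 / p2 - 1 / (p2 - M22))     # decomposition quoted in the text

print(sp.simplify(lhs - rhs))   # -> 0, the two expressions agree
print(sp.apart(lhs, p2))        # partial fractions: massless pole minus massive pole
\end{verbatim}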
Sometimes in physics we have the right equations before having their right interpretation. In such cases the strategy that pays is: proceed with faith, explore where the computations lead, and if the direction is right the problems will disappear. We here compute the one-loop quantum corrections of agravity, to explore its quantum behaviour. Can the Planck scale be dynamically generated? Can the weak scale be dynamically generated? | In conclusion, we proposed that the fundamental theory contains no dimensionful parameter. Adimensional gravity (agravity for short) is renormalizable because gravitons have a kinetic term with 4 derivatives and two adimensional coupling constants $\gs$ and $\gt$. The theory predicts physics above the Planck scale. We computed the RGE of a generic agravity theory, see eqs.~\eq{RGEY}, \eq{RGElambda}, \eq{RGExi} and (\ref{sys:RGG}). We found that quantum corrections can dynamically generate the Planck scale as the vacuum expectation value of a scalar $s$ that acts as the Higgs of gravity. The cosmological constant can be tuned to zero. This happens when a running quartic coupling and its $\beta$ function both vanish around the Planck scale, as summarised in eq.\eq{agravMPl}. The quartic coupling of the Higgs in the SM can run in such a way, see fig.\fig{RGESM}. The graviton splits into the usual massless graviton, into a massive spin 2 anti-graviton, and into a scalar. The spin 2 state is a ghost, to be quantised as a state with positive kinetic energy but negative norm. The lack of dimensional parameters implies successful quasi-flat inflationary potentials at super-Planckian vacuum expectation values: the slow-roll parameters are the $\beta$ functions of the theory. Identifying the inflaton with the Higgs of gravity leads to predictions $n_s\approx 0.967$ for the spectral index and $r\approx 0.13$ for the tensor/scalar amplitude ratio. The Higgs of gravity can also be identified with the Higgs of the Higgs: if $\gs,\gt\sim 10^{-8}$ are small enough, gravitational loops generate the observed weak scale. In this context, a weak scale much smaller than the Planck scale is natural: all small parameters receive small quantum corrections. In particular, quadratic divergences must vanish in view of the lack of any fundamental dimensionful parameter, circumventing the usual hierarchy problem.
|
1403 | 1403.3406_arXiv.txt | We announce the discovery of a new Galactic companion found in data from the ESO VST ATLAS survey, and followed up with deep imaging on the 4m William Herschel Telescope. The satellite is located in the constellation of Crater (the Cup) at a distance of $\sim$ 170 kpc. Its half-light radius is $r_h=30$ pc and its luminosity is $M_V=-5.5$. The bulk of its stellar population is old and metal-poor. We would probably have classified the newly discovered satellite as an extended globular cluster were it not for the presence of a handful of Blue Loop stars and a sparsely populated Red Clump. The existence of the core helium burning population implies that star-formation occurred in Crater perhaps as recently as 400 Myr ago. No globular cluster has ever accomplished the feat of prolonging its star-formation by several Gyrs. Therefore, if our hypothesis that the blue bright stars in Crater are Blue Loop giants is correct, the new satellite should be classified as a dwarf galaxy with unusual properties. Note that only ten degrees to the North of Crater, two ultra-faint galaxies Leo IV and V orbit the Galaxy at approximately the same distance. This hints that all three satellites may once have been closely associated before falling together into the Milky Way halo. | By revealing 20 new satellites in the halo of the Milky Way \citep{umadisc, willman1disc, cvndisc, boodisc, uma2disc, catsdisc, leotdisc, koposovdisc, boo2disc, leovdisc, segue2disc, boo3disc, piscdisc, balbinotdisc}, the Sloan Digital Sky Survey (SDSS) has managed to blur the boundary between what only recently seemed two entirely distinct types of objects: dwarf galaxies and star clusters. As a result, we now appear to have ``galaxies'' with a total luminosity smaller than that of a \textit{single} bright giant star (e.g. Segue 1, Ursa Major II). The SDSS observations also wrought havoc on intra-class nomenclature, giving us dwarf spheroidals with properties of dwarf irregulars, i.e. plenty of gas and recent star formation (e.g. Leo T), as well as distant halo globulars so insignificant that if they lay ten times closer they would surely be called open clusters (e.g. Koposov 1 and 2). The art of satellite classification, while it may seem like idle pettifoggery, is nonetheless of importance to our models of structure formation. By classifying the satellite, on the basis of all available observational evidence, as a dwarf galaxy, we momentarily gloss over the details of its individual formation and evolution, and instead move on to modeling the population as a whole. We belive there are more than twenty dwarf galaxies in the Milky Way environs and predict that there are tens more dwarf galaxies waiting to be discovered in the near future \citep[see e.g.][]{KoposovLF, Tollerud}. It is worth pointing out that the Cold Dark Matter (CDM) paradigm remains the only theory that can easily produce large numbers of dwarf satellites around spirals like the Milky Way or the M31. So far the crude assumption of all dwarf galaxies being simple clones of each other living in similar dark matter sub-halos has payed off and the $\Lambda$CDM paradigm appears to have been largely vindicated \citep[e.g.][]{KoposovLFmodel}. However, as the sample size grows, new details emerge and may force us to reconsider this picture. \begin{figure*} \centering \includegraphics[width=0.99\linewidth]{figures/crater_figure1.ps} \caption[]{\small Discovery of the Crater satellite in VST ATLAS survey data. 
\textit{First:} 3x3 arcminute ATLAS $r$-band image cutout of the area around the centre of the satellite. \textit{Second:} Stellar density map of a 25x25 arcminute region centered on Crater and smoothed using a Gaussian kernel with FWHM of 2 arcmin. Darker pixels have enhanced density. The smaller circle marks the region used to create the CMD of the satellite stars. Bigger circle denotes the exclusion zone used when creating the CMD of the Galactic foreground. \textit{Third:} CMD of the 2 arcmin radius region around the centre of Crater. Several coherent features including a Red Giant Branch and Red Clump are visible. Also note several stars bluer and brighter than the Red Clump. These are likely Blue Loop candidate stars indicating recent ($< 1$ Gyr) star formation activity in the satellite. \textit{Fourth:} CMD of the Galactic foreground created by selecting stars that lie outside the larger circle marked in the Second panel and covering the same area as that within the small circle. It appears that the satellite CMD is largely unaffected by Galactic contamination. \textit{Fifth:} Hess difference of the CMD density of stars inside the small circle shown and stars outside the large circle.} \label{fig:disc} \end{figure*} The distribution of known satellites around the Milky Way is anisotropic, though this is partly a consequence of selection effects. It has been claimed that the Galactic satellites form a vast disc-like structure about 40 kpc thick and 400 kpc in diameter, and that this is inconsistent with $\Lambda$CDM~\citep[e.g.,][]{Kr12}. In fact, as the recent discoveries have been made using the SDSS, a congregation of satellites in the vicinity of the North Galactic Cap is only to be expected~\citep{Be13}. Even then, anisotropic distributions of satellite galaxies do occur naturally within the $\Lambda$CDM framework. For example, using high resolution hydrodynamical simulations, \citet{De11} found that roughly 20 \% of satellite systems exhibit a polar alignment, reminiscent of the known satellites of the Milky Way galaxy. In 10\% of these systems there was evidence of satellites lying in rotationally supported discs, whose origin can be traced back to group infall. Significant fractions of satellites may be accreted from a similar direction in groups or in loosely bound associations and this can lead to preferred planes in the satellite distributions~\citep[e.g.,][]{Li08,Do08}. There is ample evidence of such associations in the satellites around the Milky Way -- such as Leo IV and Leo V ~\citep{Jo10} -- and M31 -- such as NGC 147 and NGC 185 and possibly Cass II~\citep{Wa13,Fa13}. Understanding how dwarf galaxies and clusters evolve in relation to their environment remains a challenge. These objects can occupy both low and high density environments and are subject to both internal processes (mass segregation, evaporation and ejection of stars, bursts of star formation) and external effects (disruption by Galactic tides, disk and bulge shocking, ram pressure stripping of gas by the ISM). There is also growing evidence for interactions and encounters of satellites with each other, for example, in the tidal stream within And II~\citep{Am14} and the apparent shells around Fornax~\citep{Co04,Am12}. 
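Returning briefly to the density map described in the caption of the first figure above (a 25 by 25 arcminute field smoothed with a Gaussian kernel of 2 arcmin FWHM): the construction is straightforward to reproduce in outline. The catalogue in the sketch below is synthetic (a uniform background plus an artificial clump), not the ATLAS photometry, and the pixel scale is an arbitrary choice; only the binning and the FWHM-to-sigma conversion follow the caption.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

rng   = np.random.default_rng(0)
field = rng.uniform(-12.5, 12.5, size=(2, 4000))      # background stars [arcmin]
clump = rng.normal(0.0, 0.7, size=(2, 300))           # artificial satellite
x, y  = np.hstack([field, clump])

pix   = 0.25                                          # pixel size [arcmin]
edges = np.arange(-12.5, 12.5 + pix, pix)
H, _, _ = np.histogram2d(x, y, bins=[edges, edges])

sigma_pix = (2.0 / pix) / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM = 2 arcmin
density   = gaussian_filter(H, sigma_pix)

peak_snr = (density.max() - np.median(density)) / density.std()
print("central overdensity stands out at %.1f sigma in the smoothed map" % peak_snr)
\end{verbatim}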
The variety of initial conditions near the time of formation, together with the diversity of subsequent evolutionary effects in a range of different environments, is capable of generating a medley of objects with luminosities and sizes of the present day cluster and dwarf galaxies populations. It is therefore unsurprising that as the Milky Way halo is mapped out we are finding an increasing number of ambiguous objects that do no fit tidily into the once clear-cut categories of clusters and dwarf galaxies. In this paper, we describe how applying the overdensity search algorithms perfected on the SDSS datasets to the catalogues supplied by the VST ATLAS survey has uncovered a new satellite in the constellation of Crater. Although Crater has a size close to that of globular clusters, we believe that the satellite has had an extended star-formation history and therefore should be classified as a galaxy \citep[see e.g.][]{whatisgalaxy}. Therefore, following convention, it is named after the constellation in which it resides. Section~\ref{sec:data} gives the particulars of the ATLAS data and of the follow-up imaging we have acquired with the 4m WHT. Section~\ref{sec:prop} describes how the basic properties of Crater were estimated. Section~\ref{sec:conclusions} provides our Discussion and Conclusions. \begin{figure} \centering \includegraphics[width=0.97\linewidth]{figures/crater_cutout_4x4.eps} \caption[]{\small False-color WHT ACAM image cutout of an area 4x4 arcminutes centered on Crater. The $i$-band frame is used for the Red channel, $r$-band for the Green and $g$-band for the Blue. The image reveals the dense central parts of Crater dominated by faint MSTO and SGB stars. Several bright stars are clearly visible, these are the giants on the RGB and in the Red Clump. Amongst these are 3 or 4 bright Blue Loop giant candidates. A sprinkle of faint and very blue stars is also noticeable. These are likely to be either young MS stars or Blue Stragglers.} \label{fig:image} \end{figure} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{figures/crater_redclump.ps} \caption[]{\small Red Clump morphology. The CMD features around the RC region in Crater (\textit{Top Left}) are contrasted to those in 5 classical dwarf galaxies observed with the HST \citep{HoltzmanHST}. The polygon mask shows the boundary used to select the RC/RHB and RGB stars for the Luminosity Function comparison in Figure~\ref{fig:lf}. The RC/RHB region of Crater most closely resembles that of Carina, the main differences being the lack of an obvious Horizontal Branch and the presence of a small but visible Blue Loop extension. \textit{Top Center:} Fornax, at $(m-M)=20.7$ and E(B-V)=0.025. \textit{Top Right:} Draco, at $(m-M)=19.6$ and E(B-V)=0.03. \textit{Bottom Left:} Carina, at $(m-M)=20.17$ and E(B-V)=0.06. \textit{Bottom Center:} Leo II, at $(m-M)=21.63$ and E(B-V)=0.017. \textit{Bottom Right:} LGS 3, at $(m-M)=24.54$ and E(B-V)=0.04. The dwarf photometry is extinction corrected.} \label{fig:redclump} \end{figure*} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{figures/crater_redclump_lf.ps} \caption[]{\small Luminosity Functions of stars selected using the polygon CMD masks in 6 satellites as shown in Figure~\ref{fig:redclump}, given the distance moduli and the extinction values recorded in Figure~\ref{fig:redclump} caption. The difference in RC/RHB morphology is reflected in the difference of LF shapes and, most importantly, in different absolute magnitudes of the LF peak. 
Crater's stars are offset to $(m-M)=21.1$ to match the peak of Carina's LF.} \label{fig:lf} \end{figure} | \label{sec:conclusions} In this paper we have presented the discovery of a new Galactic satellite found in VST ATLAS survey data. The satellite is located in the constellation of Crater (the Cup) and has the following properties. \smallskip \noindent (1) The satellite has a half-light radius of $r_{\rm h}\sim30$ pc and a luminosity of $M_V=-5.5$. \smallskip \noindent (2) The heliocentric distance to Crater is between 145 and 170 kpc. \smallskip \noindent (3) The bulk of Crater's stellar population is old and metal-poor. \smallskip \noindent (4) However, there are also several bright blue stars that appear to be possible Crater members. We interpret these as core helium burning Blue Loop giants and, therefore, conclude that the satellite might have formed stars as recently as 400 Myr ago. \smallskip \noindent (5) If Crater were a globular cluster than it possesses the shortest and the reddest Horizontal Branch of all known Galactic globulars with [Fe/H]$<-$1. Such a stubby RHB could otherwise be confused with a Red Clump. If our hypothesis of relatively recent star-formation is correct, then Crater must be classified as a dwarf galaxy, as no extended star-formation has ever been recorded in globular clusters. The satellite then clearly stands out compared to the rest of the known dwarfs, both classical and ultra-faint: it is much smaller and less luminous than the classical specimens and much denser than the typical ultra-faint ones. Given its feeble size and luminosity it is surprising to have found (albeit tentative) evidence for a younger population. The only other Galactic dwarf of comparable metallicity, age and luminosity, with detected low levels of recent star-formation, is Leo T \citep{leotdisc, RyanWeber}, which is known to have kept some of its neutral hydrogen supply intact. This is perhaps unsurprising given its large Galactocentric distance and very modest levels of star-formation activity. We have attempted a search for the presence of HI coincident with the location of Crater in the data of the GASS radio survey \citep{GASS}, but found no convincing evidence for a stand-alone neutral hydrogen cloud. Based on the fact that the HI gas in Leo T is barely detected in the HIPASS data \citep{leotdisc}, and given that Crater is significantly less luminous compared to Leo T, it is perhaps worth pointing a radio interferometer in the direction of this satellite. Crater is just the most recent in a series of discoveries of ambiguous objects that share some of the properties of clusters and dwarf galaxies. The discoveries of Willman 1 and Segue 1 were followed by similar controversies as to their true nature~\citep[see e.g.,][]{willman1disc,Ni09,Wi11,Ma11}. Philosophers of course recognise this as the fallacy of the excluded middle, in which a binary choice is assumed to exhaust all the possibilities. The question ``Is Crater a globular cluster or a dwarf galaxy?'' might just be as futile as the question ``Is the platypus an otter or a duck?". The egg-laying mammal looks like both but is neither. Instead, its mixed-up appearance is the result of it having evolved in an unusual and isolated location. | 14 | 3 | 1403.3406 |
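A small arithmetic note on the preceding entry: the distance moduli used there translate into heliocentric distances through the standard relation $d/{\rm pc}=10^{(m-M)/5+1}$. The snippet only evaluates that relation for the two moduli quoted in the figure captions; it adds no new measurement.
\begin{verbatim}
# distance modulus -> distance, for the values quoted above
for name, mu in (("Carina", 20.17), ("Crater (offset matching Carina's LF)", 21.1)):
    d_kpc = 10.0 ** (mu / 5.0 + 1.0) / 1.0e3
    print("%s: (m-M) = %.2f  ->  d ~ %.0f kpc" % (name, mu, d_kpc))
# (m-M) = 21.1 corresponds to roughly 166 kpc, consistent with the 145-170 kpc
# range given in the conclusions.
\end{verbatim}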
1403 | 1403.4821_arXiv.txt | {We investigate the physical and chemical processes at work during the formation of a massive protostar based on the observation of water in an outflow from a very young object previously detected in H$_2$ and SiO in the IRAS 17233--3606 region. We estimated the abundance of water to understand its chemistry, and to constrain the mass of the emitting outflow. We present new observations of shocked water obtained with the HIFI receiver onboard \textit{Herschel}. We detected water at high velocities in a range similar to SiO. We self-consistently fitted these observations along with previous SiO data through a state-of-the-art, one-dimensional, stationary C-shock model. We found that a single model can explain the SiO and H$_2$O emission in the red and blue wings of the spectra. Remarkably, one common area, similar to that found for H$_2$ emission, fits both the SiO and H$_2$O emission regions. This shock model subsequently allowed us to assess the shocked water column density, $N_{\rm H_2O}=1.2\,10^{18}$~cm$^{-2}$, mass, $M_{\rm H_2O}=12.5~M_\oplus$, and its maximum fractional abundance with respect to the total density, $x_{\rm H_2O}=1.4\,10^{-4}$. The corresponding water abundance in fractional column density units ranges between $2.5\,10^{-5}$ and $1.2\,10^{-5}$, in agreement with recent results obtained in outflows from low- and high-mass young stellar objects.} | The formation mechanism of high-mass stars ($M>8$\,$M_\odot$) has been an open question despite active research for several decades now, the main reason being that the strong radiation pressure exerted by the young massive star overcomes its gravitational attraction \citep{1974A&A....37..149K}. Controversy remains about how high-mass young stellar objects (YSOs) acquire their mass \citep[e.g., ][]{2009sfa..book..288K}, either locally in a prestellar phase or during the star formation process itself, being funnelled to the centre of a stellar cluster by the cluster's gravitational potential. Bipolar outflows are a natural by-product of star formation and understanding them can give us important insights into the way massive stars form. In particular, studies of their properties in terms of morphology and energetics as function of the luminosity, mass, and evolutionary phase of the powering object may help us to understand whether the mechanism of formation of low- and high-mass YSOs is the same or not \citep[see, e.g.,][]{2002A&A...383..892B}. Water is a valuable tool for outflows as it is predicted to be copiously produced under the type of shock conditions expected in outflows \citep{Flower10}. Observations of molecular outflows powered by YSOs of different masses reveal abundances of H$_2$O associated with outflowing gas of the order of some 10$^{-5}$ \citep[e.g., ][]{2010A&A...521L..28E,2012A&A...542A...8K,2013A&A...549A..16N}. Recently, the Water In Star-forming regions with Herschel \citep{2011PASP..123..138V} key program targeted several outflows from Class 0 and I low-mass YSOs in water lines. H$_2$O emission in young Class 0 sources is dominated by outflow components; in Class I YSOs H$_2$O emission is weaker because of less energetic outflows \citep{2012A&A...542A...8K}. 
Comparisons of low-excitation water data with SiO, CO, and H$_2$ reveal contrasting results because these molecules seem to trace different environments in some sources \citep{2013A&A...549A..16N,2013A&A...551A.116T} while they have similar profiles and morphologies in others \citep{2012ApJ...757L..25L,2012A&A...538A..45S}. Observations of massive YSOs \citep[e.g.,][]{2013A&A...554A..83V} confirm broad profiles due to outflowing gas in low-energy H$_2$O lines. However, the coarse spatial resolution of {\it Herschel} and the limited high angular resolution complementary data resulted in a lack of specific studies dedicated to outflows from massive YSOs. The prominent far-IR source IRAS\,17233$-$3606 (hereafter IRAS\,17233) is one of the best laboratories for studying massive star formation because of its close distance \citep[1\,kpc,][]{2011A&A...530A..12L}, high luminosity, and relatively simple geometry. In previous interferometric studies, we resolved three CO outflows with high collimation factors and extremely high velocity (EHV) emission \citep[][Paper\,I]{2009A&A...507.1443L}. Their kinematic ages ($10^2-10^3$ yr) point to deeply embedded YSOs that still have not reached the main sequence. One of the outflows, OF1 (Fig.\,\ref{overview}), was the subject of a dedicated analysis in SiO lines \citep[][Paper\,II]{2013A&A...554A..35L}. It is associated with EHV CO(2--1), H$_2$, SO, and SiO emission. SiO(5--4) and (8--7) APEX spectra suggest an increase of excitation with velocity and point to hot and/or dense gas close to the primary jet. Through a combined shock-LVG analysis of SiO, we derived a mass of $>0.3\,M_\odot$ for OF1, which implies a luminosity L$\ge10^3\,L_\odot$ for its driving source. In this Letter, we present observations of water towards IRAS\,17233 with the HIFI instrument \citep{2010A&A...518L...6D} onboard {\it Herschel} \citep{2010A&A...518L...1P}. \begin{figure} \centering \includegraphics[angle=-90,width=7cm]{23343fg1.eps} \caption{Grey scale and solid black contours represent the H$_2$ emission at 2.12$\mu$m; dashed contours are the 1.4 mm continuum emission. Red and blue contours are the SMA integrated emission of the SiO(5--4) line ( $\varv_{\rm{bl}}=[-30,-20]$ km\,s$^{-1}$ and $\varv_{\rm{rd}}= [+10,+39]$ km\,s$^{-1}$). The crosses mark the {\it Herschel} pointings; the solid and dotted circles are the {\it Herschel} beams (Sect.\,\ref{obs}). The square marks the peak of the EHV CO(2--1) red-shifted emission (R1). The arrow marks the OF1 outflow.}\label{overview} \end{figure} | \label{dis} The SiO(8--7) and H$_2$O profiles (in particular that of the 1113\,GHz line) suggest a common origin of the H$_2$O and SiO emission in IRAS\,17233. This result is based on emission at high velocities and is different from the findings that SiO and H$_2$O do not trace the same gas in molecular outflows from low-mass YSOs at low-velocities and/or in low-energy lines \citep{2012A&A...538A..45S,2013A&A...549A..16N}. However, an excellent match between SiO and H$_2$O profiles is found in other sources at high velocities \citep{2012ApJ...757L..25L}. With the limitations previously discussed, we find that the shock parameters of OF1 are comparable with those found for low-mass protostars with a higher pre-shock density. The derived water abundance is compatible with values of other molecular outflows \citep[e.g.,][]{2010A&A...521L..28E,2012A&A...540A..84H}. 
While often measurements of H$_2$O abundances have large uncertainties because the H$_2$ column density is inferred from observations of CO or from models \citep[for a compilation of sources, abundances and methods, see][]{2013ChRv..113.9043V}, the value inferred in our analysis is consistently derived, as the H$_2$O and H$_2$ column densities are outcomes of the same model. Moreover, the estimated H$_2$O column density matches the data. Although photo-dissociation probably affects the low-energy H$_2$O lines, simple C-shocks models can be used to model higher-energy transitions. The inclusion of photo-dissociation in our models is work in progress in a larger framework of studying the effect of an intense UV field on shocks. Estimates of H$_2$O mass are not easily found in the literature. \citet{2014A&A...561A.120B} modelled water emission in L1157-B1 through J- and C-type shocks. Their H$_2$O column densities derived over the whole line profiles translate in to masses in the range 0.009--0.125\,$M_\oplus$ for a hot component of 2\arcsec--5\arcsec size and $<(0.7-1.5)\,10^{-3}\,M_\oplus$ for a warm component with a size of $\le$10\arcsec. Our estimate of 12.5\,$M_\oplus$ for the H$_2$O mass of OF1 therefore seems to be compatible with previous results. In summary, we presented the first estimate of the abundance of water in an outflow driven by a massive YSOs based on a self-consistent shock model of water and SiO transitions. We inferred a water abundance in fractional column density units between $1.2\,10^{-5}$ and $2.5\,10^{-5}$, which is an average value of the water abundance over the shock layer. Additionally, our model indicates that the maximum fractional abundance of water locally reached in the layer is $10^{-4}$. Finally, we inferred the water mass of the OF1 outflow to be 12.5\,$M_\oplus$. | 14 | 3 | 1403.4821 |
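To connect the numbers quoted in the preceding entry, note that a column density and a mass together imply an emitting area, since $M=N\,m({\rm H_2O})\,A$ for a uniform layer. Treating the quoted $N_{\rm H_2O}$ as an average over a single circular region is an assumption made here purely for illustration (the actual geometry follows from the shock model in the paper); with that caveat, the implied size is of order an arcsecond at 1 kpc.
\begin{verbatim}
import numpy as np

N_h2o  = 1.2e18                 # quoted shocked-water column density [cm^-2]
M_h2o  = 12.5 * 5.972e27        # quoted water mass: 12.5 Earth masses [g]
m_h2o  = 18.0 * 1.6726e-24      # mass of a water molecule [g]

area  = M_h2o / (N_h2o * m_h2o)            # implied emitting area [cm^2]
r_au  = np.sqrt(area / np.pi) / 1.496e13   # equivalent radius [AU]
theta = r_au / 1.0e3                       # arcsec at 1 kpc (1 AU = 1 mas at 1 kpc)

print("equivalent radius ~ %.0f AU, i.e. ~%.1f arcsec at 1 kpc" % (r_au, theta))
\end{verbatim}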
1403 | 1403.1129_arXiv.txt | Kilohertz QPOs can be used as a probe of the inner regions of accretion disks in compact stars and hence also of the properties of the central object. Most models of kHz QPOs involve epicyclic frequencies to explain their origin. We compute the epicyclic frequencies of nearly circular orbits around rotating strange quark stars. The MIT bag model is used to model the equation of state of quark matter and the uniformly rotating stellar configurations are computed in full general relativity. The vertical epicyclic frequency and the related nodal precession rate of inclined orbits are very sensitive to the oblateness of the rotating star. For slowly rotating stellar models of moderate and high mass strange stars, the sense of the nodal precession changes at a certain rotation rate. At lower stellar rotation rates the orbital nodal precession is prograde, as it is in the Kerr metric, while at higher rotation rates the precession is retrograde, as it is for Maclaurin spheroids. Thus, qualitatively, the orbits around rapidly rotating strange quark stars are affected more strongly by the effects of stellar oblateness than by the effects of general relativity. We show that epicyclic and orbital frequencies calculated numerically for small mass strange stars are in very good agreement with analytical formulae for Maclaurin spheroids. | \label{intro} Strange quark stars (SQS) are considered as a possible alternative to neutron stars as compact objects (see, e.g., \cite{Weber99} for a review). The possibility of the existence of quark matter was first recognized in the early seventies. Bodmer \cite{Bodmer71} remarked that matter consisting of deconfined up, down and strange quarks could be the absolute ground state of matter at zero pressure and temperature. If this is true, then macroscopic stellar-mass objects made of such matter, i.e., quark stars (also called ``strange stars") could in principle exist \cite{Witten84}. Typically, strange stars \cite{Alcock86,Haens86} are modeled with an equation of state (EOS) based on the phenomenological MIT-bag model of quark matter in which quark confinement is described by an energy term proportional to the volume \cite{Fahri84}. It was shown \cite{Gonde03} that a strange star described by the standard MIT bag model can be accelerated to high rotation rates in low-mass X-ray binaries (LMXBs) taking into account both a reasonable value of mass of the strange quark, and secular instabilities, such as the viscosity-driven instability and the $r$-mode instability. Therefore, strange stars in LMXBs could rotate rapidly (with spin frequency $> 400\,$Hz). This provides the astrophysical motivation for computing models of rapidly rotating strange stars. General relativity (GR) predicts the existence of the marginally stable orbit, within which no stable circular motion is possible \cite{Kapla49}, and all models of accretion disks around black holes take this into account; e.g. \cite{Shaku73,Sadow11}. In the case of neutron stars, the marginally stable orbit may be separated from the stellar surface by a gap, whose size depends on the equation of state of dense matter and the spin of the neutron star, as well as on its mass \cite{Kluzn85,Cook94}. Whether or not the accreting fluid attains that orbit depends also on the value of the stellar magnetic field \cite{Kluzn90}. Similar considerations apply to quark stars \cite{Gonde01}. The marginally stable orbit is often called the innermost stable circular orbit (ISCO). 
While in the Kerr geometry \cite{Bardeen72} this can lead to no misunderstandings, in general an ISCO cannot be identified with the marginally stable orbit. In some metrics a marginally stable orbit may be the outermost (in a certain radius range) stable circular orbit \cite{Pugli11,Vieira13}, while in the Newtonian gravity of a $1/r$ potential all circular orbits are stable, and the innermost one is simply the one grazing the surface of the spherical gravitating body. In this paper we will mostly avoid using the term ISCO in the context of quark stars, where the term ISCO is unambiguous only when there is a gap between the marginally stable orbit and the stellar surface. Whenever the marginally stable orbit is present around a neutron star or a quark star, its frequency is an upper bound on the frequency of stable orbital motion of a test particle. In addition to the orbital frequencies, epicyclic frequencies are of great interest in the discussion of accretion disks in GR. Indeed, the Rayleigh criterion for stability of circular motion is that $\nu_r^2>0$. Thus, the radial epicyclic frequency, $\nu_r$, goes to zero at the marginally stable orbit, and therefore must have a maximum at a somewhat larger radius.\footnote{In Schwarzschild geometry the ISCO is at $r=6M(G/c^2)$, while the maximum of $\nu_r$ is attained at $r=8M(G/c^2)$.} The presence of a maximum in $\nu_r$ allows mode trapping of $g$-modes of disk oscillations, whose eigenfrequency is somewhat lower than the maximum value of $\nu_r$ \cite{Kato80,Nowak92}, while the vertical epicyclic frequency is related to a generalization of the Lense-Thirring precession, the so called $c$-mode, whose eigenfrequency is approximately equal to the difference between the orbital and the vertical epicyclic frequencies \cite{Silbe01}. Such modes may have been detected in LMXBs as the celebrated kHz QPOs (see e.g., \cite{Klis00} for a review of QPOs). In Newtonian gravity, all circular orbits around spherically symmetric objects are stable. However, the marginally stable orbit may be present around {\sl rapidly rotating} Newtonian stars \cite{KluznBG01,Zduni01}. Indeed, for rapidly rotating Maclaurin spheroids, the ISCO is well outside the surface of this figure of equilibrium \cite{Amste02}. In this paper we report on numerical calculations in general relativity of epicyclic frequencies for rotating strange stars with astrophysically relevant masses of $1.4 M_{\odot}$ and $1.96 M_{\odot}$. We use an up-to-date version of the RNS code \cite{StergF95}. The motivation for the study is explained in \S~\ref{aims}, and the implications of our findings are sketched in \S~\ref{astro}. | We have computed numerical models of rapidly rotating strange quark stars and their external metric. In particular, we have computed the orbital and epicyclic frequencies of circular prograde orbits around SQS for astrophysically relevant stellar masses, i.e., those occurring in LMXBs. {\bf We have validated the epicyclic frequency module of the RNS code by comparing the code results for low-mass quark star models, at $M=0.01 M_\odot$ and $M=0.001 M_\odot$, with analytical formulae.} We find that the properties of orbital and epicyclic frequencies are a result of the interplay of competing GR and Newtonian effects. For moderately rotating massive stars the behavior of the epicyclic frequencies is very similar to that of the frequencies in neutron stars, which in turn are similar to those of prograde orbits in the Kerr metric of slowly spinning black holes ($0<a_*<<1$). 
However, at high rotation rates the behavior of the epicyclic frequencies near the quark star is similar to that of retrograde orbits in the Kerr metric, i.e., in the latter case the marginally stable orbit is pushed away from the star and the vertical epicyclic frequency is higher than the orbital one. For moderately rotating massive SQS the behavior of the epicyclic frequencies is dominated by GR effects. However, for rapidly rotating SQS a qualitatively new effect appears for prograde orbits---the vertical epicyclic frequency becomes larger than the orbital frequency. This is a non-relativistic effect of oblateness, known from a study of Maclaurin spheroids \cite{Gonde13}, as has been verified in a calculation of the frequencies of low mass stars, for which the effects of GR are unimportant, and for which it is found the vertical epicyclic frequency is always larger than the orbital frequency, even at very low stellar rotation rates (Figs.~\ref{velocity}--\ref{figmac}). The competition of GR effects and those of higher multipoles can clearly be seen in a plot of the difference between the vertical epicyclic frequency and the orbital one. Fig.~\ref{f:difference} shows that for a $1.4 M_\odot$ star, at large radii the vertical epicyclic frequency is lower than the orbital one, even at a rotation rate as high as 1165 Hz. It is only close to the star that the effect of higher multipoles prevails and raises the value of the vertical epicyclic frequency above the orbital one. Were this effect, affecting the kHz QPOs (\S~\ref{astro}), to occur also in neutron stars, it could have observable implications in many LMXBs. | 14 | 3 | 1403.1129 |
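As a numerical illustration of the non-rotating limit invoked in the preceding entry (whose footnote places the marginally stable orbit at $r=6GM/c^2$ and the maximum of $\nu_r$ at $r=8GM/c^2$ in Schwarzschild geometry), the snippet below evaluates the standard Schwarzschild expressions for the orbital and radial epicyclic frequencies. The 1.4 solar-mass value is an example, not one of the stellar models computed with the RNS code.
\begin{verbatim}
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33      # cgs units

M  = 1.4 * Msun
rg = G * M / c**2
r  = np.linspace(6.0 * rg, 20.0 * rg, 20001)

nu_phi = np.sqrt(G * M / r**3) / (2.0 * np.pi)                      # orbital frequency
nu_r   = nu_phi * np.sqrt(np.clip(1.0 - 6.0 * rg / r, 0.0, None))   # radial epicyclic

print("orbital frequency at r = 6GM/c^2: %.0f Hz" % nu_phi[0])
print("nu_r at r = 6GM/c^2: %.3f Hz (marginally stable orbit)" % nu_r[0])
print("nu_r peaks at r = %.2f GM/c^2" % (r[np.argmax(nu_r)] / rg))
\end{verbatim}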
1403 | 1403.6528.txt | Shortly after the seminal paper {\sl ``Self-Organized Criticality: An explanation of 1/f noise''} by Bak, Tang, and Wiesenfeld (1987), the idea has been applied to solar physics, in {\sl ``Avalanches and the Distribution of Solar Flares''} by Lu and Hamilton (1991). In the following years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into the numerical SOC toy models, such as the discretization of magneto-hydrodynamics (MHD) processes. The novel applications stimulated also vigorous debates about the discrimination between SOC models, SOC-like, and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC studies from the last 25 years and highlight new trends, open questions, and future challenges, as discussed during two recent ISSI workshops on this theme. | About 25 years ago, the concept of {self-organized criticality (SOC)} emerged (Bak et al.~1987), initially envisioned to explain the ubiquitous 1/f-power spectra, which can be characterized by a powerlaw function $P(\nu) \propto \nu^{-1}$. The term {\sl 1/f power spectra} or {\sl flicker noise} should actually be understood in broader terms, including power spectra with pink noise ($P(\nu) \propto \nu^{-1}$), red noise ($P(\nu) \propto \nu^{-2}$), and black noise ($P(\nu) \propto \nu^{-3}$), essentially everything except white noise ($P(\nu) \propto \nu^{0}$). While white noise represents traditional random processes with uncorrelated fluctuations, 1/f power spectra are a synonym for time series with non-random structures that exhibit long-range correlations. These non-random time structures represent the avalanches in Bak's paradigm of sandpiles. Consequently, Bak's seminal paper in 1987 triggered a host of numerical simulations of sandpile avalanches, which all exhibit powerlaw-like size distributions of avalanche sizes and durations. These numerical simulations were, most commonly, cellular automata in the language of complexity theory, which are able to produce complex spatio-temporal patterns by iterative application of a simple mathematical redistribution rule. The numerical algorithms of cellular automata are extremely simple, basically a one-liner that defines the redistribution rule, with an iterative loop around it, but can produce the most complex dynamical patterns, similar to the beautiful geometric patterns created by Mandelbrot's fractal algorithms (Mandelbrot 1977, 1983, 1985). An introduction and exhaustive description of cellular automaton models that simulate SOC systems is given in Pruessner (2012, 2013), and a review of cellular automaton models applied to solar physics is given in Charbonneau et al.~(2001). 
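The "one-liner redistribution rule with an iterative loop around it" mentioned above is easy to make explicit. The following sketch is a minimal BTW-type sandpile in Python; the grid size, threshold, and number of driving steps are arbitrary illustrative choices, and the implementation is not tuned for speed (it takes a few seconds to run).
\begin{verbatim}
import numpy as np

rng   = np.random.default_rng(1)
L, zc = 32, 4                          # lattice size and toppling threshold
z     = np.zeros((L, L), dtype=int)
sizes = []

for step in range(20000):              # slow driving: one grain at a time
    i, j = rng.integers(0, L, size=2)
    z[i, j] += 1
    n_top = 0
    while True:                        # relax until no site exceeds the threshold
        unstable = np.argwhere(z >= zc)
        if len(unstable) == 0:
            break
        for a, b in unstable:          # the redistribution rule
            z[a, b] -= 4
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:
                    z[na, nb] += 1     # grains crossing the boundary are lost
            n_top += 1
    if n_top > 0:
        sizes.append(n_top)            # avalanche size = number of topplings

sizes = np.array(sizes)
print("avalanches: %d, largest: %d topplings" % (sizes.size, sizes.max()))
\end{verbatim}
After the initial transient, the avalanche sizes recorded in this way follow a powerlaw-like distribution over the limited range that such a small grid allows, which is the behavior the cited cellular automaton studies quantify on much larger lattices.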
Four years after introduction, Bak's SOC concept was applied to solar flares, which were known to exhibit similar powerlaw size distributions for hard X-ray peak fluxes, total fluxes, and durations as the cellular automaton simulations produced for avalanche sizes and durations (Lu and Hamilton 1991). This discovery enabled a host of new applications of the SOC concept to astrophysical phenomena, such as solar and stellar flare statistics, magnetospheric substorms, X-ray pulses from accretion disks, pulsar glitches, and so forth. A compilation of SOC applications to astrophysical phenomena is given in a recent textbook (Aschwanden 2011a), as well as in recent review articles (Aschwanden 2013; Crosby 2011). The successful spreading of the SOC concept in astrophysics mirrored the explosive trend in other scientific domains, such as the application of SOC in magnetospheric physics (auroras, substorms; see review by Sharma \etal~(2014), in geophysics (earthquakes, mountain and rock slides, snow avalanches, forest fires; see Hergarten 2002 and review by Hergarten in this volume), in biophysics (evolution and extinctions, neuron firing, spread of diseases), in laboratory physics (Barkhausen effect, magnetic domain patterns, Ising model, tokamak plasmas; Jensen 1998), financial physics (stock market crashes; Sornette 2003), and social sciences (urban growth, traffic, global networks, internet) or sociophysics (Galam 2012). This wide range of applications elevated the SOC concept to a truly interdisciplinary research area, which inspired Bak's vision to explain ``how nature works'' (Bak 1996). What is common to all these systems is the statistics of nonlinear processes, which often ends up in powerlaw-like size distributions. Other aspects that are in common among the diverse applications are complexity, contingency, and criticality (Bak and Paczuski 1995), which play a grand role in complexity theory and systems theory. What became clear over the last 25 years of SOC applications is the duality of (1) a universal statistical aspect, and (2) a special physical system aspect. The universal aspect is a statistical argument that can be formulated in terms of the scale-free probability conjecture (Aschwanden 2012a), which explains the powerlaw function and the values of the powerlaw slopes of most occurrence frequency distributions of spatio-temporal parameters in avalanching systems. This statistical argument for the probability distributions of nonlinear systems is as common as the statistical argument for binomial or Gaussian distributions in linear or random systems. In this sense, solar flares, earthquakes, and stockmarket systems have a statistical commonality (e.g., de Arcangelis et al.~2006). On the other hand, each SOC system may be governed by different physical principles unique to each observed SOC phenomenon, such as plasma magnetic reconnection physics in solar flares, mechanical stressing of tectonic plates in earthquakes, or the networking of brokers in stock market crashes. So, one should always be aware of this duality of model components when creating a new SOC model. There is no need to re-invent the universal statistical aspects or powerlaw probability distributions each time, while the modeling of physical systems may be improved with more accurate measurements and model parameterizations in every new SOC application. 
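Since the statistical commonality stressed above ultimately rests on measuring powerlaw slopes of size distributions, it may be useful to recall the standard maximum-likelihood estimator for such a slope (the estimator discussed by Clauset, Shalizi and Newman 2009). The sample below is synthetic, generated by inverse-transform sampling with an arbitrary input slope; it merely stands in for, say, a list of flare peak fluxes.
\begin{verbatim}
import numpy as np

rng        = np.random.default_rng(2)
alpha_true = 1.8                       # input slope of the synthetic sample
x_min      = 1.0

u = rng.uniform(size=100000)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # powerlaw-distributed sizes

alpha_hat = 1.0 + x.size / np.sum(np.log(x / x_min))   # maximum-likelihood slope
sigma     = (alpha_hat - 1.0) / np.sqrt(x.size)        # its standard error

print("recovered slope: %.3f +/- %.3f (input %.1f)" % (alpha_hat, sigma, alpha_true))
\end{verbatim}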
There is another duality in the application of SOC: the numerical world of lattice simulation toy models, and the real world of quantitative observations governed by physical laws. The world of lattice simulations has its own beauty in producing complexity with mathematical simplicity, but it cannot capture the physics of a SOC system. It can be easily designed, controlled, modified, and visualized. It allows us to perform Monte-Carlo simulations of SOC models and may give us insights about the universal statistical aspects of SOC. Real world phenomena, in contrast, need to be observed and measured with large statistics and reliable parameters that have been cleaned from systematic bias effects, incomplete sampling, and unresolved spatial and temporal scales, which is often hard to achieve. However, computer power has increased drastically over the last 25 years, exponentially according to Gordon Moore's law, so that enormous databases with up to $\approx 10^9$ events have been gathered per data set from some SOC phenomena, such as from solar small-scale phenomena for instance (McIntosh and Gurman 2005). We organize this review by describing first some basics of SOC systems (Section 2), concerning SOC definitions, elements of a SOC system, the probability concept, geometric scaling laws, transport process, derivation of occurrence frequency distributions, waiting time distributions, separation of time scales, and the application of cellular automata. Then we deliver an overview on astrophysical applications (Section 3), grouped by observational results and theoretical models in solar physics, magnetospheres, planets, stars, galaxies, and cosmology. In Section 4 we capture some discussions, open issues and challenges, critiques, limitations, and new trends on the SOC subject, including also discussions of SOC-related processes, such as turbulence and percolation. The latter section mostly results from discussions during two weeks of dedicated workshops on ``Self-organized Criticality and Turbulence'', held at the International Space Science Institute (ISSI) Bern during 2012 and 2013, attended by participants who have contributed to this review. \clearpage | The literature on self-organized criticality (SOC) models counts over 3000 refereed publications at the time of writing, with about 500 papers dedicated to solar and astrophysics. Given the relatively short time interval of 25 years since the SOC concept was born (Bak et al.~1987), the productivity in this interdisciplinary and innovative field speaks for the generality, versatility, and inspirational power of this new scientific theory. Although there exist some previous similar concepts in complexity theory, such as phase transitions, turbulence, percolation, or branching theory, the SOC concept seems to have the broadest scope and the most general applicability to phenomena with nonlinear energy dissipation in complex systems with many degrees of freedom. Of course there is no such thing as a single ``SOC theory'', but we rather deal with various SOC concepts (that are more qualitative rather than quantitative), which in some cases have been developed into more rigorous quantitative SOC models that can be tested with real-world data. Computer simulations of the BTW type provide toy models that can mimic complexity phenomena, but they generally lack the physics of real-world SOC phenomenona, because their discretized lattice grids do not reflect in any way the microscopic atomic or subatomic structure of real-world physical systems. 
In this review we focus on the astrophysical applications only, including solar physics, magnetospheric, planetary, stellar, and galactic physics. We summarize first some basic concepts of a generalized SOC theory, covering different SOC definitions, the driver, instability and criticality, avalanches, microscopic structures, basic spatio-temporal scaling laws and derivations of basic occurrence frequency or size distributions, waiting time distributions, and a comparison of basic numerical cellular automaton simulations. Most of these aspects are the ingredients of a generalized fractal-diffusive self-organized criticality (FD-SOC) model (Aschwanden 2014), which we use as a standard model for the macroscopic description of a SOC system, bearing in mind that it represents only a first-order approximation to the statistics of the microphysics of SOC avalanches. This standard model is based on the scale-free probability conjecture, fractal geometry, and diffusive transport. This model can explain most of the astrophysical observations and enables us to discriminate which SOC-related observations can be explained with standard scaling laws, and which phenomena represent mavericks that need either a special model, an improved data analysis, or better statistical completeness. We summarize the major findings of this review in the following: \begin{enumerate} \item{A general working definition of a SOC system that can be applied to the majority of the observed astrophysical phenomena interpreted as SOC phenomena can be formulated as: {\sl SOC is a critical state of a nonlinear energy dissipation system that is slowly and continuously driven towards a critical value of a system-wide instability threshold, producing scale-free, fractal-diffusive, and intermittent avalanches with powerlaw-like size distributions} (Aschwanden 2014). This generalized definition expands the original meaning of self-organized ``criticality'' to a wider class of critical points and instability thresholds that have a similar (nonlinear) dynamical behavior and produce similar (powerlaw-like) statistical size distributions.} \item{A generalized (macroscopic description of a) SOC model can be formulated as a function of the Euclidean space dimension $d$, the spatio-temporal spreading exponent $\beta$, a fractal dimension $D_d$, and a volume-flux scaling (or radiation coherency) exponent $\gamma$. 
For standard conditions [$d=3$, $D_d \approx (1+d)/2$, $\beta=1$, and $\gamma=1$], this SOC model predicts (with no free parameters) powerlaw distributions for all SOC parameters, namely $\alpha_L=3$ for length scales, $\alpha_A=2$ for areas, $\alpha_V=5/3$ for volumes, $\alpha_F=2$ for fluxes or energy dissipation rates, $\alpha_F=5/3$ for peak fluxes or peak energy dissipation rates, and $\alpha_E=3/2$ for time-integrated fluences or energies of SOC avalanches.} \item{The underlying correlations or scaling laws are: $A \propto L^2$ for the maximum avalanche area, $A_f \propto L^{D_d}$ for the fractal avalanche area, $V \propto L^3$ for the maximum avalanche volume, $V_f \propto L^{D_d}$ for the fractal avalanche volume, $T \propto L^{(2/\beta)}$ for the avalanche duration, $F \propto L^{(\gamma D_d)}$ for the flux or energy dissipation rate, $P \propto L^{(\gamma d)}$ for the peak flux or peak energy dissipation rate, $E \propto L^{(\gamma D_d+2/\beta)}$ for the fluence or total energy.} \item{Moreover, the FD-SOC model predicts a waiting time distribution with a slope of $\alpha_{\Delta t}=2$ for short waiting times, and an exponential drop-off for long waiting times, where the two waiting time regimes are attributed to intermittently active periods, and to randomly distributed quiescent periods. The contiguous activity periods are predicted to have persistence and memory.} \item{Among the astrophysical applications we find agreement between the predicted and observed size distribution for 10 out of 14 reported phenomena, including lunar craters, meteorites, asteroid belts, Saturn ring particles, auroral events during magnetospheric substorms, outer radiation belt electron events, solar flares, soft gamma-ray repeaters, blazars, and black-hole objects.} \item{Discrepancies between the predicted and observed size distributions are found for solar energetic particle (SEP) events, stellar flares, pulsar glitches, the Cygnus X-1 black hole, and cosmic rays, which require a modification of the standard FD-SOC model or improved data analysis. The disagreement for SEP events is believed to be due to a selection bias for large events, or could alternatively be modeled with a different dimensionality of the SOC system. For stellar flares we conclude that the bolometric fluence is not proportional to the dissipated energy and flaring volume. Pulsar glitches are subject to small-number statistics. Black hole pulses from Cygnus X-1 have an extremely steep size distribution that could be explained by a suppression of large pulses for a certain period after a large pulse. For cosmic rays, the energy distribution appears to be subject to incomplete uni-directional sampling by in-situ observations, rather than omni-directional sampling by remote-sensing methods.} \item{Some of the SOC-associated phenomena have also been modeled with alternative models regarding their size or waiting time distributions and were found to be commensurable, such as in terms of turbulence, percolation, branching theory, or phase transitions. All these theories have some commonalities in their concept and can often not be discriminated based on their observed size distributions alone. Some of the physical processes may coexist and not exclude each other, such as SOC and turbulence in the solar wind.} \end{enumerate} A summary of theoretically predicted and observed powerlaw indices of selected astrophysical SOC phenomena is listed in Table 15, while more complete compilations for each phenomenon are given in Tables 2 to 14. 
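The predicted powerlaw indices quoted in the items above follow directly from the scale-free probability conjecture, $N(L)\,dL \propto L^{-d}\,dL$, combined with the listed scaling relations: if a parameter scales as $x \propto L^{\gamma_x}$, its size distribution has the slope $\alpha_x = 1 + (d-1)/\gamma_x$. The short Python sketch below reproduces the standard-case numbers ($d=3$, $D_d=2$, $\beta=1$, $\gamma=1$); it is only a bookkeeping aid, not a new derivation.
\begin{verbatim}
from fractions import Fraction as F

# standard FD-SOC conditions quoted in the text
d     = F(3)            # Euclidean dimension
D_d   = (1 + d) / 2     # fractal dimension, D_d = (1+d)/2 = 2
beta  = F(1)            # spatio-temporal spreading exponent
gamma = F(1)            # volume-flux (coherency) exponent

# scaling exponents gamma_x in  x ~ L**gamma_x  (from the scaling laws listed above)
scalings = {
    "length L":         F(1),
    "area A":           F(2),
    "volume V":         F(3),
    "duration T":       2 / beta,
    "flux F":           gamma * D_d,
    "peak flux P":      gamma * d,
    "fluence/energy E": gamma * D_d + 2 / beta,
}

# scale-free probability conjecture: N(L) dL ~ L**(-d) dL  =>  alpha_x = 1 + (d-1)/gamma_x
for name, g in scalings.items():
    alpha = 1 + (d - 1) / g
    print(f"{name:18s}  alpha = {alpha}  ({float(alpha):.3f})")
\end{verbatim}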
The variation of powerlaw values among the same phenomena indicates incompatible data analysis methods or statistically irreconcilable samples. Improved data analysis, larger statistics, and more detailed complexity models are therefore called for in future studies, which should reconcile the existing discrepancies and answer the remaining open questions and challenges. Besides these statistical improvements, physical models (Table 16) that reproduce the underlying scaling laws are also expected in future work. All of these tasks offer a rich and rewarding avenue for future research in the field of complex systems. The SOC concept has clearly stimulated a new way of thinking about and analyzing the dynamics and statistics of complex systems. | 14 | 3 | 1403.6528 |
1403 | 1403.4360_arXiv.txt | We present the statistics of faint submillimeter/millimeter galaxies (SMGs) and serendipitous detections of a submillimeter/millimeter line emitter (SLE) with no multi-wavelength continuum counterpart revealed by the deep ALMA observations. We identify faint SMGs with flux densities of $0.1-1.0$ mJy in the deep Band 6 and Band 7 maps of $10$ independent fields that reduce cosmic variance effects. The differential number counts at $1.2$ mm are found to increase with decreasing flux density down to $0.1$ mJy. Our number counts indicate that the faint ($0.1-1.0$ mJy, or SFR$_{\rm IR} \sim 30-300 M_\odot$ yr$^{-1}$) SMGs contribute nearly a half of the extragalactic background light (EBL), while the remaining half of the EBL is mostly contributed by very faint sources with flux densities of $<0.1$ mJy (SFR$_{\rm IR} \lesssim 30 M_\odot$ yr$^{-1}$). We conduct counts-in-cells analysis with the multifield ALMA data for the faint SMGs, and obtain a coarse estimate of galaxy bias, $b_{\rm g} < 4$. The galaxy bias suggests that the dark halo masses of the faint SMGs are $\lesssim 7 \times 10^{12} M_\odot$, which is smaller than those of bright ($>1$ mJy) SMGs, but consistent with abundant high-$z$ star-forming populations such as sBzKs, LBGs, and LAEs. Finally, we report the serendipitous detection of SLE--1 with continuum counterparts neither in our 1.2 mm-band nor multi-wavelength images including ultra deep \textit{HST}/WFC3 and \textit{Spitzer} data. The SLE has a significant line at $249.9$ GHz with a signal-to-noise ratio of $7.1$. If the SLE is not a spurious source made by unknown systematic noise of ALMA, the strong upper limits of our multi-wavelength data suggest that the SLE would be a faint galaxy at $z \gtrsim 6$. | \label{sec:introduction} In the past decades, it has been found that the amount of the cosmic infrared (IR) background is comparable to that of the cosmic optical background \citep{puget1996,fixsen1998,hauser1998,hauser2001,dole2006}. The large amount of energy in the IR indicates that a significant fraction of the star formation in the universe is hidden by dust. Probing far-infrared (FIR) sources is key to a full understanding of galaxy formation history, and can provide strong constraints on models of galaxy formation \citep[e.g.,][]{granato2004,baugh2005,fontanot2007,shimizu2012,hayward2013}. Considerable progress has been made in charting the abundance of FIR sources \citep[see the recent review of][]{casey2014} and shown that the extragalactic background light (EBL) at submillimeter and millimeter wavelengths is largely contributed by dusty star-forming galaxies, the so-called submillimeter galaxies \citep[SMGs;][]{lagache2005}. With a $15$-m dish, the James Clerk Maxwell Telescope (JCMT) blank-field $850\mu$m submillimeter surveys with Submillimeter Common User Bolometer Array \citep[SCUBA;][]{holland1999} have resolved $\sim 20-30${\%} of the $850\mu$m EBL into distinct, bright SMGs with $S_{850\mu{\rm m}} > 2$ mJy \citep[e.g.,][]{barger1998,hughes1998,barger1999,eales1999,eales2000,scott2002,borys2003,wang2004,coppin2006}. Similar results have been obtained at $870\mu$m with the Large APEX Bolometer Camera \citep[LABOCA;][]{siringo2009} on the $12$-m APEX telescope \citep{weiss2009}. 
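Resolved fractions of this kind follow from integrating $S\,dN/dS$ over the flux-density range a survey reaches, relative to the full background integral. A schematic Python version is given below; the double power-law form of the counts and every parameter value in it are illustrative placeholders, not the counts measured by any of the surveys cited in this entry, so only the structure of the calculation should be taken from it.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def dnds(S, S_star=1.0, N0=1.0e3, a_faint=2.0, a_bright=3.2):
    """Illustrative double power-law differential counts dN/dS [deg^-2 mJy^-1].
    All parameter values are placeholders, not fits to real survey data."""
    return N0 / ((S / S_star) ** a_faint + (S / S_star) ** a_bright)

def ebl_fraction(S_lo, S_hi, S_min=1e-3, S_max=50.0):
    """Fraction of the background integral S * dN/dS dS contributed by [S_lo, S_hi]."""
    num, _ = quad(lambda S: S * dnds(S), S_lo, S_hi)
    den, _ = quad(lambda S: S * dnds(S), S_min, S_max)
    return num / den

print("fraction from 0.1-1.0 mJy :", round(ebl_fraction(0.1, 1.0), 2))
print("fraction from  >1.0  mJy  :", round(ebl_fraction(1.0, 50.0), 2))
print("fraction from  <0.1  mJy  :", round(ebl_fraction(1e-3, 0.1), 2))
\end{verbatim}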
At $1.1$ mm, about $6-10$ {\%} of the EBL has been resolved into individual sources by deep surveys with the AzTEC camera \citep{wilson2008} on both the JCMT \citep[e.g.,][]{perera2008,austermann2009,austermann2010} and the $10$-m Atacama Submillimeter Telescope Experiment \citep[ASTE; e.g.,][]{aretxaga2011,scott2010,hatsukade2011,scott2012}. The biggest challenge for constructing the number counts of SMGs from such observations is the coarse spatial resolutions of the single-dish telescopes. Poor resolutions impose a fundamental limitation, the confusion limit \citep{condon1974}, on our ability to directly detect faint SMGs due to confusion noises. For instance, blank-field SCUBA surveys cannot reach the sensitivities required to identify the faint population below $2$ mJy at $850\mu$m. However, since the fraction of the millimeter and submillimeter EBL above $2$ mJy is not large, the total EBL is likely dominated by the population below the limit. Observations of massive galaxy cluster fields push the detection limits of intrinsic flux density toward fainter ones thanks to gravitational lensing effects \citep[e.g.,][]{smail1997,smail2002,cowie2002,knudsen2008,johansson2011,chen_cc2013}, but the positional uncertainties of the SMGs cause large uncertainties in the amplifications and the intrinsic fluxes \citep{chen_cc2011}. Another issue which arises from the poor resolutions is source blending; it is possible that several faint SMGs within a beam appear as a single brighter SMG. Source blending possibly changes the shape of the number counts, most critically by mimicking a population of bright SMGs. Multiplicity in a single-dish beam is also expected from evidence of strong clustering among SMGs \citep[e.g.,][]{blain2004,scott2006,weiss2009,hickox2012}. In fact, interferometric observations have shown that close pairs are common among SMGs and a significant fraction of bright SMGs found by single-dish observations are resolved into multiple sources \citep[e.g.,][]{ivison2007b,wang2011,smolcic2012,barger2012,hodge2013,karim2013}. although this issue is still under debate \citep[e.g.,][]{hezaveh2013,chen_cc2013b,koprowski2014}. To construct more reliable number counts down to flux densities of $< 1$ mJy, we need to conduct deep surveys with high angular resolution. The Atacama Large Millimeter/submillimeter Array (ALMA) enables us to explore faint ($0.1-1.0$ mJy) SMGs without effect of confusion limit thanks to its high sensitivity and high angular resolution. \cite{hatsukade2013} have shown the potential of ALMA; they have obtained number counts of unlensed faint SMGs down to sub-mJy level using ALMA. However, their ALMA data were originally obtained for their $20$ targets selected in one blank field, the Subaru/\textit{XMM-Newton} Deep Survey (SXDS) field \citep{furusawa2008} and the total survey area is not large, which may induce uncertainties in their measurements. The physical properties of faint SMGs and their relationships with other galaxy populations found at similar redshifts have not yet been investigated well. The IR luminosities of the faint SMGs with $1.2$ mm flux densities of $0.1-1.0$ mJy are estimated to be $L_{\rm IR} \sim (1.5-15) \times 10^{11} L_\odot$, if we adopt a modified blackbody with typical values for SMGs, i.e., spectral index of $\beta_{\rm d} = 1.5$ and dust temperature of $T_{\rm d} = 35$ K \citep[e.g.,][]{kovacs2006,coppin2008b}, located at $z=2.5$ \citep[e.g.,][]{chapman2005,yun2012}. 
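The $L_{\rm IR}$ estimate just described can be sketched numerically: normalise a modified blackbody with $\beta_{\rm d}=1.5$ and $T_{\rm d}=35$ K (the values quoted above) to an observed $1.2$ mm flux density at $z=2.5$, integrate it over the rest-frame $8-1000\mu$m band using the cosmology stated at the end of this introduction, and convert to an SFR with the Kennicutt (1998) calibration. The sketch below is an order-of-magnitude check rather than a reproduction of the exact numbers quoted in this entry.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# constants (cgs)
h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
L_SUN, MPC = 3.828e33, 3.086e24

# cosmology stated in the paper: flat, Om=0.3, OL=0.7, H0=70 km/s/Mpc
H0 = 70.0 * 1e5 / MPC                               # s^-1
def lum_dist(z, Om=0.3, OL=0.7):
    dc, _ = quad(lambda zz: 1.0 / np.sqrt(Om * (1 + zz) ** 3 + OL), 0.0, z)
    return (1 + z) * (c / H0) * dc                  # cm

def mod_bb(nu, T=35.0, beta=1.5):
    """Optically thin modified blackbody shape, nu**beta * B_nu(T), arbitrary norm."""
    return nu ** beta * nu ** 3 / (np.exp(h * nu / (k_B * T)) - 1.0)

def L_IR(S_obs_mJy, nu_obs=250e9, z=2.5):
    """Rest-frame 8-1000 micron luminosity [L_sun] for an observed flux density [mJy]."""
    D_L = lum_dist(z)
    nu_rest = nu_obs * (1 + z)
    # monochromatic rest-frame luminosity density, L_nu = 4 pi D_L^2 S_nu / (1+z)
    L_nu_rest = 4 * np.pi * D_L ** 2 * (S_obs_mJy * 1e-26) / (1 + z)   # erg/s/Hz
    nu_lo, nu_hi = c / (1000e-4), c / (8e-4)        # rest-frame 1000 and 8 micron, in Hz
    shape_int, _ = quad(mod_bb, nu_lo, nu_hi, limit=200)
    return L_nu_rest * shape_int / mod_bb(nu_rest) / L_SUN

for S in (0.1, 1.0):
    L = L_IR(S)
    sfr = 1.7e-10 * L     # Kennicutt (1998): SFR ~ 1.7e-10 (L_IR/L_sun) Msun/yr
    print(f"S_1.2mm = {S:.1f} mJy  ->  L_IR ~ {L:.2e} L_sun,  SFR ~ {sfr:.0f} Msun/yr")
\end{verbatim}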
In this case, from the estimated IR luminosities, their obscured star-formation rates (SFRs) are calculated to be SFR$_{\rm IR} \sim 30-300 M_\odot$ yr$^{-1}$ \citep{kennicutt1998b}. Recently, \textit{Herschel} observations have revealed that typical UV-selected galaxies such as Lyman-break galaxies (LBGs) have a median IR luminosity of $L_{\rm IR} \simeq 2.2 \times 10^{11} L_\odot$ \citep[][see also \citealt{leek2012,davies2013}]{reddy2012}, which is comparable to that of the faint SMGs. From a stacking analysis of \textit{Herschel} and ALMA data, \cite{decarli2014} have found that $K$-selected galaxies including star-forming BzK galaxies (sBzKs) have IR luminosities of $L_{\rm IR} = (5-11) \times 10^{11} L_\odot$. These results suggest that some of the faint SMGs might be FIR counterparts of UV- and/or $K$-selected galaxies. The spatial clustering of SMGs is an important observable, since its strength can be used to estimate an average mass of their hosting dark matter haloes. \cite{blain2004} have measured the clustering length of SMGs brighter than $5$ mJy at $850\mu$m, and found that the clustering length is significantly larger than those of optical/UV color-selected galaxies at similar redshifts, suggesting that SMGs are hosted by very massive dark haloes, with dark halo masses of $M_{\rm DH} \sim 10^{13} M_\odot$ \citep[see also,][]{webb2003,weiss2009,hickox2012}. Although several studies have investigated the clustering properties of SMGs, little attempt has been made for measuring those of faint SMGs with sub-mJy flux densities. This is because the previous large area surveys with the single-dish telescopes cannot detect faint SMGs due to the confusion limit. In this paper, we make use of multifield deep ALMA data, i.e., our own data for two independent fields and archival data with relatively long integration times, taken with the ALMA Band 6 and Band 7. Each field corresponds to a single primary beam area. We focus on serendipitously detected sources other than the targeted sources. The combination of the results of the deep ALMA surveys and those of a wide area survey in the literature yields robust estimates on the number counts of SMGs over a wide range of flux densities ($\simeq 0.1-5$ mJy), which % is currently one of the most reliable estimates on the abundance of SMGs.\footnote{ It is expected that the number counts of faint SMGs will be improved in the near future by combining results from ongoing ALMA deep field observations. } In addition, from the field-to-field scatter in their number counts, we carry out a pathfinder study for estimating the clustering properties of the faint SMGs. Finally, we report the serendipitous detection of a line emitter at $1.2$ mm using ALMA Band 6 data originally obtained for detecting [{\sc Cii}] emission from an extremely luminous Ly$\alpha$ blob at $z=6.595$, Himiko \citep{ouchi2013}. It is motivated by a recent discovery of a bright millimeter emission line beyond their target, nearby merging galaxies VV114 \citep{tamura2014}. Their spectral energy distribution (SED) analysis has shown that the detected line is likely a redshifted $^{12}$CO emission line from an X-ray bright galaxy at $z=2.467$, demonstrating that deep interferometric observations with high angular resolution can fortuitously detect emission lines not only from their main targets \citep{swinbank2012} but also from sources other than the targets \citep[see also,][]{kanekar2013b}. The outline of this paper is as follows. 
After describing the ALMA observations and data reduction in Section \ref{sec:data}, we perform source extractions and carry out simulations to derive the number counts of SMGs in Section \ref{sec:data_analysis}. In Section \ref{sec:number_counts}, after we construct the number counts, we compare them with the previous observational results and model predictions, and estimate the contributions from the resolved sources to the EBL at 1.2 mm. In the next section, we present the results of our counts-in-cells analysis for faint SMGs. In Section \ref{sec:serendipitous_lines}, we report detections of serendipitous submillimeter emission lines in our ALMA data. A summary is presented in Section \ref{sec:summary}. Throughout this paper, we assume a flat universe with $\Omega_{\rm m} = 0.3$, $\Omega_\Lambda = 0.7$, $n_{\rm s} = 1$, $\sigma_8 = 0.8$, and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. We use magnitudes in the AB system \citep{oke1983}. Following the method by \cite{hatsukade2013}, we scale the flux density of a source observed at a wavelength different from $1.2$ mm to the flux density at $1.2$ mm by using a modified blackbody with typical values for SMGs as noted above. For the data that we analyze in this paper, we adopt the flux density ratios summarized in Table \ref{tab:ratios_fnu}. For the other data, we use $S_{\rm 1.2mm} / S_{870\mu{\rm m}} = 0.43$, $S_{\rm 1.2mm} / S_{\rm 1.1mm} = 0.79$, and $S_{\rm 1.2mm} / S_{\rm 1.3mm} = 1.25$. \begin{deluxetable*}{ccccccc} \tablecolumns{7} \tablewidth{0pt} \tablecaption{ Survey Fields \label{tab:ratios_fnu}} \tablehead{ \colhead{Map} & \colhead{Target} & \colhead{$\lambda_{\rm obs}$} & \colhead{$\nu_{\rm obs}$} & \colhead{$\sigma$} & \colhead{$S_{1.2{\rm mm}} / S_{\rm obs}$} & \colhead{References} \\ \colhead{ } & \colhead{ } & \colhead{(mm)} & \colhead{(GHz)} & \colhead{(mJy beam$^{-1}$)} & \colhead{ } & \colhead{ } \\ \colhead{ } & \colhead{ } & \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} } \startdata $1$ & Himiko & $1.16$ & $259$ & $0.017$ & $0.90$ & (a) \\ $2$ & NB921-N-79144 & $1.22$ & $245$ & $0.051$ & $1.05$ & (b) \\ $3$ & LESS J033229.4$-$275619 & $1.21$ & $247$ & $0.075$ & $1.03$ & (c) \\ $4$ & CFHQS J0210$-$0456 & $1.20$ & $249$ & $0.031$ & $1.00$ & (d) \\ $5$ & CFHQS J2329$-$0301 & $1.20$ & $250$ & $0.021$ & $1.00$ & (d) \\ $6$ & ULAS J131911.29$+$095051.4 & $1.16$ & $258$ & $0.072$ & $0.91$ & (e) \\ $7$ & SDSS J104433.04$-$012502.2 & $1.04$ & $288$ & $0.088$ & $0.68$ & (e) \\ $8$ & SDSS J012958.51$-$003539.7 & $1.04$ & $288$ & $0.052$ & $0.68$ & (e) \\ $9$ & SDSS J231038.88$+$185519.7 & $1.14$ & $263$ & $0.058$ & $0.87$ & (e) \\ $10$ & SDSS J205406.49$-$000514.8 & $1.15$ & $261$ & $0.031$ & $0.89$ & (e) \enddata \tablecomments{ (1) Observed wavelength. (2) Observed frequency. (3) The $1\sigma$ noise measured in each map before primary beam correction. (4) Ratio of the flux density at $1.2$ mm, $S_{1.2{\rm mm}}$, to the observed flux density, $S_{\rm obs}$, on the assumption of a modified blackbody with typical values for SMGs. (5) (a) \cite{ouchi2013}; (b) R. Momose et al. in preparation; (c) \cite{nagao2012}; (d) \cite{willott2013b}; (e) \cite{wang2013}. } \end{deluxetable*} | \label{sec:summary} In this paper, we have presented the number counts and the spatial clustering of faint SMGs, and reported a serendipitous detection of an SLE with no multiwavelength continuum counterpart revealed by the deep ALMA observations. 
Exploiting the deep ALMA Band 6/Band 7 continuum data for the $10$ independent fields that reduce the effect of cosmic variance, we have detected faint SMGs with flux densities of $0.1-1.0$ mJy. In addition, we have conducted a blind search for line emitters in the ALMA data cubes, and identified SLE--1. Our main results are as follows. \begin{itemize} \item We have constructed the $1.2$ mm differential number counts of SMGs and found that the number counts increase with decreasing flux density down to $0.1$ mJy. We have also found that the slope of the number counts for the faint ($0.1-1$ mJy, or SFR$_{\rm IR} \sim 30-300 M_\odot$ yr$^{-1}$) SMGs is smaller than that for bright ($>1$ mJy) SMGs. Our number counts have revealed that the faint SMGs contribute about $50${\%} of the EBL, which is significantly larger than the contributions from the bright SMGs ($\sim 7${\%}). The remaining $40${\%} of the EBL is contributed by very faint SMGs with flux densities of $<0.1$ mJy (SFR$_{\rm IR} \lesssim 30 M_\odot$ yr$^{-1}$). \item From the field-to-field scatter in their number counts, we have obtained a coarse estimate of the galaxy bias of the faint SMGs, $b_{\rm g} < 4$, which suggests that the dark halo masses of the faint SMGs is $M_{\rm DH} \lesssim 7 \times 10^{12} M_\odot$. Their bias is found to be lower than those of bright SMGs (\citealt{webb2003}; \citealt{blain2004}; \citealt{weiss2009}; \citealt{hickox2012}; c.f., \citealt{williams2011}), indicating the clustering segregation with the FIR luminosity in SMGs. We also find that the galaxy bias of the faint SMGs is consistent with those of abundant star-forming galaxy populations at high redshifts such as sBzKs, LBGs, and LAEs, which implies that some of the faint SMGs might be their FIR counterparts. It should be noted that our estimates suffer from relatively large uncertainties mainly due to the small number statistics and unexplored redshift distribution of the faint SMG population, which will be overcome after a large number of deep ALMA maps and a redshift distribution of faint SMGs become available in the near future. \item We have found that SLE--1 has no counterpart in the multiwavelength images, suggesting that it would be a faint galaxy at a high redshift. SLE--1 shows a significant line detection with an SNR of $7.1$ at $249.9$ GHz. Taking advantage of the upper limits estimated from the deep images at wavelengths from the optical to $1.2$ mm, we have discussed what the detected line and the redshift of SLE--1 are. If the detection of SLE--1 is not induced by unknown systematic noise effects in ALMA data, the possible explanations for the detected line of SLE--1 are [{\sc Cii}]$158\mu$m from a dusty star-forming galaxy at $z=6.60$ or [{\sc Oiii}]$88\mu$m from a star-forming galaxy with a moderate metallicity of $Z/Z_\odot \simeq 0.2-0.3$ at $z=12.6$. \end{itemize} | 14 | 3 | 1403.4360 |
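The flux-density scalings used throughout the entry above (the ratios in Table \ref{tab:ratios_fnu} and the quoted factors of $0.43$, $0.79$, and $1.25$ for $870\mu$m, $1.1$ mm, and $1.3$ mm) can be cross-checked with the same modified blackbody ($\beta_{\rm d}=1.5$, $T_{\rm d}=35$ K, $z=2.5$): the ratio of two observed flux densities is simply the rest-frame SED evaluated at the two redshifted frequencies. The short sketch below reproduces the quoted values to within rounding.
\begin{verbatim}
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10      # cgs

def sed(nu_rest, T=35.0, beta=1.5):
    """Optically thin modified blackbody, nu**beta * B_nu(T), arbitrary normalisation."""
    return nu_rest ** (3.0 + beta) / (np.exp(h * nu_rest / (k_B * T)) - 1.0)

def ratio_to_1p2mm(lam_obs_mm, z=2.5, lam_ref_mm=1.2):
    nu_obs, nu_ref = c / (lam_obs_mm * 0.1), c / (lam_ref_mm * 0.1)   # mm -> cm -> Hz
    return sed(nu_ref * (1 + z)) / sed(nu_obs * (1 + z))

for lam in (0.87, 1.1, 1.3):
    print(f"S_1.2mm / S_{lam}mm = {ratio_to_1p2mm(lam):.2f}")
\end{verbatim}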
1403 | 1403.4156_arXiv.txt | We extensively reanalyze effects of a long-lived negatively charged massive particle, $X^-$, on big bang nucleosynthesis (BBN). The BBN model with an $X^-$ particle was originally motivated by the discrepancy between $^{6,7}$Li abundances predicted in standard BBN model and those inferred from observations of metal-poor stars. In this model $^7$Be is destroyed via the recombination with an $X^-$ particle followed by radiative proton capture. We calculate precise rates for the radiative recombinations of $^7$Be, $^7$Li, $^9$Be, and $^4$He with $X^-$. In nonresonant rates we take into account respective partial waves of scattering states and respective bound states. The finite sizes of nuclear charge distributions cause deviations in wave functions from those of point-charge nuclei. For a heavy $X^-$ mass, $m_X\gtrsim 100$ GeV, the $d$-wave $\rightarrow$ 2P transition is most important for $^7$Li and $^{7,9}$Be, unlike recombination with electrons. Our new nonresonant rate of the $^7$Be recombination for $m_X=1000$ GeV is more than 6 times larger than the existing rate. Moreover, we suggest a new important reaction for $^9$Be production: the recombination of $^7$Li and $X^-$ followed by deuteron capture. We derive binding energies of $X$-nuclei along with reaction rates and $Q$-values. We then calculate BBN and find that the amount of $^7$Be destruction depends significantly on the charge distribution of $^7$Be. Finally, updated constraints on the initial abundance and the lifetime of the $X^-$ are derived in the context of revised upper limits to the primordial $^6$Li abundance. Parameter regions for the solution to the $^7$Li problem are revised, and the primordial $^9$Be abundances is revised. | \label{sec1} Standard big bang nucleosynthesis (SBBN) is an important probe of the early universe. This model explains the primordial light element abundances inferred from astronomical observations except for the $^7$Li abundance. Additional nonstandard effects during big bang nucleosynthesis (BBN) may be required to explain the $^7$Li discrepancy. However, such models are strongly constrained from the consistency in the other elemental abundances. In this paper we re-examine in detail one intriguing solution to the $^7$Li problem, that due to a late-decaying negatively charged particle (possibly the stau as the next to lightest supersymmetric particle) denoted as the $X^-$. In previous work \citep{Kusakabe:2007fv} we showed that both decrease in $^7$Li and increase in $^6$Li abundances are possible in this model. Recently, however, the primordial $^6$Li abundance has been revised downward \citep{Lind:2013iza}, and there is now only an upper limit. Hence, it is necessary to re-evaluate the $X^-$ solution in light of these new measurements. We show that this remains a viable model for $^7$Li reduction without violating the new $^6$Li upper limit. \subsection{Primordial Li Observations} The primordial lithium abundance is inferred from spectroscopic measurements of metal-poor stars (MPSs). These stars have a roughly constant abundance ratio, $^7$Li/H$=(1-2) \times 10^{-10}$, as a function of metallicity~\citep{Spite:1982dd,Ryan:1999vr,Melendez:2004ni,Asplund:2005yt,bon2007,shi2007,Aoki:2009ce,Hernandez:2009gn,Sbordone:2010zi,Monaco:2010mm,Monaco:2011sd,Mucciarelli:2011ts,Aoki:2012wb,Aoki2012b}. 
The SBBN model, however, predicts a value that is higher by about a factor of $3-4$ [e.g., $^7$Li/H=$5.24 \times 10^{-10}$~\citep{Coc:2011az}] than the observational value when one uses the baryon-to-photon ratio determined in the $\Lambda$CDM model from an analysis of the power spectrum of the cosmic microwave background (CMB) radiation from the Wilkinson Microwave Anisotropy Probe \citep{Larson:2010gs, Hinshaw:2012fq} or the Planck data \citep{Coc:2013eea}). % This discrepancy suggests the need for a mechanism to reduce the $^7$Li abundance during or after BBN. Astrophysical processes such as the rotationally induced mixing \citep{Pinsonneault:1998nf,Pinsonneault:2001ub}, and the combination of atomic and turbulent diffusion~\citep{Richard:2004pj,Korn:2007cx,Lind:2009ta} might have reduced the $^7$Li abundance in stellar atmospheres although this possibility is constrained by the very narrow dispersion in observed Li abundances. In previous work the $^6$Li/$^7$Li isotopic ratios for MPSs have also been measured and $^6$Li detections have been reported for the halo turnoff star HD 84937~\citep{smith93,smith98,Cayrel:1999kx}, the two Galactic disk stars HD 68284 and HD 130551~\citep{Nissen:1999iq}, and other stars \citep{Asplund:2005yt,ino05,asp2008,gar2009,ste2010,ste2012}. A large $^6$Li abundance of $^6$Li/H$\sim 6\times10^{-12}$ has then been suggested \citep{Asplund:2005yt}. That abundance is $\sim$1000 times higher than the SBBN prediction, and is also significantly higher than the prediction from a standard Galactic cosmic-ray nucleosynthesis model \citep[cf.][]{pra2006,pra2012}. It has been noted for some time, however, \citep{smith2001,Cayrel:2007te} that convective motion in stellar atmospheres could cause systematic asymmetries in the observed atomic line profiles and mimic the presence of $^6$Li~\citep{Cayrel:2007te}. Indeed, in a subsequent detailed analyses, \citet{Lind:2013iza} found that most of the previous $^6$Li absorption feature could be attributed to a combination of 3D turbulence and nonlocal thermal equilibrium (NLTE) effects in the model atmosphere. For the present purposes, therefore, we adopt the 2$\sigma$ from their G64-12 NLTE model with 5 parameters, corresponding to $^6$Li$/$H$ = (0.85 \pm 4.33) \times 10^{-12}$. Abundances of $^9$Be \citep{boe1999,Primas:2000ee,Tan:2008md,Smiljanic:2009dt,Ito:2009uv,Rich:2009gj} and B \citep{dun1997,gar1998,Primas:1998gp,cun2000} in MPSs have also been measured. The observed abundances linearly scale with Fe abundances. The linear relation between abundances of light elements and Fe is expected in Galactic cosmic-ray nucleosynthesis models~\citep{ree1970,men1971,ree1974,pra2012}. Any primordial abundances, on the other hand, should be observed as plateau abundances as in the Li case. Be and B in the observed MPSs are not expected to be primordial. Nonetheless, primordial abundances of Be and B may be found by future observations. The strongest lower limit on the primordial Be abundance at present is log(Be/H)$<-14$ which has been derived from an observation of carbon-enhanced MPS BD+44$^\circ$493 of an iron abundance [Fe/H]$=-3.7$ \footnote{[A/B]$=\log(n_{\rm A}/n_{\rm B})-\log(n_{\rm A}/n_{\rm B})_\odot$, where $n_i$ is the number density of $i$ and the subscript $\odot$ indicates the solar value, for elements A and B.} with Subaru/HDS \citep{Ito:2009uv}. 
\subsection{$X^-$ Solution} As one of the solutions to the lithium problem, effects of negatively charged massive particles (CHAMPs or Cahn-Glashow particles) $X^-$~\citep{cahn:1981,Dimopoulos:1989hk,rujula90} during the BBN epoch have been studied \citep{Pospelov:2006sc,Kohri:2006cn,Cyburt:2006uv,Hamaguchi:2007mp,Bird:2007ge,Kusakabe:2007fu,Kusakabe:2007fv,Jedamzik:2007qk,Jedamzik:2007cp,Kamimura:2008fx,Pospelov:2007js,Kawasaki:2007xb,Jittoh:2007fr,Jittoh:2008eq,Jittoh:2010wh,Pospelov:2008ta,Khlopov:2007ic,Kawasaki:2008qe,Bailly:2008yy,Jedamzik:2009uy,Kamimura2010,Kusakabe:2010cb,Pospelov:2010hj,Kohri:2012gc,Cyburt:2012kp,Dapo2012}. Constraints on supersymmetric models have been derived through BBN calculations~\citep{Cyburt:2006uv,Kawasaki:2007xb,Jittoh:2007fr,Jittoh:2008eq,Jittoh:2010wh,Pradler:2007ar,Pradler:2007is,Kawasaki:2008qe,Bailly:2008yy}. In addition, cosmological effects of fractionally charged massive particles (FCHAMPs) have been studied although the nucleosynthesis has not yet been studied \citep{Langacker:2011db}. Such long-lived CHAMPs and FCHAMPs which are also called heavy stable charged particles (HSCPs) appear in theories beyond the standard model, and have been searched in collider experiments. Although the particles should leave characteristic tracks of long time-of-flights due to small velocities, and anomalous energy losses, they have never been detected. The most stringent limit on scaler $\tau$ leptons (staus) has been derived using data collected with the Compact Muon Solenoid detector for $pp$ collisions at the Large Hadron Collider during the 2011 ($\sqrt[]{\mathstrut s}=7$ TeV, 5.0 fb$^{-1}$) and 2012 ($\sqrt[]{\mathstrut s}=8$ TeV, 18.8 fb$^{-1}$) data taking period. The limit excludes stau mass below 500 GeV for the direct+indirect production model \citep{CMS2013JHEP}. The limit on FCHAMPs with spin 1/2 that are neutral under $SU$(3)$_{C}$ and $SU$(2)$_L$ has also been derived from Compact Muon Solenoid searches. It excludes the masses less than 310 GeV for charge number $q=2/3$, and masses less than 140 GeV for $q=1/3$ \citep{CMS:2012xi}. The $X^-$ particles and nuclei $A$ can form new bound atomic systems ($A_X$ or $X$-nuclei) with binding energies $\sim O(0.1-1)$~MeV in the limit that the mass of $X^-$, $m_X$, is much larger than the nucleon mass \citep{cahn:1981,Kusakabe:2007fv}. The $X$-nuclei are exotic chemical species with very heavy masses and chemical properties similar to normal atoms and ions. The superheavy stable (long-lived) particles have been searched for in experiments, and multiple constraints on respective $X$-nuclei have been derived. The spectroscopy of terrestrial water gives a limit on the number ratio of $X$/H$<10^{-28}-10^{-29}$ for $m_X=11-1100$ GeV \citep{Smith1982} while that of sea water gives the limits of $X$/H$<4\times 10^{-17}$ for $m_X=5-1500$ GeV \citep{Yamagata:1993jq} and $X$/H$<6\times 10^{-15}$ for $m_X=10^4-10^7$ GeV \citep{Verkerk:1991jf}. Limits have been derived from analyses of other material, (1) $X$/(Na/23)$<5\times 10^{-12}$ for $m_X=10^2-10^5$ GeV \citep{Dick:1985wk}, (2) $X$/(C/12)$<2\times 10^{-15}$ for $m_X\leq 10^5$ GeV \citep{Tur1984}, and (3) $X$/(Pb/200)$<1.5\times 10^{-13}$ for $m_X\leq 10^5$ GeV \citep{Norman:1988fd}. Furthermore, limits from analyses of H, Li, Be, B, C, O and F have been derived for $m_X= 10^2 -10^4$ GeV using commercial gases, lake and deep see water deuterium, plant $^{13}$C, commercial $^{18}$O, and reagent grade samples of Li, Be, B, and F \citep{Hemmick:1989ns}. 
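The $\sim O(0.1-1)$ MeV binding energies quoted above follow from a hydrogen-like (Bohr) estimate in which the reduced mass is essentially the nuclear mass once $m_X$ greatly exceeds the nucleon mass. The point-charge values returned by the sketch below are upper limits, since the finite nuclear charge radius discussed later in this entry lowers the true binding energies, most strongly for the heavier and more highly charged nuclei; the nuclear masses are approximated as $A\,m_u$, which is adequate at this level.
\begin{verbatim}
# Point-charge Bohr estimate for the binding of a nucleus (charge Z, mass ~ A*m_u) to an X^-:
#   E_B = Z^2 * alpha^2 * mu / 2,   mu = m_A * m_X / (m_A + m_X)
ALPHA = 1.0 / 137.036
M_U   = 931.494          # MeV, atomic mass unit
HBARC = 197.327          # MeV fm, for the Bohr radius  a_B = hbar*c / (Z * alpha * mu)

nuclei = {"p": (1, 1), "4He": (2, 4), "6Li": (3, 6), "7Li": (3, 7), "7Be": (4, 7), "8B": (5, 8)}

for m_X in (100e3, 1000e3):                       # X^- masses of 100 GeV and 1 TeV, in MeV
    print(f"--- m_X = {m_X/1e3:.0f} GeV ---")
    for name, (Z, A) in nuclei.items():
        m_A = A * M_U
        mu  = m_A * m_X / (m_A + m_X)
        E_B = 0.5 * (Z * ALPHA) ** 2 * mu         # MeV (point-charge limit)
        a_B = HBARC / (Z * ALPHA * mu)            # fm
        print(f"{name:>4s}: E_B(point) ~ {E_B:6.3f} MeV,  Bohr radius ~ {a_B:5.2f} fm")
\end{verbatim}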
If the $X^-$ particle exits during the BBN epoch, it opens new pathways of atomic and nuclear reactions and affects the resultant nucleosynthesis ~\citep{Pospelov:2006sc,Kohri:2006cn,Cyburt:2006uv,Hamaguchi:2007mp,Bird:2007ge,Kusakabe:2007fu,Kusakabe:2007fv,Jedamzik:2007qk,Jedamzik:2007cp,Kamimura:2008fx,Pospelov:2007js,Kawasaki:2007xb,Jittoh:2007fr,Jittoh:2008eq,Jittoh:2010wh,Pospelov:2008ta,Khlopov:2007ic,Kawasaki:2008qe,Bailly:2008yy,Kamimura2010,Kusakabe:2010cb,Pospelov:2010hj,Kohri:2012gc,Cyburt:2012kp,Dapo2012}. As the temperature of the universe decreases, positively charged nuclides gradually become electromagnetically bound to $X^-$'s. Heavier nuclei with larger mass and charge numbers recombine earlier since their binding energies are larger \citep{cahn:1981,Kusakabe:2007fv}. The formation of most $X$-nuclei proceeds through radiative recombination of nuclides $A$ and $X^-$ \citep{Dimopoulos:1989hk,rujula90}. However, the $^7$Be$_X$ formation proceeds also through the non-radiative $^7$Be charge exchange reaction between a $^7$Be$^{3+}$ ion and an $X^-$ \citep{Kusakabe:2013tra,2013PhRvD..88h9904K}. The recombination of $^7$Be with $X^-$ occurs in a higher temperature environment than that of lighter nuclides does. At $^7$Be recombination, therefore, the thermal abundance of free electrons $e^-$ is still very high, and abundant $^7$Be$^{3+}$ ions can exist. The charge exchange reaction then only affects the $^7$Be abundance. Because of relatively small binding energies, the bound states cannot form until late in the BBN epoch. At the low temperatures, nuclear reactions are already inefficient. Hence, the effect of the $X^-$ particles is not large. However, the $X^-$ particle can cause efficient production of $^6$Li~\citep{Pospelov:2006sc} with the weak destruction of $^7$Be~\citep{Bird:2007ge,Kusakabe:2007fu} depending on its abundance and lifetime \citep{Bird:2007ge,Kusakabe:2007fv,Kusakabe:2010cb}. The $^6$Li abundance can significantly increase through the $X^-$-catalyzed transfer reaction $^4$He$_X$($d$, $X^-$)$^6$Li~\citep{Pospelov:2006sc}, where 1(2$,$3)4 signifies a reaction $1+2\rightarrow 3+4$. The cross section of the reaction is six orders of magnitude larger than that of the radiative $^4$He($d$,$\gamma$)$^6$Li reaction through which $^6$Li is produced in SBBN model \citep{Hamaguchi:2007mp}. Other transfer reactions such as $^4$He$_X$($t$,$X^-$)$^7$Li, $^4$He$_X$($^3$He,$X^-$)$^7$Be, and $^6$Li$_X$($p$,$X^-$)$^7$Be are also possible~\citep{Cyburt:2006uv}. Their rates are, however, not so large as that of the $^4$He$_X$($d$,$X^-$)$^6$Li since the former reactions involve a $\Delta l=1$ angular momentum transfer and consequently a large hindrance of the nuclear matrix element~\citep{Kamimura:2008fx}. The most important reaction for a reduction of the primordial $^7$Li abundance~\footnote{$^7$Be produced during the BBN is transformed into $^7$Li by electron capture in the epoch of the recombination of $^7$Be and electron much later than the BBN epoch. The primordial $^7$Li abundance is, therefore, the sum of abundances of $^7$Li and $^7$Be produced in BBN. In SBBN with the baryon-to-photon ratio inferred from WMAP, $^7$Li is produced mostly as $^7$Be during the BBN.} is the resonant reaction $^7$Be$_X$($p$,$\gamma$)$^8$B$_X$ through the first atomic excited state of $^8$B$_X$~\citep{Bird:2007ge} and the atomic ground state of $^8$B$^\ast$($1^+$,0.770~MeV)$_X$, i.e., an atom consisting of the $1^+$ nuclear excited state of $^8$B and an $X^-$~\citep{Kusakabe:2007fu}. 
From a realistic estimate of binding energies of $X$-nuclei, however, the latter resonance has been found to be an inefficient pathway for $^7$Be$_X$ destruction~\citep{Kusakabe:2007fv}. The $^8$Be$_X$+$p$ $\rightarrow ^9$B$_X^{\ast{\rm a}} \rightarrow ^9$B$_X$+$\gamma$ reaction through the $^9$B$_X^{\ast{\rm a}}$ atomic excited state of $^9$B$_X$~\citep{Kusakabe:2007fv} produces the $A=$9 $X$-nucleus so that it can possibly lead to the production of heavier nuclei. This reaction, however, is not operative because of its large resonance energy~\citep{Kusakabe:2007fv}. The resonant reaction $^8$Be$_X$($n$, $X^-$)$^9$Be through the atomic ground state of $^9$Be$^\ast$($1/2^+$, 1.684~MeV)$_X$, is another reaction producing mass number 9 nuclide~\citep{Pospelov:2007js}. \citet{Kamimura:2008fx}, however, adopted a realistic root mean square charge radius for $^8$Be of 3.39~fm, and found that $^9$Be$^*$($1/2^+$, 1.684~MeV)$_X$ is not a resonance but a bound state located below the $^8$Be$_X$+$n$ threshold. A subsequent four-body calculation for an $\alpha+\alpha+n+X^-$ system confirmed that the $^9$Be$^*$($1/2^+$, 1.684~MeV)$_X$ state is located below the threshold~\citep{Kamimura2010}. This was also confirmed by \citet{Cyburt:2012kp} using a three-body model. The effect of the resonant reaction is, therefore, negligible. The detailed BBN calculations of \citet{Kusakabe:2007fv,Kusakabe:2010cb} precisely incorporate recombination reactions of nuclides and $X^-$ particles, nuclear reactions of $X$-nuclei, and their inverse reactions. These calculations have also included reaction rates estimated in a rigorous quantum few-body model \citep{Hamaguchi:2007mp,Kamimura:2008fx}. The most realistic calculation \citep{Kusakabe:2010cb} shows no significant production of $^9$Be and heavier nuclides. Reactions of neutral $X$-nuclei, i.e., $p_X$, $d_X$ and $t_X$ can produce and destroy Li and Be~\citep{Jedamzik:2007qk,Jedamzik:2007cp}. The rates for these reactions and the charge-exchange reactions $p_X$($\alpha$,$p$)$\alpha_X$, $d_X$($\alpha$,$d$)$\alpha_X$ and $t_X$($\alpha$,$t$)$\alpha_X$ have been calculated in a rigorous quantum few-body model~\citep{Kamimura:2008fx}. The cross sections for the charge-exchange reactions are much larger than those of the nuclear reactions so that the neutral $X$-nuclei $p_X$, $d_X$ and $t_X$ are quickly converted to $\alpha_X$ before they induce nuclear reactions. The production and destruction of Li and Be is not significantly affected by the presence of neutral $X$-nuclei \citep{Kamimura:2008fx}. This was confirmed in a detailed nuclear reaction network calculation \citep{Kusakabe:2010cb}. It has been shown in our previous work \citep{Kusakabe:2007fv,Kusakabe:2010cb} that concordance with the observational constraints on D, $^3$He, and $^4$He is maintained in the parameter region of $^7$Li reduction. In this paper we present an extensive study on effects of a CHAMP, $X^-$, on BBN. First, we study the effects of theoretical uncertainties in the nuclear charge distributions on the binding energies of nuclei and the $X^-$, reaction rates, and BBN. Next, we derive the most precise radiative recombination rates for $^7$Be, $^7$Li, $^9$Be, and $^4$He with an $X^-$. Finally, we suggest a new reaction for $^9$Be production, i.e. $^7$Li$_X$($d$, $X^-$)$^9$Be. Based upon our updated BBN calculation, it is found that the amount of $^7$Be destruction depends significantly upon the assumed charge density for the $^7$Be nucleus. 
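This sensitivity enters largely through the resonance energy of the $^7$Be$_X$($p$, $\gamma$)$^8$B$_X$ channel: in the narrow-resonance approximation the thermally averaged rate carries a factor $\exp(-E_r/kT)$, so shifts of a few tens of keV in the binding energies, and hence in $E_r$, change the destruction rate by large factors. The Python sketch below illustrates this; the resonance strength, the temperature, and the specific $E_r$ values are placeholders rather than the quantities derived in this work.
\begin{verbatim}
import numpy as np

# Narrow-resonance rate: <sigma v> = (2*pi/(mu*k*T))**1.5 * hbar**2 * (omega*gamma) * exp(-E_r/kT)
# Inputs in MeV; the (hbar*c)**2 * c factor converts the result to cm^3/s.
HBARC = 197.327e-13          # MeV cm
C     = 2.998e10             # cm/s
N_A   = 6.022e23

def narrow_resonance_rate(E_r_MeV, omega_gamma_MeV, mu_MeV, T9):
    kT = 0.08617 * T9                                               # MeV (k_B * 1e9 K)
    pref = (2.0 * np.pi / (mu_MeV * kT)) ** 1.5 * HBARC ** 2 * C    # cm^3/s per MeV of omega*gamma
    return N_A * pref * omega_gamma_MeV * np.exp(-E_r_MeV / kT)     # cm^3 s^-1 mol^-1

mu = 931.494 * 1.0 * 7.0 / 8.0        # p + 7Be reduced mass in MeV (approximate)
T9 = 0.3                              # illustrative temperature scale
for E_r in (0.10, 0.15, 0.20):        # placeholder resonance energies in MeV
    rate = narrow_resonance_rate(E_r, 1e-9, mu, T9)                 # placeholder strength of 1 eV
    print(f"E_r = {E_r:.2f} MeV  ->  N_A<sigma v> ~ {rate:.2e} cm^3 s^-1 mol^-1")
\end{verbatim}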
The most realistic constraints on the initial abundance and the lifetime of the $X^-$ are then derived, and the primordial $^9$Be abundance is also estimated. In Sec. \ref{sec2}, models for the nuclear charge density are described. In Sec.~\ref{sec3}, binding energies of the $X$-nuclei are calculated with both of a variational method and the integration of the Sch$\ddot{\rm o}$dinger equation, for different charge densities. In Sec.~\ref{sec4}, reaction rates are calculated for the radiative proton capture of the $^7$Be$_X$($p$, $\gamma$)$^8$B$_X$ and $^8$Be$_X$($p$, $\gamma$)$^9$B$_X$ reactions. Theoretical uncertainties in the rates due to the assumed charge density shapes are deduced. In Sec.~\ref{sec5}, rates for the radiative recombination of $^7$Be, $^7$Li, $^9$Be, and $^4$He with $X^-$ particles are calculated. Both nonresonant and resonant rates are derived. The difference of the recombination rate for $X^-$ particles compared to that for electrons is shown. In Sec.~\ref{sec6}, a new reaction for $^9$Be production is pointed out. It is the radiative recombination of $^7$Li and an $X^-$ followed by deuteron capture. In Sec.~\ref{sec7}, the rates and $Q$-values for $\beta$-decays and nuclear reactions involving the $X^-$ particle are derived. In Sec.~\ref{sec8}, a new reaction network calculation code is explained. In Sec.~\ref{sec9}, we show the evolution of elemental abundances as a function of cosmic temperature, and derive the most realistic constraints on the initial abundance and the lifetime of the $X^-$. Parameter regions for the solution to the $^7$Li problem, and the prediction of primordial $^9$Be are presented. Sec. \ref{sec10} is devoted to a summary and conclusions. In Appendix \ref{app1}, we comment on the electric dipole transitions of $X$-nuclei which change nuclear and atomic states simultaneously. \footnote{Throughout the paper, we use natural units, $\hbar=c=k_{\rm B}=1$, for the reduced Planck constant $\hbar$, the speed of light $c$, and the Boltzmann constant $k_{\rm B}$. We use the usual notation 1(2$,$3)4 for a reaction $1+2\rightarrow 3+4$.} | \label{sec10} We have completed a new detailed study of the effects of a long-lived negatively charged massive particle, i.e., $X^-$, on BBN. The BBN model including the $X^-$ particle is motivated by the discrepancy between the $^7$Li abundances predicted in SBBN model and those inferred from spectroscopic observations of MPSs. In the BBN model including the $X^-$, $^7$Be is destroyed via a recombination reaction with the $X^-$ followed by a radiative proton capture reaction, i.e., $^7$Be($X^-$, $\gamma$)$^7$Be$_X$($p$, $\gamma$)$^8$B$_X$. Since the primordial $^7$Li abundance is mainly from the abundance of $^7$Be produced during BBN, this $^7$Be destruction leads to a reduction of the primordial $^7$Li abundance, and it can explain the observed abundances. In addition, $^6$Li is produced via the recombination of $^4$He and $X^-$ followed by a deuteron capture reaction, i.e., $^4$He($X^-$, $\gamma$)$^4$He$_X$($d$, $X^-$)$^6$Li. Although the effects of many possible reactions have been studied, the $^9$Be abundance is not significantly enhanced in this BBN model. In this paper, we have also made a new study of the effects of uncertainties in the nuclear charge distributions on the binding energies of nuclei and $X^-$ particles, the reaction rates, and the resultant BBN. 
We also calculated new radiative recombination rates for $^7$Be, $^7$Li, $^9$Be, and $^4$He with $X^-$ taking into account the contributions from many partial waves of the scattering states. We also suggest a new reaction of $^9$Be production that enhances the primordial $^9$Be abundance to a level that might be detectable in future observations of MPSs. In detail, this work can be summarized as follows. \begin{enumerate} \item We assumed three shapes for the nuclear charge density, i.e., Woods-Saxon, Gaussian, and homogeneous sphere types which were parameterized to reproduce the experimentally measured RMS charge radii. The potentials between the $X^-$ and nuclei were then derived by folding the Coulomb potential and the nuclear charge densities (Sec. \ref{sec2}). Binding energies for nuclei plus $X^-$ were calculated for the different nuclear charge densities and different masses of the $X^-$, $m_X$. Along with the binding energies of the GS $X$-nuclei, those of the first atomic excited states of $^8$B$_X^{\ast{\rm a}}$ and $^9$B$_X^{\ast{\rm a}}$ were derived since these states provide important resonances in the $^7$Be($p$, $\gamma$)$^8$B$_X$ and $^8$Be($p$, $\gamma$)$^9$B$_X$ reactions (Sec. \ref{sec3}). Resonant rates for the radiative proton capture were then calculated. We found that the different charge distributions result in reaction rates that can differ by significant factors depending upon the temperature. This is because the rates depend on the resonance energy heights that are sensitive to relatively small changes in binding energies of $X$-nuclei caused by the different nuclear charge distributions (Sec. \ref{sec4}). \item We also calculated new precise rates for the radiative recombinations of $^7$Be, $^7$Li, $^9$Be, and $^4$He with $X^-$ for four cases of $m_X$. For that purpose, binding energies and wave functions of the respective $X$-nuclei were derived for several bound states. In the recombination process for $^7$Be and $^7$Li, bound states of the nuclear first excited states, $^7$Be$^\ast$ and $^7$Li$^\ast$, with $X^-$ can operate as effective resonances. These resonant reaction rates as well as transition matrices, radiative decay widths of the resonances, and resonance energies were calculated using derived wave functions. For $^9$Be and $^4$He, however, there are no important resonances in the recombination processes since the resonance energies are much higher than the typical temperatures corresponding to the recombination epoch. (Sec. \ref{sec5}) \item For the four nuclei $^7$Be, $^7$Li, $^9$Be, and $^4$He, we calculated continuum-state wave functions for $l=0$ to 4, and nonresonant recombination rates for the respective partial waves of scattering states and bound states. It was found that the finite sizes of the nuclear charge distributions causes deviations in the bound and continuum wave functions compared to those derived assuming that nuclei are point charges. These deviations are larger for larger $m_X$ and for heavier nuclei with a larger charge. In addition, the effect of the finite charge distribution predominantly affects the wave functions for tightly bound states and those for scattering states with small angular momenta $l$. We found the important characteristics of the $^7$Be+$X^-$ recombination. That is, for the heavy $X^-$, $m_X\gtrsim 100$ GeV, the most important transition in the recombination is the $d$-wave $\rightarrow$ 2P. 
Transitions $f$-wave $\rightarrow$ 3D and $d$-wave $\rightarrow$ 3P are also more efficient than that for the GS formation. This fact is completely different from the formation of hydrogen-like electronic ions described by the point-charge distribution. In this case the transition $p$-wave $\rightarrow$ 1S is predominant. The same characteristics that the transition $d$-wave $\rightarrow$ 2P is most important was found for the recombinations of $^7$Li and $^9$Be. Since $^4$He is lighter and its charge is smaller than $^7$Li and $^{7,9}$Be, the effect of a finite charge distribution is smaller. In the $^4$He recombination, therefore, the transition $p$-wave $\rightarrow$ 1S is predominant similar to the case of a point charge nucleus. Recombination rates for other nuclei were estimated using a simple Bohr atomic model formula (Sec. \ref{sec5}). \item Our nonresonant rate for the $^7$Be($X^-$, $\gamma$)$^7$Be$_X$ reaction with $m_X=1000$ GeV is more than 6 times larger than the previously estimated rate \citep{Bird:2007ge}. This difference is caused by our treatment of many bound states and many partial waves for the scattering states (Sec. \ref{sec5}). This improvement in the rate provides an improved constraint on the $X^-$ particle properties (Sec. \ref{sec9}). \item We have also suggested a new reaction for $^9$Be production, i.e, $^7$Li$_X$($d$, $X^-$)$^9$Be. We adopted an example reaction rate using the astrophysical $S$-factor for the reaction $^7$Li($d$, $n\alpha$)$^4$He as a starting point (Sec. \ref{sec6}). This reaction was found to significantly enhance the primordial $^9$Be abundance from our BBN network calculation (Sec. \ref{sec9}). \item Using the binding energies of $X$-nuclei calculated in Sec. \ref{sec3}, mass excesses of $X$-nuclei along with rates and $Q$-values for reactions involving the $X^-$ particle were calculated for four cases of $m_X$. The reaction network included the $\beta$-decays of $X$-nuclei, nuclear reactions of $X$-nuclei and their inverse reactions. $Q$-values and reverse reaction coefficients were found to be heavily dependent on $m_X$ (Sec. \ref{sec7}). The $X^-$-particle mass dependence of the $Q$-value is especially important for the resonant reaction $^7$Be$_X$($p$, $\gamma$)$^8$B$_X$ (Sec. \ref{sec9}). \item We constructed an updated BBN code that includes the new reaction rates derived in this paper (Sec. \ref{sec8}). BBN calculations based on this code were then shown for four cases of $m_X$. It was found that the amounts of $^7$Be destruction depend significantly on the assumed charge distribution form of the $^7$Be nucleus for the $m_X=1000$ GeV case. Finally, we derived new most realistic constraints on the initial abundance and the lifetime of the $X^-$ particle. Parameter regions for the solution to the $^7$Li problem were identified for the respective $m_X$ cases. We also derived the expected primordial $^9$Be abundances predicted in the allowed parameter regions. The predicted $^9$Be abundances are larger than in the SBBN model, but smaller than the present observational upper limit from MPSs (Sec. \ref{sec9}). \item Some discussion was also given for E1 transitions that simultaneously change both nuclear and atomic states of $^7$Be$_X$ and $^7$Li$_X$. These are hindered because of the near orthogonality of the atomic and nuclear wave functions. 
It was suggested, however, that for exotic atoms composed of nuclei and an $X^-$ with mass much larger than the nucleon mass, this orthogonality in the atomic and nuclear wave functions can be somewhat broken. Such exotic atoms may, therefore, have large rates for E1 transitions that simultaneously change nuclear and atomic states (Appendix \ref{app1}). \end{enumerate} \appendix | 14 | 3 | 1403.4156 |
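For the Gaussian charge density considered in the entry above, the folded Coulomb potential has the closed form $V(r) = -(Z\alpha\hbar c/r)\,{\rm erf}(r/\sqrt{2}a)$ with $\langle r^2\rangle = 3a^2$, which remains finite at the origin instead of diverging like the point-charge potential. Comparing the point-charge Bohr radius with the nuclear charge radius also shows why the finite size matters so much for $^7$Li$_X$ and $^7$Be$_X$. In the sketch below the RMS charge radii are approximate literature values quoted only for illustration, not the radii adopted in this work.
\begin{verbatim}
from math import erf, sqrt

ALPHA, HBARC, M_U = 1.0 / 137.036, 197.327, 931.494    # fine-structure constant, MeV fm, MeV

def v_point(r_fm, Z):
    """Point-charge Coulomb potential energy of the nucleus-X^- pair, in MeV."""
    return -Z * ALPHA * HBARC / r_fm

def v_gauss(r_fm, Z, r_rms_fm):
    """Coulomb potential folded with a Gaussian charge density of the given RMS radius."""
    a = r_rms_fm / sqrt(3.0)                            # <r^2> = 3 a^2
    return -Z * ALPHA * HBARC / r_fm * erf(r_fm / (sqrt(2.0) * a))

# approximate RMS charge radii in fm (illustrative literature values)
nuclei = {"4He": (2, 4, 1.68), "7Li": (3, 7, 2.44), "7Be": (4, 7, 2.65)}

for name, (Z, A, r_rms) in nuclei.items():
    mu  = A * M_U                                       # heavy-X limit: reduced mass ~ nuclear mass
    a_B = HBARC / (Z * ALPHA * mu)                      # point-charge Bohr radius, fm
    print(f"{name}: Bohr radius {a_B:.2f} fm vs charge radius {r_rms:.2f} fm; "
          f"V at a_B: point {v_point(a_B, Z):.2f} MeV, folded {v_gauss(a_B, Z, r_rms):.2f} MeV")
\end{verbatim}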
1403 | 1403.4973_arXiv.txt | \label{abstract} \textbf{The co-evolution of a supermassive black hole with its host galaxy\cite{gebhardt00} through cosmic time is encoded in its spin\cite{BertiVolonteri2008, Fanidakis2011, Volonteri2012}. At $z>2$, supermassive black holes are thought to grow mostly by merger-driven accretion leading to high spin. However, it is unknown whether below $z\sim1$ these black holes continue to grow via coherent accretion or in a chaotic manner\cite{Kingpringle2006}, though clear differences are predicted\cite{Fanidakis2011,Volonteri2012} in their spin evolution. An established method\cite{Risaliti2013Natur} to measure the spin of black holes is via the study of relativistic reflection features\cite{rossfabian1993} from the inner accretion disk. Owing to their greater distances, there has hitherto been no significant detection of relativistic reflection features in a moderate-redshift quasar. Here, we use archival data together with a new, deep observation of a gravitationally-lensed quasar at $z=0.658$ to rigorously detect and study reflection in this moderate-redshift quasar. The level of relativistic distortion present in this reflection spectrum enables us to constrain the emission to originate within $\lesssim3$ gravitational radii from the black hole, implying a spin parameter $a=0.87^{+0.08}_{-0.15} $ at the $3\sigma$ level of confidence and $a>0.66$ at the $5\sigma$ level. The high spin found here is indicative of growth via coherent accretion for this black hole, and suggests that black hole growth between $0.5\lesssim z \lesssim 1$ occurs principally by coherent rather than chaotic accretion episodes.} | \subsection{Chandra:}\label{dataReduction} \addcontentsline{toc}{subsection}{A: \chandra} All publicly available data on \rx\ was downloaded from the \chandra\ archive. As of March 13, 2013, this totalled 30 individual pointings and 347.4\ks\ of exposure, during a baseline of nearly 8 years starting on April 12, 2004 (ObsID 4814) and ending on November 9, 2011 (ObsID 12834). We refer the reader to\cite{size1104, DaiKochanek2010quasar, ChartasKochanek2012quasar} for details of the observations. We note that the work presented herein includes one extra epoch that was not used in the work of \cite{ChartasKochanek2012quasar}. This further observation (ObsID 12834) added 13.6\ks\ to their sample. Starting from the raw files, we reprocessed all data using the standard tools available\cite{ciao} in CIAO~4.5 and the latest version of the relevant calibration files, using the \textit{chandra\_repro} script. Subpixel images were created for each observation and one such image is shown in Figure 1 of the main manuscript (observation made on November 28, 2009; Sequence Number 702126; Obs ID number 11540). Sub-pixel event repositioning and binning techniques are now available\cite{Tsunemi01}, which improve the spatial resolution of \chandra\ beyond the limit imposed by the ACIS pixel size ($0.492''\times0.492''$). This algorithm, EDSER, is now implemented in CIAO and the standard \chandra\ pipeline. Rebinning the raw data to 1/8 the native pixel size takes advantage of the telescope dithering to provide resolution $\sim0.25''$. The EDSER algorithm now makes ACIS-S the highest resolution imager onboard the \chandra\ X-ray Observatory. For an example, see\cite{Wang2011subpix} for a detailed imaging study of the nuclear region of NGC 4151. 
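A schematic of the reprocessing and sub-pixel binning steps just described, driven from Python through the CIAO command-line tools named in the text (\textit{chandra\_repro} and \textit{dmcopy}), is given below. The ObsID list, directory layout, sky-coordinate ranges, and output file names are placeholders, and recent CIAO releases apply the EDSER sub-pixel algorithm by default during reprocessing, so it is not requested explicitly here.
\begin{verbatim}
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# a few of the 30 ObsIDs analysed in the text (placeholders for the full list)
obsids = ["4814", "11540", "12834"]

for obsid in obsids:
    # reprocess with the standard script (applies current calibration; EDSER sub-pixel
    # event positions are the default in recent CIAO releases)
    run(["chandra_repro", f"indir={obsid}", f"outdir={obsid}/repro"])

    # bin the reprocessed event file at 1/8 of the native ACIS pixel (0.492"/8), as in the text;
    # the sky x/y range below is a placeholder and must be set to cover the source position
    evt = f"{obsid}/repro/acisf{int(obsid):05d}_repro_evt2.fits"
    filt = f"{evt}[energy=300:8000][bin x=3800:4300:0.125,y=3800:4300:0.125]"
    run(["dmcopy", filt, f"{obsid}/subpix_img.fits", "clobber=yes"])
\end{verbatim}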
Spectra were obtained from circular regions of radius 0.492\as\ centred on Images-A, B and C as shown in Figure 1 and from a circular region of radius 0.984\as\ for the relatively isolated Image-D. Background spectra were taken from regions of the same size as the source located 4'' away. In the case of images-B and C, the backgrounds were taken from regions north and south of the sources, respectively. Due to the high flux present in image-A, the presence of a read-out streak was clear in some observations. In those cases, the background for image-A was taken from a region centred on the read-out streak 4'' to the east of the source. When the readout streak intercepted image-D, the background for the latter was taken from a region also centred on the readout streak 4'' to the NW. In all other cases, the background for image-D was taken from a region to the west of the source. Source and background spectra were then produced using \textit{specextract} in a standard manner with the \textit{correctpsf} parameter set to ``yes". We produced 4 spectra representing images-A, B, C and D for each of the 30 epochs as well as 4 corresponding background for each epoch. All spectra were fit in the 0.3-8.0\kev\ energy range (observed frame) unless otherwise noted, and the data were binned using \grppha\ to have a minimum of 20 counts per bin to assure the validity of $\chisq$ fitting statistics. Some observations are known\cite{ChartasKochanek2012quasar} to suffer from the effects of pile-up\cite{pile_estimate,pileup2010}, we explore in \S~2 the effect this might have on the results.\\ \subsection{XMM-Newton:} \addcontentsline{toc}{subsection}{B: \xmm} We were awarded a 93\ks\ observation with \xmm\ via the Director's Discretionary Time program (Obs ID: 0727960301) starting on 2013-07-06. The observation was made with both the \epicpn\ and \epicmos\ in the small window mode to ensure a spectrum free of pile-up. The level 1 data files were reduced in the standard manner using the SAS v11.0.1 suite, following the guidelines outlined in the \xmm\ analysis threads which can be found at \href{http://xmm.esac.esa.int/sas/current/documentation/threads/}{(http://xmm.esac.esa.int/sas/current/documentation/threads/}. Some background flaring was present in the last $\sim 30$\ks\ of the observation and this was removed by ignoring periods when the 10--12\kev\ (PATTERN$==0$) count rate exceeded 0.4 ct/s, again following standard procedures. Spectra were extracted from a 30\as\ radius region centered on the source with the background extracted from a source free 52\as\ radius region elsewhere on the same chip. The spectra were extracted after excluding bad pixels and pixels at the edge of the detector, and we only consider single and double patterned events. Response files were created in the standard manner using \rmfgen\ and \arfgen. Finally, the spectrum was rebinned with the tool \grppha\ to have at least 25 counts per channel and was modelled over the 0.3-10\kev\ range. We also have experimented with \textit{specgroup} and grouped the data to a minimum S/N of 3, 5 and 10. We find that in all cases, the results are statistically indistinguishable from the “group min 25” command in \grppha. As the observation was taken in the small window mode with a live time of 71\% the final good exposure, after the exclusion of the background flares identified with \epicpn, was 59.3\ks. 
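Two bookkeeping steps described above -- discarding intervals where the 10--12 keV (PATTERN$==0$) rate exceeds 0.4 ct s$^{-1}$, and grouping channels to a minimum number of counts per bin -- are sketched schematically below. This is not the SAS or \grppha\ implementation, only the logic, with hypothetical array names; as with \grppha, the last group may fall short of the threshold.
\begin{verbatim}
import numpy as np

def good_time_mask(rate_10_12_keV, threshold=0.4):
    """Keep time bins whose 10-12 keV background rate is <= threshold ct/s."""
    return np.asarray(rate_10_12_keV) <= threshold

def group_min_counts(counts, min_counts=25):
    """Start indices of spectral groups holding at least min_counts counts."""
    starts, running = [0], 0
    for i, c in enumerate(counts):
        running += c
        if running >= min_counts and i + 1 < len(counts):
            starts.append(i + 1)
            running = 0
    return starts
\end{verbatim}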
The observed \epicpn\ flux of $\sim1.41\ctsps$ is $\sim20$ times below the levels where pile-up is expected to occur for this observational mode (see \xmm\ documentations at \\ \href{http://xmm.esac.esa.int/external/xmm\_user\_support/documentation/uhb/epicmode.html}{http://xmm.esac.esa.int/external/xmm\_user\_support/documentation/uhb/epicmode.html}. The \epicpn\ camera has the highest collecting area across the full 0.3--10.0\kev\ band, and it is also the best calibrated camera for spectral fitting, therefore we have chosen to base our analysis on the spectrum obtained with this detector. However, we note that similar conclusions are also found with the \epicmos. The \epicpn\ \xmm\ spectrum is explored fully in the online SI.\\ \newpage | Here, we summarise briefly the key points demonstrated in the supplementary information. Full details regarding the analysis performed for the quadruply imaged quasar 1RXS~J113151.6-123158 (hereafter \rx) are given in the following sections. \subsection{A: The spin measurements with \chandra\ are found to be consistent for a variety of analysis techniques:} We demonstrate the presence of residuals to the standard powerlaw AGN continuum consistent with the soft excess commonly observed in local, unobscured Seyfert galaxies, and also with a relativistically broadened iron emission line, from which the spin of the black hole can be constrained. We do so first using a phenomenological model including a relativistic line profile, and then with a fully physically self-consistent reflection model, comparing the results obtained with a time-averaged and time-resolved analyses of the data from individual \chandra\ images. The spin constraints obtained with these various analyses are all found to be consistent, implying a rapidly rotating black hole.\\ \subsection{B: Spin determination is consistent for both \xmm\ and \chandra:} Having demonstrated the consistency of the results obtained with the individual \chandra\ images, we then constrain the spin of \rx\ using all the selected \chandra\ data simultaneously with the self-consistent reflection model, and obtain $a=0.90^{+0.07}_{-0.15}$ at 3$\sigma$ confidence. We also constrain the spin in the same manner with an independent \xmm\ observation, obtaining $a=0.64^{+0.33}_{-0.14}$ (again, 3$\sigma$ confidence), fully consistent with the \chandra\ constraint. Finally, modeling both the \chandra\ and \xmm\ datasets simultaneously in order to obtain the most robust measurement, we constrain the spin of \rx\ to be $a=0.87^{+0.08}_{-0.15}$ at the 3$\sigma$ level of confidence.\\ \subsection{C: The spin measurements are robust against absorption:} Lastly, we consider whether there is any evidence for absorption by partially ionised material, often seen in local Seyferts and other quasars, in the spectrum of \rx, and investigate any effect this might have on the spin constraint obtained. Through phenomenological modelling, we show that although ionised absorption could plausibly reproduce the soft excess, a relativistic iron emission line is still required, and a high spin is again obtained. Furthermore, when considering the self-consistent reflection model, which includes the soft emission lines that naturally accompany the iron emission, the addition of ionised absorption to the model does not improve the fit, and the spin constraint obtained again remains unchanged.\\ \newpage | 14 | 3 | 1403.4973 |
1403 | 1403.4697_arXiv.txt | We have examined the relationship between the velocity parameters of SiO masers and the phase of the long period variable stars (LPVs) from which the masers originate. The SiO spectra from the v=1, J=1-0 (43.122 GHz; hereafter $J_{1\rightarrow0}$) and the v=1, J=2-1 (86.2434 GHz; hereafter $J_{2\rightarrow1}$) transitions have been measured using the Mopra Telescope of the Australia Telescope National Facility. One hundred and twenty-one sources have been observed, including 47 LPVs contained in the American Association of Variable Star Observers Bulletin (2011). The epochs of maxima and the periods of the LPVs are well studied. This database of spectra allows for phase-dependent comparisons and analysis not previously possible with such a large number of sources observed almost simultaneously in the two transitions over a time span of several years. The velocity centroids ($VCs$) and velocity ranges of emission ($VRs$) have been determined and compared for the two transitions as a function of phase. No obvious phase dependence has been determined for the $VC$ or $VR$. The results of this analysis are compared with past observations and existing SiO maser theory. | \subsection{$VCs$ and $VRs$} The determination of the $VC$ and $VR$ was recently presented \citep{b5} and will only be briefly reviewed here. Mathematically, the $VC$ is the sum, over the range of emission, of the antenna temperature $T_a$ in each velocity channel times that channel's velocity with respect to the local standard of rest, $v_{lsr}$, divided by the sum of $T_a$ over the same range, as shown in equation \ref{vc}. \begin{equation}\label{vc} VC=\frac{\sum{(T_a v_{lsr})}}{\sum{T_a}} \end{equation} The summation extends over the range of emission. The $VR$ is calculated as the region where $T_a$ exceeds three times the standard deviation of the antenna temperature of the background noise. The standard deviation is determined from velocity channels far away from the emission range of the source. \citet{b5} examined the $VCs$ and $VRs$ of the SiO maser transitions without regard to the phase of the star. In this work we found that the $VR_{1\rightarrow0}$ was generally broader than the $VR_{2\rightarrow1}$ ($6.4$ versus $4.2$ km s$^{-1}$, respectively). The $VC_{1\rightarrow0}$ values are slightly more positive than the $VC_{2\rightarrow1}$ values. These differences indicate that the $J_{1\rightarrow0}$ emission occurs in a dynamically different region of the circumstellar environment than the $J_{2\rightarrow1}$ emission. This conclusion is consistent with the observational results of \citet{b10, b12}. The $VC$ is a well-suited parameter for comparing our single-dish observations to VLBI observations, as it is weighted towards the brightest emission. This effect is apparent in Figures B1--B3 of \citet{b24}. \citet{b25} have compared SiO and H$_2$O maser emission from 401 evolved stars. They find that SiO emission is less dependent on the optical phase than the H$_2$O masers. While this is a relative conclusion only, it indicates no strong phase relationship for the $VR$, a finding which complements our own. \citet{b9} concluded that $J_{1\rightarrow0}$ and $J_{2\rightarrow1}$ emission can form at comparable radii and in related columns of gas, although their VLBI maps show few overlaps between the two transitions. Recent theoretical developments vary in predicting the radii at which the $J_{1\rightarrow0}$ and $J_{2\rightarrow1}$ emission occur.
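As a concrete illustration of how the two velocity parameters defined above are extracted from a single spectrum (equation \ref{vc} for the $VC$, and the 3$\sigma$ criterion for the $VR$), a minimal numerical sketch is given below. The array names are hypothetical and this is not the authors' pipeline; the comparison with these theoretical predictions continues after the sketch.
\begin{verbatim}
import numpy as np

def velocity_parameters(v_lsr, T_a, off_line):
    """Velocity centroid (VC) and velocity range (VR) of one maser spectrum.

    v_lsr    : channel velocities with respect to the LSR [km/s]
    T_a      : antenna temperature per channel [K]
    off_line : boolean mask of line-free channels used to estimate the noise
    """
    sigma = np.std(T_a[off_line])           # noise from line-free channels
    emission = T_a > 3.0 * sigma            # channels counted as emission
    vc = np.sum(T_a[emission] * v_lsr[emission]) / np.sum(T_a[emission])
    vr = v_lsr[emission].max() - v_lsr[emission].min()
    return vc, vr
\end{verbatim}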
Among recent models, \citet{b2} place the $J_{2\rightarrow1}$ emission at larger or smaller radii than the $J_{1\rightarrow0}$ emission, depending on the phase. \citet{b13} show no discernible difference in the radii at which the masers originate at any phase presented. \subsection{Phase Determination} The phase of the star is the optical phase. It is determined by subtracting the Julian Day Number of the most recent stellar maximum from the Julian Day Number of the observation and dividing by the stellar period. The phase varies from zero to one. The stellar maxima and periods were obtained from the AAVSO Bulletin 74 \citep{b1}. Stellar periods were added to or subtracted from the Bulletin 74 maxima as needed to determine the most recent maxima. Table~\ref{tab:lpvs} gives the stellar periods used for the sources. \subsection{Maser Velocity Parameters} \citet{b2} investigated the dynamics of the circumstellar region in which the SiO masers originate and have provided the most thoroughly developed theory for the maser spectra in LPVs from phase 0.1 to 0.4. They model and depict a shock travelling out from the star, generating different velocities, redshifted (more positive) or blueshifted (more negative), at different distances from the star. As the shock travels out, the velocities change as a function of distance from the star and phase. \citet{b2} predicted a phase-dependent $VR_{1\rightarrow0}$ varying from about $8$ to $13$ km s$^{-1}$. The $VR_{2\rightarrow1}$ varies from about $7$ to $12$ km s$^{-1}$. The difference between the $VRs$ in the two transitions is phase dependent and difficult to quantify from the information presented, but the $VR_{1\rightarrow0}$ always appears to be greater than the $VR_{2\rightarrow1}$. In their Figure 12 they show that at a phase of 0.4 only the $J_{1\rightarrow0}$ emission should be present, at a phase of 0.3 the maximum $VR$ should occur in the $J_{1\rightarrow0}$ emission, several km s$^{-1}$ broader than the emission at a phase of 0.1, and the $VC_{1\rightarrow0}$ should undergo a redward shift with increasing phase. No shift in the peak or $VC$ is indicated for the $J_{2\rightarrow1}$ transition. \citet{b13} have developed a coupled escape probability model for SiO maser emission. They present several figures indicating the $VR$ in the $J_{1\rightarrow0}$ and $J_{2\rightarrow1}$ transitions. In their Figure 7, the $J_{2\rightarrow1}$ emission is consistently broader than the $J_{1\rightarrow0}$ emission, and the $VR_{1\rightarrow0}$ and $VR_{2\rightarrow1}$ vary by a factor of more than five over the stellar period. The emission at their Epoch 6 is several times broader than the emission at the other epochs depicted. For their Epoch 11 the $J_{1\rightarrow0}$ emission is very weak and narrow. Since different distances from the star are affected differently by the proposed outward-travelling shock, it is reasonable to expect that masers forming at different distances will exhibit different $VRs$ and slightly different $VCs$. The observations of the velocity parameters of the emission provide information on the locations as well as the motion of the masing material. | The Mopra database provides the first large data set of LPV SiO maser spectra (essentially simultaneous observations in $J_{2\rightarrow1}$ and $J_{1\rightarrow0}$ from 2008 until 2012) to allow the comparison of the $VCs$ and $VRs$ versus phase with theoretical model predictions.
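The optical phase used throughout this comparison (see the Phase Determination subsection above) reduces to a one-line computation; a minimal sketch with hypothetical argument names:
\begin{verbatim}
def optical_phase(jd_obs, jd_recent_max, period_days):
    """Optical phase in [0, 1): (JD of the observation minus the JD of the
    most recent stellar maximum) divided by the period, modulo one."""
    return ((jd_obs - jd_recent_max) / period_days) % 1.0
\end{verbatim}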
The velocity comparisons extracted from these observations will inform and constrain the development of future models of the circumstellar environment and maser dynamics. The analysis of the $VCs$ and $VRs$ of 47 LPVs as a function of phase shows no shifts or variations that indicate the passage of a shock through the circumstellar material. | 14 | 3 | 1403.4697 |
1403 | 1403.3248_arXiv.txt | Interferometers play an increasingly important role for spatially resolved observations. If employed at full potential, interferometry can probe an enormous dynamic range in spatial scale. Interpretation of the observed visibilities requires the numerical computation of Fourier integrals over the synthetic model images. To get the correct values of these integrals, the model images must have the right size and resolution. Insufficient care in these choices can lead to wrong results. We present a new general-purpose scheme for the computation of visibilities of radiative transfer images. Our method requires a model image that is a list of intensities at arbitrarily placed positions on the image-plane. It creates a triangulated grid from these vertices, and assumes that the intensity inside each triangle of the grid is a linear function. The Fourier integral over each triangle is then evaluated with an analytic expression and the complex visibility of the entire image is then the sum of all triangles. The result is a robust Fourier transform that does not suffer from aliasing effects due to grid regularities. The method automatically ensures that all structure contained in the model gets reflected in the Fourier transform. | The technique of interferometry has a long history in radio astronomy and gains more and more popularity also at other wavelengths. In the millimetre and sub-millimetre domain arrays such as the SMA, Plateau de Bure and CARMA allow, for instance, young stellar objects and protoplanetary disks to be spatially resolved down to a few tens of AU. And soon, ALMA will achieve few-AU resolution at wavelengths ranging from 0.3 to 3 mm. In the mid- and near-infrared optical interferometry is maturing as well and has provided new insights into the physics of protoplanetary disks and active galactic nuclei. The interpretation of these data, however, often requires detailed comparisons with theoretical models. Typically a radiative transfer model is produced of the object of interest, and the results compared to the observations. This paper is about this process of comparing models to interferometric measurements. Interferometers probe the image of the object on the sky in the Fourier plane. Rather than measuring pixel-by-pixel intensities and thus immediately yielding an image for the observer to interpret, in radio and millimeter interferometry each pair of telescopes measures the so-called 'complex visibility' (the normalized correlation function between the signals measured by the two telescopes). In optical and infrared interferometers usually only the amplitude (not the phase) of the visibility is measured, which is in fact the ratio of the correlated flux density to the total flux density. According to the van Cittert-Zernike theorem the complex visibility as a function of baseline coordinates (u,v) is equal to the Fourier transform of the image on the sky divided by its total flux density. For each combination of three telescopes one can measure a ``closure phase'' which also directly follows from the complex Fourier values belonging to each telescope pair. Interferometry measurements are thus measurements in Fourier space, usually called the uv-plane. If sufficient baselines are available, i.e., the uv-plane is sufficiently well covered, then the inverse Fourier transform can be carried out and an image reconstructed. However, often the uv-plane is sparsely covered and image reconstruction is non-unique. 
In such cases, any model comparison will have to take place in the uv-plane itself, and model images must be Fourier transformed to the uv-plane before comparison can take place. Even in the case of high uv-coverage, this ``forward method'' (adapting the model to the observations) can be useful for predicting the feasibility of observing particular objects and phenomena. In some cases where the astronomical source has a simple structure which can adequately be described by an algebraic expression (e.g., point, sphere, disk, cylinder, ring, etc.), the complete Fourier transform is easily calculated analytically and used to model the data \citep{1999ASPC..180..335P,2014arXiv1401.4984M}. However, if a numerical model (typically the output from a radiative transfer code) is used to describe the source, the Fourier transform needs to be calculated numerically. The task of numerically calculating a Fourier transform of a model image may seem trivial. Algorithms such as the Fast Fourier Transform (FFT) can do this with high precision and speed. It turns out, however, that for models that involve a large dynamic range in spatial scale this task can be difficult. For example, the problem of a collapsing molecular cloud core of $10^4$ AU size, with a proto-stellar disk of 100 AU size inside, which in turn surrounds a protostar of 0.1 AU size, already covers 5 orders of magnitude in spatial scale. Although current interferometers are not able to observe all of these scales simultaneously, it is still possible to cover 2--3 orders of magnitude in spatial scale with ALMA. Observations of a molecular line and the dust continuum will record large-scale emission in the line centre and small-scale emission in the line wings and surrounding continuum. Calculating the uv-plane image is then not trivial at all, and doing so without great care will inevitably lead to errors. For example, one would need to use sufficient padding with blank space around the source model in order to avoid mirror images in the Fourier transform. In this paper we will describe a new method of computing synthetic uv-plane ``observations'' that is extremely robust and yields proper results without much care. The method we present can easily be implemented into existing radiative transfer codes, or it can be made into a stand-alone subroutine that post-processes the output from ray-tracing codes. All examples of the method presented in this paper have been produced using a customised parallel version of the publicly available LIME code \citep{2010A&A...523A..25B}. | In this paper we have presented a method to create radiative transfer model images at arbitrary resolution and very high dynamic range using a finite, and much smaller, number of rays than is needed for a raster image of comparable resolution. The method uses an unstructured (possibly random) distribution of rays out of which a Delaunay triangulation is calculated. Each Delaunay triangle is easily Fourier transformed using Eq.~\ref{mcinturff1}. Unfortunately, Eq.~\ref{mcinturff1} becomes very time consuming for large or ``complete'' sets of uv-spacings. The Fourier transformation method presented here requires O(N$^2$) operations, which from a performance point of view is vastly outperformed, particularly for large N, by the FFT, which requires O(N log N) operations.
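To make the scheme summarised above concrete (scattered intensity samples, a Delaunay triangulation over them, and visibilities accumulated triangle by triangle at arbitrary uv-spacings), a rough numerical sketch follows. It replaces the analytic per-triangle integral of Eq.~\ref{mcinturff1}, which is not reproduced in this excerpt, with a centroid-rule phase factor, so it is only adequate when every triangle is much smaller than the fringe spacing; the discussion of the computational cost continues below.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def visibilities(xy, intensity, uv):
    """Approximate complex visibilities of a triangulated model image.

    xy        : (N, 2) positions of the rays on the image plane
    intensity : (N,) intensities at those positions
    uv        : (M, 2) spatial frequencies at which V is evaluated
    """
    tri = Delaunay(xy)
    vis = np.zeros(len(uv), dtype=complex)
    total_flux = 0.0
    for simplex in tri.simplices:
        p = xy[simplex]                                   # triangle vertices
        d1, d2 = p[1] - p[0], p[2] - p[0]
        area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
        flux = area * intensity[simplex].mean()           # exact for linear I
        xc, yc = p.mean(axis=0)                           # centroid
        vis += flux * np.exp(-2j * np.pi * (uv[:, 0] * xc + uv[:, 1] * yc))
        total_flux += flux
    return vis / total_flux       # normalised by the total flux density
\end{verbatim}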
However, like Eq.~\ref{mcinturff1}, ray-tracing in a 3-D radiative transfer model is also an O(N$^2$) process in the number of pixels per axis and so what is gained in speed from using FFT is quickly lost again from the increased ray-tracing time in order to reach high enough image resolution. One could consider taking an unstructured, and therefore high-resolution, set of rays and remap it onto a very high resolution raster in order to perform an FFT. Such a remapping, however, can potentially also be quite time consuming on its own and it still requires a somewhat arbitrary choice of minimum and maximum scale to be made. Increasing the number of pixels dramatically has the further disadvantage of producing very large FITS files, in particular when doing spectral line images, where the spectral axis can potentially hold hundreds of channels. For the example in Fig~\ref{twhya}, the FFT on the raster image is done in less than a few seconds (which means that the computation time is dominated by I/O and other overhead), whereas the Fourier transform on the trixel image takes a total of 1.5 minutes. However, this Fourier transform has about four times higher resolution than the FFT. In order to reach the same resolution in uv-space with FFT, the image has to be ray-traced at four times the resolution. The FFT operation on the higher resolution raster image does not take noticeable longer, but the ray-tracing time, in this example, goes from about 20 seconds to about 4 minutes and this does not include the additional time requirement when adding anti-aliasing in order to improve the image quality. It is also possible to lower the computation time significantly for the unstructured Fourier transform method, when comparing a model to interferometric data, by only calculating the uv-points which corresponds to the observed uv-spacings, rather than calculating complete sets of uv-points. Equation~\ref{sum_tri} is also trivially parallelisable which helps to speed up the calculations since most modern computers have multiple cores. There is currently no image container for unstructured triangulated images although the FITS format could in principle be used. One option would be to build the Fourier transformer directly into the ray-tracing code and let the code output a uv-FITS file rather than having the Fourier transformation be a post-processing tool that works on outputted images. Alternatively, trixels can be stored in standard FITS format as tabulated data. | 14 | 3 | 1403.3248 |
1403 | 1403.7231_arXiv.txt | We use K-band spectroscopy of the counterpart to the rapidly variable X-ray transient XMMU J174445.5-295044 to identify it as a new symbiotic X-ray binary. XMMU J174445.5-295044 has shown a hard X-ray spectrum (we verify its association with an Integral/IBIS 18-40 keV detection in 2013 using a short Swift/XRT observation), high and varying $N_H$, and rapid flares on timescales down to minutes, suggesting wind accretion onto a compact star. We observed its near-infrared counterpart using the Near-infrared Integral Field Spectrograph (NIFS) at Gemini-North, and classify the companion as $\sim$M2 III. We infer a distance of $3.1^{+1.8}_{-1.1}$ kpc (conservative 1$\sigma$ errors), and therefore calculate that the observed X-ray luminosity (2-10 keV) has reached to at least 4$\times10^{34}$ erg s$^{-1}$. We therefore conclude that the source is a symbiotic X-ray binary containing a neutron star (or, less likely, black hole) accreting from the wind of a giant. | Symbiotic binaries transfer mass via the winds of cold (usually late K or M) giants onto compact objects: white dwarfs, neutron stars or black holes \citep{Kenyon86}, with orbital periods typically in the 100s to 1000s of days \citep{Belczynski00}. They were first identified by the presence of high-ionization emission lines in optical spectra of otherwise cold giants, indicating the presence of two components of vastly different temperatures. ROSAT X-ray studies of symbiotic binaries distinguished three classes ($\alpha$, $\beta$, $\gamma$) by the X-ray spectral shape \citep{Murset97}, with higher energy X-ray measurements adding two further classes showing highly-absorbed spectra \citep{Luna13}. A small but rapidly increasing number of symbiotic systems have been identified as containing a neutron star as an accretor, through the measurement of pulsations and/or hard X-ray emission above 20 keV, and are known as symbiotic X-ray binaries \citep{Masetti06}. Only seven symbiotic X-ray binaries have been positively identified so far; GX 1+4, \citep{Davidsen77}; 4U 1700+24, \citep{Masetti02}; 4U 1954+319, \citep{Masetti06}; Sct X-1, \citep{Kaplan07}; IGR J16194-2810, \citep{Masetti07}; IGR J16358-4726, \citep{Nespoli10}; and XTE J1743-363, \citep{Bozzo13}. Several other likely candidate systems have also been proposed (e.g. \citealt{Nucita07}, \citealt{Masetti11}, \citealt{Hynes14}). The identification and characterization of a symbiotic X-ray binary requires clear information on the nature of the accretor (e.g. from pulsations or unusual luminosities) and the donor (e.g. from spectroscopy). \citet{Heinke09c} identified XMMU J174445.5-295044 as a rapidly variable (timescales down to 100s of seconds) Galactic transient, using nine \emph{XMM-Newton}, \emph{Chandra}, and \emph{Suzaku} observations. It showed 2-10 keV X-ray fluxes up to $>3\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$, and variations in $N_H$, from $8\times10^{22}$ up to $15\times10^{22}$ cm$^{-2}$. The rapid variations and variable $N_H$ suggested accretion from a clumpy wind, rather than an accretion disk. \citet{Heinke09c} also identified a bright near-infrared (NIR) counterpart (2MASS J17444541-2950446) within the 2'' XMM error circle. \citet{Heinke09c} calculated the probability of a star of this brightness in $K_S$ appearing in the X-ray error circle as only 2\%, indicating that it is almost certainly the true counterpart. This star appears highly obscured and shows infrared colors typical of late-type stars, which Heinke et al. 
suggested indicates that XMMU J174445.5-295044 is a symbiotic star or symbiotic X-ray binary. The \emph{INTEGRAL} Galactic bulge monitoring program \citep{Kuulkers07} reported an X-ray transient detected by the JEM-X monitor on March 23, 2012 \citep{Chenevez12}, at 17:44:48, -29:51:00, with an uncertainty of 1.3$'$ at 95\% confidence, consistent with XMMU J174445.5-295044. The 10-25 keV flux of $(1.5 \pm 0.3) \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ is larger than previously reported for XMMU J174445.5-295044, but the high estimated $N_H$ (not specified, but the JEM-X source was undetected below 10 keV, indicating $N_H > 10^{23}$ cm$^{-2}$) suggests that this is likely the same source, as it is known to exhibit similarly large intrinsic extinction \citep{Heinke09c}. In March 2013, the INTEGRAL IBIS telescope detected a hard transient at $(9.3\pm1.4)\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$ (17-60 keV), at position 17:44:41.76, -29:48:18.0, uncertainty 4.2' \citep{Krivonos13}. Krivonos et al. note that this position is consistent with XMMU J174445.5-295044, but suggest that follow-up observations are needed to verify whether it is the same source. In this paper, we present Gemini NIFS spectroscopy of 2MASS J17444541-2950446, and conclude that its spectral type indicates an M2 III giant. We also describe a Swift/XRT observation permitting the confident identification of the 2013 INTEGRAL/IBIS transient \citep{Krivonos13} with XMMU J174445.5-295044. We combine these results to infer a peak $L_X > 4 \times10^{34}$ erg s$^{-1}$. These results together allow us to confidently identify XMMU J174445.5-295044 as a symbiotic X-ray binary containing a neutron star or black hole accretor, rather than a white dwarf. | \label{sec_discus} \subsection{Two-dimensional spectral classification} \citet{Comeron04} demonstrate that the $^{12}$CO feature for supergiants always shows an equivalent width (EW) of $>25$~{\AA} (see their Figures 8--13). We obtained EW[$^{12}$CO]$\approx19.4\pm0.1$~\AA, which is typical for giants or dwarfs \citep{Comeron04}. Thus we can rule out the possibility of a supergiant. \citet{Ramirez97} and \citet{Ivanov04} show that $\log$[EW(CO)/(EW(Ca I)+EW(Na I))] can be used to separate giants from dwarfs. \citet{Ramirez97} show that this quantity should be between $-0.22$ and 0.06 for dwarfs, vs.\ between 0.37 and 0.61 for giants. We found this quantity to be $0.67\pm0.06$ for our source, in agreement with the estimated range for giants. The presence of fairly strong $^{13}$CO bands in our spectrum is another indicator of a giant, as these features are invisible in a dwarf. To estimate the temperature of this source, we used the first-order relationship between effective temperature (T$_{eff}$) and EW[$^{12}$CO] (in angstroms) for giants proposed by \citet{Ramirez97}: \begin{equation} T_{eff} = (5019\pm79)-(68\pm4)\times \mathrm{EW}[^{12}\mathrm{CO}] \end{equation} Considering the uncertainty in EW[$^{12}$CO], we found T$_{eff}$~=~3700~$\pm$~160 K. According to \citet{vanBelle99}, T$_{eff}$~=~3700 K indicates an M2 giant; using \citet{Richichi99} suggests M1.5, while the relation in \citet{Ramirez97} gives an M1.7 giant. Thus, adopting either the van Belle or Richichi calibration, the resulting spectral type is M2 III, with a reasonable range from M0 to M3. If we use the less detailed calibration of \citet{Ramirez97}, we obtain a similar result of M1.7 (M0 to M3). Thus, we adopt M2 III as our spectral type, with a possible range from M0 to M3 III.
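The two numbers driving this classification (the giant/dwarf discriminant and the effective temperature from the \citet{Ramirez97} relation above) can be reproduced directly from the measured equivalent widths of the $^{12}$CO, Na I and Ca I features; a minimal sketch follows, with a deliberately simple error propagation (the $\pm$160 K quoted above reflects the authors' own treatment of the uncertainties):
\begin{verbatim}
import numpy as np

ew_co, d_co = 19.36, 0.13     # EW[12CO] and its uncertainty [Angstrom]
ew_na, ew_ca = 2.23, 1.93     # EW(Na I), EW(Ca I) [Angstrom]

# Giant/dwarf discriminant (Ramirez et al. 1997): ~0.37-0.61 for giants
lum_class = np.log10(ew_co / (ew_na + ew_ca))        # -> about 0.67

# First-order Teff-EW[12CO] relation for giants: T = (5019+-79) - (68+-4)*EW
teff = 5019.0 - 68.0 * ew_co                         # -> about 3700 K
d_teff = np.sqrt(79.0**2 + (4.0 * ew_co)**2 + (68.0 * d_co)**2)

print(f"log[EW(CO)/(EW(CaI)+EW(NaI))] = {lum_class:.2f}")
print(f"Teff = {teff:.0f} +/- {d_teff:.0f} K")
\end{verbatim}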
There is no evidence for a feature at Br$\gamma$ in our spectrum, either before or after our telluric subtraction. \citet{Nespoli10} see Br$\gamma$ emission from two symbiotic X-ray binaries. However, the similar P-Cygni shape of the Br$\gamma$ feature in both stars, and also in the supergiant X-ray binary IGR J16493-4348 studied by them with the same method, lend support to their hypothesis that this feature is a residual artifact of their telluric removal procedure (which is more complex than ours, involving using a G star as a second telluric reference). \begin{figure*} \includegraphics[scale=0.42]{fig2.eps} \caption{K-band spectrum of XMMU 174445.5-295044. Identified line profiles are listed in Table~\ref{tab_lines}. Features identified using spectral libraries \citep{Kleinmann86, Ramirez97} are marked with solid lines while features identified using NIST/ASD are marked with dashed lines. Ambiguous and uncertain identifications are labeled with ``?".} \label{fig_spectrum} \end{figure*} \begin{table*} \centering \caption{Definition of spectral features, chosen continuum intervals and measured equivalent width. We used definitions in \citet{Comeron04} with a small modification to Ca I blue continuum (\S~\ref{sec:spec}).} \begin{tabular}{@{}cllllllc@{}} \hline & \multicolumn{2}{c}{Band} & \multicolumn{2}{c}{Blue continuum} & \multicolumn{2}{c}{Red continuum} & \\ Feature & Center(\AA) & $\delta \lambda$ & Center(\AA) & $\delta \lambda$ & Center(\AA) & $\delta \lambda$ & Equivalent width (\AA)\\ \hline Na I & 22075 & 70 & 21940 & 60 & 22150 & 40 & 2.23$\pm$0.14 \\ Ca I & 22635 & 110 & 22507 & 53 & 22710 & 20 & 1.93$\pm$0.40 \\ $^{12}$CO & 22955 & 130 & 22500 & 160 & 22875 & 70 & 19.36$\pm$0.13 \\ \hline \end{tabular} \label{tab_features} \end{table*} \begin{figure*} \begin{center} \includegraphics[scale=0.42]{fig3.eps} \caption{Chosen features and continua intervals used to obtain the spectral classification of the companion in this system. Left to right: Na I, Ca I, $^{12}$CO(2,0). Light shaded regions show chosen regions for features and dark shaded regions represent chosen continua regions. These regions are tabulated in Table~\ref{tab_features}. The dashed lines represent interpolated continuum level in each case.} \label{fig_eqw} \end{center} \end{figure*} \subsection{Extinction, distance, and nature of the accretor}\label{distance} We use our identification of the spectral type, with the 2MASS photometry \citep{Skrutskie06} reported by \citet{Heinke09c}, to estimate the extinction, and thus the distance, to XMMU J174445.5-295044, in a similar way as \citet{Kaplan07}, but explicitly accounting for the difference between the $K_S$ and $K$ bands. Although the 2MASS colors were measured at a different time from the NIFS spectroscopy reported here, we do not expect large variations in the temperature or observed extinction of the giant, as the stars most affected by this are of later ($>$M5) spectral types \citep{Habing96}. M2 III stars have an absolute magnitude of $M_J= -3.92$ and intrinsic $J$-$K_S$ colors of 1.12 \citep{Covey07}. \citet{Heinke09c} report a 2MASS magnitude of $m_J=14.89$ in $J$ for our object, and an observed $J$-$K_S$=4.72. We use $A_J/A_V$~=~0.282 \citep{Cardelli89}, and $A_J/A_{K_s}$~=~2.5$\pm0.2$ \citep{Indebetouw05}. Thus we infer $A_V$~=~$\frac{(J-K_S)_{obs}-(J-K_S)}{(A_J/A_V)-(A_Ks/A_V)}$~=~21.3$_{-0.1}^{+1.9}$, and $A_J$~=~6.0$_{-0.3}^{+0.5}$. 
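The extinction and distance estimate just described amounts to a short chain of arithmetic; a minimal sketch with the rounded inputs quoted above. The published most-likely distance of 3.1 kpc reflects the authors' exact calibration and rounding choices, so the value printed here is only indicative.
\begin{verbatim}
import numpy as np

j_ks_obs, j_ks_0 = 4.72, 1.12    # observed and intrinsic J - Ks (M2 III)
m_j, M_j = 14.89, -3.92          # apparent and absolute J magnitudes
aj_av, aj_aks = 0.282, 2.5       # A_J/A_V and A_J/A_Ks

aks_av = aj_av / aj_aks                           # A_Ks/A_V
a_v = (j_ks_obs - j_ks_0) / (aj_av - aks_av)      # -> about 21.3 mag
a_j = aj_av * a_v                                 # -> about 6.0 mag

d_pc = 10.0 ** (0.2 * (m_j - a_j - M_j) + 1.0)    # distance modulus
print(f"A_V = {a_v:.1f}, A_J = {a_j:.1f}, d = {d_pc / 1e3:.1f} kpc")
\end{verbatim}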
The extinction measurement converts (using $N_H$ (cm$^{-2}$)~=~$(2.21\pm0.09)\times10^{21}\,A_V$; \citealt{Guver09}) to $N_H = (4.7\pm0.5)\times10^{22}$ cm$^{-2}$, which is below the X-ray measured values ($(8.6\pm0.4)\times10^{22}$ cm$^{-2}$ and $16^{+5}_{-4}\times10^{22}$ cm$^{-2}$ from different observations) in \citet{Heinke09c}. This is consistent with expectations for a wind-accreting system, where much of the $N_H$ is expected to be local to the compact object, and with the evidence for variation in $N_H$ between different observations shown by \citet{Heinke09c}. Using this $A_J$ estimate, the expected $M_J$ for an M2 III star, and the observed $J$ magnitude, we can thus estimate $d$~=~3.1 kpc as the most likely distance to our object. The largest uncertainty in our distance estimate comes from the absolute magnitude of the companion star. Allowing for a conservative 1-magnitude uncertainty on the absolute magnitude (estimated from \citealt{Breddels10}; this is probably more precise than 1$\sigma$), we find $d = 3.1^{+1.8}_{-1.1}$ kpc. This distance is consistent with our (small) radial velocity estimate, which would be typical of a disk star observed at low Galactic latitude and at a Galactic longitude close to the Galactic centre ({\it l}~=~359.1$^\circ$), and with our measurement of the relative strengths of the CO and Na lines, the ratio of which is more consistent with disk giants than with giants in the bulge \citep{Comeron04}. From this distance estimate, we can infer the X-ray luminosities of XMMU J174445.5-295044, as plotted in Figure~\ref{fig_xray_lc} (errors there do not include the distance uncertainties). The majority of the X-ray detections are between $10^{33}$ and $10^{34}$ erg s$^{-1}$, but the INTEGRAL/JEM-X detection in March 2012 \citep{Chenevez12} gives a (2-10 keV) X-ray luminosity of $(1.1\pm0.2)\times10^{35}$ erg s$^{-1}$ for $d$~=~3.1 kpc; even at the lower limit on the distance ($d$~=~2.0 kpc), the luminosity exceeds $4\times10^{34}$ erg s$^{-1}$. Similarly, the March 2013 INTEGRAL/IBIS detection gives a (2-10 keV) $L_X = 2.5\times10^{34}$ erg s$^{-1}$ for 3.1 kpc, or $1.1\times10^{34}$ erg s$^{-1}$ for the 2.0 kpc lower distance limit, which further confirms the high X-ray luminosity of XMMU J174445.5-295044. Combining this high peak X-ray luminosity (four times the maximum seen for any accreting white dwarf, \citealt{Stacey11}) with the hard X-ray spectrum inferred from the later INTEGRAL/IBIS detection above 17 keV \citep{Krivonos13}, we can confidently rule out a white dwarf nature for the accretor. Thus, we securely identify XMMU J174445.5-295044 as a symbiotic X-ray binary, with a neutron star (or, less likely, black hole) accreting from the wind of an M2 giant star. XMMU J174445.5-295044 stands out from other symbiotic X-ray binaries only in not showing detectable X-ray pulsations \citep{Heinke09c}. The complete absence of NIR spectroscopic evidence of accretion in our NIFS spectrum is typical of other symbiotic X-ray binaries with relatively low accretion rates. The lack of detected pulsations also means that the accretor could be a black hole, though black hole symbiotic X-ray binaries should be less common. The increasing number of symbiotic binaries recently found to show no emission lines in high-quality spectra \citep{vandenBerg06,vandenBerg12,Hynes14} strongly suggests that there should be many more symbiotic stars (with white dwarf accretors) which also do not show optical/NIR spectroscopic evidence of accretion \citep{vandenBerg06}.
Symbiotic systems may make up an important portion of the faint Galactic X-ray source population. | 14 | 3 | 1403.7231 |
1403 | 1403.2308_arXiv.txt | We present a spectral and timing analysis of the black hole candidate MAXI J1543-564 during its 2011 outburst. As shown in previous work, the source follows the standard evolution of a black hole outburst. During the rising phase of the outburst we detect an abrupt change in timing behavior associated with the occurrence of a type-B quasi-periodic oscillation (QPO). This QPO and the simultaneously detected radio emission mark the transition between hard and soft intermediate state. We fit power spectra from the rising phase of the outburst using the recently proposed model \textsc{propfluc}. This assumes a truncated disc / hot inner flow geometry, with mass accretion rate fluctuations propagating through a precessing inner flow. We link the \textsc{propfluc} physical parameters to the phenomenological multi-Lorentzian fit parameters. The physical parameter dominating the QPO frequency is the truncation radius, while broad band noise characteristics are also influenced by the radial surface density and emissivity profiles of the flow. In the outburst rise we found that the truncation radius decreases from $r_o \sim 24$ to $10 R_g$, and the surface density increases faster than the mass accretion rate, as previously reported for XTE J1550-564. Two soft intermediate state observations could not be fitted with \textsc{propfluc}, and we suggest that they are coincident with the ejection of material from the inner regions of the flow in a jet or accretion of these regions into the BH horizon, explaining the drop in QPO frequency and suppression of broad band variability preferentially at high energy bands coincident with a radio flare. | \label{sec:int} Transient black hole binaries (BHBs) display outbursts exhibiting several states, characterized by both spectral and timing properties (e.g. Belloni et al. 2005; Remillard \& McClintock 2006; Belloni 2010; Gilfanov 2010). During the outburst, sources typically follow a 'q' shaped, anti-clockwise track on a plot of X-ray flux versus spectral hardness ratio (hardness-intensity diagram: HID), with the quiescent state occupying the bottom right corner. The initial transition from hard (LHS) to soft (HSS), via intermediate states, occurs when the power low component of the spectrum is observed to soften (photon index $\Gamma \sim$ 1.7--2.4) and a disc blackbody component (peaking in soft X-rays) becomes increasingly prominent. A power spectral analysis of the rapid variability reveals a quasi-periodic oscillation (QPO), which shows up as narrow harmonically related peaks, superimposed on broad band continuum noise. The QPO fundamental frequency is observed to increase from $\sim$ 0.1--10 Hz during the transition from the hard state, after which the X-ray emission becomes very stable in the soft state. Power spectral evolution correlates tightly with spectral evolution, with all the characteristic frequencies increasing with spectral hardness (e.g. Wijnands \& van der Klis 1998; Psaltis, Belloni \& van der Klis 1999; Homan et al. 2001). QPOs observed coincident with broad band noise are defined as Type-C QPOs (Remillard et al. 2002; Casella et al. 2005). Type-B QPOs (Wijnands, Homan \& van der Klis 1999), typically with a frequency of $\sim$ 6--10 Hz, are observed in the intermediate state when the broad band noise suddenly disappears. These features quickly evolve into Type-A QPOs (Wijnands, Homan \& van der Klis 1999), which are broader and weaker. 
Since the sudden suppression of the broad band noise hints at a large physical change in the system, intermediate state observations displaying Type-C QPOs are classified as hard intermediate state (HIMS) and those displaying Type-A or B QPOs as soft intermediate state (SIMS). Additionally, a large radio flare, indicative of a jet ejection event, is often observed to be coincident with the onset of the SIMS (Fender, Belloni \& Gallo 2004), although this is not always exact (Fender, Belloni, \& Gallo 2005).\\ The spectral and timing properties of BHBs can be described by \textit{the truncated disc model} (e.g. Esin, McClintock \& Narayan 1997; Done, Gierli\'nski \& Kubota 2007), in which an optically thick, geometrically thin accretion disc, which produces the multi-temperature blackbody spectral component (Shakura \& Sunyaev 1973), truncates at some radius, $r_o$, larger than the innermost stable circular orbit (ISCO). In the region between this truncation radius $r_o$ and an inner radius $r_i$ ($r_o > r_i > r_{ISCO}$), accretion takes place via a hot, optically thin, geometrically thick accretion flow (hereafter inner flow). Compton up-scattering of cool disc photons by hot electrons in the flow produces the power law spectral component (Thorne \& Price 1975; Sunyaev \& Truemper 1979). In the hard state $r_o$ is large ($\sim 60 R_g$, where $R_g=GM/c^2$ is a gravitational radius), so only a small fraction of the disc photons illuminates the flow, giving rise to a weak direct disc component and hard power law emission. As the average mass accretion rate increases during the outburst, $r_o$ decreases, so more direct disc emission is seen and a greater luminosity of disc photons cools the flow, resulting in softer power law emission. When $r_o$ reaches the ISCO, the direct disc emission completely dominates the spectrum and the transition to the soft state is complete.\\ This scenario is the framework of the propagating fluctuations model \textsc{propfluc} (Ingram \& Done 2011, 2012, hereafter ID11, ID12; Ingram \& Van der Klis 2013, IK13), a model that can reproduce power density spectra by combining the effects of the propagation of mass accretion rate fluctuations in the inner flow (Lyubarskii 1997; Arevalo \& Uttley 2006), responsible for generating the broad band noise, with solid-body Lense-Thirring (LT) precession of this flow (Fragile et al. 2007; Ingram, Done \& Fragile 2009), producing QPOs. Mass accretion rate fluctuations are generated throughout the inner flow, with the contribution to the rms variability from each region peaking at the local viscous frequency (e.g. Lyubarskii 1997; Churazov, Gilfanov \& Revnivtsev 2001; Arevalo \& Uttley 2006); thus the fast variability originates from the inner regions and the slow variability from the outer regions. As material is accreted, fluctuations propagate inwards, modulating the faster variability generated in the inner regions. Emission is thus highly correlated from all regions of the flow, giving rise to the observed linear rms-flux relation (Uttley \& McHardy 2001; Uttley, Vaughan \& McHardy 2005).\\ In this paper we present a spectral and timing analysis of the source MAXI J1543-564 during its 2011 outburst. The source, discovered by MAXI/GSC (the Gas Slit Camera of the Monitor of All-sky X-ray Image; Matsuoka et al. 2009) on May 8, 2011 (Negoro et al. 2011), was first analyzed by Stiele et al. (2012).
Their analysis showed that the outburst evolution follows the usual BHB behavior, that the exponential flux decay is interrupted by several flares, and that during the transition from the LHS to the HSS a type-C QPO is observed. Looking at other wavelengths, Miller-Jones et al. (2011) report the detection of radio emission at MJD 55695.73. In this work, we analyze the spectral and timing properties of the source in different energy bands and we use the power density spectra of the rising phase of the outburst to systematically explore, for the first time, the capabilities of \textsc{propfluc}.\\ \section[]{Observations and data analysis} \label{sec:obs} We analyzed data from the RXTE Proportional Counter Array (PCA; Jahoda et al. 1996) using 99 pointed observations collected between 10 May and 30 September 2011. Each observation consisted of between 300 and 4750 s of useful data. \\ We used Standard 2 mode data (16 s time resolution) to calculate a hard color (HC) as the 16.0--20.0 / 2.0--6.0 keV count rate ratio and define the intensity as the count rate in the 2.0--20.0 keV band. All the observations were background subtracted and all count rates were normalized by the corresponding Crab values closest in time to the observations.\\ We used the $\sim$ 125 $\mu$s time resolution Event mode and the $\sim$ 1 $\mu$s time resolution Good-Xenon mode data for Fourier timing analysis. We constructed Leahy-normalized power spectra using 128 s data segments and 1/8192 s time bins to obtain a frequency resolution of 1/128 Hz and a Nyquist frequency of 4096 Hz. After averaging these power spectra per observation, we subtracted the Poisson noise using the method developed by Klein-Wolt et al. (2004), based on the expression of Zhang et al. (1995), and renormalized the spectra to power density $P_{\nu}$ in units of $(rms / mean)^2$ / Hz. In this normalization the fractional rms of a variability component is directly proportional to the square root of its integrated power density: $rms = 100 \sqrt{ \int_{0}^{\infty} P_{\nu} d\nu} $ \%. No background or dead-time corrections were made in computing the power spectra. This procedure was performed in 4 different energy bands: 2.87--4.90 keV (band 1), 4.90--9.81 keV (band 2), 9.81--20.20 keV (band 3), and the full 2.87--20.20 keV range (band 0). The power spectra were fitted using a multi-Lorentzian function in which each Lorentzian contributing to the fit function is specified by a characteristic frequency $\nu_{max}=\sqrt{{\nu_0}^2+(FWHM/2)^2}$ (Belloni, Psaltis \& van der Klis 2002) and a quality factor $Q=\nu_{0}/FWHM$, where FWHM is the full width at half maximum and $\nu_0$ is the centroid frequency of the Lorentzian. All the power spectra shown in this paper were plotted using the power times frequency representation ($\nu P_{\nu}$), in order to visualize $\nu_{max}$ as the frequency where the Lorentzian's maximum occurs. \\ \section[]{Results} \label{sec:res} \subsection{Light curve} The light curve of the source is shown in Fig. \ref{fig:tid}$a$, where the 2-20 keV intensity is plotted versus time (MJD) for each pointed observation. \\ We subdivided the evolution of the outburst into 5 parts. In the first part of the outburst (MJD 55691--55696) the source rises to maximum intensity ($\sim$ 68 mCrab) in 5 days from the beginning of the RXTE observations. The second part (first grey area, MJD 55696--55713) is characterized by an intensity decay that is not smooth, but interrupted by 4 additional peaks with intensities between $\sim$ 47 and $\sim$ 58 mCrab.
The third part (MJD 55713--55725, between the two grey areas) does not show any intensity peak but only a gradual decay. The following period (MJD 55725--55744, second grey area) is characterized by a broad maximum and several additional intensity peaks (between $\sim$ 34 and $\sim$ 42 mCrab), less luminous than those of the first grey area. Finally, the last part (MJD 55744--55834) consists of a relatively smooth decay until the end of the observations. \\ \begin{figure} \center \includegraphics[scale=0.4,angle=270]{fig1.ps} \caption{a) Intensity [mCrab], b) rms [\%], and c) hard color [Crab] versus time for the 99 pointed observations. The grey rectangular areas indicate 5 time intervals characterized by different long-term luminosity variability. Data points are plotted with 1$\sigma$ error bars.} \label{fig:tid} \end{figure} \subsection{Color diagrams} Fig. \ref{fig:hid} shows the hardness--intensity diagram (HID), where the average intensity of each observation is plotted versus the HC. The source follows a counterclockwise path, starting and ending in the right (hard) part of the diagram at different luminosities. This is the usual behavior observed for black hole outbursts.\\ In order to better follow the spectral evolution of the source along the outburst, we also plotted in Fig. \ref{fig:tid}$c$ the HC versus time.\\ In the first observation the source is harder than the Crab (HC = 1.71) and in the following 6 observations it softens continuously, while at the same time its intensity increases from $\sim$ 24 to $\sim$ 68 mCrab. For the remaining observations the source remains in the soft part of the HID (HC$\le$0.5) except for the very last observation, where it goes back to a color harder than the Crab (HC = 1.31$\pm$0.16).\\ As can be noted in Fig. \ref{fig:tid}$c$, the transitions between hard and soft spectrum happen on short time scales ($\sim$ 10 days) compared to the time spent by the source in the soft state ($\sim$ 125 days). However, while the initial transition from hard to soft state is simultaneous with a quick change in intensity (+188$\%$), the final transition (last observation) from soft to hard spectrum is characterized by a fractional intensity change of only +16$\%$, i.e. increasing when the source gets harder.\\ \begin{figure} \center \includegraphics[scale=0.4,angle=270]{fig2.ps} \caption{Hard color versus intensity, normalized to the Crab. Points represent the average intensity and hard color for each observation. 1$\sigma$ error bars are plotted for the hard color.} \label{fig:hid} \end{figure} \subsection{Time variability} The 1/128--10 Hz rms values as computed from the power spectra in band 0 are reported in Fig. \ref{fig:tid}$b$. The first 5 observations, during which the source rapidly becomes softer and brighter, are characterized by rms values of $\sim$ 19--27$\%$. In the remaining observations the rms values are between $\sim$ 2$\%$ and $\sim$ 10$\%$, with a few exceptions.\\ The integrated rms is systematically higher at higher energies. From the beginning of the observations, as the intensity increases, the integrated rms decreases independently of photon energy, but the decreasing trend differs between energy bands. In order to better show these differences, we plotted in Fig. \ref{fig:rmsall} the total fractional rms of the first 7 observations for all the energy bands.
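The band-integrated fractional rms shown in these figures follows directly from the $(rms/mean)^2$~Hz$^{-1}$ normalisation described in Section 2; a minimal sketch of the conversion and integration is given below. The Poisson level is subtracted here as a simple constant rather than with the Zhang et al. (1995) prescription used by the authors, background and dead-time are ignored as stated in the text, and the array names are hypothetical.
\begin{verbatim}
import numpy as np

def fractional_rms(p_leahy, mean_rate, df=1.0/128, f_lo=1.0/128, f_hi=10.0):
    """Integrated fractional rms (percent) in [f_lo, f_hi] from an averaged,
    Leahy-normalised power spectrum with uniform frequency spacing df."""
    freq = df * np.arange(1, len(p_leahy) + 1)
    p_rms = (np.asarray(p_leahy) - 2.0) / mean_rate   # (rms/mean)^2 / Hz
    band = (freq >= f_lo) & (freq <= f_hi)
    return 100.0 * np.sqrt(np.sum(p_rms[band]) * df)
\end{verbatim}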
Band 1 (red) shows a smooth and continuous rms decrease with time, while in band 2 (green) and band 3 (blue), the rms decrease is characterized by a ''jump'' between observations \#5 and \#6 ($\Delta$rms $\sim$ --9$\%$ in band 2, $\Delta$rms $\sim$ --11$\%$ in band 3). Observation \#6 is also characterized by the detection of radio emission, indicated by the orange arrow.\\ \subsubsection{QPOs and broad band features} \begin{figure} \center \includegraphics[scale=0.4,trim=1cm 3cm 2cm 1cm,clip,angle=270]{fig3.ps} \caption{Multi-Lorentzian fit of the fifth power spectrum. Four main components were identified: a main QPO $L_{LF}$, its harmonic $L_{LF}^{+}$, a broad band noise component $L_b$, and another broad component at lower frequency $L_b^{-}$.} \label{fig:psex} \end{figure} \begin{figure} \center \includegraphics[scale=0.4,trim=1cm 3cm 1cm 0cm,clip,angle=270]{fig4.ps} \caption{Lorentzian fit of observation $\#$7 showing a type-B QPO.} \label{fig:typeb} \end{figure} Only in the first 7 observations we detect QPOs ($Q>2$) and/or broad band components ($Q<2$) in at least some energy bands. We used the power spectrum of the fifth observation in band 0 (MJD 55694.884, Fig. \ref{fig:psex}) as a reference to identify four significant ($\sigma>3$, single-trial) components: a main QPO $L_{LF}$, its harmonic $L_{LF}^{+}$, a broad band noise component $L_b$, and another broad band component $L_b^{-}$ at lower frequency. In our analysis we reported all components with single trial significance $\sigma \ge 3$ and additionally those components with significance between 2$\sigma$ and 3$\sigma$ that could be identified as $L_{LF}$, $L_{LF}^{+}$, $L_b$, or $L_b^{-}$. Table \ref{tab:fmax} shows $\nu$, Q, rms, significance ($\sigma$) and reduced $\chi^2$ for every fitted component in the 7 observations analyzed (\#1--7) for all the energy bands. We also report the 99.87\% upper limits calculated fixing $\nu$ and Q to values equal to the most significant corresponding component between the energy bands fitted in the same observation. Empty lines mean that no components were fitted and no upper limit could be determined.\\ Figs. \ref{fig:fmax}$g$--$h$ show the frequencies of the fitted QPOs (triangles for $L_{LF}$, diamonds for $L_{LF}^{+}$), broad band components (squares for $L_b$, circles for $L_b^{-}$), and unidentified narrow ($Q>2$) components (pentagons), and their rms versus time in band 0, respectively. Solid symbols indicate significant components and open symbols components with significance between 2$\sigma$ and 3$\sigma$. The 2--3$\sigma$ unidentified component of observation \#6 (see Table \ref{tab:fmax}, bottom) is included in our plot because its characteristic frequency matches with the subharmonic frequency of the identified component $L_{LF}$. Similarly, two 2--3$\sigma$ unidentified components fitted in observation \#7 (Fig. \ref{fig:typeb}) were reported, as one matches with the subharmonic frequency of $L_{LF}$, and the other with $L_{b}$. Squares and circles were slightly shifted to the right for clarity. \\ Always referring to band 0, in the first 5 observations one significant low frequency QPO ($L_{LF}$) was fitted for each spectrum and only the third observation shows a significant harmonic ($L_{LF}^{+}$). The $L_{LF}$ frequency increases with time from $\sim$ 1.1 Hz to $\sim$ 5.8 Hz while its rms decreases from $\sim$ 17$\%$ to $\sim$ 10$\%$ (see Table \ref{tab:fmax}). Observation \#7 shows a significant QPO with $\nu_{max}$ = 4.7 Hz (Fig. \ref{fig:typeb}). 
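The quantities quoted for each fitted component ($\nu_{max}$, $Q$ and the fractional rms) follow from the Lorentzian parameterisation and the rms normalisation given in Section 2; a minimal sketch, assuming a Lorentzian whose normalisation equals its integrated power in $(rms/mean)^2$ units:
\begin{verbatim}
import numpy as np

def lorentzian(nu, nu0, fwhm, norm):
    """Lorentzian whose integral over all frequencies equals norm."""
    return norm * (fwhm / (2.0 * np.pi)) / ((nu - nu0)**2 + (fwhm / 2.0)**2)

def component_parameters(nu0, fwhm, norm):
    nu_max = np.sqrt(nu0**2 + (fwhm / 2.0)**2)    # characteristic frequency
    q = nu0 / fwhm                                # quality factor
    nu = np.linspace(1.0 / 128, 4096.0, 2**20)    # 1/128 Hz to Nyquist
    rms = 100.0 * np.sqrt(np.sum(lorentzian(nu, nu0, fwhm, norm))
                          * (nu[1] - nu[0]))
    return nu_max, q, rms
\end{verbatim}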
The characteristics of this peak ($\nu_{max}$ = 4.7 Hz, $Q$ = 9, rms $\sim$ 4.8\%) and the low 1/128--10 Hz rms ($\sim$ 7.2\%) associated with this QPO are characteristic of type-B QPOs (e.g. Casella et al. 2005). Considering also the 2--3$\sigma$ QPO fitted in observation \#6 ($\nu_{max}$ = 5.7 Hz, $\sigma$ $\sim$ 2.5), in observations \#6--7 the $L_{LF}$ frequency and rms are no longer anti-correlated. The characteristic frequency of $L_{LF}$ decreases from $\sim$ 5.8 Hz to $\sim$ 4.7 Hz while the rms still decreases from $\sim$ 6$\%$ to $\sim$ 5$\%$. \\ One significant broad band component ($L_b$) with $\nu_{max}$ in the interval $\sim$ 2--4 Hz was fitted in observations \#1--6. The rms of this component decreases with time (from 20$\%$ to 9$\%$), with a clear decreasing trend observable only in observations \#5--6, while its $\nu_{max}$ remains almost in the same frequency range (around 3 Hz). In observations \#5--7 we fitted another broad band component ($L_b^{-}$) characterized by an increasing $\nu_{max}$ (from observation \#5 to \#7) in the interval $\sim$ 0.06--0.66 Hz and rms between 2\% and 4\%.\\ The timing features in the other energy bands are reported in Fig. \ref{fig:fmax}$a$--$f$. Similarly to panels $g$--$h$, panels $a$--$b$, $c$--$d$, and $e$--$f$ show the frequency and rms evolution for power spectral components fitted in bands 1, 2, and 3, respectively. No significant characteristic frequency shift was detected between energy bands in any power spectral component, while the rms values are systematically higher at higher energies (Table \ref{tab:fmax}). In band 1 the $L_{LF}$ frequency increases with time (from $\sim$ 1.1 Hz to $\sim$ 6.5 Hz) in the first 6 observations, while no significant QPO was fitted in observation \#7. The behavior of the $L_{LF}$ characteristic frequency in bands 2--3 is mostly identical to band 1 for observations \#1--5, but we observe some differences in observations \#6--7. The 2--3$\sigma$ QPO ($\sigma$ $\sim$ 2.2) fitted in observation \#6 (band 2) seems to break the anti-correlation between frequency and rms shown in observations \#1--5, but in band 3 the QPO frequency error bar is too big to infer any trend. However, the anti-correlation is evident in observation \#7, where a significant QPO was fitted in bands 2--3 with a lower characteristic frequency compared to observation \#5. The rms of $L_{LF}$ in band 1 decreases as the QPO frequency increases, but in bands 2--3 this trend is progressively weaker. Indeed, in band 3 the $L_{LF}$ rms oscillates between $\sim$ 17$\%$ and $\sim$ 11$\%$ in the first 5 observations and decreases to $\sim$ 11$\%$ only in the last 2 observations. \\ The broad band component ($L_b$) frequency varies slightly around $\sim$ 5 Hz in observations \#3--5 (band 1), while no significant broad band components were fitted in observations \#6 and \#7. In band 2 the $L_b$ frequency shows a clear decreasing trend only in the last three observations ($\nu$ $\sim$ 4.4--1.6 Hz), while it does not show any clear trend in band 3. The $L_b$ rms decreases smoothly with time in band 1 (from $\sim$ 22$\%$ to $\sim$ 15$\%$), but it does not show the same trend in the other two energy bands. In band 2 we observe a clear decrease of the $L_b$ rms only in observations \#5--6 (from $\sim$ 20$\%$ to $\sim$ 7$\%$) and in band 3 it oscillates between 22\% and 27\%.\\ Apart from the full energy band, $L_b^{-}$ was fitted only in observations \#5 (band 1) and \#6 (bands 1--3), but it is significant just at low photon energy (band 1, \#6).
$L_b^{-}$ frequency and rms behavior in band 1 is mostly identical to band 0. \\ \begin{figure*} \center \includegraphics[scale=0.35,trim=0cm 0cm 0cm 0cm,angle=270]{fig5.ps} \caption{Characteristic frequency and rms of $L_{LF}$ (triangles), $L_{LF}^+$ (diamonds), $L_b$ (squares), $L_b^{-}$ (circles), and other significant unidentified components (pentagons) fitted in the first 7 observations in all the energy bands ($L_b$ and $L_b^{-}$ have been shifted slightly to the right with respect to the original position for clear reading). Open symbols indicate components with significance between 2 and 3 $\sigma$ while full symbols stand for $\sigma > 3$ significant components. All values are plotted with 1$\sigma$ error bars.} \label{fig:fmax} \end{figure*} \begin{figure} \center \includegraphics[scale=0.4,trim=0cm 0cm -1cm 0cm,angle=270]{fig6.ps} \caption{Fractional integrated 1/128--10 Hz rms versus time in the first 7 observations for the bands considered. The orange arrow represents the time of the radio emission. All values are plotted with 1$\sigma$ error bars.} \label{fig:rmsall} \end{figure} | We analyzed the evolution of MAXI J1543-564 during its 2011 outburst identifying the transition between LHS/HIMS and SIMS, occurring between observation \#5 and \#6. Analyzing the source in different energy bands, we found that in this transition changes in rms are more evident at higher photon energy. Using the mass accretion rate fluctuation/precessing flow model \textsc{propfluc}, we provided a physical interpretation of the first 5 observations in terms of truncation radius, fractional variability, mass accretion rate, and surface density evolution. We suggest that the source behavior in observation \#6 and \#7, and so the transition between LHS and SIMS, might be caused by mass depletion in the innermost part of the accretion flow due to ejection and/or enhanced accretion associated with the simultaneous radio emission. This physical scenario is consistent with our timing analysis in different energy bands. | 14 | 3 | 1403.2308 |
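Illustrative note (not part of the entry above): the multi-Lorentzian decomposition and the quantities $\nu_{max}$ and $Q$ quoted in the timing analysis are not defined explicitly there; the convention assumed here for illustration is the one commonly adopted in this kind of power-spectral fitting, with each component described by a Lorentzian of centroid frequency $\nu_0$ and half-width $\Delta$,
\begin{equation}
P(\nu) \propto \frac{\Delta}{\Delta^{2}+(\nu-\nu_{0})^{2}}\,, \qquad
Q=\frac{\nu_{0}}{2\Delta}\,, \qquad
\nu_{max}=\sqrt{\nu_{0}^{2}+\Delta^{2}}=\nu_{0}\sqrt{1+\frac{1}{4Q^{2}}}\,,
\end{equation}
so that narrow ($Q>2$) components are labelled QPOs and broad ($Q<2$) components band-limited noise, consistent with the usage in the text.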
1403 | 1403.2414_arXiv.txt | {} {The phase scintillation of the European Space Agency's (ESA) Venus Express (VEX) spacecraft telemetry signal was observed at X-band ($\lambda=3.6\,$cm) with a number of radio telescopes of the European VLBI Network (EVN) in the period $2009$--$2013$.} {We found a phase fluctuation spectrum along the Venus orbit with a nearly constant spectral index of $-2.42\,\pm\,0.25$ over the full range of solar elongation angles from $0^\circ$ to $45^\circ$, which is consistent with Kolmogorov turbulence. Radio astronomical observations of spacecraft signals within the solar system give a unique opportunity to study the temporal behaviour of the signal's phase fluctuations caused by its propagation through the interplanetary plasma and the Earth's ionosphere. This gives complementary data to the classical interplanetary scintillation (IPS) study based on observations of the flux variability of distant natural radio sources.} {We present here our technique and the results on IPS. We compare these with the total electron content (TEC) for the line of sight through the solar wind. Finally, we evaluate the applicability of the presented technique to phase-referencing Very Long Baseline Interferometry (VLBI) and Doppler observations of currently operational and prospective space missions.} {} | The determination of the Doppler parameters and state vectors of spacecraft by means of radio interferometric techniques opens up a new approach to a broad range of physical processes. The combination of Very Long Baseline Interferometry (VLBI) and Doppler spacecraft tracking has been successfully exploited in a number of space science missions, including tracking of VEGA balloons for determining the wind field in the atmosphere of Venus~\citep{Preston}, VLBI tracking of the descent and landing of the Huygens Probe in the atmosphere of Titan~\citep{Bird}, VLBI tracking of the impact of the European Space Agency's (ESA) Smart-$1$ Probe on the surface of the Moon with the European VLBI Network (EVN) radio telescopes, and the recent VLBI observations of ESA's Venus Express (VEX)~\citep{Duev}, and of the Mars Express (MEX) Phobos flyby~\citep{Molera}. The Planetary Radio Interferometry and Doppler Experiment (PRIDE) is an international enterprise led by the Joint Institute for VLBI in Europe (JIVE). PRIDE focusses primarily on tracking planetary and space science missions through radio interferometric and Doppler measurements~\citep{Duev}. PRIDE provides ultra-precise estimates of the spacecraft state vectors based on Doppler and VLBI phase-referencing~\citep{Beasley} techniques. These can be applied to a wide range of research fields including precise celestial mechanics of planetary systems, study of the tidal deformation of planetary satellites, study of geodynamics and structure of planet interiors, characterisation of shape and strength of gravitational field of the celestial bodies, and measurements of plasma media properties in certain satellites and of the interplanetary plasma. PRIDE has been included as a part of the scientific suite on a number of current and future science missions, such as Russian Federal Space Agency's RadioAstron, ESA’s Gaia, and Jupiter Icy Satellites Explorer (JUICE). The study of interplanetary scintillation (IPS) presented in this paper was carried out within the scope of the PRIDE initiative via observations of the ESA’s VEX spacecraft radio signal. 
Venus Express was launched in 2005 to conduct long-term in-situ observations to improve understanding of the atmospheric dynamics of Venus~\citep{Titov}. The satellite is equipped with a transmitter capable of operating in the S and X-bands ($2.3\,$GHz, $\lambda=13\,$cm, and $8.4\,$GHz, $\lambda=3.6\,$cm, respectively). Our measurements focussed on observing the signal transmitted in the X-band. The VEX two-way data communication link has enough phase stability to meet all the requirements for being used as a test bench for developing the spacecraft-tracking software and allowing precise measurements of the signal frequency and its phase. During the two-way link, the spacecraft transmitter is phase locked to the ground-based station signal, which has a stability, measured by the Allan variance, better than $10^{-14}$ in $100$--$1000$ seconds. The added Allan variance of the spacecraft transmitter is better than $10^{-15}$ in the same time span. The signal is modulated with the data stream. The modulation scheme leaves $25\%$ of the power in the carrier line, which is sufficient for its coherent detection. We used observations of the VEX downlink signal as a tool for studying IPS. In this paper, we describe the solar wind and the nature of the interplanetary scintillation by analysing the phase of the signal transmitted by a spacecraft in Sect.~\ref{sec:the}. The observational setup at the radio telescopes, a short description of the tracking software, and analysis of the phase fluctuations are summarised in Sect.~\ref{sec:met}. Results from analysis of the phase scintillation in the signal from VEX are presented in Sect.~\ref{sec:res}. Finally, conclusions are presented in Sect.~\ref{sec:con}. | \label{sec:con} We estimated the spectral power density of the phase fluctuations at different solar elongations. They showed an average spectral index of $-2.42\,\pm\,0.25$, which agrees with the turbulent media described by Kolmogorov. From all our measurements, the slope of the phase fluctuations appears to be independent of the solar elongation. Interplanetary scintillation indices calculated from the spacecraft phase variability were presented in this paper. The phase scintillation indices were measured for two Venus orbits around the Sun. The results obtained here provide a method for comparing the total electron content at any solar elongation and distance to the Earth with the measured phase scintillation index. The TEC values and phase scintillation present an error of $0.28$ on logarithmic scale. Our measurements can be improved by considering the contribution of the Earth's ionosphere in our analysis. The improvement by including the ionosphere data is calculated to be of the order of $6.4\%$. This study is also important for estimates of the spacecraft's state vectors using the VLBI phase-referencing technique. An important factor for VLBI phase-referencing is to select an optimal nodding cycle between the target and reference source. The results obtained from Venus Express offer precise information on the VLBI requirements and estimate the scintillation level at any epoch. The results of this study are applicable to future space missions. It intends to be the basis for both calibration of state vectors of planetary spacecraft and further studies of interplanetary plasma with probe signals. | 14 | 3 | 1403.2414 |
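Illustrative sketch (not the authors' pipeline): the spectral index of the phase fluctuations quoted above (about $-2.42$, Kolmogorov-like) can be estimated with a minimal power-law fit to the power spectrum of the detrended carrier phase. The file name, column layout and fitting range below are assumptions, not values from the paper.
\begin{verbatim}
import numpy as np
from scipy.signal import periodogram

# residual carrier phase (radians) after removing the smooth Doppler trend;
# hypothetical two-column file: time [s], phase [rad]
t, phase = np.loadtxt("vex_phase_residuals.txt", unpack=True)
fs = 1.0 / np.median(np.diff(t))

freq, psd = periodogram(phase, fs=fs, detrend="linear")

# fit log10(PSD) = a + m*log10(f) over an intermediate frequency range,
# away from the lowest bins (affected by detrending) and the noise floor
sel = (freq > 3e-3) & (freq < 3e-1)
m, a = np.polyfit(np.log10(freq[sel]), np.log10(psd[sel]), 1)
print("phase-fluctuation spectral index ~ %.2f" % m)
\end{verbatim}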
1403 | 1403.7836_arXiv.txt | We use very high-S/N stacked spectra of $\sim$29,000 nearby quiescent early-type galaxies (ETGs) from the Sloan Digital Sky Survey (SDSS) to investigate variations in their star formation histories (SFHs) with environment at fixed position along and perpendicular to the Fundamental Plane (FP). We define three classifications of local group environment based on the `identities' of galaxies within their dark matter halos: central `Brightest Group Galaxies' (BGGs); Satellites; and Isolateds (those `most massive' in a dark matter halo with no Satellites). We find that the SFHs of quiescent ETGs are almost entirely determined by their structural parameters $\sigma$ and $\Delta I_e$. Any variation with local group environment at fixed structure is only slight: Satellites have the oldest stellar populations, 0.02 dex older than BGGs and 0.04 dex older than Isolateds; BGGs have the highest Fe-enrichments, 0.01 dex higher than Isolateds and 0.02 dex higher than Satellites; there are no differences in Mg-enhancement between BGGs, Isolateds, and Satellites. Our observation that, to zeroth-order, the SFHs of quiescent ETGs are fully captured by their structures places important qualitative constraints on the degree to which late-time evolutionary processes (those which occur after a galaxy's initial formation and main star-forming lifetime) can alter their SFHs/structures. | There are reasons to expect early-type galaxy (ETG) star formation histories (SFHs) to depend on a galaxy's environment because the efficiency of relevant processes such as quenching by massive haloes (see, e.g., \citealt{keres05, cattaneo08}), cooling flows (see, e.g., Miller, Melott \& Gorman 1999), and satellite quenching (see, e.g., \citealt{gunn72, lea76, gisler76}) should vary with environment. Thus, studies of differences in the star formation histories of early-type galaxies with environment can provide insight into mechanisms governing the formation and evolution of these galaxies. However, such studies have yielded somewhat contradictory results. Many of them have suggested that, at a given luminosity, ETGs in low-density environments are younger and more metal-rich than those in clusters (e.g. \citealt{bernardi98, trager00b, poggianti01, terlevich02}; Caldwell, Rose \& Concannon 2003; \citealt{proctor04, thomas05, sanchez06, cooper10, rogers10}). There are, however, conflicts with the above results: using a sample of ETGs from the Sloan Digital Sky Survey (SDSS), \citet{bernardi06} found galaxies in the most dense environments to be older than their counterparts in the least dense environments by $\sim$1 Gyr, but found no significant differences in metallicity with environment. Also using a sample of ETGs from SDSS, \citet{gallazzi06} found evidence that ETGs in low-density environments were less metal-rich than those in high-density environments. \citet{harrison11}, studying a sample of ETGs drawn from four clusters, found no significant differences in the ages, metallicities, or $\alpha$-element abundance ratios between galaxies within clusters and those found in their outskirts. Comparing early-type galaxies in clusters and their field contemporaries, \citet{rettura11} found no difference in the ages of cluster and field ETGs, but found that field ETGs were formed over longer time-scales than those in clusters. It is clear that there is still much to be understood about the relationship between the stellar population properties (SPPs) of early-type galaxies and their environments. 
One possible explanation for the contradictions among these results is entanglement between trends in the stellar populations with environment and those with other galaxy parameters. For example, the above studies distinguished the environments of ETGs by comparing galaxies in groups, clusters, and the field. Such an environmental distinction is made ambiguous by known correlations between ETG $M_{\star}$ and environment, such that both high-mass, bright red galaxies and low-mass, faint red galaxies are preferentially found in denser environments (e.g. \citealt{hogg03, mo04, blanton05b, croton05, hoyle05}). If the SPPs show trends with galaxy parameters that are environmentally dependent, such as $M_{\star}$, then studies comparing the stellar populations of all galaxies at fixed environment vs. those comparing galaxies at, for example, fixed $M_{\star}$ or fixed central stellar velocity dispersion $\sigma$ and fixed environment will all give different results. To zeroth-order, ETGs are observed to form a one-dimensional (1D) family with their SPPs showing strong trends with galaxy mass. Studies have sought to characterize these trends in the stellar populations of ETGs along luminosity $L$, $M_{\star}$, or $\sigma$. In general, these studies have found stellar metallicities to increase with increasing galaxy mass -- the well-known mass-metallicity relation (e.g. \citealt{henry99, nelan05}; Smith, Lucey \& Hudson 2007; \citealt{koleva11}). These studies have also established that the SPPs of more massive ETGs tend to be older (to have formed the bulk of their stars at earlier times) and to have formed over shorter time-scales than their lower-mass counterparts (e.g. \citealt{heavens04, kodama04, juneau05, nelan05, thomas05, jimenez07, smith07}). This has come to be called `archaeological downsizing'. The results of these studies are consistent with those which demonstrate that $\sigma$ is the best predictor of the SFHs of ETGs over $L$, $M_{\star}$, or dynamical mass $M_{dyn}$ (\citealt{trager00b}; Graves, Faber \& Schiavon 2009a,b; \citealt{rogers10, vanderwel09}; Wake, van Dokkum \& Franx 2012). The SFHs of ETGs, however, are not purely a 1D family, but are observed to compose at least a 2-parameter family (\citealt{trager00b, graves09b}; Graves, Faber \& Schiavon 2010; \citealt{springob12}). \citet{graves10b} showed the stellar populations to map onto the Fundamental Plane (FP) of ETGs (see e.g. \citealt{dressler87, djorgovski87}; J\o rgensen, Franx \& Kj\ae rgaard 1996) such that it is possible to estimate the SPPs and, thus, the SFHs of quiescent ETGs by their locations in FP-space. The stellar populations of quiescent ETGs were shown to scale systematically with two relevant structural parameters: $\sigma$ and surface brightness residuals from the FP, $\Delta I_{e}$. Mean light-weighted age, [Fe/H], [Mg/H], and [Mg/Fe] all increase with increasing $\sigma$, while at fixed $\sigma$, as $\Delta I_{e}$ increases, [Fe/H] and [Mg/H] increase while age and [Mg/Fe] decrease. These results led the authors to propose a premature truncation model in which the onset time and duration of star formation in quiescent ETGs depend on $\sigma$ such that higher-$\sigma$ galaxies are older and were formed over shorter time-scales, while at fixed $\sigma$, galaxies offset to lower $\Delta I_{e}$ had star formation truncated earlier than those offset to higher $\Delta I_{e}$. 
Thus, according to this model, star formation is similar for galaxies at similar $\sigma$, while their SPPs can still vary according to differences in truncation time, in a way that scales systematically with $\Delta I_{e}$. Furthermore, stellar mass-to-light ratios at fixed $\sigma$ are nearly constant \citep{graves10a}. Thus the $I_{e}$ variations are primarily variations in stellar mass surface density $\Sigma_{\star}$, and high-$I_{e}$ galaxies are in fact galaxies with high $\Sigma_{\star}$ and (typically) high $M_{\star}$ for a given $\sigma$. If the SFHs of ETGs are a 2D family, then studies characterizing trends in the SPPs with environment at fixed $M_{\star}$ vs. at fixed $\sigma$, for example, will in general yield different and ambiguous results. We therefore study the environmental dependence of the SPPs of ETGs at fixed position in FP-space (along fixed structural parameters shown by \citet{graves10b} to control the SFHs of ETGs). This allows us to identify stellar population differences due solely to environment, as opposed to those due also to differences in galaxy structure as a function of environment. To quantify galaxy environment we use the halo-based group-finding algorithm of \citet{yang07},\footnote[1]{We use the results of the group-finding algorithm of \citet{yang07} applied to SDSS DR7.} which assigns individual galaxies to their respective dark matter haloes, assigns halo masses, and distinguishes between the most massive galaxies in groups, and satellites. Previous authors have introduced various other measures of galaxy environment. Two of the most common of these are the projected number density of galaxies above a given magnitude limit (e.g. \citealt{dressler80, lewis02, gomez03, goto03, balogh04a, balogh04b, tanaka04, cooper08, cooper10, cooper12}) and the clustering strength of galaxies using the two-point correlation function (e.g. \citealt{wake04, croom05, li06, vandenbosch07}) or a marked correlation function (e.g. \citealt{beisbart00, sheth04, skibba06, skibba09}). The first of these has the disadvantage that its physical interpretation depends on the environment itself (see \citet{weinmann06}), while the latter has the disadvantage that it assigns halo masses to galaxies in a statistical sense, rather than for individual galaxies \citep{pasquali10}. We use the halo-based group-finding algorithm of \citet{yang07} because it is free of these ambiguities and provides an intuitive measure of an individual galaxy's environment. \citet{rogers10} recently used the galaxy group catalogues of \citet{yang07} to study differences in local ETG SPPs with environment (central vs. satellite and group halo mass $M_{H}$) at fixed $\sigma$, and found centrals to have younger ages and significant recent star formation compared to satellites of the same $\sigma$. \citet{pasquali10} conducted a similar study for local galaxies at fixed $M_{\star}$, and found satellite galaxies to be older and more metal-rich than centrals at fixed $M_{\star}$ (we note, however, that comparison of the results we present here with those of \citet{pasquali10} is ambiguous because the authors did not make any morphological or emission cut to their galaxy sample, whereas we study a sample of spectroscopically early-type galaxies). These studies, however, characterized the environmental dependence of the SPPs only along galaxy mass, whereas we have said that the SPPs have been shown to comprise at least a 2D family, with at least two structural controlling parameters ($\sigma$ and $I_{e}$). 
We therefore ask a similar question as \citet{rogers10} and \citet{pasquali10} except, extending upon the work of \citet{graves10b}, we study differences in the stellar population properties of quiescent early-type galaxies with environment at fixed position in FP-space. When we compare galaxies at the same place in FP-space, do there exist further trends in these SPPs with environment? The answer to this question has important implications for the formation processes of ETGs in different environments, as manifested in their SFHs. We select a sample of 28,954 quiescent early-type galaxies from the SDSS DR7. We map these galaxies and their derived stellar population properties age, [Fe/H], and [Mg/Fe] onto and through the Fundamental Plane. We also divide our galaxy sample into three classifications of environment, derived from the galaxy group catalogues of \citet{yang07}. After confirming the trends seen in the SPPs with the relevant FP parameters by \citet{graves10b} in our own sample, we then go on to quantify any differences in the SPPs of our sample with environment at fixed FP position. In section 2 we describe the data used in this analysis, including sample selection. In section 3 we describe our analysis, including sample classification and stellar population analysis. In section 4 we present our results for variations in the stellar population properties of our quiescent early-type galaxy sample with local group environment at fixed structure. In section 5 we discuss our results in the context of a few late-time evolutionary processes. Finally, section 6 summarizes our conclusions. | In this analysis we used very high S/N, stacked spectra of $\sim$29,000 SDSS quiescent early-type galaxies to study variations in the stellar population properties age, [Fe/H], and [Mg/Fe] with local group environment (BGG, Isolated, and Satellite) at fixed position along and through the Fundamental Plane. By fixing galaxies along the Fundamental Plane parameters $\sigma$ and $\Delta I_{e}$ which were previously shown to be well-correlated to the star formation histories of early-type galaxies (\citealt{graves10b, springob12}), we were able to study variations in the stellar population properties of early-type galaxies due solely to environment. We find the following results for the stellar populations of quiescent early-type galaxies: \begin{enumerate} \item We confirm the trends in the stellar population properties with galaxy structure seen by \citet{graves10b} and \citet{springob12}: the ages, [Fe/H], and [Mg/Fe] of our galaxy sample all increase with $\sigma$. Along decreasing $\Delta I_{e}$, galaxy age and [Mg/Fe] increase while [Fe/H] decreases. \item Our central result is that, to zeroth-order, the star formation histories of our early-type galaxy sample are fully captured by the structural parameters $\sigma$ and $\Delta I_{e}$, and any differences in the star formation histories with environment at fixed structure are only slight. The SFH-structure correlation we observe constrains the degree to which late-time evolutionary processes can alter the SFHs/structures of early-type galaxies in our sample. \item On top of the zeroth-order SFH-structure correlation, there are slight variations in the SFHs of early-type galaxies in our sample with environment: Isolated galaxies have the youngest ages, while BGGs are 0.02 dex older, and Satellites have the oldest stellar populations, 0.04 dex older than Isolateds. 
BGGs are found to have the highest Fe-enrichments, 0.01 dex higher than Isolateds and 0.02 dex higher than Satellites. Satellites and Isolateds have equal Fe-enrichments. There are no differences in Mg-enhancement between BGG, Isolated, and Satellite galaxies. \end{enumerate} Quiescent early-type galaxies in our sample obey a SFH-structure correlation that is determined early-on and preserved throughout late-time evolution. On top of this correlation there are only slight trends in SFH with environment. Although satellite quenching is found not to be the main mechanism causing the truncation sequence observed along the $\Delta I_{e}$ dimension of FP-space, as proposed by \citet{graves10b}, our observation that Satellites are slightly offset to older ages than BGGs and Isolateds is consistent with a weak, slow satellite quenching process. Although we do not see any slight offset of BGGs to younger ages than Isolateds, as one might expect from cooling flows, this could be due to quenching in the more massive haloes in which BGGs reside. The strong SFH-structure correlation we observe may well be inconsistent with a large amount of dry minor merging in our sample. Future numerical simulations are needed to place quantitative constraints on the degree to which dry minor merging can affect our galaxy sample. | 14 | 3 | 1403.7836 |
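Illustrative sketch only (column names, bin edges and the catalogue file are assumptions): the comparison "at fixed position in FP-space" described above amounts to binning galaxies in ($\sigma$, $\Delta I_e$) and contrasting the stellar-population properties of the environment classes within each bin, for example:
\begin{verbatim}
import numpy as np
import pandas as pd

# hypothetical catalogue with columns: log_sigma, dIe, log_age, env
cat = pd.read_csv("etg_catalog.csv")

# fix the position in FP-space by binning in (log sigma, Delta I_e)
cat["sig_bin"] = pd.cut(cat["log_sigma"], np.arange(1.9, 2.55, 0.1))
cat["dIe_bin"] = pd.cut(cat["dIe"], np.arange(-0.3, 0.35, 0.1))

med = (cat.groupby(["sig_bin", "dIe_bin", "env"], observed=True)["log_age"]
          .median()
          .unstack("env"))

# median Satellite-minus-BGG age offset at fixed structure
print((med["Satellite"] - med["BGG"]).dropna().median())
\end{verbatim}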
1403 | 1403.5372_arXiv.txt | Recent accurate measurements of cosmic-ray (CR) species by ATIC-2, CREAM, and PAMELA reveal an unexpected hardening in the proton and He spectra above a few hundred GeV, a gradual softening of the spectra just below a few hundred GeV, and a harder spectrum of He compared to that of protons. These newly-discovered features may offer a clue to the origin of high-energy CRs. We use the \fermi{} Large Area Telescope observations of the \gray{} emission from the Earth's limb for an indirect measurement of the local spectrum of CR protons in the energy range $\sim 90$~GeV--6~TeV (derived from a photon energy range 15~GeV--1~TeV). Our analysis shows that single power law and broken power law spectra fit the data equally well and yield a proton spectrum with index $2.68 \pm 0.04$ and $2.61 \pm 0.08$ above $\sim 200$~GeV, respectively. | 14 | 3 | 1403.5372 |
||
1403 | 1403.2287_arXiv.txt | Binarity is often invoked to explain peculiarities that cannot be explained by the standard theory of stellar evolution. Detecting orbital motion via the Doppler effect is the best method to test binarity when direct imaging is not possible. However, when the orbital period exceeds the length of a typical observing run, monitoring often becomes problematic. Placing a high-throughput spectrograph on a small semi-robotic telescope allowed us to carry out a radial velocity survey of various types of peculiar evolved stars. In this review we highlight some findings after the first four years of observations. Thus, we detect eccentric binaries among hot subdwarfs, barium stars, S stars, and post-AGB stars with disks, which are not predicted by the standard binary interaction theory. In disk objects, in addition, we find signs of on-going mass transfer to the companion, and an intriguing line splitting, which we attribute to the scattered light of the primary. | HERMES is the state-of-the-art fiber echelle spectrograph attached to the Flemish 1.2-m telescope Mercator on La Palma (Raskin {\em et al.\/} \cite{Raskin2011}). Our survey is based on the use of a high-resolution mode ($R=85,000$). The high throughput allows us to observe stars up to $V=14$ (S/N$\sim$20 in 1 hr), while the temperature-controlled environment allows radial velocity (RV) determination with a precision better than 200 m s$^{-1}$. Spectra are reduced with the \textsc{python}-based pipeline, and RVs are determined by cross-correlation using line masks adapted for a range of spectral types and metallicities. The survey of post-main sequence (MS) stars is the largest program that has been continuously run since the commissioning of HERMES in 2009. The observations are carried out in queue mode by the HERMES consortium members led by the Catholic University of Leuven. Stars are observed with frequencies ranging from once per week to several times per semester, depending on the expected time-scale of variability. This mode of operation allows us to collect a unique time series of spectra. The main goal of the survey is to verify, by means of RV monitoring, binarity (or the absence of it in the control samples) among all main groups of post-MS stars where binarity has been invoked to explain various peculiarities: post-asymptotic giant branch (post-AGB) stars with hot and cold dust, central stars of planetary nebulae (PNe), silicate J-type stars, subdwarfs, symbiotics, chemically peculiar giants, R CrB and W Ser types, and several others (see Van Winckel {\em et al.\/} \cite{VanWinckel2010}). Some of these classes must reflect different evolutionary stages of the same systems, but the empirical evidence for this is largely missing. By characterizing the orbits, stellar properties, and dynamics of the circumstellar matter, we hope to fill in this gap. In this contribution we present some major findings of the first four years of the survey and their implications for the theory of binary evolution. | Within the HERMES survey of evolved stars we detected many suspected binaries and characterized their orbits, abundances, and circumstellar environments. In this contribution we presented evidence for the existence of eccentric and long-period systems that cannot be explained by the standard binary interaction theory. In post-AGB stars with disks we detected, in addition, active mass transfer and a scattered-light component in the spectra.
We plan to employ Doppler tomography and interferometry to follow up on these spectroscopic discoveries. | 14 | 3 | 1403.2287 |
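Illustrative sketch (not the HERMES reduction pipeline): the cross-correlation of a spectrum with a line mask mentioned above can be sketched as follows; the input files, mask format and velocity grid are assumptions.
\begin{verbatim}
import numpy as np

c = 299792.458                                         # km/s
wave, flux = np.loadtxt("spectrum.txt", unpack=True)   # hypothetical spectrum
mask_lines = np.loadtxt("line_mask.txt")               # hypothetical rest wavelengths

norm_flux = flux / np.median(flux)
velocities = np.arange(-150.0, 150.0, 0.5)             # trial shifts, km/s
ccf = np.zeros_like(velocities)

for i, v in enumerate(velocities):
    shifted = mask_lines * (1.0 + v / c)               # Doppler-shift the mask
    # sum the line depths sampled at the shifted mask positions
    ccf[i] = np.sum(1.0 - np.interp(shifted, wave, norm_flux))

# parabolic refinement around the CCF peak gives the radial velocity
k = np.argmax(ccf)
p = np.polyfit(velocities[k - 2:k + 3], ccf[k - 2:k + 3], 2)
print("RV ~ %.2f km/s" % (-p[1] / (2.0 * p[0])))
\end{verbatim}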
1403 | 1403.8066_arXiv.txt | We present results on the effect of the use of a stiffer equation of state, namely the ideal-fluid $\Gamma=2.75$ one, on the dynamical bar-mode instability in rapidly rotating polytropic models of neutron stars in full General Relativity. We determine the change in the critical value of the instability parameter $\beta$ for the emergence of the instability when the adiabatic index $\Gamma$ is changed from 2 to 2.75 in order to mimic the behavior of a realistic equation of state. In particular, we show that the threshold for the onset of the bar-mode instability is reduced by this change in the stiffness and give a precise quantification of the change in value of the critical parameter $\beta_c$. We also extend the analysis to lower values of $\beta$ and show that low-beta shear instabilities are also present in the case of matter described by a simple polytropic equation of state. | \label{sec:intro} Non-axisymmetric deformations of rapidly rotating self-gravitating bodies are a rather generic phenomenon in nature and could appear in a variety of astrophysical scenarios like stellar core collapses~\cite{Shibata:2004kb,Ott:2005gj}, accretion-induced collapse of white dwarfs~\cite{Burrows:2007yx}, or the merger of two neutron stars~\cite{Shibata:2003ga,Shibata:2005ss}. Over the years, a considerable amount of work has been devoted to the search for unstable deformations that, starting from an axisymmetric configuration, can lead to the formation of highly deformed rapidly rotating massive objects~\cite{Shibata:2000jt,Baiotti:2006wn,Kruger:2009nw,Kastaun:2010vw,Lai:1994ke}. Such deformations would lead to an intense emission of high-frequency gravitational waves (i.e. in the kHz range), potentially detectable on Earth by next-generation gravitational-wave detectors such as Advanced LIGO~\cite{Harry:2010}, Advanced VIRGO and KAGRA~\cite{KAGRA:2012} in the next decade~\cite{LIGOVIRGO:2013}. From the observational point of view, it is important to gain insight into the possible astrophysical scenarios where such instabilities (unstable deformations) are present. It is well known that rotating neutron stars are subject to non-axisymmetric instabilities for non-radial axial modes with azimuthal dependence $\mathrm{e}^{i m \phi}$ (with $m = 1,2,\ldots$) when the instability parameter $\beta \equiv T/|W|$ (i.e. the ratio between the kinetic rotational energy $T$ and the gravitational potential energy $W$) exceeds a critical value $\beta_c$. The instability parameter plays an important role in the study of the so-called dynamical bar-mode instability, i.e. the $m=2$ instability which takes place when $\beta$ is larger than a threshold~\cite{Baiotti:2006wn}. Previous results for the onset of the classical bar-mode instability have already shown that the critical value $\beta_c$ for the onset of the instability is not a universal quantity and that it is strongly influenced by the rotational profile~\cite{Shibata:2003yj,KarinoEriguchi03}, by relativistic effects~\cite{Shibata:2000jt,Baiotti:2006wn}, and, in a quantitative way, by the compactness~\cite{Manca:2007ca}. However, up to now, significant evidence of their presence when realistic Equations of State (EOS) are considered is still missing. For example, in \cite{Corvino:2010}, using the unified SLy EOS~\cite{Douchin01}, the presence of shear instabilities was shown, but no sign of the classical bar-mode instability or of its critical behavior was found.
The main aim of the present work is to gain more insight into the behavior of the classical bar-mode instability when the matter is described by a stiffer, more realistic EOS. Investigations in the literature of its dependence on the stiffness of the EOS have usually focused on values of $\Gamma$ (i.e. the adiabatic index of a polytropic EOS) in the range between $\Gamma=1$ and $\Gamma=2$~\cite{Lai:1994ke,2007PhRvD..76b4019Z,Kastaun:2010vw}, while the expected value for a real neutron star is more likely to be around $\Gamma=2.75$, at least in large portions of the interior. Such a choice for the EOS has already been implemented in the past~\cite{Oechslin2007aa}, even quite recently~\cite{Giacomazzo:2013uua}, with the aim of maintaining the simplicity of a polytropic EOS and yet obtaining properties that resemble a more realistic case. Indeed, as shown in Fig.~\ref{fig:EOSs}, a polytropic EOS with $K=30000$ and $\Gamma=2.75$ is qualitatively similar to the Shen proposal~\cite{shen98,shen98b} in the density interval between $2 \times 10^{13}\,\text{g/cm}^3$ and $10^{15}\,\text{g/cm}^3$. For the sake of completeness, in Fig.~\ref{fig:EOSs} we also report the behavior of the $\Gamma=2$ polytrope used in~\cite{Baiotti:2006wn,Manca:2007ca} and of the unified SLy EOS~\cite{Douchin01}, which describes the high-density cold (zero temperature) matter via a Skyrme effective potential for the nucleon-nucleon interactions~\cite{Corvino:2010}. The organization of this paper is as follows. In Sect.~\ref{sec:setup} we describe the main properties of the relativistic stellar models we investigated and briefly review the numerical setup used for their evolutions. In Sect.~\ref{sec:results} we present and discuss our results, showing the features of the evolution for models that lie both above and below the threshold for the onset of the bar-mode instability and quantifying the effects of the compactness on the onset of the instability. Conclusions are finally drawn in Sect.~\ref{sec:conclusions}. Throughout this paper we use a space-like signature $-,+,+,+$, with Greek indices running from 0 to 3, Latin indices from 1 to 3 and the standard convention for summation over repeated indices. Unless otherwise stated, all quantities are expressed in units in which $c=G=M_\odot=1$. | \label{sec:conclusions} We have presented a study of the dynamical bar-mode instability in differentially rotating NSs in full General Relativity for a wide and systematic range of values of the rotational parameter $\beta$ and the conserved baryonic mass $M_0$, using a polytropic/ideal-fluid EOS characterized by a value of the adiabatic index $\Gamma=2.75$, which allows us to mimic the properties of a realistic EOS. In particular, we have evolved a large number of NS models belonging to five different sequences with a constant rest-mass ranging from $0.5$ to $2.5 \, M_\odot$, with a fixed degree of differential rotation ($\hat{A} = 1$) and with many different values of $\beta$ in the range $[0.140,0.272]$. For all the models with a sufficiently high initial value of $\beta$ we observe the expected exponential growth of the $m=2$ mode, which is characteristic of the development of the dynamical bar-mode instability. We compute the growth time $\tau_2$ for each of these bar-mode unstable models by performing a nonlinear least-squares fit using a trial function for the quadrupole moment of the matter distribution.
The growth time clearly depends on both the rest-mass and the rotation, and in particular we find that the relation between the instability parameter $\beta$ and the inverse square of $\tau_2$, for each sequence of constant rest-mass, is linear. This allows us to extrapolate the threshold value $\beta_c$ for each sequence, corresponding to the growth time going to infinity, using the same procedure already employed in \cite{Manca:2007ca}. Once the five values of $\beta_c$ have been computed, we are able to extrapolate the critical value of the instability parameter for the Newtonian limit, which is found to be $\beta_c^N |_{\Gamma = 2.75} = 0.2527$. This value can be directly compared with the one found in \cite{Manca:2007ca} for the ``standard'' $\Gamma = 2$ case, which is $\beta_c^N |_{\Gamma = 2}=0.266$. Our results suggest that, even though one can at present consider just two values for the adiabatic index, namely the values $\Gamma = 2.75$, considered in the present work, and $\Gamma = 2$, considered in \cite{Baiotti:2006wn,Manca:2007ca}, the use of a stiffer, more realistic EOS should be expected to have the effect of reducing the threshold for the onset of the dynamical bar-mode instability. Unfortunately, the actual reduction of the threshold $\beta_c$ is just of the order of $5\%$, and indeed this reduction does not lead to a significantly higher probability for the instability to occur in real astrophysical scenarios. We also evolved many models belonging to the same five sequences but having lower values of the instability parameter $\beta$. We find that many of them show the growth of one or more modes even though their initial value of $\beta$ is below the threshold for the onset of the dynamical bar-mode instability. The modes that show growth are mainly the $m=2$ and $m=3$ ones. We compute the frequencies of these growing modes and compare them with the corotation band for their progenitor models, finding that all those frequencies are within this band. We can conclude that such instabilities have to be classified as \textit{shear instabilities}, like the ones that were already observed in \cite{Corvino:2010}. Unfortunately, we are not able to measure their growth time, since their dynamics change significantly with the resolution of the simulations. In fact, while at a coarse resolution we usually observe only one mode growing exponentially, when improving the resolution other modes develop as well and the interplay between these prevents a clear exponential growth of a single mode that could dominate the evolution. In order to make a quantitative assessment of this phenomenon, either much higher resolution has to be used to see if one of the modes is able to dominate, or seed perturbations have to be introduced with the aim of selecting only a particular mode at a time. We leave this treatment to future studies. | 14 | 3 | 1403.8066 |
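Explicit form of the extrapolation described in the entry above (the linear functional form is implied by the stated results rather than written out there): for each constant rest-mass sequence the fitted growth times obey, to a good approximation,
\begin{equation}
\frac{1}{\tau_{2}^{2}} \simeq c \left( \beta - \beta_{c} \right),
\end{equation}
so that the threshold $\beta_c$ is read off as the value of $\beta$ at which $1/\tau_2^2$ vanishes, i.e. where the growth time diverges.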
1403 | 1403.3697_arXiv.txt | We study the dissipation of small-scale adiabatic perturbations at early times when the Universe is hotter than $T \simeq 0.5\,\keV$. When the wavelength falls below the damping scale $\kD^{-1}$, the acoustic modes diffuse and thermalize, causing entropy production. Before neutrino decoupling, $\kD$ is primarily set by the neutrino shear viscosity, and we study the effect of acoustic damping on the relic neutrino number, primordial nucleosynthesis, dark-matter freeze-out, and baryogenesis. This sets a new limit on the amplitude of primordial fluctuations of $\Delta_{\mathcal R}^2 < 0.007$ at $10^4 \,\Mpc^{-1}\lesssim k\lesssim 10^5\,\Mpc^{-1}$ and a model dependent limit of $\Delta_{\mathcal R}^2 \lesssim 0.3$ at $k \lesssim 10^{20-25}{\rm Mpc}^{-1}$. | 14 | 3 | 1403.3697 |
||
1403 | 1403.6417_arXiv.txt | We present synthetic \HI\ and CO observations of a numerical simulation of decaying turbulence in the thermally bistable neutral medium. We first present the simulation, which produces a clumpy medium, with clouds initially consisting of clustered clumps. Self-gravity causes these clump clusters to merge and form more homogeneous dense clouds. We apply a simple radiative transfer algorithm, throwing rays in many directions from each cell, and defining every cell with $\avgAv\ > 1$ as molecular. We then produce maps of \HI, CO-free molecular gas, and CO, and investigate the following aspects: i) The spatial distribution of the warm, cold, and molecular gas, finding the well-known layered structure, with molecular gas being surrounded by cold \HI\ and this in turn being surrounded by warm \HI. ii) The velocity of the various components, finding that the atomic gas is generally flowing towards the molecular gas, and that this motion is reflected in the frequently observed bimodal shape of the \HI\ profiles. This conclusion is, however, tentative, because we do not include feedback that may produce \HI\ gas receding from molecular regions. iii) The production of \HI\ self-absorption (HISA) profiles, and the correlation of HISA with molecular gas. In particular, we test the suggestion of using the second derivative of the brightness temperature \HI\ profile to trace HISA and molecular gas, finding significant limitations. On a scale of several parsecs, some agreement is obtained between this technique and actual HISA, as well as a correlation between HISA and the molecular gas column density. This correlation, however, quickly deteriorates towards sub-parsec scales. iv) The column density PDFs of the actual \HI\ gas and those recovered from the \HI\ line profiles, finding that the latter have a cutoff at column densities where the gas becomes optically thick, thus missing the contribution from the HISA-producing gas. We also find that the power-law tail typical of gravitational contraction is only observed in the molecular gas, and that, before the power-law tail develops in the total gas density PDF, no CO is yet present, reinforcing the notion that gravitational contraction is needed to produce this component. | \begin{figure*} \resizebox{0.9\columnwidth}{!}{\includegraphics{Figures/3d_t096_rha_1_3_rhm_044_13}} \resizebox{0.9\columnwidth}{!}{\includegraphics{Figures/3d_t192_rha_15_3_rhm_044_50}} \resizebox{0.9\columnwidth}{!}{\includegraphics{Figures/3d_t259_rha_15_3_rhm_044_50}} \caption{\label{fig:3dplots} Three-dimensional distribution of the atomic density (green) and the molecular density (red) at the three timesteps through iso-surface renderings made with Starlink's {\sc GAIA} \citep[see e.g.][]{2007ASPC..376..695D}. Two green intensity levels were used, of 1 and 3, 1.5 and 3, and 1.5 and 3 $M_{\odot}\ \rm{pc}^{-3}$, for the timesteps at 12.5, 25.0, and 33.7 Myr, respectively. Similarly, red levels of 0.44 and 13, 0.44 and 50, and 0.44 and 50 $M_{\odot}\ \rm{pc}^{-3}$, were respectively used at each of the timesteps to image the molecular gas density. The numbered axes correspond to the x, y and z axes respectively (with the x-axis pointing towards the lower left corner. 
\textit{A color version this figure is available in the online version of this journal.}} \end{figure*} The present view of the interstellar medium (ISM) is that it is in general highly turbulent \citep[e.g.,][]{2000ApJ...540..271V,2004RvMP...76..125M,2004ARA&A..42..211E,2007RMxAA..43..123B}, and that dense, cold clouds form where turbulent compressions or larger-scale instabilities produce converging flows that in turn cause the density to increase locally \citep[e.g.,][]{2014MNRAS.437L..31D,2014arXiv1402.6196M}. Indeed, numerical simulations of converging flows in the warm neutral medium (WNM) including self-gravity but no stellar feedback \citep{2007ApJ...657..870V,2011MNRAS.414.2511V,2008A&A...486L..43H,2008ApJ...689..290H,2009ApJ...704.1735H,2009pjc..book..421B} show in general that, once a dense cloud is formed by this mechanism, it quickly becomes gravitationally unstable, and begins to undergo gravitational collapse. An important feature of this collapse is that it begins in gas that should be primarily atomic, with molecule formation occurring as a consequence of the gravitational contraction, as initially proposed on theoretical grounds by \citet{1986PASP...98.1076F} and \citet{2001ApJ...562..852H}. Simulations including a self-consistent treatment of the chemistry \citep[e.g.,][]{2012MNRAS.424.2599C,2013arXiv1306.5714C} indeed show that this is so, and that H$_2$ molecule formation occurs relatively early during the collapse, while CO formation only occurs some 2 Myr before star formation (SF) starts. Finally, simulations including stellar feedback \citep{2010ApJ...715.1302V,2013MNRAS.tmp.2055C} show that the infalling motions of the dense gas are not overturned by the action of the feedback. Rather, the clouds are progressively evaporated, with the escaping gas being warm and diffuse, while the dense gas continues to fall in. The above picture of molecular clouds formed by flows implies that CO-identified molecular clouds (MCs) should be surrounded by cold atomic gas, perhaps mixed with CO-dark molecular gas, and that this medium should in turn be embedded in warm \HI, similarly to the classical picture of the ISM \citep[see, e.g., the review by ][and references therein]{1993prpl.conf..125B}, except for the additional property that the atomic and CO-dark components are expected to be flowing towards the MCs. Observations partially support this view, since giant molecular clouds (GMCs), whose largest dimension reaches up to $\sim 100$ pc, appear to be the densest regions in the ISM, and are known to be embedded in CO-free molecular gas, which in turn is embedded in \HI\ superclouds, of sizes of up to 1 or 2 kpc \citep[see, e.g.,][and references therein]{2014arXiv1402.6196M}. In order to distinguish whether molecular clouds are formed by \HI\ flows, it is necessary to establish the motions of the \HI. However, observationally establishing the direction of the motions that produce GMCs is a formidable problem, due to the confusion caused by the ubiquitous presence of \HI\ gas in the Galactic disk. Several studies have been conducted to determine the signatures in line profiles and position-velocity (PV) space arising from the density and velocity features produced in the simulations. 
Early studies simply investigated line-of-sight (LOS) projections of the density field from numerical simulations, and perhaps investigated the column density in the velocity coordinate (line profiles and channel maps) \citep[e.g.,][] {1999ApJ...527..285B,2000ApJ...532..353P,2001ApJ...546..980O,2002ApJ...570..734B}, although without performing synthetic observations based on integration of the radiative transfer (RT) equation for the various lines involved. A further step was made by \citet{2002ApJ...570..734B} who, in order to study the internal structure of molecular clouds, created synthetic CO and CS line profiles in local thermal equilibrium from numerical simulations of isothermal molecular clouds. They found that the density-size relation, rather than an intrinsic property of molecular clouds, is an artifact of the observational procedure \citep[see also][]{2012MNRAS.427.2562B}. More recently, synthetic observations have been performed to varying degrees of approximation, and used to study to what extent the actual density and velocity structure of the atomic gas can be inferred from the line profiles \citep{2007A&A...465..445H}, or to show that infalling motions in the molecular gas produce realistic CO spectra: For example by matching the linewidths as well as the magnitude of the velocity dispersions seen in the $^{13}$CO filamentary structure of Galactic molecular clouds \citep{2009ApJ...704.1735H}. In this contribution we take one step further in this direction by combining synthetic observations of atomic {\it and} molecular gas from a numerical simulation of the formation of dense clouds in the turbulent ISM. These can help interpreting actual observations of MCs and their atomic envelopes, and help understanding the atomic-to-molecular transition in the ISM. Specifically, we use a simple radiative transfer (RT) algorithm on the output of a numerical simulation of decaying, self-gravitating turbulence in the ISM, in order to classify the gas as either being atomic or molecular, and then investigate various aspects, such as the spatial distribution of the molecular and cold and warm atomic components, as well as their velocities, and the signatures of these motions on the line profiles and intensity maps. An important issue to assess is the production of \HI\ self-absorption (HISA) features by the cold atomic gas expected to surround the CO clouds. Although HISA is perhaps the most reliable method to detect cold \HI, it is not free from uncertainties and ambiguities. In particular, it is important to be able to distinguish between a true HISA feature, and lack of background emission. To this end, \citet{1974AJ.....79..527K} proposed four criteria, namely i) a fairly narrow dip (less than about 7 km s$^{-1}$) appearing in the \HI\ spectrum; ii) an \HI\ velocity feature corresponding to a molecular emission line; iii) a dip appearing `on-cloud' but not on the `off-cloud' calibration profile, and iv) a slope of the dip steeper than the slope of the background emission profile to exclude the possibility of a line profile composed of a double gaussian peak. Based on later findings, the HISA detection criteria changed somewhat, most notably since molecular line emission is not always detected when a molecular cloud is positively identified, for example through optical extinction. \citet{2000ApJ...540..851G} generalized the steepness requirement by stating that the profile wings need to be steeper than what superposition of neighboring emission lines would cause. 
Additionally, a certain amount of small-scale angular structure was required, as well as a certain minimum background level. An automated HISA detection algorithm taking into account spatial and spectral features was developed by \citet{2005ApJ...626..214G}, while another example was presented by \citet{2005ApJ...626..887K}, who used both the first and second derivative (thresholded at certain levels) of the \HI\ profile to find HISA. They found that 60\% of the HISA they detected coincides with molecular (CO) gas and proposed that HISA is related to an atomic-to-molecular phase transition. \citet{2005ApJ...622..938G} determined that the cold \HI\ gas coincides with $^{13}$CO in five dark clouds they considered, and their models show that the cold \HI\ in those structures has densities between 2 and 6 cm$^{-3}$ as compared to H$_2$ central densities of 800 to 3000 cm$^{-3}$. Where the absorption coincides with molecular lines and has approximately the same linewidth, it has been referred to as `\HI\ narrow self-absorption', or HINSA \citep{2003ApJ...585..823L}. Since strictly speaking we do not require molecular (CO) emission in order to identify \HI\ self-absorption, we will refer to these features as HISA regardless of whether they may be considered `narrow', as a matter of convenience. In general, \HI\ profiles typically look bi-modal on or near molecular clouds and it can be hard to distinguish without additional information what physical properties are behind this profile shape. In this contribution we therefore also aim to explore the extent to which we can distinguish true HISA from separate emission peaks, and how well the HISA is correlated with the presence of molecular gas at various evolutionary stages of the clouds. This paper is structured as follows: First, we introduce the simulation and the synthetic observations derived from it in Sec.\ \ref{sec:method}, and discuss the density and velocity structure of the various gas components in real space, emphasizing the general trend of inflow onto the dense gas in Sec.\ \ref{sec:gral_morph}. Then, in Sec.\ \ref{sec:synth_prof} we discuss the nature of the synthetic line profiles, and in Sec.\ \ref{sec:HISA_molec} we address the identification of HISA features and compare these features to those of the molecular gas. Next, we discuss the structure of the probability density functions of the simulation and the structure of the gas in the context of colliding gas flows. In Sec.\ \ref{sec:concl} we close with a brief discussion and summary of our results. | \label{sec:concl} \subsection{Limitations} \label{sec:limitations} One of the key missing features in the present study is the inclusion of supernova (SN) feedback, which should maintain the turbulence driving at the scales of our simulation. This may, in turn, have a significant effect on the evolution and structure of the clouds, although its relative importance compared to the gravitational driving of the motions in the clouds is uncertain. Supernovae tend to explode in regions that have been previously evacuated by ionizing radiation and winds from the massive stars, and numerical simulations of this scenario (albeit without self-gravity) suggest that the dense clouds are not strongly affected by the supernovae \citep[e.g.,][] {2004A&A...425..899D, 2012ApJ...750..104H}. In a future study, we plan to repeat our analysis in the presence of ionization and SN feedback, as well as the magnetic field and ambipolar diffusion. 
Another important and obvious improvement we can make to our model is a more detailed treatment of the formation and destruction of molecular gas \citep[cf. e.g.][]{2014arXiv1403.1589S}. For example, we assume that the formation of molecular gas happens on a timescale that is very small relative to the timesteps we considered, but if we follow the evolution of the gas particles more closely a minimum timescale for molecular gas formation can be imposed as well as a certain balance of photodissociation. As a consequence, the molecular and CO fractions in our simulation must be considered as upper limits. At least our approach provides a first order approximation to the evolution of atomic and molecular gas. \subsection{Summary} \label{sec:summary} We have presented a numerical simulation where `molecular' clouds are identified by using a \textit{post-facto} processing of the particles in the simulation volume: molecular gas was assumed to have formed if the local temperature had dropped below 50 K and the local average extinction, $\Av> 1$. We also created synthetic observations of \HI\ and CO, the latter assumed to exist at a grid cell if the conditions for molecular gas were satisfied and besides the local density satisfied $n > n_{\rm crit}$, where $n_{\rm crit}$ is the critical density for CO formation (cf.\ Sec.\ \ref{sec:synth_obs}). We used two different methods to identify (potential) HISA features. One, we used the amplitude of the second derivative of the \HI\ brightness temperature profile, dubbed `HISA strength'. Two, we identified HISA features by matching local minima in the \HI\ brightness temperature profile to peaks in the calculated opacity at the same velocity (that we call `HISA mask'), thereby distinguishing between dips in the profile caused by HISA and those caused by the absence of atomic gas. Although this method is superior because it uses local gas opacity information, it cannot be applied to actual observations since this kind of information is generally not available. Nevertheless, it provides a means of testing the goodness of the first method. We then compared the location and intensity of the HISA features with the presence of molecular gas and looked at the structure of the gas. Finally, we investigated the density and column-density PDFs of the gas, the latter obtained both directly from the simulation data and from the synthetic observations. We focused on three timesteps of the simulation, namely at $t=12.5$, 25 and 33.7 Myr. The first timestep corresponds to a time when the clouds are still developing and no SF is occurring in the simulation. The second corresponds to a time when SF is at an early phase, while the latter corresponds to a time when SF is copious. At this time, the effects of stellar feedback are clearly missing and should be included in future studies. Our main results were as follows: \begin{itemize} \item At the first of the timesteps studied, significant quantities of molecular gas exist, but no significant CO has formed yet. That is, the molecular gas is `CO-free'. Instead, at the last of the three timesteps, CO is abundant. This result is consistent with the scenario that gravitational contraction drives the formation of CO molecules \citep[cf.][]{2008ApJ...689..290H}. \item At the chosen resolution of our gridded data (0.5 pc pixel separation), there is a very poor spatial correlation between the HISA strength and the HISA mask. 
The correlation improves if matches with neighbouring pixels (in the projected plane of the sky, POS) are allowed. \item However, both HISA indicators show a weak but significant correlation with the molecular gas column density on a scale of a few to several parsecs on the POS. At smaller scales no correlation is visible. This suggests that HISA is located on the periphery of the molecular emission, rather than coincident with it. \item The volume and column density PDFs extracted from the simulation show the expected transition from a purely log-normal shape to one with a power-law tail when molecular clouds form. However, the power-law tail is only seen in the molecular components, and not in the \HI, suggesting again that molecule formation is directly correlated with gravitational infall. \item At least in our simulation, most of the multi-peaked \HI\ line profiles in the neighborhood of molecular clouds are caused by bulk \HI\ flows into the molecular clouds, rather than by \HI\ self-absorption. \end{itemize} | 14 | 3 | 1403.6417 |
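Minimal sketch of the two HISA indicators defined in the entry above (the toy profile, thresholds and variable names are assumptions, not values from the paper): the "HISA strength" is the amplitude of the second derivative of the brightness-temperature profile, while the "HISA mask" matches dips in $T_B(v)$ to peaks in the opacity, which is available only in the simulation.
\begin{verbatim}
import numpy as np

# toy HI brightness-temperature profile T_B(v) with an absorption dip,
# and the corresponding opacity tau(v) along the same line of sight
v = np.linspace(-20.0, 20.0, 401)                          # km/s
T_B = 80.0 - 35.0 * np.exp(-0.5 * ((v - 2.0) / 1.5) ** 2)  # K
tau = 1.2 * np.exp(-0.5 * ((v - 2.0) / 1.5) ** 2)

dv = v[1] - v[0]
d2T = np.gradient(np.gradient(T_B, dv), dv)

# indicator 1: "HISA strength" = amplitude of the (positive) second derivative
hisa_strength = np.clip(d2T, 0.0, None).max()

# indicator 2: "HISA mask" = local minima of T_B coinciding with opacity peaks
dips = (T_B < np.roll(T_B, 1)) & (T_B < np.roll(T_B, -1))
hisa_mask = dips & (tau > 0.5)

print(hisa_strength, v[hisa_mask])
\end{verbatim}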
1403 | 1403.1001_arXiv.txt | { \noindent The empirical mass of the Higgs boson suggests small to vanishing values of the quartic Higgs self--coupling and the corresponding beta function at the Planck scale, leading to degenerate vacua. This leads us to suggest that the measured value of the cosmological constant can originate from supergravity (SUGRA) models with degenerate vacua. This scenario is realised if there are at least three exactly degenerate vacua. In the first vacuum, associated with the physical one, local supersymmetry (SUSY) is broken near the Planck scale while the breakdown of the $SU(2)_W\times U(1)_Y$ symmetry takes place at the electroweak (EW) scale. In the second vacuum local SUSY breaking is induced by gaugino condensation at a scale which is just slightly lower than $\Lambda_{QCD}$ in the physical vacuum. Finally, in the third vacuum local SUSY and EW symmetry are broken near the Planck scale.} | The observation of the Higgs boson with a mass around $\sim 125-126$~GeV, announced by the ATLAS \cite{:2012gk} and CMS \cite{:2012gu} collaborations at CERN, is an important step towards our understanding of the mechanism of the electroweak (EW) symmetry breaking. It is also expected that further exploration of TeV scale physics at the LHC may lead to the discovery of new physics phenomena beyond the Standard Model (SM) that can shed light on the stabilisation of the EW scale. In the Minimal Supersymmetric (SUSY) Standard Model (MSSM) based on the softly broken SUSY the scale hierarchy is stabilized because of the cancellation of quadratic divergences (for a review see \cite{Chung:2003fi}). The unification of gauge coupling constants, which takes place in SUSY models at high energies \cite{5}, allows the SM gauge group to be embedded into Grand Unified Theories (GUTs) \cite{4} based on gauge groups such as $SU(5)$, $SO(10)$ or $E_6$. However, the cosmological constant in SUSY extensions of the SM diverges quadratically and excessive fine-tuning is required to keep its size around the observed value \cite{6}. Theories with flat \cite{7} and warped \cite{8} extra spatial dimensions also allow one to explain the hierarchy between the EW and Planck scales, providing new insights into gauge coupling unification \cite{9} and the cosmological constant problem \cite{10}. Despite the compelling arguments for physics beyond the SM, no signal or indication of its presence has been detected at the LHC so far. Of critical importance here is the observation that the mass of the Higgs boson discovered at the LHC is very close to the lower bound on the Higgs mass in the SM that comes from the vacuum stability constraint \cite{1}-\cite{2}. In particular, it has been shown that the extrapolation of the SM couplings up to the Planck scale leads to (see \cite{Buttazzo:2013uya}) \begin{equation} \lambda(M_{Pl})\simeq 0 \,, \qquad\quad \beta_{\lambda}(M_{Pl})\simeq 0\,, \label{2} \end{equation} where $\lambda$ is the quartic Higgs self--coupling and $\beta_{\lambda}$ is its beta--function. Eqs.~(\ref{2}) imply that the Higgs effective potential has two rings of minima in the Mexican hat with the same vacuum energy density \cite{12}. The radius of the little ring equals the EW vacuum expectation value (VEV) of the Higgs field, whereas in the second vacuum $\langle H\rangle \sim M_{Pl}$. 
The presence of such degenerate vacua was predicted \cite{12} by the so-called Multiple Point Principle (MPP) \cite{mpp}-\cite{mpp-nonloc}, according to which Nature chooses values of coupling constants such that many phases of the underlying theory should coexist. This scenario corresponds to a special (multiple) point on the phase diagram of the theory where these phases meet. The vacuum energy densities of these different phases are degenerate at the multiple point. In previous papers the application of the MPP to the two Higgs doublet extension of the SM was considered \cite{2hdm-1}--\cite{2hdm-2}. In particular, it was argued that the MPP can be used as a mechanism for the suppression of the flavour changing neutral current and CP--violation effects \cite{2hdm-2}. The success of the MPP in predicting the Higgs mass \cite{12} suggests that we might also use it for explaining the extremely low value of the cosmological constant. In particular, the MPP has been adapted to models based on $(N=1)$ local supersymmetry -- supergravity (SUGRA) \cite{Froggatt:2003jm}--\cite{Froggatt:2005nb}. As in the present article, we used the MPP assuming the existence of a vacuum in which the low--energy limit of the theory is described by a pure SUSY model in flat Minkowski space. Then the MPP implies that the physical vacuum and this second vacuum have the same vacuum energy densities. Since the vacuum energy density of supersymmetric states in flat Minkowski space is just zero, the cosmological constant problem is thereby solved to first approximation. However, the supersymmetry in the second vacuum can be broken dynamically when the SUSY gauge interaction becomes non-perturbative at the scale $\Lambda_{SQCD}$, resulting in an exponentially suppressed value of the cosmological constant which is then transferred to the physical vacuum by the assumed degeneracy \cite{Froggatt:2003jm}--\cite{Froggatt:2005nb}. A new feature of the present article is that we arrange for the hidden sector gauge interaction to give rise to a gaugino condensate near the scale $\Lambda_{SQCD}$. This condensate then induces SUSY breaking at an appreciably lower energy scale, via non-renormalisable terms. The results of our analysis indicate that the appropriate value of the cosmological constant in the second vacuum can be induced if $\Lambda_{SQCD}$ is rather close to $\Lambda_{QCD}$, that is near the scale where the QCD interaction becomes strong in the physical vacuum. In this paper we also argue that both the tiny value of the dark energy density and the small values of $\lambda(M_{Pl})$ and $\beta_{\lambda}(M_{Pl})$ can be incorporated into the (N=1) SUGRA models with degenerate vacua. This requires that SUSY is not broken too far below the Planck scale in the physical vacuum and that there exists a third vacuum, which has the same energy density as the physical and second vacuum. In this third vacuum local SUSY and EW symmetry should be broken near the Planck scale. Our attempt to estimate the small deviation of the cosmological constant from zero relies on the assumption that the physical and SUSY Minkowski vacua are degenerate to very high accuracy. Although in the next section we argue that in the framework of the $(N=1)$ supergravity the supersymmetric and non--supersymmetric Minkowski vacua can be degenerate, it does not shed light on the possible mechanism by which such an accurate degeneracy may be maintained. 
In principle, a set of approximately degenerate vacua can arise if the underlying theory allows only vacua which have similar order of magnitude of space-time 4-volumes at the final stage of the evolution of the Universe\footnote{This may imply the possibility of violation of a principle that future can have no influence on the past \cite{mpp-nonloc}.}. Since the sizes of these volumes are determined by the expansion rates of the corresponding vacua associated with them, only vacua with similar order of magnitude of dark energy densities are allowed. Thus all vacua are degenerate to the accuracy of the value of the cosmological constant in the physical vacuum. The paper is organized as follows: In the next section we specify an $(N=1)$ SUGRA scenario that leads to the degenerate vacua mentioned above. In sections 3 and 4 we estimate the dark energy density in such a scenario and discuss possible implications for Higgs phenomenology. Our results are summarized in section 5. | In this note, inspired by the observation that the mass of the recently discovered Higgs boson leads naturally to Eq.~(\ref{2}) and degenerate vacua in the Standard Model, we have argued that SUGRA models with degenerate vacua can lead to a rather small dark energy density, as well as small values of $\lambda(M_{Pl})$ and $\beta_{\lambda}(M_{Pl})$. This is realised in a scenario where the existence of at least three exactly degenerate vacua is postulated. In the first (physical) vacuum SUSY is broken near the Planck scale and the small value of the cosmological constant appears as a result of the fine-tuned precise cancellation of different contributions. In the second vacuum the breakdown of local supersymmetry is induced by gaugino condensation, which is formed at the scale $\Lambda_{SQCD}$ where hidden sector gauge interactions become strong. If $\Lambda_{SQCD}$ is slightly lower than $\Lambda_{QCD}$ in the physical vacuum, then the energy density in the second vacuum is rather close to $10^{-120}M_{Pl}^4$. Because of the postulated degeneracy of vacua, this tiny value of the energy density is transferred to the other vacua including the one where we live. In the case of the hidden sector gauge group being $SU(3)$, the measured value of the cosmological constant \cite{6} is reproduced for a value of $\alpha_X(M_{Pl})$ which is only slightly above that of the strong gauge coupling at the Planck scale in the physical vacuum. Finally, the presence of the third degenerate vacuum, where local SUSY and EW symmetry are broken somewhere near the Planck scale, can constrain $\lambda(M_{Pl})$ and $\beta_{\lambda}(M_{Pl})$ in the physical vacuum. This may happen if the VEV of the Higgs field is considerably smaller than $M_{Pl}$ (say $\langle H\rangle \lesssim M_{Pl}/10$). Then the large Higgs VEV may not affect much the VEVs of the hidden sector fields. As a consequence $m^2$ in the Higgs effective potential is expected to be much smaller than $M_{Pl}^2$ and $\langle H^{\dagger} H\rangle$ in the third vacuum. Thus the existence of such a third vacuum with vanishingly small energy density would still imply that $\lambda(M_{Pl})$ and $\beta_{\lambda}(M_{Pl})$ are approximately zero in this vacuum. Since we are taking the VEVs of the hidden sector fields to be almost identical in the physical and third vacua, we also expect $\lambda(M_{Pl})$ and $\beta_{\lambda}(M_{Pl})$ to be almost the same. Consequently we obtain $\lambda(M_{Pl})\approx \beta_{\lambda}(M_{Pl})\approx 0$ in the physical vacuum. 
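As a purely numerical illustration of the $10^{-120}M_{Pl}^4$ figure quoted above, the observed dark energy density can be expressed in units of the fourth power of the reduced Planck mass. The input values below (reduced Planck mass $\simeq 2.4\times10^{18}$~GeV and a present dark energy density $\simeq (2.3~\mbox{meV})^4$) are standard numbers assumed here for the order-of-magnitude check, not quantities taken from this paper.
\begin{verbatim}
M_pl  = 2.435e18        # reduced Planck mass in GeV (assumed standard value)
rho_L = (2.3e-12)**4    # observed dark energy density ~ (2.3 meV)^4, in GeV^4
print(rho_L / M_pl**4)  # ~ 8e-121, i.e. of order 10^-120 in Planck units
\end{verbatim}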
It is worth noting that our estimate of the tiny value of the cosmological constant makes sense only if the vacua mentioned above are degenerate to very high accuracy. The identification of a mechanism that can give rise to a set of vacua which are degenerate to such high accuracy is still a work in progress. Here we just remark that vacua with very different dark energy densities should result in very different expansion rates and ultimately in very different space--time volumes for the Universe. If the underlying theory allows only vacua which lead to space-time 4-volumes of similar order of magnitude, then such vacua should be degenerate to the accuracy of the value of the dark energy density in the physical vacuum. \vspace{-5mm} | 14 | 3 | 1403.1001
1403 | 1403.4624_arXiv.txt | We combine new and archival \Chandra\ observations of the globular cluster NGC 6752 to create a deeper X-ray source list, and study the faint radio millisecond pulsars (MSPs) of this cluster. We detect four of the five MSPs in NGC 6752, and present evidence for emission from the fifth. The X-rays from these MSPs are consistent with thermal emission from the neutron star surfaces, with significantly higher fitted blackbody temperatures than other globular cluster MSPs (though we cannot rule out contamination by nonthermal emission or other X-ray sources). NGC 6752 E is one of the lowest-$L_X$ MSPs known, with $L_X$(0.3-8 keV)=$1.0^{+0.9}_{-0.5}\times10^{30}$ ergs s$^{-1}$. We check for optical counterparts of the three isolated MSPs in the core using new HST ACS images, finding no plausible counterparts, which is consistent with their lack of binary companions. We compile measurements of $L_X$ and spindown power for radio MSPs from the literature, including errors where feasible. We find no evidence that isolated MSPs have lower $L_X$ than MSPs in binary systems, omitting binary MSPs showing emission from intrabinary wind shocks. We find weak evidence for an inverse correlation between the estimated temperature of the MSP X-rays and the known MSP spin period, consistent with the predicted shrinking of the MSP polar cap size with increasing spin period. | \label{s:intro} The cores of globular clusters (GCs) may reach high stellar densities, up to $10^6$ times that of local space, that can lead to significant dynamical interactions, producing compact binary systems that can engage in mass transfer. Thus, GCs are very efficient at producing interacting binary stars, including low-mass X-ray binaries \citep[LMXBs,][]{Clark75}, radio millisecond pulsars \citep[MSPs,][]{Johnston92}, and cataclysmic variables \citep[CVs,][]{Pooley03}. MSPs are the progeny of LMXB evolution, in which a low mass star transfers angular momentum to a neutron star (NS), spinning up the rotational period of the NS to millisecond timescales \citep{Bhattacharya91,Papitto13}. MSPs can produce both thermal and nonthermal X-rays \citep{Becker02, Zavlin02, Zavlin07}. The nonthermal radiation (dominant in the MSPs with the highest spindown power, \.{E}) is attributed to the pulsar magnetosphere, is generally highly beamed (and thus sharply pulsed), and typically described by a power-law with a photon index $\sim$1.1-1.2 \citep{Becker99,Zavlin07}. The thermal radiation is blackbody-like radiation from a portion of the NS surface around the magnetic poles, heated by a flow of relativistic particles in the pulsar magnetosphere to $\sim$1 MK \citep{Harding02}. The X-ray spectra and rotation-induced pulsations of the nearby MSPs that exhibit thermal radiation are well-described by hydrogen atmosphere models \citep{Zavlin98,Bogdanov07,Bogdanov09}. X-ray observations of a large sample of MSPs allow study of how the thermal radiation from MSPs relates to other pulsar parameters \citep{Kargaltsev12}. Due to the high density of MSPs in GCs, and the well-known distances and reddening to GCs, GCs are ideal targets for such studies. NGC 6752 is a GC located at a distance of $4.0 \pm 0.2$ kpc \citep[][2010 revision]{Harris96}.\footnote{http://physwww.physics.mcmaster.ca/$\sim$harris/mwgc.dat} Its reddening of $E_{B-V}=0.046$ \citep{Gratton05} can be converted to a neutral gas column of $N_H=3.2\times10^{20}$ cm$^{-2}$ using the relation of \citet{Guver09}. 
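The quoted neutral gas column follows directly from the reddening; a minimal sketch of the conversion, assuming the standard ratio $R_V = A_V/E_{B-V} = 3.1$ and the coefficient $N_H \simeq 2.21\times10^{21}\,A_V$~cm$^{-2}$ of \citet{Guver09} (both standard values, not restated in the text), is:
\begin{verbatim}
E_BV = 0.046              # reddening of NGC 6752 (Gratton et al. 2005)
A_V  = 3.1 * E_BV         # optical extinction, assuming R_V = 3.1
N_H  = 2.21e21 * A_V      # Guver & Ozel (2009) relation, in cm^-2
print("%.1e" % N_H)       # ~3.2e20 cm^-2, the value adopted in the text
\end{verbatim}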
The center of the cluster has been measured, using {\it Hubble Space Telescope (HST)} images, to be at (J2000) $19^h10^m52^s.11$,~-59\deg 59\arcmin 04.4\arcsec (\cite{Goldsbury2010}). We adopt a core radius of 10.2\arcsec, and half-mass radius of 1.91\arcmin \citep[][2010 revision]{Harris96}, though the central parts of the surface brightness profile are poorly described by a single King model \citep[see, e.g.,][]{Thomson12}. The cluster was first detected at X-ray wavelengths by \citet{Grindlay93} using the {\it ROSAT} satellite. Deeper {\it ROSAT} studies identified multiple X-ray sources within the cluster \citep{Johnston94,Verbunt00b}, and two CVs were identified in HST images at the positions of two X-ray sources \citep{Bailyn96}. \citet{Pooley02a} used the \Chandra\ X-ray Observatory to resolve the cluster emission into 19 X-ray sources within the half-mass radius, and used {\it HST} and {\it Australian Telescope Compact Array} radio images to confirm the two counterpart suggestions by Bailyn et al. and identify 6-9 more CVs, 1-2 chromospherically active binaries, and 1-3 background galaxies. Five MSPs have been discovered in the cluster \citep{D'Amico02}, three of which lie within the core radius and show extreme line-of-sight accelerations indicative of a high mass density in the cluster core. One pulsar (MSP A) lies 3.3 half-mass radii from the cluster center, suggesting that the pulsar either has been ejected (perhaps by an encounter with a massive black hole, or binary black hole, \citealt{Colpi02}), or is not associated with the cluster \citep{Bassa06}. Four of the five MSPs are isolated, with only MSP A being in a binary system with an optically identified helium white dwarf companion \citep{Ferraro03,Bassa03}. \citet{D'Amico02} note that MSP D matches Pooley et al's CX11, which was identified by Pooley et al. as a CV or galaxy, based on their suggested optical counterpart (see below). \citet{D'Amico02} also identify X-ray emission from MSP C, which lies outside the half-mass radius, and tentatively suggest X-ray emission from MSP B. We have obtained a new \Chandra\ observation, and combined it with the archival 2000 \Chandra\ observation to produce a deeper image of NGC 6752 and create a larger source catalog. In this paper, we describe our X-ray analysis and the new source catalog, and focus on the X-ray properties of the MSPs in NGC 6752. In particular, we clearly identify X-ray emission from four MSPs, and find less certain evidence for X-ray emission from the fifth (MSP E). A companion paper, Lugger et al.\ (in prep) identifies optical counterparts for our extended X-ray source catalog using newly acquired {\it HST Advanced Camera for Surveys (ACS)} data. | The NGC 6752 MSPs appear to have unusually low X-ray luminosities, but high temperatures, when compared to the populations of MSPs observed in the other nearby globular clusters 47 Tuc, NGC 6397, M28, M4, and M71 (see \citealt{Bogdanov06,Bogdanov10,Bogdanov11,Bassa04,Elsner08}). This cannot be attributed to differences in sensitivity, since our observations do not reach to as low X-ray luminosities as those of 47 Tuc, NGC 6397, or M4. Below we consider whether we can identify clear variations in either luminosity or temperature, and whether there may be an obvious explanation if so. \begin{figure} \includegraphics[width=\columnwidth]{MSPs.eps} \caption{Reported values of $L_X$ vs. spindown power for radio MSPs, from this work and the literature (Table 4). 
Red symbols indicate binary MSPs, blue symbols indicate solitary MSPs, and green symbols indicate binary MSPs with evidence for X-ray emission from an intrabinary wind shock. Filled circles indicate MSPs in globular clusters, while asterisks indicate MSPs outside clusters. MSPs without reliable spindown power measurements have their $L_X$ plotted in the small box on the right.} \label{lxvEdot} \end{figure} Kolmogorov-Smirnov tests comparing the luminosity distributions of the NGC 6752 MSPs with either the 47 Tuc MSPs, or to the MSPs in all clusters listed above, indicate a probability $>$10\% of obtaining this result by chance. Thus we quickly conclude that there is no evidence that the $L_X$ values of the NGC 6752 MSPs are unusual. However, this draws our attention to another possibility. Several very X-ray faint ($L_X\simle 10^{30}$ ergs/s) MSPs, in both the field and globular clusters, are isolated; PSR B1257+12 \citep[which has planets, but no companion, so is considered isolated][]{Pavlov07}, PSR J1024-0719, \citep{Becker99}; PSR J1744-1134, \citep{Kargaltsev12} ; and now NGC 6752 E. This is of particular interest given recent evidence that the radio luminosities of binary and isolated recycled pulsars differ \citep{Burgay13}. We have compiled estimates of the X-ray luminosity (in the 0.3-8 keV band, as this corresponds reasonably to what can actually be measured) for MSPs (pulsars with P$<$20 ms) both in clusters and the field, in Table 4. We include errors on the fluxes and distances (in many cases from parallax measurements); the distance errors typically dominate $L_X$ uncertainties for field MSPs, while the flux measurements dominate uncertainties in $L_X$ for globular cluster MSPs. We include spindown luminosities where possible, and plot $L_X$ vs. spindown power (with errors where calculated) in Figure 8. It is clear that, although there are more X-ray faint isolated MSPs than X-ray faint MSPs in binaries, there is not a significant statistical difference between the thermal $L_X$ of the two populations. Ignoring the three MSPs with high spindown energy, and those binary MSPs showing evidence (typically from orbital variability) for a shocked intrabinary wind producing the majority of X-rays \citep[e.g.,][]{Bogdanov05}, the binary and isolated MSPs have consistent distributions. A Kolmogorov-Smirnov test gives a probability $>$10\% of measuring such a difference even if the two groups have the same parent distribution. There is also no evidence for a difference in the spindown power distributions of binary vs. isolated MSPs, or for a difference in the relation of $L_X$ and spindown power for the two groups. The best fit spectral models of the NGC 6752 MSPs predict generally higher temperatures than seen in the other clusters (Figure~\ref{Lx_T}). (Note that the NSATMOS hydrogen atmosphere model gives lower estimates of the temperatures (Table \ref{BB_NSATMOS}), while the NSATMOS unabsorbed luminosity estimates agree with those from the BB model. ) Unlike for the X-ray luminosities, here we identify a statistically significant difference. A Kolmogorov-Smirnov test, between the inferred blackbody temperatures of the NGC 6752 MSPs and those of the 47 Tuc MSPs, gives a $<$1\% probability of obtaining such dramatically different samples if the parent temperature distributions were identical. This temperature difference, combined with the similar or smaller luminosities, suggests that the emitting regions of the MSPs in NGC 6752 are smaller. 
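The two-sample Kolmogorov--Smirnov comparisons used above are straightforward to reproduce with standard tools; the temperatures in the sketch below are made-up placeholders chosen only to illustrate the call, not the fitted values from the tables.
\begin{verbatim}
from scipy.stats import ks_2samp

# placeholder blackbody temperatures in MK (illustrative values only)
kT_ngc6752 = [3.1, 2.8, 3.4, 2.9]
kT_47tuc   = [1.9, 2.1, 1.8, 2.2, 2.0, 1.7]

stat, p = ks_2samp(kT_ngc6752, kT_47tuc)
print(stat, p)   # p < 0.01 would indicate significantly different parents
\end{verbatim}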
Modelling the effective radius and temperature simultaneously in XSPEC (Figure~\ref{T_R}), we confirm that smaller effective emitting radii are required to model the NGC 6752 MSPs. \begin{figure} \includegraphics[width=\columnwidth]{spin_cap.eps} \caption{Fitted (blackbody) polar cap radius vs. spin period for radio MSPs in NGC 6752 (blue), NGC 6397 (green), and 47 Tuc (red). The best-fitting power-law is indicated, with a best-fit slope of -0.65$\pm0.40$, consistent with the theoretically predicted -0.5.} \label{spin_cap} \end{figure} The high modeled temperatures of the NGC 6752 MSPs could be due to X-ray source confusion in the region (photons from higher-temperature sources nearby biasing the temperature estimates), or to magnetospheric emission from these MSPs (a high-energy power-law component, which cannot be identified with these low-statistic spectra). More interestingly, the predicted size of the polar cap region $R_{\rm pc}=(2\pi R_{\rm NS}/(cP))^{1/2} R_{\rm NS}$ (e.g. \citealt{Lyne06}) depends inversely on the spin period. Since the NGC 6752 MSPs have longer periods on average than the 47 Tuc MSPs, there is thus a clear rationale for them to have smaller polar caps and (given similar luminosities) relatively higher polar cap temperatures. To test this idea, we plot inferred MSP effective radii (from single-temperature blackbody fits) vs. spin periods for the MSPs in 47 Tuc, NGC 6397, and NGC 6752 (Figure~\ref{spin_cap}), which suggests a correlation. Fitting the effective radii measurements with a power-law in spin period, we find a best-fit index of -0.65$\pm0.40$ (1$\sigma$ errorbars), which is indeed consistent with the predicted index of -0.5 (though it has rather large errorbars). This correlation could easily be weakened by the (unknown) differences in geometries of the pulsars, and by variations in the strength of unmodeled nonthermal radiation. Nevertheless, following this suggested correlation up with detailed analyses of high-quality archival X-ray spectra of nearby MSPs, and deeper observations of globular cluster MSP populations (including NGC 6752 and 47 Tuc), might verify this long-predicted relation. | 14 | 3 | 1403.4624 |
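The polar-cap scaling tested in the paper above, $R_{\rm pc}=(2\pi R_{\rm NS}/(cP))^{1/2} R_{\rm NS}$, is simple to evaluate. The sketch below assumes a canonical $R_{\rm NS}=10$~km and a few representative spin periods, and is meant only to show the predicted $P^{-1/2}$ trend rather than the actual fits of Figure~\ref{spin_cap}.
\begin{verbatim}
import numpy as np

c, R_ns = 2.998e8, 1.0e4          # SI units; canonical 10 km radius (assumed)

def r_pc_km(P):
    # classical polar-cap radius for spin period P in seconds
    return R_ns * np.sqrt(2.0*np.pi*R_ns/(c*P)) / 1.0e3

for P_ms in (2.0, 4.0, 8.0):
    print(P_ms, "ms ->", round(r_pc_km(P_ms*1e-3), 2), "km")
# doubling the period shrinks the cap by sqrt(2), i.e. R_pc ~ P^(-0.5)
\end{verbatim}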
1403 | 1403.3232_arXiv.txt | { In this paper a comprehensive analysis of VLT / X-Shooter observations of two jet systems, namely ESO-H$\alpha$ 574 a K8 classical T Tauri star and Par-Lup 3-4 a very low mass (0.13~\Msun) M5 star, is presented. Both stars are known to have near-edge on accretion disks. A summary of these first X-shooter observations of jets was given in a 2011 letter. The new results outlined here include flux tables of identified emission lines, information on the morphology, kinematics and physical conditions of both jets and, updated estimates of $\dot{M}_{out}$ / $\dot{M}_{acc}$. Asymmetries in the \eso flow are investigated while the \para jet is much more symmetric. The density, temperature, and therefore origin of the gas traced by the Balmer lines are investigated from the Balmer decrements and results suggest an origin in a jet for \eso while for \para the temperature and density are consistent with an accretion flow. $\dot{M}_{acc}$ is estimated from the luminosity of various accretion tracers. For both targets, new luminosity relationships and a re-evaluation of the effect of reddening and grey extinction (due to the edge-on disks) allows for substantial improvements on previous estimates of $\dot{M}_{acc}$. It is found that log($\dot{M}_{acc}$) = -9.15 $\pm$ 0.45~\Msun yr$^{-1}$ and -9.30 $\pm$ 0.27~\Msun yr$^{-1}$ for \eso and \para respectively. Additionally, the physical conditions in the jets (electron density, electron temperature, and ionisation) are probed using various line ratios and compared with previous determinations from iron lines. The results are combined with the luminosity of the [SII]$\lambda$6731 line to derive $\dot{M}_{out}$ through a calculation of the gas emissivity based on a 5-level atom model. As this method for deriving $\dot{M}_{out}$ comes from an exact calculation based on the jet parameters (measured directly from the spectra) rather than as was done previously from an approximate formula based on the value of the critical density at an assumed unknown temperature, values of $\dot{M}_{out}$ are far more accurate. Overall the accuracy of earlier measurements of $\dot{M}_{out}$ / $\dot{M}_{acc}$ is refined and $\dot{M}_{out}$ / $\dot{M}_{acc}$ = 0.5 (+1.0)(-0.2) and 0.3 (+0.6)(-0.1) for the \eso red and blue jets, respectively, and 0.05 (+0.10)(-0.02) for both the \para red and blue jets. While the value for the total (two-sided) $\dot{M}_{out}$ / $\dot{M}_{acc}$ in \eso lies outside the range predicted by magneto-centrifugal jet launching models, the errors are large and the effects of veiling and scattering on extinction measurements, and therefore the estimate of $\dot{M}_{acc}$, should also be considered. ESO-H$\alpha$ 574 is an excellent case study for understanding the impact of an edge-on accretion disk on the observed stellar emission. The improvements in the derivation of $\dot{M}_{out}$ / $\dot{M}_{acc}$ means that this ratio for \para now lies within the range predicted by leading models, as compared to earlier measurements for very low mass stars. \para is one of a small number of brown dwarfs and very low mass stars which launch jets. Therefore, this result is important in the context of understanding how $\dot{M}_{out}$ / $\dot{M}_{acc}$ and, thus, jet launching mechanisms for the lowest mass jet driving sources, compare to the case of the well-studied low mass stars.} | The mass outflow phase is a key stage in the star formation process. 
Protostellar jets (fast, collimated outflows) are an important manifestation of the outflow phenomenon and it is generally accepted that they are strongly connected with accretion \citep{Cabrit07}. The importance of these jets lies in the fact that they are the likely mechanism by which angular momentum is removed from the star-disk system \citep{Coffey04}. While jets from low mass young stellar objects (YSOs) have been studied now for several decades, more recently it has been found that very low mass stars (VLMS) and brown dwarfs (BDs) also drive jets during their formation \citep{Joergens13}. The jets of VLMSs and BDs have many similarities with jets of low mass YSOs. For example, they have been found to be collimated, episodic, and asymmetric \citep{Whelan09, Whelan12}, they can have multi-component velocity profiles \citep{Whelan09}, and they are associated with a molecular counterpart \citep{Monin13}. Therefore, it is possible that the mechanisms responsible for the launching and collimation of protostellar jets also operate down to substellar masses \citep{Whelan09}. Although a magneto-centrifugal jet launching model is favoured for the production of jets, the precise scenario is still debated \citep{Ferreira13, Frank14} and, thus, observational constraints are currently needed. Protostellar jets are characterised by an abundance of shock-excited emission lines, and analysis of these emission regions, using both spectroscopy and imaging, offers a wealth of information pertinent to jet launching models. Of particular value are spectroscopic observations which simultaneously cover different wavelength regimes. Such observations allow outflow and accretion properties to be investigated from the same dataset and over a significant range in wavelength, leading, for example, to more accurate estimates of the ratio of mass outflow to accretion ($\dot{M}_{out}$ / $\dot{M}_{acc}$). For instance, studies have shown that the optimum way to measure $\dot{M}_{acc}$ is by using several different accretion indicators, found in different wavelength regimes \citep{Rigliaco11, Rigliaco12}. In this way any spread in $\dot{M}_{acc}$ due to different indicators probing different regimes of accretion or having varying wind/jet contributions can be overcome. Magneto-centrifugal jet launching models place an upper limit (per jet) of $\sim$ 0.3 on $\dot{M}_{out}$ / $\dot{M}_{acc}$ \citep{Ferreira06, Cabrit09}. Therefore it is important to measure this ratio not only in low mass YSOs but also in VLMSs and BDs. $\dot{M}_{out}$ / $\dot{M}_{acc}$ has been constrained at 1~$\%$ to 10~$\%$ for T Tauri stars (class II low mass YSOs; TTSs; \cite{Hartigan95, Melnikov09, Agra09}). Initial attempts at estimating $\dot{M}_{out}$ / $\dot{M}_{acc}$ in VLMSs / BDs produced results which suggested that this ratio is higher in BDs than in TTSs, or at most comparable \citep{Comeron03, Whelan09}. More recent studies of the VLMS ISO 143 and the BD FU Tau A produced ratios which are more in line with studies of TTS \citep{Joergens12b, Stelzer13}. Thus much work is still needed to constrain $\dot{M}_{out}$ / $\dot{M}_{acc}$ at the lowest masses. This will involve overcoming various observational uncertainties, which are outlined in the above papers and addressed as part of this work.
X-Shooter, one of the newest instruments on the European Southern Observatory's (ESO) Very Large Telescope (VLT), provides contemporaneous spectra in the ultraviolet (UVB), visible (VIS), and near-infrared (NIR) regimes, with a total coverage of $\sim$ 300~nm to 2500~nm. With the aim of exploring the advantages of \xsh for the study of outflow and accretion activity in YSOs / BDs, we conducted in 2010 a pilot \xsh study of two YSOs. The targets were the classical T Tauri star (CTTS) ESO-H$\alpha$ 574 and the VLMS Par-Lup3-4 (see Section 2). As these were the first \xsh observations of YSO jets, the initial results and spectra were published in a letter \citep{Bacciotti11}. In addition, the numerous [Fe II] and [Fe III] detected lines have been discussed in Giannini et al. (2013), where the potential of UV-VIS-NIR Fe line diagnostics in deriving the jet physical parameters is demonstrated. The [Fe II] line analysis revealed that the ESO-H$\alpha$ 574 jet is, on average, colder, less dense, and more ionised than the Par-Lup 3-4 jet. The physical conditions derived from the iron lines were also compared with shock models, which pointed to the ESO-H$\alpha$ 574 shock likely being faster and more energetic than the Par-Lup 3-4 shock. In \cite{Bacciotti11}, outflow and accretion tracers present in the spectra were examined, the effect of extinction by an edge-on disk was discussed, and first estimates of $\dot{M}_{acc}$ and the jet parameters n$_{e}$ and $\dot{M}_{out}$ were given. However, this analysis was tentative, and is now expanded and improved on here with new, up-to-date procedures. The study of \para was relevant to the question of the applicability of jet launching models at the lowest masses, and one of the particular aims of the work presented here was to overcome difficulties with previous estimates of $\dot{M}_{out}$ / $\dot{M}_{acc}$ in Par-Lup3-4. Firstly, tables of all the emission lines detected in the spectra of the two YSOs and their measured fluxes are presented. The goal here is to provide an important reference for future observational and computational studies of jets. Section 4.1 describes the line identification process and the tables of identified lines can be found in Appendices A1 and A2. Secondly, the kinematics and morphology of the two jets are discussed in greater detail than in \cite{Bacciotti11} (Sections 4.2 and 4.3). For ESO-H$\alpha$ 574, high spectral resolution UVES spectra taken from the ESO archive are included in order to improve the kinematical analysis. Since \cite{Bacciotti11}, several \xsh studies of accretion in YSOs have been conducted with participation from members of our team \citep{Rigliaco12, alcala13, Manara13}. This experience has allowed us to refine our methods for estimating $\dot{M}_{acc}$. Hence, thirdly, an updated analysis of $\dot{M}_{acc}$ in both targets is given, including a more detailed investigation of how the extinction of both sources can be evaluated and how it affects $\dot{M}_{acc}$ estimates (Sections 4.4, 4.5). As both targets have edge-on disks, the obscuration effects of the disks are particularly relevant to our study of $\dot{M}_{acc}$. Furthermore, an improved approach for measuring $\dot{M}_{out}$ is presented along with a more accurate determination of $\dot{M}_{out}$ / $\dot{M}_{acc}$ for both sources. Finally, the origin of the permitted emission in both sources is explored by examining their Balmer decrements (see Section 4.6).
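For the accretion side of the $\dot{M}_{out}$ / $\dot{M}_{acc}$ ratio, the conversion commonly used in the literature is $L_{acc} = (1 - R_*/R_{in})\,G M_* \dot{M}_{acc}/R_*$, usually with an inner disc radius $R_{in} \simeq 5R_*$. The sketch below applies it with the 0.13~\Msun\ mass quoted for \para; the stellar radius and the accretion luminosity are placeholders chosen only so that the output lands near the $\log \dot{M}_{acc} \approx -9.3$ of the abstract, and should not be read as the values actually derived in the paper.
\begin{verbatim}
from math import log10

G, Msun, Rsun, Lsun, yr = 6.674e-11, 1.989e30, 6.957e8, 3.828e26, 3.156e7

def mdot_acc(L_acc_Lsun, M_Msun, R_Rsun, R_in=5.0):
    # accretion rate in Msun/yr from L_acc = (1 - R*/R_in) G M* Mdot / R*
    mdot = L_acc_Lsun*Lsun * R_Rsun*Rsun / (G * M_Msun*Msun * (1.0 - 1.0/R_in))
    return mdot * yr / Msun

# M* = 0.13 Msun as for Par-Lup 3-4; R* = 1 Rsun and L_acc = 1.6e-3 Lsun are assumed
print(log10(mdot_acc(1.6e-3, 0.13, 1.0)))   # ~ -9.3
\end{verbatim}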
\begin{figure} \includegraphics[width=11cm, trim= 3cm 0cm 0cm 0cm, clip=true]{comp_eso.pdf} \caption{Different notation systems for the knots of \eso found in the literature. Left: HST WFPC images of \eso published in \cite{Robberto12}. For the purpose of this figure we obtained the reduced image from the HST archive (http://archive.stsci.edu/hst/). Right: Position-velocity diagram of the \eso jet in H$\alpha$ made from our \xsh data and first published in \cite{Bacciotti11}. In this paper we follow the notation of \cite{Bacciotti11}.} \label{comp_eso} \end{figure} | \subsection{Constraining $\dot{M}_{out}$ / $\dot{M}_{acc}$ in BDs and VLMSs} The current sample of BDs and VLMSs with outflows is small and consists of $\sim$ 10 objects \citep{Joergens13}, ranging in mass from 0.024~\Msun\ to 0.18~\Msun\ \citep{Whelan12, Joergens12, Joergens12b}. Small spatial scales and the faintness of the jet emission limit the capability of studying these outflows with respect to the case of jets from CTTSs \citep{Ray07}; however, $\dot{M}_{out}$ / $\dot{M}_{acc}$ has been investigated in some cases. The first investigations seemed to suggest that as mass decreases from CTTSs to BDs, $\dot{M}_{out}$ / $\dot{M}_{acc}$ may in fact increase from the 1~$\%$ to 10~$\%$ typically measured for CTTSs. If this trend is confirmed it would present a challenge for current magneto-centrifugal jet launching models. For example, \cite{Ferreira06} give an upper limit of 0.3 on the (one-sided) mass ejection to accretion ratio that can be sustained by a disk wind model and \cite{Cabrit09} discuss how high values of $\dot{M}_{out}$ / $\dot{M}_{acc}$ are energetically challenging to stellar and x-wind models. Results from the first studies of $\dot{M}_{out}$ / $\dot{M}_{acc}$ in BDs / VLMSs include \cite{Whelan09}, where this ratio is estimated for the BDs ISO-Cha~I 217, LS-RCr~A1 and ISO-Oph 102. For all three objects $\dot{M}_{out}$ / $\dot{M}_{acc}$ (one-sided) was measured to be $\sim$ 1. More recently, \cite{Stelzer13} report $\dot{M}_{out}$ / $\dot{M}_{acc}$ for the blue-shifted outflow of the 0.050~\Msun\ BD FU Tau~A to be $\sim$ 0.3, \cite{Joergens12b} measure $\dot{M}_{out}$ / $\dot{M}_{acc}$ to be between 1~$\%$ and 20~$\%$ for the VLMS ISO~143, and in \cite{Bacciotti11} we place this ratio for \para in the range 0.3-0.5. It should be considered, however, that these investigations of the accretion-ejection connection in BDs have had several limitations. Firstly, BD jet velocities are not well constrained, and estimates of the full jet velocity or at least of the tangential velocities are needed to calculate $\dot{M}_{out}$. Determination of the full jet velocity from the radial velocity is hampered by the uncertainty in the inclination angle of the system. Proper motion measurements are more accurate but they have been conducted only for the Par Lup 3-4 case. In other cases jet velocities have been inferred from the kinematics and shape of FEL profiles \citep{Whelan09}. A second source of uncertainty is the estimate of n$_{e}$, normally derived from the [SII] lines. However, for BDs the [SII] lines have not been detected in the majority of cases. A third factor is the reliability of estimates of $\dot{M}_{acc}$, which were often only derived using the H$\alpha$ line. Additionally, the effect of extinction on the source (and thus the estimate of $\dot{M}_{acc}$) and on the jet was not well known in most cases.
Finally, and most significantly, methods for calculating $\dot{M}_{out}$ relied on uncertain values of the critical densities of the different jet tracers. As studies of $\dot{M}_{out}$ / $\dot{M}_{acc}$ in BDs are an important basis for comparing outflow activity in BDs and low mass stars it is essential that the difficulties outlined above are resolved. The revised analysis of $\dot{M}_{out}$ / $\dot{M}_{acc}$ in \para presented in the paper demonstrates how this can be done and hence this work is highly relevant to future studies of $\dot{M}_{out}$ / $\dot{M}_{acc}$ in BDs and VLMSs. Indeed, for \para $\dot{M}_{out}$ / $\dot{M}_{acc}$ (one-sided) is reduced from $\sim$ 0.25 to 0.05 as compared to results presented in \cite{Bacciotti11}. As the velocities of the \para jets are well known from proper motion studies and as \para contains sufficiently bright FELs for the jet parameters to be constrained \para was an ideal source for constraining $\dot{M}_{out}$ / $\dot{M}_{acc}$. Our analysis demonstrates the importance of understanding the effects of extinction and outlines an improved method for doing this. Furthermore, the importance of estimating $\dot{M}_{acc}$ from a range of tracers is clearly established along with the fact \xsh is currently one of the best instruments for doing this. Finally, the issue with uncertainties in critical density estimates is overcome through the use of the same exact calculation for $\dot{M}_{out}$ that is usually adopted for CTTSs jets. It is shown that the use of approximate formulas involving critical densities can lead to severe overestimates in $\dot{M}_{out}$ for T$_{e}$ $>$ 8-9 ~10$^3$ K, thus introducing a strong bias and it is likely that this factor has lead to substantial over-estimations of $\dot{M}_{out}$ / $\dot{M}_{acc}$ in early studies. It would be very beneficial to now revisit the BDs studied in \cite{Whelan09} for example and investigate $\dot{M}_{out}$ / $\dot{M}_{acc}$ in the same manner as done here for Par-Lup3-4. Also note that studies aimed at measuring the proper motions and consequently velocities of known BD jets are currently under-way \citep{Whelan13}. \subsection{Origin of \eso jet asymmetries} \cite{Hirth94} first began the discussion of asymmetries in protostellar jets. In their paper they presented the particular cases of the asymmetric jets driven by the CTTSs RW Aur and DO Tau. They also noted, that a literature search of protostellar jets known at the time, showed that in 50~$\%$ of these jets the blue and red-shifted lobes were asymmetric in velocity. As well as velocity asymmetries the blue and red lobes of protostellar jets can also differ in their morphology and in the number of distinct knots (\eso), in the electron densities of the lobes \citep{Caratti13, Podio10} and in $\dot{M}_{out}$ \citep{Whelan09}. While no dedicated observational study of asymmetric jets has been conducted since \cite{Hirth94}, numerous examples have been observed individually using state-of-the-art observing techniques \citep{Dougados00, Melnikov09, Podio10, Caratti13}. This includes the asymmetric jet from the BD candidate ISO-ChaI 217 \citep{Whelan09, Joergens12a}. For the case of ISO-ChaI 217 the radial velocity of the red-shifted lobe was observed to be up to twice that of the blue-shifted lobe, however the exact magnitude of the velocity asymmetry will not be known until spectra taken along the derived jet PA become available. 
The red-shifted lobe was also found to be much brighter than the blue, and the mass flux in the red flow was estimated at twice that of the blue flow. This is an interesting result as it highlights that the mechanism responsible for such asymmetries also operates at substellar masses. Observationally it seems that protostellar jets are more typically like \eso, and symmetric jets like \para are observed less often. In particular, jets which exhibit strong morphological symmetry, i.e.\ in the number and spacing of their knots, are rare \citep{Zinnecker98}, and even in cases where there are no velocity asymmetries it is normal that both lobes have different numbers of knots and that the spacing between these knots be variable. This is true for \para where the jets are symmetric in velocity but a counterpart to the red-shifted knot HH~600 is not seen in the blue-shifted jet. Morphological asymmetries can be due to non-uniformities in the ambient medium and / or variability in the frequency at which material is ejected into the different lobes. A variable frequency and velocity of ejection (as discussed for \eso in Section 4.2.1), differences in densities and in mass flux, can all be explained in terms of current jet models \citep{Fendt13, Matsakos12}. \cite{Matsakos12} used numerical simulations to investigate the possibility that asymmetric jet velocities could be introduced either due to unaligned magnetic fields or when both lobes experienced different outer pressures. That is, the cause is either intrinsic to the jet launching mechanism, or extrinsic and due to inhomogeneities in the ambient medium. Overall they found that both multi-polar magnetic moments and non-uniform environments could equally well explain the observed asymmetries. The idea of an inhomogeneous environment causing velocity asymmetries has been used before to explain cases of asymmetric jets where $\dot{M}_{out}$ is not found to be different in the two lobes \citep{Podio10, Melnikov09}. \cite{Fendt13} also numerically explore methods for generating jet asymmetries which are intrinsic to the launch mechanism. To do this they begin with a highly symmetric jet and then disturb the symmetry in the disk to induce asymmetries in the jets. Interestingly, they find that the disk asymmetries result in outflows where $\dot{M}_{out}$ can differ by up to 20$\%$ in the two lobes. Comparing $\dot{M}_{out}$ in the blue-shifted lobe of \eso with $\dot{M}_{out}$ in the red-shifted lobe, it is seen that $\dot{M}_{out}$ red is $\sim$ 60$\%$ of $\dot{M}_{out}$ blue. Therefore this case would fit in well with the models of \cite{Fendt13}. $\dot{M}_{out}$ blue is the average of $\dot{M}_{out}$ measured for knots A1, A and B, while $\dot{M}_{out}$ red is the value measured for knot E. | 14 | 3 | 1403.3232
1403 | 1403.1237_arXiv.txt | Splitting of the nuclei of comets into multiple components has been frequently observed but, to date, no main-belt asteroid has been observed to break-up. Using the Hubble Space Telescope, we find that main-belt asteroid P/2013 R3 consists of 10 or more distinct components, the largest up to 200 m in radius (assumed geometric albedo of 0.05) each of which produces a coma and comet-like dust tail. A diffuse debris cloud with total mass $\sim$2$\times$10$^8$ kg further envelopes the entire system. The velocity dispersion among the components, $\Delta V \sim$ 0.2 to 0.5 m s$^{-1}$, is comparable to the gravitational escape speeds of the largest members, while their extrapolated plane-of-sky motions suggest break-up between February and September 2013. The broadband optical colors are those of a C-type asteroid. We find no spectral evidence for gaseous emission, placing model-dependent upper limits to the water production rate $\le$1 kg s$^{-1}$. Breakup may be due to a rotationally induced structural failure of the precursor body. | Main-belt object P/2013 R3 (Catalina-Pan STARRS, hereafter ``R3'') was discovered on UT 2013 September 15 and announced on September 27 (Hill et al.~2013). Its orbital semimajor axis, eccentricity and inclination are 3.033 AU, 0.273 and 0.90\degr, respectively, firmly establishing R3 as a member of the main asteroid belt, although its dusty appearance resembles that of a comet. The Tisserand parameter relative to Jupiter, $T_J$ = 3.18, is significantly larger than the nominal ($T_J$ = 3) dividing line separating dynamical comets ($T_J <$ 3) from asteroids ($T_J >$ 3, c.f.~Kresak 1980). The combination of asteroid-like orbit and comet-like appearance together qualify R3 as an active asteroid (Jewitt 2012) or, equivalently, a main-belt comet (Hsieh and Jewitt 2006). The mechanism responsible for mass loss in the majority of such objects is unknown. In this brief report, we describe initial observations taken to establish the basic properties of this remarkable object. At the time of observation, R3 had just passed perihelion ($R$ = 2.20 AU) on UT 2013 August 05. | Break-up of cometary nuclei has been frequently observed (Boehnhardt 2004) and variously interpreted as due to tidal stresses (Asphaug and Benz 1996), the build up of internal pressure forces from gases generated by sublimation (Samarasinha 2001), impact (Toth 2001) and rotational bursting (Jewitt 1992). The orbit of R3 (perihelion 2.20 AU, aphelion 3.8 AU) prevents close approaches to the sun or planets, so that tidal forces can be ignored. To estimate the highest possible gas pressure on R3 we solved the energy balance equation for black ice sublimating at the subsolar point. The resulting equilibrium temperature, $T_{SS}$ = 197 K at 2.25 AU, corresponds to gas pressure $P \sim$ 0.04 N m$^{-2}$, which is far smaller than both the central hydrostatic pressure and the $\sim$10$^3$ N m$^{-2}$ tensile strengths of even highly porous dust aggregates (Blum and Schr{\"a}pler~2004, Meisner et al.~2012, Seizinger et al.~2013). A more volatile ice (e.g.~CO), if present, could generate higher pressures but the long term stability of such material in the asteroid belt seems highly improbable. We conclude that sublimation gas pressure cracking is not a viable mechanism, although, if ice does exist in R3, its exposure after break-up could contribute to the continued dust production. Several observations argue against an impact origin. 
The separation times of the components are staggered over several months, whereas impact should give a single time. Ejecta from an impact should be consistent with a single synchrone date, whereas in R3 the fitted dates differ. The scattering cross-section increases between October 01 and 29 and decreases very slowly thereafter (Table \ref{photometry}), inconsistent with an impulsive origin and unlike the best-established asteroid impact event (on (596) Scheila, c.f.~Bodewits et al.~2011, Jewitt et al.~2011, Ishiguro et al.~2011). Furthermore, impacts produce ejecta with a broad spectrum of velocities, from sub-escape to the impact speed (Housen and Holsapple 2011), whereas our data provide no evidence for fast ejecta, even in the earliest observations. For these reasons, we suspect that impact does not provide a natural explanation of the properties of R3, although we cannot rule it out. Rotational breakup of a strengthless body should occur when the centripetal acceleration on the surface exceeds the gravitational acceleration towards the center. For a sphere of density $\rho$ = 10$^3$ kg m$^{-3}$ the critical period for breakup is $\sim$3.3 hr, while for elongated bodies, the instability occurs at longer periods. Solar radiation provides a torque (the ``YORP'' torque) capable of driving the spin of a sub-kilometer asteroid to the critical value in less than a million years, making rotational breakup a plausible mechanism for R3 and other small asteroids (Marzari et al.~2011). (A tangential jet from sublimating ice carrying 1 kg s$^{-1}$ (i.e.~satisfying our spectral upper limit) could spin up a 200 m radius body on a timescale of months). Aspects of R3 consistent with rotational breakup include the absence of fast ejecta, the low velocity dispersion of the major fragments (comparable to the gravitational escape speeds) and their peculiar alignment (along the ABC axis in Figure \ref{images}), which we interpret as the rotational equator of the disrupted parent body. Rotational instability is a potential source of bound (e.g.~Walsh et al.~2012) and unbound asteroid pairs (Jacobson et al.~2014, Polishook et al.~2014) and of chaotic systems in which mass is both re-accreted and shed from interacting ejecta (Jacobson and Scheeres 2011). Six-tailed object P/2013 P5 has been interpreted as the product of rotational instability, although its morphology is quite different from that of R3 (Jewitt et al.~2013). Depending on the body shape and material properties, the criteria for shedding instability and structural failure can be quite different (Hirabayashi and Scheeres 2014). We suggest that P/2013 P5 is episodically shedding only its regolith, while the multiple components of R3 indicate that a more profound structural failure has occurred. Fresh observational effort is warranted to secure additional high-resolution measurements of the motions of the fragments in order to better constrain the dynamics of R3. Continued physical observations are also needed to isolate the embedded nuclei, and so to determine their sizes, shapes and rotational states. \clearpage | 14 | 3 | 1403.1237
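Two of the numbers quoted for R3 in the paper above are easy to verify from first principles: the Tisserand parameter follows from the orbital elements given in the introduction (assuming $a_J = 5.204$~AU for Jupiter's semimajor axis), and the $\sim$3.3~hr critical period follows from equating the centripetal and gravitational accelerations at the surface of a strengthless $\rho = 10^3$~kg~m$^{-3}$ sphere. The sketch below is only a back-of-the-envelope check, not the authors' calculation.
\begin{verbatim}
import math

# Tisserand parameter and perihelion from the quoted orbital elements of R3
a, e, inc = 3.033, 0.273, math.radians(0.90)   # AU, eccentricity, inclination
a_J = 5.204                                    # Jupiter semimajor axis in AU (assumed)
T_J = a_J/a + 2.0*math.cos(inc)*math.sqrt((a/a_J)*(1.0 - e*e))
print("%.2f %.2f" % (T_J, a*(1.0 - e)))        # 3.18 and q = 2.20 AU, as in the text

# critical rotation period of a strengthless sphere: omega^2 R = (4/3) pi G rho R
G, rho = 6.674e-11, 1.0e3
P_crit = math.sqrt(3.0*math.pi/(G*rho))
print(P_crit/3600.0)                           # ~3.3 hr, independent of radius
\end{verbatim}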
1403 | 1403.6621_arXiv.txt | Axino as the superpartner of axion that solves the strong CP problem can be a good candidate of dark matter. Inspired by the 3.5 keV X-ray line signal found to be originated from galaxy clusters and Andromeda galaxy, we study axino models with R-parity violations, and point out that axino dark matter with trilinear R-parity violations is an attractive scenario that reproduces the X-ray line. The Peccei-Quinn scale is required to be $f_a\sim{\cal O}(10^{9}-10^{11})~\GEV$ for trilinear R-parity violating couplings $\lambda\sim {\cal O} (10^{-3}-10^{-1})$ in order to explain the line signal. Moreover, the right-handed stau is predicted to be light, i.e.~$\sim{\cal O}(100)$ GeV, and thus can be looked for at the LHC. Cosmological aspects of the model are also discussed in this study. | Axion is a product of the Peccei-Quinn (PQ) mechanism introduced to solve the strong CP problem~\cite{Kawasaki:2013ae}. In supersymmetric (SUSY) models of axion, $a$, axino, $\ax$ (saxion, $\sigma$) appears as the fermionic (scalar) partner of axion. Axino is neutral and if it is the lightest supersymmetric particle (LSP), it can be a candidate of dark matter (DM) which makes up 27 \% of the energy density of the universe~\cite{Choi:2013lwa}. With the introduction of R-parity violating (RPV) terms into the SUSY models, the LSP is destabilized and starts decaying into Standard Model (SM) particles. If the lifetime of the LSP is similar or longer than the age of the universe, it still can play the role of DM. Furthermore, this scenario opens up an opportunity to observe the signature of DM decay. This kind of signature might have been discovered by two independent groups looking for X-ray line emissions originated from galaxy clusters as well as Andromeda galaxy~\cite{Bulbul:2014sua,Boyarsky:2014jta}. It has been found that there is an excess of X-ray emissions at around 3.5 keV, which has no explanation with known physics. Slowly decaying DM of mass $\mdm \sim 7$ keV is a viable interpretation of the line signal, although one requires its confirmation from other observational experiments and astrophysical objects. If the line signal is to be interpreted as DM decaying into a photon, its lifetime is estimated to be $\tdm\sim 10^{28}$ sec. The X-ray line observations have generated interests in interpreting the line signal with axino DM~\cite{Kong:2014gea,Choi:2014tva}.~\footnote{Axino DM with RPV has also been studied, albeit in different contexts, in~\cite{Kim:2001sh,Hooper:2004qf,Chun:2006ss,Endo:2013si}. See also~\cite{Hasenkamp:2011xh}, where gravitino is the LSP and axino is the heavier SUSY particle.}~These studies have been focusing on bilinear RPV, where axino decays into a photon and a neutrino via gaugino mixing with neutrinos. It is shown in~\cite{Kong:2014gea} that the PQ scale, $f_a$ needs to be $\sim 10^8-10^9~\GEV$ in order to reproduce the X-ray line. However, when the supernova (SN 1987A) bound on axion is taken into account, where the following relation has to be satisfied~\cite{Raffelt:2006cw}: \begin{equation} f_a \gtrsim 4 \times 10^8~\GEV, \end{equation} the remaining viable parameter region becomes much constrained. In~\cite{Choi:2014tva}, $f_a$ is pushed to values $10^9-10^{11}~\GEV$ by requiring the bino mass to be $\lesssim 10~\GEV$. On the other hand, other neutralinos must be kept much heavier than the bino, which is technically possible but raises questions of how and why such hierarchy exists. 
In the present work, we instead explore axino DM with trilinear RPV in light of the anomalous 3.5 keV X-ray line. Axino with trilinear RPV can decay into a photon and a neutrino ($\ax \to \gamma + \nu$) via Feynman diagrams with a loop involving a fermion and a sfermion. As will be shown in the following sections, $f_a$ can be made larger than $10^9 \GEV$, evading various astrophysical bounds on axion while being consistent with the X-ray line from axino DM decay. Within this framework, sfermions are relatively light, and RPV couplings are predicted to be large, making this model testable using colliders. The rest of the paper is organized as follows. In Section~\ref{sc:fr}, we lay down the framework of our study, discussing SUSY axion models and RPV. In Section~\ref{sc:ax}, we study in detail decaying axino DM that explains the 3.5 keV X-ray line. Before we conclude, we describe cosmological and phenomenological consequences and implications of our model. | In the present work, we have considered axino DM with trilinear RPV in the wake of an anomalous X-ray line found to originate from galaxy clusters and Andromeda galaxy. We have found several interesting features within this framework, including a consistent interpretation of the line signal with the phenomenologically viable ``axion window'' ($f_a \sim{\cal O}(10^{9}-10^{11})~\GEV$), as well as a light stau ($\sim{\cal O}(100)$ GeV). Cosmological constraints can also be satisfied in general. The next run of the LHC will be crucial for identifying or disfavoring the model by searching for the characteristic R-parity violating decay of the stau. Finally, let us remark that it is also vital to obtain experimental confirmation of the tentative line signal from other X-ray telescopes, such as the forthcoming ASTRO-H. | 14 | 3 | 1403.6621
1403 | 1403.5308_arXiv.txt | This is the second of two papers describing the second data release (DR2) of the Australia Telescope Large Area Survey (ATLAS) at 1.4~GHz. In Paper I we detailed our data reduction and analysis procedures, and presented catalogues of components (discrete regions of radio emission) and sources (groups of physically associated radio components). In this paper we present our key observational results. We find that the 1.4~GHz Euclidean normalised differential number counts for ATLAS components exhibit monotonic declines in both total intensity and linear polarization from millijansky levels down to the survey limit of $\sim100$~$\mu$Jy. We discuss the parameter space in which component counts may suitably proxy source counts. We do not detect any components or sources with fractional polarization levels greater than 24\%. The ATLAS data are consistent with a lognormal distribution of fractional polarization with median level 4\% that is independent of flux density down to total intensity $\sim10$~mJy and perhaps even 1~mJy. Each of these findings are in contrast to previous studies; we attribute these new results to improved data analysis procedures. We find that polarized emission from 1.4~GHz millijansky sources originates from the jets or lobes of extended sources that are powered by an active galactic nucleus, consistent with previous findings in the literature. We provide estimates for the sky density of linearly polarized components and sources in 1.4~GHz surveys with $\sim10\arcsec$ resolution. | \label{sec:1} A number of studies have reported an anti-correlation between fractional linear polarization and total intensity flux density for extragalactic 1.4~GHz sources; faint sources were found to be more highly polarized \citep{2002A&A...396..463M, 2004MNRAS.349.1267T,2007ApJ...666..201T,2010ApJ...714.1689G,2010MNRAS.402.2792S}. As a result, the Euclidean-normalised differential number-counts of polarized sources have been observed to flatten at linearly polarized flux densities $L$~{\footnotesize $\lesssim$}~1~mJy to levels greater than those expected from convolving the known total intensity source counts with plausible distributions for fractional polarization \citep{2008evn..confE.107O}. The flattening suggests that faint polarized sources may exhibit more highly ordered magnetic fields than bright sources, or may instead suggest the emergence of an unexpected faint population. The anti-correlation trend for fractional linear polarization is not observed at higher frequencies \citep[$\ge4.8$~GHz;][]{2011ApJ...732...45S,2011MNRAS.413..132B,2013MNRAS.436.2915M}. To investigate possible explanations for the fractional polarization trend seen in previous studies, we have produced the second data release of the Australia Telescope Large Area Survey (ATLAS DR2) as described in Paper I \citep{halesPI} of this two paper series. ATLAS DR2 comprises reprocessed and new 1.4~GHz observations with the Australia Telescope Compact Array (ATCA) about the {\it Chandra} Deep Field-South \citep[CDF-S; Galactic coordinates $l\approx224\degree$, $b\approx-55\degree$;][]{2006AJ....132.2409N} and European Large Area {\it Infrared Space Observatory} Survey-South 1 \citep[ELAIS-S1; $l\approx314\degree$, $b\approx-73\degree$;][]{2008AJ....135.1276M} regions in total intensity, linear polarization, and circular polarization. 
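The consistency test described above -- convolving the total-intensity counts with a candidate fractional-polarization distribution and comparing the result with the observed polarized counts -- can be sketched with a simple Monte Carlo. Only the 4 per cent median is taken from the abstract; the power-law count model, its slope, and the lognormal width below are placeholders, not the fitted ATLAS DR2 values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# placeholder total-intensity counts: dN/dS proportional to S^-1.6 above 0.1 mJy
S = 0.1e-3 * (1.0 - rng.random(200000))**(-1.0/0.6)          # Jy

# lognormal fractional polarization: median 4 per cent, assumed width 0.8 in ln(Pi)
Pi = np.clip(np.exp(rng.normal(np.log(0.04), 0.8, S.size)), 0.0, 1.0)

L = Pi * S                                                   # linearly polarized flux density
print("fraction with L > 0.2 mJy:", np.mean(L > 0.2e-3))     # mock polarized detections
\end{verbatim}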
The mosaicked multi-pointing survey areas for ATLAS DR2 are 3.626~deg$^2$ and 2.766~deg$^2$ for the CDF-S and ELAIS-S1 regions, respectively, imaged at approximately $12\arcsec\times6\arcsec$ resolution. Typical source detection thresholds are 200~$\mu$Jy in total intensity and polarization. In Paper I we presented our data reduction and analysis prescriptions for ATLAS DR2. We presented a catalogue of components (discrete regions of radio emission) comprising 2416 detections in total intensity and 172 independent detections in linear polarization. No components were detected in circular polarization. We presented a catalogue of 2221 sources (groups of physically associated radio components; grouping scheme based on total intensity properties alone, as described below), of which 130 were found to exhibit linearly polarized emission. We described procedures to account for instrumental and observational effects, including spatial variations in each of image sensitivity, bandwidth smearing with a non-circular beam, and instrumental polarization leakage, clean bias, the division between peak and integrated flux densities for unresolved and resolved components, and noise biases in both total intensity and linear polarization. Analytic correction schemes were developed to account for incompleteness in differential component number counts due to resolution and Eddington biases. We cross-identified and classified sources according to two schemes, summarized as follows. In the first scheme, described in \S~6.1 of Paper I, we grouped total intensity radio components into sources, associated these with infrared sources from the {\it Spitzer} Wide-Area Infrared Extragalactic Survey \citep[SWIRE;][]{2003PASP..115..897L} and optical sources from \citet{2012MNRAS.426.3334M}, then classified them according to whether their energetics were likely to be driven by an active galactic nucleus (AGN), star formation (SF) within a star-forming galaxy (SFG), or a radio star. Due to the limited angular resolution of the ATLAS data, in Paper I we adopted the term {\it lobe} to describe both jets and lobes in sources with radio double or triple morphologies. The term {\it core} was similarly defined in a generic manner to indicate the central component in a radio triple source. Under this terminology, a core does not indicate a compact, flat-spectrum region of emission; restarted AGN jets or lobes may contribute or even dominate the emission observed in the regions we have designated as cores. AGNs were identified using four selection criteria: radio morphologies, 24~$\mu$m to 1.4~GHz flux density ratios, mid-infrared colours, and optical spectral characteristics. SFGs and stars were identified solely by their optical spectra. Of the 2221 ATLAS DR2 sources, 1169 were classified as AGNs, 126 as SFGs, and 4 as radio stars. We note that our classification system was biased in favour of AGNs. As a result, the ATLAS DR2 data are in general unsuited for statistical comparisons between star formation and AGN activity. In the second scheme, described in \S~6.2 of Paper I, we associated linearly polarized components, or polarization upper limits, with total intensity counterparts. In most cases it was possible to match a single linearly polarized component with a single total intensity component, forming a one-to-one match. 
In other cases this was not possible, due to ambiguities posed by the blending of adjacent components; for example, a polarized component situated mid-way between two closely-separated total intensity components. In these cases, we formed group associations to avoid biasing measurements of fractional polarization. We classified the polarization--total intensity associations according to the following scheme, which we designed to account for differing (de-)polarized morphologies (see Paper~I for graphical examples): \begin{itemize} \item[] \noindent {\it Type 0} -- A one-to-one or group association identified as a lobe of a double or triple radio source. Both lobes of the source are clearly polarized, having linearly polarized flux densities within a factor of 3. (The ratio between lobe total intensity flux densities was found to be within a factor of 3 for all double or triple ATLAS DR2 sources.) \item[] \noindent {\it Types 1/2} -- A one-to-one or group association identified as a lobe of a double or triple radio source that does not meet the criteria for Type 0. A lobe classified as Type 1 indicates that the ratio of polarized flux densities between lobes is greater than 3. A lobe classified as Type 2 indicates that the opposing lobe is undetected in polarization and that the polarization ratio may be less than 3, in which case it is possible that more sensitive observations may lead to re-classification as Type 0. Sources with lobes classified as Type 1 exhibit asymmetric depolarization in a manner qualitatively consistent with the Laing-Garrington effect \citep{1988Natur.331..149L,1988Natur.331..147G}, where one lobe appears more fractionally polarized than the opposite lobe. \item[] \noindent {\it Type 3} -- A group association representing a source, involving a linearly polarized component situated midway between two total intensity components. It is not clear whether such associations represent two polarized lobes, a polarized lobe adjacent to a depolarized lobe, or a polarized core. \item[] \noindent {\it Type 4} -- An unclassified one-to-one or group association representing a source. \item[] \noindent {\it Type 5} -- A one-to-one association clearly identified as the core of a triple radio source (where outer lobes are clearly distinct from the core). \item[] \noindent {\it Type 6} -- A source comprising two Type 0 associations, or a group association representing a non-depolarized double or triple radio source where blended total intensity and linear polarization components have prevented clear subdivision into two Type 0 associations. \item[] \noindent {\it Type 7} -- A source comprising one or two Type 1 associations. \item[] \noindent {\it Type 8} -- A source comprising one Type 2 association. \item[] \noindent {\it Type 9} -- An unpolarized component or source. \end{itemize} In this work (Paper II) we present the key observational results from ATLAS DR2, with particular focus on the nature of faint polarized sources. This paper is organised as follows. In \S~\ref{ch5:SecRes} we present the ATLAS DR2 source diagnostics resulting from infrared and optical cross-identifications and classifications, diagnostics resulting from polarization--total intensity cross-identifications and classifications, differential component number-counts, and our model for the distribution of fractional polarization. 
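As a concrete aside before the outline continues: the Euclidean-normalised differential component counts presented in \S~\ref{ch5:SecRes} are built by binning the catalogued flux densities and weighting by $S^{2.5}$, so that a non-evolving Euclidean population appears flat. A minimal sketch of this construction is given below; the bin edges, survey area and toy source population are illustrative assumptions, and none of the completeness, resolution-bias or Eddington-bias corrections developed in Paper I are applied here.
\begin{verbatim}
import numpy as np

def euclidean_counts(flux_jy, area_sr, bin_edges_jy):
    """Euclidean-normalised differential counts S^2.5 dN/dS [Jy^1.5 sr^-1]."""
    n, edges = np.histogram(flux_jy, bins=bin_edges_jy)
    ds = np.diff(edges)                          # bin widths (Jy)
    s_mid = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centres
    dnds = n / (ds * area_sr)                    # raw differential counts
    err = np.sqrt(n) / (ds * area_sr)            # Poisson errors
    return s_mid, s_mid**2.5 * dnds, s_mid**2.5 * err

# toy usage: a Euclidean population (dN/dS ~ S^-2.5) gives a flat normalised count
rng = np.random.default_rng(1)
flux = 1e-4 * (rng.pareto(1.5, size=5000) + 1.0)      # fluxes above 0.1 mJy
s, counts, err = euclidean_counts(flux, area_sr=2e-3,
                                  bin_edges_jy=np.logspace(-4, -1, 13))
\end{verbatim}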
In \S~\ref{ch5:SecDisc} we compare the ATLAS DR2 differential counts in both total intensity and linear polarization to those from other 1.4~GHz surveys, and discuss asymmetric depolarization of classical double radio sources. We present our conclusions in \S~\ref{ch5:SecConc}. This paper follows the notation introduced in Paper I. We typically denote flux density by $S$, but split into $I$ for total intensity and $L$ for linearly polarized flux density when needed for clarity. | \label{ch5:SecConc} In this work we have presented results and discussion for ATLAS DR2. Our key results are summarised as follows. For convenience we use the term `millijansky' loosely below to indicate flux densities in the range $0.1-1000$~mJy. \begin{enumerate}[(i)] \item Radio emission from polarized millijansky sources is most likely powered by AGNs, where the active nuclei are embedded within host galaxies with mid-infrared spectra dominated by old-population (10~Gyr) starlight or continuum produced by dusty tori. We find no evidence for polarized SFGs or individual stars to the sensitivity limits of our data - all polarized ATLAS sources are classified as AGNs. \item The ATLAS data indicate that fractional polarization levels for sources with starlight-dominated mid-infrared hosts and those with continuum-dominated mid-infrared hosts are similar. \item The morphologies and angular sizes of polarized ATLAS components and sources are consistent with the interpretation that polarized emission in millijansky sources originates from the jets or lobes of extended AGNs, where coherent large-scale magnetic fields are likely to be present. We find that the majority of polarized ATLAS sources are resolved in total intensity, even though the majority of components in linear polarization are unresolved. This is consistent with the interpretation that large-scale magnetic fields that do not completely beam depolarize are present in these sources, despite the relatively poor resolutions of the ATLAS data. \item We do not find any components or sources with fractional polarization levels greater than 24\%, in contrast with previous studies of faint polarized sources. We attribute this finding to our improved data analysis procedures. \item The ATLAS data are consistent with a distribution of fractional polarization at 1.4~GHz that is independent of flux density down to $I\sim10$~mJy, and perhaps even down to 1~mJy when considering the upper envelope of the distribution. This result is in contrast to the findings from previous deep 1.4~GHz polarization surveys \citep[with the very recent exception of][]{2014arXiv1402.3637R}, and is consistent with results at higher frequencies ($\ge4.8$~GHz). The anti-correlation observed in previous 1.4~GHz studies is due to two effects: a selection bias, and spurious high fractional polarization detections. Both of these effects can become more prevalent at faint total flux densities. We find that components and sources can be characterised using the same distribution of fractional linear polarization, with a median level of 4\%. We have presented a new lognormal model to describe the distribution of fractional polarization for 1.4~GHz components and sources, specific to AGNs, in surveys with resolution FWHMs $\sim10\arcsec$. \item No polarized SFGs were detected in ATLAS DR2 down to the linear polarization detection threshold of $\sim200$~$\mu$Jy. 
The ATLAS data constrain typical fractional polarization levels for the $I$~{\footnotesize $\gtrsim$}~100~$\mu$Jy SFG population to be $\Pi_\tnm{\tiny SFG}<60\%$. \item Differences between differential number-counts of components and of sources in 1.4~GHz surveys with resolution FWHM $\sim10\arcsec$ are not likely to be significant ({\footnotesize $\lesssim$}~20\%) at millijansky levels. \item The ATLAS total intensity differential source counts do not exhibit any unexpected flattening down to the survey limit $\sim100\mu$Jy. \item The ATLAS linearly polarized differential component counts do not exhibit any flattening below $\sim1$~mJy, unlike previous findings which have led to suggestions of increasing levels of fractional polarization with decreasing flux density or the emergence of a new source population. The polarized counts down to $\sim100$~$\mu$Jy are consistent with being drawn from the total intensity counts at flux densities where luminous FR-type radio galaxies and quasars dominate. \item Constrained by the ATLAS data, we estimate that the surface density of linearly polarized components in a 1.4~GHz survey with resolution FWHM $\sim10\arcsec$ is 50~deg$^{-2}$ for $L_{\tnm{\tiny cmp}}\ge100$~$\mu$Jy, and 90~deg$^{-2}$ for $L_{\tnm{\tiny cmp}}\ge50$~$\mu$Jy. We estimate that the surface density for polarized sources is $\sim45$~deg$^{-2}$ for $L_{\tnm{\tiny src}}\ge100$~$\mu$Jy, assuming that most polarized components belong to dual-component sources (e.g. FR-type) at these flux densities. \item We find that the statistics of ATLAS sources exhibiting asymmetric depolarization are consistent with the interpretation that the Laing-Garrington effect is due predominantly to source orientation within a surrounding magnetoionic medium. To our knowledge, this work represents the first attempt to investigate asymmetric depolarization in a blind survey. \end{enumerate} | 14 | 3 | 1403.5308 |
1403 | 1403.0231_arXiv.txt | {Flux emergence is widely recognized to play an important role in the initiation of coronal mass ejections. The Chen \& Shibata (2000) model, which addresses the connection between emerging flux and flux rope eruptions, can be implemented numerically to study how emerging flux through the photosphere can impact the eruption of a pre-existing coronal flux rope.} {The model's sensitivity to the initial conditions and reconnection micro-physics is investigated with a parameter study. In particular, we aim to understand the stability of the coronal flux rope in the context of X-point collapse, as well as the effects of boundary driving in both unstratified and stratified atmospheres.} {A modified version of the Chen \& Shibata model is implemented in a code with high numerical accuracy with different combinations of initial parameters governing the magnetic equilibrium and gravitational stratification of the atmosphere. In the absence of driving, we assess the behavior of waves in the vicinity of the X-point. With boundary driving applied, we study the effects of reconnection micro-physics and atmospheric stratification on the eruption.} {We find that the Chen \& Shibata equilibrium can be unstable to an X-point collapse even in the absence of driving due to wave accumulation at the X-point. However, the equilibrium can be stabilized by reducing the compressibility of the plasma, which allows small-amplitude waves to pass through the X-point without accumulation. Simulations with the photospheric boundary driving evaluate the impact of reconnection micro-physics and atmospheric stratification on the resulting dynamics: we show the evolution of the system to be determined primarily by the structure of the global magnetic fields with little sensitivity to the micro-physics of magnetic reconnection; and in a stratified atmosphere, we identify a novel mechanism for producing quasi-periodic behavior at the reconnection site behind a rising flux rope as a possible explanation of similar phenomena observed in solar and stellar flares.} {} \titlerunning{Flux Rope Stability \& Atmospheric Stratification in Models of Coronal Mass Ejections} | Coronal mass ejections (CMEs) are a common occurrence in the Sun's atmosphere that are known to release giga-tons of plasma into interplanetary space. Some of the ejected plasma can reach the space environment of the Earth and have a strong and complex influence on space activity by inducing geospace disruptions that can severely impact spacecraft, power grids, and communication \citep{Baker13}. While CMEs are quite commonly observed \citep{Evans13}, especially during the peak of the solar cycle, they are still poorly understood. Some of the biggest CME mysteries pertain to their origin, propagation, and relation to flares. The initiation of CMEs has been widely studied and yet remains largely unexplained \citep[see reviews by][]{Forbes06,Chen11}. However, many observational studies of associated features have led to clues about how they occur and what factors contribute to their destabilization \citep[see review by][]{Gopal06}. Prior to an eruption, large-scale shear motions are often observed in photospheric images, especially about the magnetic neutral line \citep{Krall82} and in the form of sunspot rotations \citep{Tian06}. In addition, patches of magnetic flux are found to emerge, expand, move, fragment, coalesce, and cancel over a wide range of length and time scales \citep{Sheeley69, Zwaan85, Centeno07, Parnell09}. 
It is believed that shear motions, sunspot rotation, and the emergence of new flux are all related to the injection of magnetic helicity into coronal magnetic structures that could be directly involved in the eruption \citep{Chae01, Kusano02, Demoulin02, Pariat06, Magara08}. In addition to the growing body of observational studies that have improved our understanding of CMEs, many new insights have also emerged from theoretical and numerical efforts. CMEs have been modeled in two and three dimensions using both simple analytical methods and sophisticated magnetohydrodynamic simulations \citep[see][and references therein]{Jacobs11}. These models differ widely in physical and numerical details, each making its own choice of how to address the trade-off between complexity and computational feasibility. Early theoretical models explained CMEs as a loss of equilibrium, due to magnetic buoyant instabilities \citep[e.g.,][]{vanTend78, Low81, Demoulin88}, as well as MHD flows \citep{Low84} and reconnection \citep{Forbes91}. \citet{Forbes95} proposed a CME model based on the movement of magnetic footpoints (sources) below a flux rope and the subsequent development of a singular current sheet, through which a large magnetic energy release should take place as the flux rope moves continually outwards. \citet{LinForbes00} refined their model and computed exact solutions for the energy release, flux rope height, current sheet length, and reconnection rate. The Lin \& Forbes (hereafter, ``LF'') model, while simplistic, provides an important step forward in CME modeling because it offers exact solutions to the time-dependent nonlinear problem of a flux rope eruption and includes more than a heuristic treatment of magnetic reconnection. Furthermore, it predicts many features (e.g., morphology, current sheet, post-flare loops, flows, energetics) confirmed by observations \citep{Ciaravella02, Ko03, Lin05}. A similar two-dimensional flux rope model was proposed by \citet{ChenShibata00}. Like LF, the Chen \& Shibata (``CS'') model consists of a two-dimensional configuration in which a flux rope sits above the photosphere, surrounded by a line-tied coronal arcade. In both models, the magnetic equilibrium is destabilized by photospheric driving, causing a current sheet to form in the flux rope's wake as it moves outwards. However, whereas the LF model calls for a somewhat manufactured mechanism for destabilization via large-scale convergence of the sources, the CS model improves upon the LF model by incorporating flux emergence as the driver. While it does not lend itself to a purely analytical treatment, the CS model is suitable for numerical simulation. The authors report four very different outcomes based on the position and direction of the driving, showing that the location of the emergence {\em per se} is not a critical factor for destabilizing the coronal flux rope but rather that the relative orientation of the emerging flux determines whether the flux rope moves outwards/upwards (CME-like) or inwards/downwards (failed eruption). Several subsequent studies have built upon the CS model. For example, \citet{Chen04}, \citet{Shiota03a}, and \citet{Shiota04} produced synthetic emission images from CS simulations to compare morphological features, reconnection in-flows, and coronal dimmings found in actual CME observations. Moreover, \citet{Shiota03a} and \citet{Shiota05} were able to identify the formation, structure, and location of slow and fast shocks in the CMEs produced in these simulations. 
Gravitational density stratification in an isothermal atmosphere was considered by \citet{Chen04, Shiota04} and also in a later study by \citet{Dubey06} in spherical coordinates with axisymmetry. In this study, we re-examine the CS model using a more sophisticated numerical tool, a more realistic atmosphere, and higher spatial resolution than previous studies. Simulations are performed using a high-order spectral element method with numerically accurate, self-consistent treatments of diffusive transport (i.e., resistivity, viscosity, and thermal conduction). In addition, we reformulate the initial conditions to have magnetic fields that are everywhere continuous and differentiable, and to include a solar-like temperature profile with a sharp transition region and density stratification. Through an exploration of physical parameters, we find that the CS magnetic equilibrium can be unstable even without a flux emergence driver. Linear theory has shown that sufficient perturbation of the field lines near an X-point by waves or motion can disrupt the balance between magnetic pressure and magnetic tension, causing the X-point to collapse and form a reconnecting current sheet \citep[][chapter 2]{PriestForbes}. Our simulations demonstrate that under a wide range of conditions the CS equilibrium is susceptible to such a collapse via nonlinear accumulation of fast magnetosonic waves at the X-point \citep{McLaughlin09}. However, we also show that in a sufficiently incompressible plasma due, for example, to the presence of a background "guide" magnetic field co-aligned with the axis of the flux rope, the X-point collapse does not take place and the CS magnetic equilibrium can be stabilized. For both stable and unstable configurations, we investigate the impact of the resistivity model enabling magnetic reconnection below the flux rope, as well as the plasma parameters in the low solar atmosphere, on the flux rope's response to the flux emergence driver. We show that flux emergence can produce a rising flux rope both in a stratified and an unstratified atmosphere, though the resulting ejection speed, as well as the plasma dynamics around the X-point, can be strongly effected by the magnitude of the guide field and the atmospheric stratification. | \label{sec:discussion} Coronal mass ejections are eruptive solar events of enormous proportions that shed plasma and magnetic flux into interplanetary space. The Chen \& Shibata model is a good starting point for understanding how such an eruption can originate from the destabilization of a global magnetic configuration by local flux emergence. It helps us to see a connection between flux emergence, a phenomenon at the solar surface, and flux rope ejection, a phenomenon in the corona. Many observational studies have shown spatio-temporal correlations between flux emergence and eruptive events, but few theoretical models to date have identified a precise single mechanism or sequence of processes whereby producing magnetic flux at the photosphere dynamically triggers an eruption. The CS model may assume an oversimplified solar atmosphere and a somewhat manufactured magnetic topology, but it does proffer a complete story. To determine the effects of a more realistic solar atmosphere, we have undertaken an effort to repeat the study using a different numerical suite and allowing for a stratified atmosphere with the density variation of over four orders of magnitude, as well as a sharp temperature transition between the chromosphere and the corona. 
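As an aside, the gravitationally stratified, two-temperature atmosphere referred to above can be illustrated with a simple hydrostatic construction: a smooth chromosphere-to-corona temperature transition with the density obtained by integrating $dP/dz=-\rho g$. The transition height, width, base density and temperatures in the sketch below are placeholder values chosen for illustration only, not the initial condition actually used in the simulations.
\begin{verbatim}
import numpy as np

kB, mp, g_sun, mu = 1.38e-23, 1.67e-27, 274.0, 0.6   # SI units, mean mol. weight

def temperature(z, z_tr=2.0e6, w=2.0e5, T_ch=1.0e4, T_co=1.0e6):
    """Smooth chromosphere-to-corona transition centred at height z_tr (m)."""
    return T_ch + 0.5 * (T_co - T_ch) * (1.0 + np.tanh((z - z_tr) / w))

def hydrostatic_density(z, rho_base=1.0e-6):
    """Integrate dP/dz = -rho*g with rho = P*mu*mp/(kB*T), upward from z[0]."""
    T = temperature(z)
    P = np.empty_like(z)
    P[0] = rho_base * kB * T[0] / (mu * mp)
    for i in range(1, z.size):
        H = kB * T[i - 1] / (mu * mp * g_sun)     # local pressure scale height
        P[i] = P[i - 1] * np.exp(-(z[i] - z[i - 1]) / H)
    return P * mu * mp / (kB * T)

z = np.linspace(0.0, 1.0e7, 2000)                 # 0-10 Mm
rho = hydrostatic_density(z)                      # falls by several orders of magnitude
\end{verbatim}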
We have found that even in the absence of stratification the initial equilibrium can be unstable to small perturbations. The initial adjustment of the magnetic equilibrium to slight force imbalances can generate fast waves that may not be able to propagate through the X-point below the flux rope. In these cases, the fast waves accumulate in such a way as to collapse the X-point and initiate reconnection. Thus, the equilibrium can be destabilized before any photospheric driving is applied. However, we also found that the stability of the CS equilibrium can be controlled by varying the compressibility of the plasma, which in a two-dimensional system is determined by the combination of thermal pressure and the magnitude of the out-of-plane component of the magnetic field. To quantify this effect, we defined a generalized measure of compressibility $\Gamma$ and have empirically determined the equilibrium's stability boundaries in terms of $\Gamma$. When emulating flux emergence by applying an electric field at the photospheric boundary, in the unstratified atmosphere, the results of our simulations are qualitatively similar to those of the original study. However, there are also important differences and new findings. As opposed to the original study, when initialized in a stable configuration, our simulations show little evidence of significant flux rope acceleration or Joule heating associated with the reconnection current sheet. Notably, this result appears to be insensitive to the micro-physics of the reconnection region. By varying the magnitude of the background out-of-plane magnetic field component and thus changing the stability of the global magnetic configuration, we also show that flux rope rise speeds comparable to the original result are possible but require an unstable magnetic configuration as the initial condition. We further show that the micro-physics of reconnection is more likely to slow down than to accelerate the flux rope by comparing simulations with and without anomalous resistivity. It is well known that current-dependent anomalous resistivity allows for ``fast" magnetic reconnection with only weak dependence on the magnitude of resistivity itself \citep{Malyshkin05}. Yet, for both initially stable and quasi-stable magnetic configurations, allowing for anomalous resistivity did not result in a substantial increase of the flux rope rise speed. That is, merely allowing for faster reconnection did not lead to faster reconnection and faster flux rope ejection. On the other hand, in magnetic configurations where fast flux-rope ejection is possible, the simulations with low guide field indicate that the inability of the magnetic reconnection process to occur sufficiently fast could limit the rise speed of the flux rope. In the flux emergence simulations with stable magnetic configuration and realistic atmospheric stratification, the weakness of the X-point heating and the slowness of the ejected flux rope are reproduced, and amplified. In these simulations, changes in the magnetic field structure due to flux emergence generate persistent chromospheric upflows of cold, dense material that is convected into and dramatically cools the reconnection current sheet. In addition to the steady state upflows and cooling, the stratified simulations also produce another type of behavior: self-induced quasi-periodic oscillations in the X-point temperature, density, and other fluid quantities. 
The quasi-periodic oscillations observed in the stratified simulation are of transient nature, appearing after the flux emergence drive has been completed and lasting for just over an hour while the flux rope is within $\approx 1$~Mm of its initial location. The robustness of this phenomenon will be a subject of future research, but our initial investigation indicates that a critical balance between the upward tension force of the reconnected magnetic field and the downward gravitational pull on the dense chromospheric plasma convected into the reconnection region has to be achieved in order for the quasi-periodic oscillations to appear in a simulation. While that may seem to be a prohibitive constraint, we speculate that in the three-dimensional parameter space spanned by (1) the height of the X-point, (2) the strength of the magnetic fields and (3) the horizontal location of the emerging flux relative to the separatrices of the pre-existing magnetic configuration, all quantities that can vary greatly throughout the lower solar atmosphere, there is likely embedded a two-dimensional parameter space where such balance can, indeed, be achieved. We note that there is also extensive observational evidence for what has been called quasi-periodic pulsations (QPP) in solar and stellar flares \citep[e.g., see][and references therein]{Nakariakov09, MitraKraev05} with the QPP periodicity time scale varying from fractions of a second to several minutes, comparable to the period of the oscillations produced in our simulation. In fact, \citet{Nakariakov09} have previously resorted to the water drop formation analogy in describing what they refer to as a class of ``load/unload'' models of long multi-minute period QPPs. The plasma droplet mechanism described in Sec.~\ref{sssec:stratified} above is a much more direct, and novel, analogy to the same physical process with the potential to provide a new alternative explanation for the long-duration QPPs. Finally, we point out that the limitations of the two-dimensional MHD model used here for modeling a region of potential flaring activity embedded into a stratified solar atmosphere are many. It is well known that laminar resistive reconnection cannot account for the observed rates of magnetic energy release, particle acceleration, or radiation from solar flares, while three-dimensional effects can substantially alter both the flux-rope stability properties and the micro-physics of reconnection. Nevertheless, we believe that the careful and systematic study described in this article is a prerequisite for performing more complete, and also substantially more challenging and complicated, studies of CME initiation by flux emergence in the future. | 14 | 3 | 1403.0231 |
1403 | 1403.2691_arXiv.txt | We measure the extinction curve in the central 200 pc of M31 at mid-ultraviolet to near-infrared wavelengths (from 1928\AA\ to 1.5$\mu$m), using \swift/\uvot\ and \hst\ \wfc3/\acs\ observations in thirteen bands. Taking advantage of the high angular resolution of the \hst\ \wfc3\ and \acs\ detectors, we develop a method to simultaneously determine the relative extinction and the fraction of obscured starlight for five dusty complexes located in the circumnuclear region. The extinction curves of these clumps ($R_V$=2.4-2.5) are steeper than the average Galactic one ($R_V$=3.1), but are similar to optical and near-infrared curves recently measured toward the Galactic Bulge ($R_V\sim2.5$). This similarity suggests that steep extinction curves may be common in the inner bulge of galaxies. In the ultraviolet, the extinction curves of these clumps are also unusual. We find that one dusty clump (size $<$2 pc) exhibits a strong UV bump (extinction at 2175\AA ), more than three standard deviation higher than that predicted by common models. Although the high stellar metallicity of the M31 bulge indicates that there are sufficient carbon and silicon to produce large dust grains, the grains may have been destroyed by supernova explosions or past activity of the central super-massive black hole, resulting in the observed steepened extinction curve. | \label{s:intro} Dust grains are pervasive in the Universe, absorbing, scattering and re-radiating light, affecting all wavelengths. Accounting for the effects of dust is one of the fundamental steps when inferring intrinsic properties of astrophysical objects. The degree of the effects depends not only on the total column density of dust grains, but also on their size and composition. Dust grains of various sizes affect different parts of the electromagnetic spectra. Small grains mainly absorb at shorter wavelengths, such as the ultraviolet (UV), while large grains dominate attenuation in the infrared (IR). In particular, carbonaceous grains are suggested to cause strong extinction near 2175\AA\ \citep{dra03}. The overall wavelength dependence of dust extinction is called the `extinction law' (or extinction curve), which is conventionally expressed to be the ratio between the absolute extinction, $A_{\lambda}$, at some wavelength, $\lambda$ and the absolute extinction in the $V$ band, $A_V$, as a function of the reciprocal of the wavelength. The extinction curve is governed by the mix of dust grains, which can potentially be affected by the local environments. Strong shocks and UV photons could destroy large grains and thus change the shape of the extinction curve~\citep{jon04}. % Extinction curves have been extensively studied in the Milky Way (MW,~\citealt{fit04} and reference therein) and in the Magellanic Clouds (MCs, Large Magellanic Cloud: LMC and Small Magellanic Cloud: SMC,~\citealt{gor03}). % Thanks to the International Ultraviolet Explorer (IUE), many high-quality low-resolution UV spectra of the stars in the MW and the MCs have made previous work on extinction curves possible. These studies have revealed significant environmentally-dependent effects on the extinction curves, reflected in varying UV slopes and strengths of the 2175\AA\ bump.~\citet{car89} find that most extinction curves in the MW could be expressed with a function that depends on a single parameter, $R_V$=$A_V$/($A_B$-$A_V$) ($A_B$ is the absolute extinction in the $B$ band), which roughly traces the dust grain size. 
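Note that $R_V=A_V/(A_B-A_V)$ implies $A_B/A_V=1+1/R_V$, so the $R_V\simeq2.4$--2.5 measured here corresponds to $A_B/A_V\simeq1.4$, compared with $\simeq1.32$ for the Galactic average of $R_V=3.1$. The ``fraction of obscured starlight'' introduced in the abstract can likewise be made concrete with a simple screen model: if a fraction $f$ of the bulge light along a sightline lies behind a dusty clump and the rest is unobscured foreground, the observed-to-intrinsic flux ratio in band $n$ is $R_n=(1-f)+f\,10^{-0.4A_n}$. The sketch below fits $f$ and an overall extinction normalisation under an assumed trial curve shape; it is meant only to illustrate the degeneracy between $f$ and $A_n$, not the actual multi-band procedure developed in \S\ref{s:method}, and the numerical values are invented.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def screen_ratio(f, A_n):
    """Observed/intrinsic flux ratio for a clump obscuring a fraction f of
    the starlight along the line of sight, with extinction A_n (mag)."""
    return (1.0 - f) + f * 10.0 ** (-0.4 * A_n)

def fit_f_and_av(ratios, sigma, shape):
    """Fit (f, A_V) assuming a trial relative curve A_n = A_V * shape_n."""
    def resid(p):
        f, av = p
        return (screen_ratio(f, av * shape) - ratios) / sigma
    res = least_squares(resid, x0=[0.5, 1.0],
                        bounds=([0.0, 0.0], [1.0, 10.0]))
    return res.x

# toy usage with a hypothetical curve shape and made-up band ratios
shape  = np.array([2.5, 1.6, 1.3, 1.0, 0.6])         # A_n/A_V from UV to near-IR
ratios = np.array([0.78, 0.72, 0.70, 0.67, 0.63])    # illustrative only
f_fit, av_fit = fit_f_and_av(ratios, 0.02, shape)
\end{verbatim}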
Cardelli's extinction curve steepens (i.e. the relative extinction in the short wavelength becomes large) with decreasing $R_V$, although deviations are found toward several directions~\citep{mat92}. One of the most significant features of the MW extinction curve is the strong 2175\AA\ bump, the width of which is sensitive to the local environment (from 0.63 to 1.47 $\mu m^{-1}$; ~\citealt{val04}). % In contrast to the well-behaved extinction curves in the MW, the extinction curves in the MCs, especially the SMC, are much steeper in the UV bands and exhibit a significantly weaker 2175\AA\ bump~\citep{gor98,mis99}.~\citet{gor03} fit the extinction curves in the MCs with the generalized model provided by~\citet{fit90} (similar to that of~\citealt{car89}), and claim that the variation in dust properties in the MW and MCs is caused by environmental effects. The Andromeda galaxy~\citep[M31, at a distance of $\sim$780 kpc;][]{mcc05} provides us with an ideal testbed to study the extinction curves in regions with different metallicity and star-forming activity. The extinction curve in the M31 disk is similar to the `average' Galactic one ($R_V$=3.1), albeit with a possibly weaker 2175\AA\ bump~\citep{bia96}. Using ground-based optical images in BVRI bands, \citet{mel00} find that the extinction curve of a dusty complex 1.3\arcmin\ on the sky ($\sim$300 pc in projection) northwest of the M31 nucleus is much steeper ($R_V\sim2.1$). In this work, we study the extinction curve in the central 200 pc of the circumnuclear region (CNR) of M31. As the second closest galactic nucleus, the CNR of M31 offers a unique laboratory~\citep[][and references therein]{li09} for studying the interaction and co-evolution between the super-massive black holes (SMBHs) and their host galaxies. By virtue of proximity, we can achieve an unparalleled linear resolution in M31 for a detailed study on various astrophysical activities in an extreme galactic nuclear environment. Like our Galaxy, M31 harbors a radiatively quiescent SMBH, named M31*~\citep{dre88,kor88,cra92,gar10,li11}. On the other hand, unlike the active star formation in the Galactic Center, the nuclear bulge of M31 does not host any young massive stars (less than 10 Myr old,~\citealt{bro98,ros12}) and contains only a small amount of molecular gas~\citep{mel00,mel11,mel13}. The stellar population in the M31 bulge is found to be highly homogeneous, dominated by old stars ($\sim$8 Gyr,~\citealt{ols06,sag10}). In the central 2\arcmin\ ($\sim$450 pc) , the two-dimensional surface brightness distribution of the bulge agrees well with a S\'ersic Model~\citep{pen02,li13}. The metallicity in the M31 bulge seems to be super-solar~\citep{sag10} and much higher than that of the MCs. The steep extinction curve claimed by~\citet{mel00} may be due to the nuclear environment of the galaxy, with its high metallicity, as well as the potential impact of the SMBH (i.e. due to ongoing mechanical feedback and/or previous outbursts) and strong interstellar shocks, all of which could affect the size and compositions of the dust grains. Because of the relatively low line-of-sight extinction, the CNR of M31 is the nearest well-defined galaxy nucleus that can be mapped from the UV to the near-IR (NIR) bands. To our knowledge, there has not yet been a study of the extinction curve covering the UV-optical-NIR wavelength range in the central $\sim$ 500 pc of a normal galaxy. 
The understanding of the extinction curve over this entire range is essential to studies of distant galactic nuclei, especially for those with similar properties. In this paper, we empirically derive the relative extinctions in 13 bands from the mid-UV (MUV) to the NIR and then determine the extinction curves for representative regions in the CNR of M31. We utilize data from the \hst\ Wide Field Camera 3 ({\sl WFC3}) and Advanced Camera for Surveys (\acs ) of multiple programs~\citep{dal12,li13}. The cores of dusty clumps can be resolved in our data, thanks to the superb angular resolution of \hst\ ($<$0.15\arcsec, i.e. $\sim$0.55 pc), while the high sensitivity of the \hst\ \wfc3\ and \acs\ cameras ensures high signal-to-noise (S/N) ratios. We also utilize \swift/\uvot\ observations with three MUV filters, the middle of which covers the 2175\AA\ bump. Therefore, the \swift/\uvot\ filters can be used not only to examine the slope of the extinction curve in the MUV, but also to probe the strength of the 2175\AA\ bump. In a companion work,~\citet{li13} studied the fine spatial structures of the extinction features in the CNR of M31. We present the \swift\ and \hst\ observations, and the data reduction in \S\ref{s:observation}. We describe our method to derive the line-of-sight locations and the extinction curves in \S\ref{s:method}, apply it to the dusty clumps in M31's CNR in \S\ref{s:analysis} and present the results in \S\ref{s:result}. We discuss the implications of our results in \S\ref{s:discussion} and conclude the paper in \S\ref{s:summary}. | \label{s:discussion} The extinction curve is determined by the sizes and compositions of dust grains, which could be related to many factors, such as the metallicity of a molecular cloud and its environment. The metallicity alone is unlikely to be able to explain the variations among the extinction curves in the MW and the MCs. Different sightlines that have similar metallicity gas can show very different extinction curves. Indeed, the extinction curves toward a few lines of sight in the MCs are found to be similar to the Galactic extinction curve~\citep{gor03}, whereas sightlines toward four stars in the Milky Way have steep extinction curves that lack the 2175\AA\ bump~\citep{val04}. Therefore, factors other than metallicity must play important roles in determining the shape of an extinction curve.~\citet{gor03} point out that the differences between the MW and MC extinction curves may be due to their sampling different environments. In particular, most of the studied extinction curves in the MCs are from active star-formation regions, where strong shocks and UV photons may conspire to destroy large dust grains, whereas those in the MW are typically toward runaway main-sequence OB stars. Within M31, the metallicity in the bulge is comparable to regions of the disk~\citep{ros07}, for which~\citet{bia96} have derived shallower extinction curves. Therefore, unless the clouds are due to accreted low metallicity gas, it seems unlikely that metallicity is responsible for those steep curves. We have found that the extinction curves in the CNR of M31 are steep. We naively expected that the extinction curve there should be similar to or even flatter than the MW one, because of their comparable metallicity and low star formation rate, but found just the opposite. In fact, the CNR of M31 is not the only galactic bulge with steep extinction curves in the Local Group.
The Galactic inner bulge has recently been suggested to have a similar non-standard optical extinction curve~\citep{uda03,sum04,rev10,nat13}. These authors use red clump stars as standard candles, due to their nearly constant magnitude and color at high metallicities. They derive the foreground extinction in the optical and NIR bands (V, I, J and K) from the differences between the observed and intrinsic magnitudes/colors of the red clump stars toward different sightlines in the Galactic Bulge. They find that the relative extinction could not be explained by the standard Galactic extinction curve ($R_V$=3.1), and must instead be steeper.~\citet{nat13} report a value of $R_V$=2.5 toward the Galactic Bulge, with the extinction curve model of~\citet{car89}, which is similar to the $R_V$ value we have obtained in the CNR of M31. This consistency suggests that a steep extinction curve could be common in galactic bulges. The extinction curves could be steepened by eliminating large grains. It is possible that they have been destroyed by interstellar shocks. Recombination lines (H$\alpha$, [N {\small \rm II}], [S {\small \rm II}], [O {\small \rm III}]) have been found in the CNR of M31 by~\citet{jac85} and arise from regions that are morphologically similar to that of the dust emission~\citep{li09}. Therefore, the recombination lines are from the surfaces of the dusty molecular clouds. Because the [N {\small \rm II}] line is stronger than the H$\alpha$ line, these recombination lines are suggested to be excited by shocks~\citep{rub71}. We speculate that the shocks from supernova explosions or past activity of M31* have evaporated large dust grains and steepened the extinction curve. For example, \citet{phi13} recently find that the interstellar medium of host galaxies surrounding 32 Type Ia supernovae has $R_V<2.7$ with a mean value of 2.06. We also find the 2175\AA\ bump in our extinction curves, which is generally thought to be due to small graphite grains. The 2175\AA\ bump is especially strong in the extinction curve of Clump D, and probably also in that of Clump C. The former is located 30\arcsec\ (113 pc in projection) southeast of the D395A/393/384 clump studied by~\citet{mel00}. This small and compact clump core has a size $<$ 2 pc~\citep{li13} and appears dark in the \hst\ F275W, F336W and F390M images, consistent with its high $f$ value. The high metallicity of the clump may provide the necessary carbon and silicon to construct small graphite grains. Among the five dusty clumps, Clump D seems to have the smallest median $A_{F547M}$, but the largest 2175\AA\ bump. The 2175\AA\ bump is weaker in the extinction curves of Clumps B and E, which both have high $A_{F547M}$. This situation is probably reminiscent of the four lines of sight in the MW~\citep{val04} through dense molecular clouds, which have weak 2175\AA\ bumps, for example, HD62542 ($A_V$=0.99$\pm$0.14) and HD210121 ($A_V$=0.75$\pm$0.15). Future \hst/STIS spectra in the mid-ultraviolet range are needed to confirm the potential 2175\AA\ bump in Clumps C and D. Because of the `Red Leak' problem and the old stellar population in the M31 bulge, we need to assume an SED to compare the observed relative extinction in thirteen bands with the model. With UV spectra, we can directly derive the extinction curve, as well as the parameters of the 2175\AA\ bump, such as its centroid and width. In this paper, we have presented the first study of the extinction curve within the central 1\arcmin\ region of M31 from the MUV to the NIR.
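For reference, the centroid and width just mentioned are conventionally defined through the Drude profile used in Fitzpatrick \& Massa-type parametrizations of the UV extinction, in which the bump contributes a term $c_3 D(x;x_0,\gamma)$ with
\begin{equation}
D(x;x_0,\gamma)=\frac{x^{2}}{\left(x^{2}-x_0^{2}\right)^{2}+x^{2}\gamma^{2}}, \qquad x\equiv1/\lambda ,
\end{equation}
where $x_0\simeq4.6\,\mu{\rm m}^{-1}$ corresponds to $\lambda_0\simeq2175$\AA\ and $\gamma$ is the width. This is quoted only as a reminder of the standard parametrization, not as the fit performed in this work.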
We have used \swift/\uvot\ and \hst\ \wfc3/\acs\ observations in thirteen bands to simultaneously constrain the line-of-sight locations and the relative extinction $A_n/A_{F547M}$ of five dusty clumps in this region. Instead of fixing the line-of-sight locations of these clumps at assumed values, we have developed a method to determine their background stellar light fraction ($f$) directly from the observations. We have shown that the extinction curve is generally steep in the circumnuclear region of M31, where the metallicity is super-solar. The derived $R_V$=2.4-2.5 is similar to that found toward the Galactic Bulge. We discuss this consistency, which leads us to conclude that large dust grains are destroyed in the harsh environments of the bulges, e.g., via potential shocks from supernova explosions and/or past activity of M31*, as indicated by the strong [N{\sc II}] recombination lines from the dusty clumps. The extinction curves of the five dusty clumps show significant variations in the mid-ultraviolet. Some of the extinction curves can be explained by the extinction curve model of~\citet{fit99}. Others, most notably that of Clump D (and probably also that of Clump C), show an unusually strong 2175\AA\ bump, which is weak elsewhere in the M31 disk~\citep{bia96}. | 14 | 3 | 1403.2691
1403 | 1403.5849_arXiv.txt | { Using the temperature data from \emph{Planck} we search for departures from a power-law primordial power spectrum, employing Bayesian model-selection and posterior probabilities. We parametrize the spectrum with $n$ knots located at arbitrary values of $\log{k}$, with both linear and cubic splines. This formulation recovers both slow modulations and sharp transitions in the primordial spectrum. The power spectrum is well-fit by a featureless, power-law at wavenumbers $k>10^{-3} \, \impc$. A modulated primordial spectrum yields a better fit relative to $\Lambda$CDM at large scales, but there is no strong evidence for a departure from a power-law spectrum. Moreover, using simulated maps we show that a local feature at $k \sim 10^{-3} \, \impc$ can mimic the suppression of large-scale power. With multi-knot spectra we see only small changes in the posterior distributions for the other free parameters in the standard $\Lambda$CDM universe. Lastly, we investigate whether the hemispherical power asymmetry is explained by independent features in the primordial power spectrum in each ecliptic hemisphere, but find no significant differences between them. } | The first cosmological data analysis by the \emph{Planck} Science Team \cite{Collaboration:2013ww} confirmed the conventional $\Lambda$CDM model of cosmology with unprecedented precision. In particular, a scale-invariant primordial power spectrum (PPS) is excluded at $>5\sigma$. Simple models of single-field inflation generically yield an almost scale-invariant PPS, but no inflationary models are yet favoured by Bayesian evidence relative to $\Lambda$CDM \cite{Martin:2010hh,Easther:2011yq,Collaboration:2013vu,Martin:2013nzq}. Conversely, models with relatively complex spectra, including oscillations or localized amplifications, are consistent with current cosmological data \cite{Ashoorioon:2006wc,Ashoorioon:2008qr,Flauger:2009ab,Achucarro:2010da,Peiris:2013opa,Easther:2013we}. While it is clear that the Harrison-Zel'dovich spectrum does not provide an optimal fit to the data, it does not follow that the power-law PPS is preferred over all other possible forms. Furthermore, as cosmological constraints become sensitive to increasingly delicate signals in the PPS, it is important to check whether constraints on these parameters depend on the assumed form of the PPS. Model-independent approaches to reconstructing the PPS have been widely studied \cite{Wang:2013vf,2013JCAP...07..031H,2013JCAP...12..035H,2012JCAP...10..050G,2010ApJ...711....1P,2010JCAP...01..016N,2009JCAP...07..011N,2009PhRvD..79d3010N,2008PhRvD..78l3002N,2008PhRvD..78b3511S,2007PhRvD..75l3502S,2006MNRAS.367.1095T,2006MNRAS.372..646L,2004PhRvD..70d3523S,2004JCAP...04..002H,Efstathiou:2003bh,2003ApJ...599....1M,2001PhRvD..63d3009H,Verde:2008er,Peiris:2009ke,Bird:2010mp,Bridges:2005br,Bridges:2006zm,Bridges:2008ta,2012JCAP...06..006V,Vazquez:2011xa,Vazquez:2013dva,dePutter:2014vd}, and the approached used here closely parallels that of Ref.~\cite{2012JCAP...06..006V}, which examines the seven year WMAP dataset. We revisit this problem using \emph{Planck} data, Bayesian model-selection based on evidence (or \emph{marginalized likelihood}) ratios \cite{Bridges:2005br,Bridges:2006zm,Bridges:2008ta,2012JCAP...06..006V,Vazquez:2011xa,Vazquez:2013dva} and a flexible specification for the PPS. 
We use this formalism to test whether \emph{Planck} constraints on cosmological parameters are weakened when we permit a generic PPS, rather than the usual, almost--scale-invariant power-law formulation. While parameter degeneracies with the PPS could, in principle, affect the posteriors on other cosmological variables (e.g., \cite{Kinney:2001js}), we find that the constraints on these parameters do not change significantly when we allow generic forms of the PPS. Secondly, we use this formalism to determine whether the observed large-scale hemispherical asymmetry in the two ecliptic hemispheres can be attributed to differences in the form of the PPS. We find no difference in the structure of the power spectrum in the two hemispheres, either qualitatively or in the evidence ratios. Finally, the algorithm described here was implemented in \textsc{Cosmo++} \cite{Aslanyan:2013ts}, which is publicly available. We compare Bayesian evidence for the non--power-law models to the evidence for the red-tilted PPS of $\Lambda$CDM. As we add more parameters to the PPS the Bayesian evidence does not change significantly, indicating the data cannot substantially distinguish between these models. However, most of the extra knots appear in the long wavelength section of the power spectrum, with $k \lesssim 10^{-3} \, \impc$, suggesting that smaller scales are indeed well described by a power-law PPS. Since no model-selection method can be completely non-parametric, we check our analysis by obtaining posterior probabilities for two different styles of non--power-law PPS. We compare both a linear- and a cubic-spline interpolation model, which capture sharp and smooth features in the PPS, respectively. The two models are illustrated in Fig.~\ref{model_fig} and explained in detail in Section~\ref{ssect:relaxingpowerlaw}. We allow variation in the number of knots, their amplitudes, their positions in $k$-space, and the endpoint amplitudes. We see that the maximum increase in evidence is $\Delta\ln Z=0.7$ for the one-knot linear-spline model with varying foregrounds, and $\Delta\ln Z=2.2$ for the five-knot linear spline, albeit with the foreground parameters in the Planck likelihood fixed to their best-fit values. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{linear_vs_cubic.png} \caption{We illustrate the linear-spline $\mathrm{LS}_n$ and cubic-spline $\mathrm{CS}_n$ models for the primordial power spectrum, where $n$ is the number of knots between the two endpoints. The white region denotes the range of $k$ for which the spectrum is defined, with $k_\mathrm{min}=10^{-6}\,\mathrm{Mpc}^{-1}$ and $k_\mathrm{max}=1.0\,\mathrm{Mpc}^{-1}$. There are $2n+2$ degrees of freedom for each choice of spline, since we vary the amplitude $\Delta^2_\zeta$ at the endpoints and allow the knots to move in both $\Delta^2_\zeta$ and $k$.} \label{model_fig} \end{figure} To test the effectiveness of our PPS parametrizations we attempt to recover nontrivial signals in simulated CMB temperature maps. The method clearly finds even small added features in the PPS, while the evidence ratio strongly favors models with added interior knots when the simulated feature is large enough. The analysis here considers only \emph{Planck} data. In a separate paper we will consider the implications of the recent BICEP2 $B$-mode polarization detection \cite{Ade:2014xna,Ade:2014gua} for the scalar power spectrum using the knot-spline techniques developed here.
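A minimal sketch of the knot-spline parametrization of Fig.~\ref{model_fig} -- interpolation in $(\log k,\log\Delta^2_\zeta)$ between fixed-$k$ endpoints and $n$ movable interior knots -- is given below; the endpoint values, knot placement and use of SciPy interpolation are illustrative assumptions rather than the \textsc{Cosmo++} implementation.
\begin{verbatim}
import numpy as np
from scipy.interpolate import interp1d

K_MIN, K_MAX = 1.0e-6, 1.0          # Mpc^-1, endpoints of the reconstruction

def knot_spline_pps(amp_ends, knot_k, knot_amp, kind="linear"):
    """Delta^2_zeta(k) interpolated in (log k, log amplitude).

    amp_ends : amplitudes at K_MIN and K_MAX
    knot_k   : interior knot positions (Mpc^-1); knot_amp their amplitudes
    kind     : 'linear' for LS_n, 'cubic' for CS_n
    """
    order = np.argsort(knot_k)
    logk = np.log10(np.concatenate([[K_MIN], np.asarray(knot_k)[order], [K_MAX]]))
    loga = np.log10(np.concatenate([[amp_ends[0]],
                                    np.asarray(knot_amp)[order], [amp_ends[1]]]))
    f = interp1d(logk, loga, kind=kind)
    return lambda k: 10.0 ** f(np.log10(k))

# a one-knot linear spline (LS_1): 2n + 2 = 4 varying amplitudes/positions
pps = knot_spline_pps(amp_ends=(2.2e-9, 2.2e-9),
                      knot_k=[1.0e-3], knot_amp=[2.0e-9], kind="linear")
delta2 = pps(np.logspace(-5, -0.5, 200))
\end{verbatim}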
| \label{sec_summary} We have applied the ``knot-spline'' reconstruction method \cite{2012JCAP...06..006V} to the Planck temperature data. This paper breaks new ground by checking the algorithm's ability to recover the PPS from simulated maps with artificially introduced features and by confirming that cosmological parameter constraints obtained from the Planck data are not diluted when the usual assumptions about the form of the primordial power spectrum are relaxed. Furthermore, we investigate whether the hemispherical power asymmetry visible in the WMAP and Planck temperature maps is correlated with differences in the primordial power spectrum recovered from each hemisphere, finding that the two power spectra are in good agreement. Finally, the numerical tools needed to reproduce or extend this analysis are now included with the \textsc{Cosmo++} library \cite{Aslanyan:2013ts}. The PPS reconstruction method used here allows the location of the knots to vary in both $k$-space and amplitude, and allows us to capture both gentle variations in the spectrum as well as a broad class of localized features. Increasing the possible complexity of the PPS necessarily improves the fit to the data, and we must guard against the ``look-elsewhere'' effect or ``fitting the noise''. Determining the optimal number of knots can be posed as a model selection problem, and we use Bayesian evidence ratios to safeguard against overfitting. We applied our methods to simulated maps with Planck characteristics to check the reliability of the method, and to estimate the amplitude of possible features in the PPS that the method can detect. We were able to recover modulations which modified an underlying power law spectrum by less than $5\%$. Typically, specific modulations have well-defined thresholds above which they are very easy to detect; for example, a long wavelength modulation with an amplitude a factor of three beyond the threshold of detectability yields an improvement in evidence of $\Delta\ln Z=51$. Because the \emph{Planck} likelihoods are more sensitive to features at $k>10^{-3} \, \impc$, the posteriors on the knots' positions (Fig.~\ref{ps_sim_lin_fig}) show a decrement of power at $k<10^{-3} \, \impc$, although this is not a feature of the simulated data; this cautions us against over-interpreting an apparent decrement in large-scale power in the actual sky maps. More generally, the weak evidence computed for the smaller modulation is partly driven by the use of ``uninformative'' priors for the modulated spectrum \cite{Easther:2011yq}. Consequently, Bayesian evidence does not permit a strictly algorithmic solution to cosmological model-selection problems with maximum entropy priors similar to those used here, and nuanced physical analyses of the improvement in the maximum likelihood, along with cross-checks against other datasets, will remain important. Having tested our methods, we reconstructed the PPS from \emph{Planck} CMB temperature and lensing data. We found no evidence for deviations from the standard power law PPS on scales with $k \, \gtrsim \, 10^{-3} \, \impc$. Although on larger scales the data are not able to distinguish between models with or without features due to cosmic variance, the extensions to $\Lambda$CDM do not have sufficient Bayesian evidence to favor them over a standard power-law PPS.
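To put the quoted evidence differences in context, for equal prior model probabilities the posterior odds of a knotted model $M_1$ over the power-law model $M_0$ are set by the Bayes factor
\begin{equation}
B_{10}=\frac{Z_1}{Z_0}=e^{\Delta\ln Z},
\end{equation}
so that $\Delta\ln Z=0.7$ corresponds to odds of only ${\sim}2{:}1$, $\Delta\ln Z=2.2$ to ${\sim}9{:}1$, and the $\Delta\ln Z=51$ found for the strongly modulated simulation to overwhelming odds. This is the standard relation rather than an additional result of the analysis.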
Furthermore, the posteriors for the ``standard'' cosmological parameters did not differ substantially from the power-law case when a more general PPS was allowed, and we can conclude that the \emph{Planck} constraints on these parameters are robust. Finally, we performed a PPS reconstruction on each individual hemisphere, but found no systematic difference between the results, showing that any ``hemispherical anomaly'' is not associated with differences in the underlying power spectrum. This paper is the first in a sequence of analyses of non-standard power spectra. In particular, we will investigate the implications of the recent detection of $B$-mode polarization by the BICEP2 telescope \cite{Ade:2014xna,Ade:2014gua} for the scalar power spectrum \cite{Abazajian:2014tqa}, and in a third paper we will study whether permitting a non--power-law PPS changes the estimated values of derived parameters such as $\sigma_8$ or modifies estimated constraints on the neutrino sector. | 14 | 3 | 1403.5849
1403 | 1403.0188_arXiv.txt | To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the second \textit{XMM-Newton} serendipitous source catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross validation accuracy of the training data is ${\sim}$97\% on a seven-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which, 2XMM J180658.7$-$500250 appears to be the most unusual source in the sample. Its X-ray spectra is suggestive of a ULX but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys. | The identification of variable and transient astrophysical sources will be a major challenge in the near future across all wavelengths. The advent of facilities such as the Large Synoptic Survey Telescope (LSST) in optical \citep{Tyson2002}, the Square Kilometre Array (SKA) in radio \citep{cordes2004} and the extended ROentgen Survey with an Imaging Telescope Array (e-ROSITA) in X-rays \citep{merloni2012}, will enable the next generation of all-sky time-domain surveys. Many types of transients and variable sources are currently known, such as supernovae, cataclysmic variables (CVs), X-ray binaries (XRBs), flare stars, gamma-ray bursts (GRB), tidal disruption flares, and future time-domain surveys will likely uncover novel source types. The large number of sources to be surveyed makes identifying interesting transients a challenging task, especially since timely multi-wavelength follow-ups will be critical for fulfilling the transient science goals. To this end, we envision that automatic classification will be a crucial part of the processing pipeline \citep{murphy2012}. Here, we demonstrate the feasibility of using time series and contextual information to automatically classify variable and transient sources. We used data from the X-ray Multi-Mirror Mission - Newton (\textit{XMM-Newton}) because there has not been previous studies on this data set using automatic classification algorithms and because the time series for many of the sources are readily available, thereby enabling us to investigate the efficacy of a classifier built using solely time-domain information. Automatic classification is a similar problem across all wavelengths and we expect that the techniques used in this paper can be readily adapted for data sets in other wave-bands. The Second XMM-Newton Serendipitous Source Catalog Data Release 2 (2XMMi-DR2) was the largest catalog of X-ray sources \citep{watson2009} at the time it was released, but has since been surpassed by 2XMMi-DR3 and 3XMM. In this study, we used 2XMMi-DR2 and kept DR3 as a verification sample. 
There have been previous attempts to classify the unidentified sources in 2XMMi \citep{pineau2011, lin2012}. The traditional method is to cross-match the unknown sources with catalogs in other wavelengths (e.g. SDSS, 2MASS) and then use expert knowledge to draw up classification rules. For example, one powerful discriminant is the ratio of the optical to X-ray flux for separating active galactic nuclei (AGN) and stars. In the scheme used by \citet{lin2012}, sources whose positions coincide with the centres of galaxies are deemed to be AGN. Manually selected classification rules often have their basis in science and are usually comprehensible to other experts. This method works well when there are only a few pieces of information to be processed (e.g. optical to X-ray flux), but becomes intractable when there are many disparate sets of information. In machine learning, each piece of information is translated into either a real number or a categorical label known as a \textit{feature}. Machine learned classification excels at finding subtle patterns in data sets with a large number of features. Machine learned classification has been used extensively in astronomy. In X-ray astronomy, \citet{mcglynn2004} used oblique decision trees to produce a catalog of probabilistically classified X-ray sources from \textit{ROSAT}. Since that study, there have been many advances in automatic classification techniques. Ensemble algorithms such as Random Forest (RF) have replaced single decision trees as the state-of-the-art. RF has been successfully used in astronomy for the automatic classification of variable stars \citep{richards2011, dubath2011} and the photometric classification of supernovae \citep{carliles2010}. In optical astronomy, there are efforts to incorporate automatic classification in the processing pipelines of current and planned surveys \citep{saglia2012, bloom2012, djorgovski2012}. Feature representation is an important issue in light curve classification. Since light curves are rarely observed with exactly the same cadences, they need to be transformed into structured feature sets before different sources can be compared. Various light curve feature representations have been used in astronomy. For example, \citet{matijevi2012} transformed the light curves of each Kepler eclipsing binary into a set of 1000 observations by fitting and then interpolating the observations. However, this method only works for a very homogenous set of light curves. Other studies use a restrictive set of variability measures. In \citet{hofmann2013}, X-ray sources in M31 are placed into two light curve classes - highly variable or outbursts. This method has limited descriptive power for the variety of time-variability behaviours. In contrast, \citet{rimoldini2012} extracted a large number of features from each light curve in the Hipparcos catalog and used RF and Bayesian networks to automatically classify ${\sim}$6000 unsolved optical variables. They achieved a misclassification rate of less than 12\% and this is the methodology for feature representation that we have used. In this paper, we present the results of using the RF algorithm to classify variable sources in 2XMMi-DR2. In Section \ref{s_data}, we describe the 2XMMi-DR2 data set and the data processing we performed. In Section \ref{s_method}, we describe the RF algorithm. 
In Section \ref{s_timeseries} we present the classification results using only time-series features and in Section \ref{s_contextual}, we show how the classification accuracy increases with the inclusion of contextual features. Our main result, a table of probabilistically classified 2XMMi variable sources, is presented in Section \ref{s_unknown}. In Section \ref{s_interesting} we present a method for selecting anomalous sources and briefly describe one of the interesting anomalous sources. Finally, in Section \ref{s_conclusion} we discuss the limitations and future prospects of machine learned classification. | In this paper, we have tested the performance of the RF classifier with the 2XMMi-DR2 data set. On a seven-class data set with only time series features, we were able to attain a 10-fold validation accuracy of ${\sim}77\%$. Time series features do have some discriminative power, but in the absence of other information, they do not result in a high-performing classifier. When we added in contextual features such as hardness ratios, optical/IR/radio cross-matches, Galactic coordinates and proximity to nearby galaxies, the classification accuracy increased to ${\sim}97\%$. This shows that the RF classifier can be a high-performing classifier, but only by combining both time-series and contextual features. The same conclusion was made by \citet{palaversa2013} in their work on the automatic classification of optical stars, in which they found that using both light curve features and colours allowed them to achieve an accuracy of 92\%. A potential recommendation from our work is that the classifiers for future synoptic variable surveys will need more than just temporal flux measurements to achieve good performance. We demonstrated the scientific potential of an automatic classifier by applying our random forest classifier to 411 unknown variable sources. To test the reliability of such automatic classification, we found recent classifications in the literature for 19 sources and checked the literature's suggested classification against the output from our classifier. Our classification agrees with the literature in 13 out of the 19 sources (accuracy of 68\%). The mislabelled cases are due to a source belonging to a new and unseen class or because the classification made in the literature used information (such as optical spectra) that was not available to us. We also used our RF classifier on a known subset of target sources in 2XMM-DR3. We were able to classify 22 out of 27 sources correctly (accuracy of 81\%). The mislabelled sources are again of unknown source types, or are unusual members of one of the known source types. In the DR3 verification exercise, we showed that the RF classifier can accurately classify GRBs, a heavily under-represented class. This performance was achieved by oversampling the minority classes. To find anomalous sources, we used the classification margin and the outlier measure from the RF package. Most of the high-potential anomalous sources we found contained data quality issues. One source in our list did look genuinely unusual (2XMM J180658.7$-$500250) and further work needs to be done to determine its true nature. There are two areas for improvement on the algorithm front. First, to the best of our knowledge, current machine learning algorithms (including RF) do not take into account the error bars in the features. 
In astronomy, accurate measurement errors are readily available and provide valuable information, and should be incorporated into the machine learning algorithm. One simple way to do this is to apply a weighting to reflect the size of the error. This needs to be done in such a way that would propagate the error to the classification accuracy. Second, the RF classifier lacks interpretability. For an individual source, the RF classifier does not allow the user to pinpoint the feature which led to the classification, which is something that a human expert can easily provide. However, RF can provide a measure of feature importance measured using all the samples in the training set. Automatic classification will likely play a major role in future synoptic surveys across all wavelengths. In this paper, we have shown that the RF classifier can achieve excellent performance. We envision that a similar model can be built into the pipeline for time-domain surveys on the SKA and the LSST, where the goal will be to produce probabilistic classifications as a value-added component to the catalogs. | 14 | 3 | 1403.0188 |
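The margin-based screening described for 2XMMi-DR2 above can be summarised with a short scikit-learn sketch. Everything below is illustrative only: the feature arrays, the class labels, and the class_weight balancing are placeholders standing in for the paper's actual features and its oversampling of rare classes, and the proximity-based outlier measure of the RF package is not reproduced, only the classification margin.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training set standing in for the 873 labelled variable sources;
# in practice the columns would be time-series and contextual features.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(873, 20))
y_train = rng.choice(["AGN", "STAR", "XRB", "CV", "GRB", "ULX", "SSS"], size=873)
X_unknown = rng.normal(size=(411, 20))        # stand-in for the unclassified variable sources

clf = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced",   # simple stand-in for oversampling under-represented classes
    oob_score=True,
    random_state=0,
)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_unknown)          # probabilistic classifications
best = clf.classes_[np.argmax(proba, axis=1)]
top2 = np.sort(proba, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]              # small margin -> ambiguous source, anomaly candidate

print("out-of-bag accuracy:", clf.oob_score_)
order = np.argsort(margin)
print("lowest-margin sources:", order[:5], best[order[:5]])
```

With real features, the same margins could be compared against independent literature identifications as a sanity check, in the spirit of the 19-source verification described above.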
1403 | 1403.6350_arXiv.txt | The temperature of the low-density intergalactic medium (IGM) at high redshift is sensitive to the timing and nature of hydrogen and HeII reionization, and can be measured from Lyman-alpha (Ly-$\alpha$) forest absorption spectra. Since the memory of intergalactic gas to heating during reionization gradually fades, measurements as close as possible to reionization are desirable. In addition, measuring the IGM temperature at sufficiently high redshifts should help to isolate the effects of hydrogen reionization since HeII reionization starts later, at lower redshift. Motivated by this, we model the IGM temperature at $z \gtrsim 5$ using semi-numeric models of patchy reionization. We construct mock Ly-$\alpha$ forest spectra from these models and consider their observable implications. We find that the small-scale structure in the Ly-$\alpha$ forest is sensitive to the temperature of the IGM even at redshifts where the average absorption in the forest is as high as $90\%$. We forecast the accuracy at which the $z \gtrsim 5$ IGM temperature can be measured using existing samples of high resolution quasar spectra, and find that interesting constraints are possible. For example, an early reionization model in which reionization ends at $z \sim 10$ should be distinguishable -- at high statistical significance -- from a lower redshift model where reionization completes at $z \sim 6$. We discuss improvements to our modeling that may be required to robustly interpret future measurements. | \label{sec:intro} The temperature of the low density intergalactic medium (IGM) after reionization retains information about when and how the gas was heated during the Epoch of Reionization (EoR) (e.g. \citealt{1994MNRAS.266..343M,Hui:1997dp,Theuns:2002yc,Hui:2003hn}). The temperature of the IGM in turn impacts the statistical properties of the Ly-$\alpha$ forest towards background quasars and so the absorption in the forest provides ``fossil'' evidence regarding the timing and nature of reionization. Scrutinized carefully, this fossil may therefore improve our understanding of reionization. For example, the IGM will likely be cooler at $z \sim 5$ if most of the IGM volume reionized at relatively high redshift, near e.g. $z \sim 10$, than if reionization happened later, near say $z \sim 6$. If reionization occurs early, the gas has longer to cool and reaches a lower temperature than if it happens late, at least provided the gas is heated to a fixed temperature at reionization. In addition, the IGM temperature should be inhomogeneous, partly as a result of spatial variations in the timing of reionization across the universe \citep{Trac:2008yz,Cen:2009bg,Furlanetto:2009kr}. Careful measurements of the IGM temperature after reionization should hence constrain the average reionization history of the universe, and may potentially reveal spatial variations around the average history as well. Two separate phases of reionization are likely relevant for understanding the thermal history of the IGM: an early period of hydrogen reionization during which hydrogen is ionized, and helium is singly ionized by star-forming galaxies, and a later period in which helium is doubly-ionized by quasars, i.e. HeII reionization. Hydrogen reionization completed sometime before $z \sim 6$ or so (e.g. 
\citealt{Fan:2005es}, although it might conceivably end as late as $z \sim 5$ -- see \citealt{McGreer:2011dm,Mesinger:2009mv,Lidz:2007mz}) , while mounting evidence suggests HeII reionization finished by $z \gtrsim 2.5-3$ (see e.g. \citealt{Worseck:2011qk,Syphers:2011uw} and references therein). Many of the existing IGM temperature measurements focus on redshifts of $z \sim 2-4$ \citep{Schaye:1999vr,Ricotti:1999hx,McDonald:2000nn,Zaldarriaga:2000mz,Theuns:2001my,Lidz:2009ca}; in this case the temperature is likely strongly influenced by HeII reionization (e.g. \citealt{McQuinn:2008am}) and so these measurements mostly constrain helium reionization rather than hydrogen reionization. In order to best constrain hydrogen reionization using the thermal history of the IGM, temperature measurements at higher redshift ($z \gtrsim 5$) are required. Indeed, recent work has started to probe the temperature at these early times. In particular, the recent study by \citet{Becker:2012aq} includes a measurement at $z=4.8$; \citet{Bolton:2011ck} and \citet{Raskutti:2012qz} determine the $z \sim 6$ temperature in the special ``proximity zone'' region of the Ly-$\alpha$ forest close to the quasar itself; and the analysis in \citet{Viel:2013fqw} starts to bound the $z \gtrsim 5$ IGM temperature, although these authors focus on placing limits on warm dark matter models. The temperature at these higher redshifts is unlikely to be significantly impacted by HeII reionization. In addition, the ``memory'' of intergalactic gas to heating during the EoR gradually fades and so measurements as close as possible to the EoR should, in principle, be most constraining. It is not, however, obvious that the IGM temperature can be measured accurately enough from the $z \gtrsim 5$ Ly-$\alpha$ forest to exploit the sensitivity of the high redshift temperature to the properties of reionization. In particular, the forest is highly absorbed by $z \sim 5$ with $z \gtrsim 6$ spectra showing essentially complete Gunn-Peterson \citep{1965ApJ...142.1633G} absorption troughs \citep{Becker:2001ee,Fan:2005es}. An interesting question is then: what is the highest redshift at which it is feasible to measure the IGM temperature from the Ly-$\alpha$ forest? Towards this end, the goal of this paper is to both model the thermal state of the $z \sim 5$ IGM, incorporating inhomogeneities in the hydrogen reionization process, and to quantify the prospects for actually measuring the IGM temperature using $z \gtrsim 5$ Ly-$\alpha$ forest absorption spectra. The outline of this paper is as follows. In \S \ref{sec:sims}, we describe the numerical simulations used in our analysis. In \S \ref{sec:reion_hist}, we present plausible example models for the reionization history of the universe and describe our approach for modeling inhomogeneous reionization. We adopt a semi-analytic approach for modeling the resulting thermal history of the IGM, as described in \S \ref{sec:therm_hist}. In this section, we also quantify the statistical properties of the temperature field in several simulated reionization models. Finally, in \S \ref{sec:temp_measure} we discuss how to measure the temperature from the $z \sim 5$ Ly-$\alpha$ forest, and forecast how well it may be measured with existing data. Our main conclusions are described in \S \ref{sec:conclusions}. 
This work partly overlaps with previous work which also recognized the importance of, and modeled, temperature inhomogeneities in the $z \sim 5$ IGM and considered some of the observable implications \citep{Trac:2008yz,Cen:2009bg,Furlanetto:2009kr}.\footnote{\citet{Lai:2005ha} also considered temperature fluctuations from hydrogen reionization, but these authors focused on $z \sim 3$ where -- as they discussed -- these fluctuations should be small and swamped by effects from HeII reionization.} One key difference with this earlier work is that we consider a more direct approach for measuring the temperature of the $z \sim 5$ IGM from the Ly-$\alpha$ forest. Our modeling of the thermal state of the IGM is closely related to that in \citet{Furlanetto:2009kr}, except that we implement a similar general approach using numerical simulations, which allow us to construct mock Ly-$\alpha$ forest spectra and to measure the detailed statistical properties of these spectra. The works of \citep{Trac:2008yz,Cen:2009bg} use radiative transfer simulations to model hydrogen reionization and the thermal history of the IGM and so these authors track some of the underlying physics in more detail than we do here. However, our approach here is faster, simpler, and more flexible, while we believe that it nevertheless captures many of the important processes involved. | \label{sec:conclusions} In this work, we modeled the temperature of the IGM at $z \gtrsim 5$, incorporating the impact of spatial variations in the timing of reionization across the universe. We contrasted the $z \sim 5$ temperature in models where reionization completes at high redshift -- near $z=10$ -- with scenarios where reionization completes later, near $z = 6$. In agreement with previous work \citep{Trac:2008yz,Furlanetto:2009kr}\footnote{This is also in general agreement with still earlier work by \citealt{Theuns:2002yc} and \citealt{Hui:2003hn}, although these two studies did not incorporate inhomogeneities in the timing of reionization.}, we found that the properties of the $z=5$ temperature differ markedly between these two models. The IGM is cooler in the early reionization model, and the usual temperature-density relation is a good description of the temperature state in this case, while the temperature state is more complex and inhomogeneous in the late reionization scenario. We then produced mock $z \gtrsim 5$ Ly-$\alpha$ forest spectra from our numerical models, in effort to explore the observable implications of the IGM temperature as close as possible to hydrogen reionization. In particular, we used the Morlet wavelet filter approach of \citet{Lidz:2009ca} to extract the small-scale structure across each Ly-$\alpha$ forest spectrum. The small-scale structure in the forest is sensitive to the temperature of the IGM, and the filter we use is localized in configuration space, which makes it well-suited for application in cases where the temperature field is inhomogeneous. Interestingly, we found that the small-scale structure in the forest is sensitive to the IGM temperature even when the forest is highly absorbed. In particular, the transmission field in between absorbed regions is more spiky if the IGM is cold, compared to hotter models. Using existing high resolution Ly-$\alpha$ forest samples, one should be able to use this difference to distinguish between high redshift and lower redshift reionization models at high significance. 
It may, however, be necessary to combine measurements of the small-scale structure in the forest with measurements of the larger scale flux power spectrum to help break degeneracies with the mean transmitted flux, which is hard to estimate directly at the high redshifts of interest for these studies. In addition, we considered the impact of spatial variations in the timing of reionization on the width of the wavelet amplitude distribution. We found that these variations broaden the width of this distribution, but that the broadening is fairly subtle. This likely results in part because the temperature variations we are interested in are coherent on rather large scales, and aliasing -- from fluctuations in the transmission field transverse to the line of sight -- obscures our ability to measure large scale fluctuations along the line of sight (e.g. \citealt{McQuinn:2010mq,Lai:2005ha}). Nonetheless, we forecast that our Low-z, $T_r=3 \times 10^4$ K model can be distinguished from a homogeneous temperature model at $2-3 \sigma$ with existing samples of ten high resolution sightlines. Larger samples could improve on this, and an analysis of the small-scale structure in the Ly-$\beta$ forest might help as well. In this paper, we focused on the small-scale structure since it is a direct indicator of the temperature, but another approach would be to consider instead transmission fluctuations on large scales, especially as probed in ``3D'' measurements of the Ly-$\alpha$ forest (e.g. \citealt{McQuinn:2010mq}). This may be possible at $z \gtrsim 4$ with DESI \citep{Levi:2013gra,McQuinn:2010mq}. To robustly interpret future measurements, our modeling should be improved in various ways. In particular, we should incorporate inhomogeneous Jeans smoothing effects into our modeling. This might be accomplished by, for example, incorporating our semi-numeric modeling on top of a large dynamic range HPM \citep{Gnedin:1997td} simulation. These calculations will need to face the competing requirements of capturing the large-scale variations in the timing of reionization, while simultaneously resolving the filtering scale. Nevertheless, we believe that measurements of the $z \gtrsim 5$ IGM temperature should provide a valuable handle on the reionization history of the universe. | 14 | 3 | 1403.6350 |
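The Morlet-filter statistic referred to above can be written down compactly. The sketch below is a toy illustration only: the pixel size, wavelet scale, central frequency, and the random flux skewer are assumed values, not the mock spectra or parameter choices used in the paper.

```python
import numpy as np

def morlet_power(flux, dv_kms, s_kms=60.0, k0=6.0):
    """Squared Morlet-wavelet amplitude along a (mock) Ly-alpha flux skewer."""
    n = len(flux)
    x = (np.arange(n) - n // 2) * dv_kms              # velocity offsets in km/s
    psi = np.exp(1j * k0 * x / s_kms - 0.5 * (x / s_kms) ** 2)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2))          # unit-norm filter
    dflux = flux - flux.mean()
    amp = np.convolve(dflux, psi, mode="same")        # wavelet coefficients vs. position
    return np.abs(amp) ** 2                           # local small-scale power

# Toy skewer: a hotter, smoother IGM would yield lower small-scale wavelet power.
rng = np.random.default_rng(1)
pix = 2.5                                             # assumed pixel size in km/s
flux = np.clip(0.1 + 0.05 * rng.normal(size=4096), 0.0, 1.0)
print(morlet_power(flux, pix).mean())
```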
1403 | 1403.7227.txt | \label{section:abstract} We have obtained radial velocity measurements for 51 new globular clusters around the Sombrero galaxy. These measurements were obtained using spectroscopic observations from the AAOmega spectrograph on the Anglo-Australian Telescope and the Hydra spectrograph at WIYN. Combined with our own past measurements and velocity measurements obtained from the literature we have constructed a large database of radial velocities that contains a total of 360 confirmed globular clusters. Previous studies' analyses of the kinematics and mass profile of the Sombrero globular cluster system have been constrained to the inner $\sim$9\arcmin~($\sim$24 kpc or $\sim$5$R_e$), but our new measurements have increased the radial coverage of the data, allowing us to determine the kinematic properties of M104 out to $\sim$15\arcmin~($\sim$41 kpc or $\sim$9$R_e$). We use our set of radial velocities to study the GC system kinematics and to determine the mass profile and V-band mass-to-light profile of the galaxy. We find that $M/L_V$ increases from 4.5 at the center to a value of 20.9 at 41 kpc ($\sim$9$R_e$ or 15\arcmin), which implies that the dark matter halo extends to the edge of our available data set. We compare our mass profile at 20 kpc~($\sim$4$R_e$ or $\sim$7.4\arcmin) to the mass computed from x-ray data and find good agreement. We also use our data to look for rotation in the globular cluster system as a whole, as well as in the red and blue subpopulations. We find no evidence for significant rotation in any of these samples. | \label{section:introduction} While the details of galaxy formation are not yet well understood, the current paradigm suggests that dark matter (DM) halos play a critical role in the process. In these halos, baryonic matter collects and cools to form stars and galaxies, and it is believed that the subsequent merging of these halos and their contents leads to the formation of more massive galaxies. Thus, understanding the structure of DM halos is fundamentally important for testing galaxy formation models and cosmological theories. One way to examine the DM halo of a galaxy is to analyze its mass profile out to large radii. For gas-rich galaxies such as spirals, this can be done by examining the kinematics of the stars and the neutral hydrogen gas. However this type of analysis is much more difficult for early-type galaxies since they lack these easily observed dynamical tracers. Globular cluster systems provide an excellent set of alternative tracers for exploring the outer regions of early-type galaxies. Globular clusters (GCs) are luminous, compact collections of stars that are billions of years old and formed during the early stages of galaxy formation \citep{Ashman98, Brodie06}. They have been identified in photometric studies out to 10 to 15 effective radii (e.g. \citet[hereafter RZ04]{Rhode04}, \citet{Harris09}, \& \citet{Dirsch03}) and, therefore, serve as excellent probes of the formation and merger history of their host galaxies \citep{Brodie06}. Unfortunately, due to observational constraints, few GC systems have large numbers (more than 100-200) of spectroscopic radial velocity measurements necessary for these types of kinematic studies. 
Some of the GC systems with the largest number of measured radial velocities include those around massive elliptical galaxies, such as NGC4472 \citep{Cote03}, NGC1399 \citep{KP98, Dirsch04, Schuberth10}, and M87 \citep{Cote01, Hanes01, Strader11} as well as the S0 galaxy NGC5128 \citep{Peng04, Woodley07}. M104, otherwise known as the Sombrero Galaxy or NGC 4594, is an isolated edge-on Sa/S0 galaxy located at a distance of 9.8 Mpc \citep{Tonry01} with an effective radius of 1.7\arcmin~(4.6 kpc) \citep{Kormendy89}. Several photometric studies have been made of the GC system of M104. RZ04 performed a wide-field photometric study of the GC system of the galaxy in $B,V,$ \& $R$, and detected GC candidates out to 25\arcmin~(15$R_e$ or 68 kpc). They also found that the GC system of the Sombrero contains roughly 1900 clusters with a de Vaucouleurs law radial distribution that extends to 19\arcmin~(11$R_e$ or 51 kpc), where extent is defined as the radius where the surface density of GCs is consistent with zero within the estimated measurement errors. \citet{Larsen01}, \citet{Spitler06}, and \citet{Harris10} performed HST photometry on the more crowded central regions of M104 in order to detect GC candidates closer to the center of the galaxy. All of these photometric studies found that the GC system of M104, like those of many giant galaxies, exhibits a bimodal color distribution which is assumed to correspond to a metal-rich red subpopulation and a metal-poor blue subpopulation \citep{Gebhardt99, Kundu01, Rhode04}. Examining the kinematics of GCs in these two sub-populations can test whether or not they formed during two distinct phases of galaxy formation. In addition to these photometric studies, several groups have performed spectroscopic observations of the M104 GC system \citep{Bridges97, Bridges07, Larsen02, Deimos11}. \citet[hereafter B07]{Bridges07} performed a relatively wide-field kinematic study using spectroscopically measured radial velocities for 108 GCs out to 20\arcmin~(12$R_e$ or $\sim$54 kpc) with the 2dF spectrograph on the Anglo-Australian Telescope (AAT). They found that the $M/L_V$ ratio of the galaxy increases with distance from the center to $\sim$12 at 9.5\arcmin~(6$R_e$ or $\sim$25 kpc), which provides tentative support for the presence of a DM halo around M104. In addition, they found no evidence of global rotation in the GC system or in the red and blue sub-populations. However, the limited number of GCs at large radii in this study makes the results in the outer regions uncertain. The most recent spectroscopic study by \citet{Deimos11} consists of a large number of clusters (over 200); however, they only observe GCs out to a distance of about 27 kpc ($\sim$6$R_e$ or $\sim$10\arcmin) from the center of the galaxy. In addition, they did not perform a kinematic analysis of their sample. They did, however, confirm the metallicity bimodality of the GC system detected in the earlier photometric studies (peaks at [Fe/H] = --1.4 and [Fe/H] = --0.6). In order to acquire a more complete understanding of the dynamical properties of the galaxy, it is crucial to obtain both large numbers of velocity measurements and measurements which provide significant spatial coverage. We have obtained new spectroscopic observations of M104 GCs using the AAOmega spectrograph on the 3.9m AAT and the Hydra spectrograph on the WIYN 3.5m telescope. 
From these data we obtained 51 new GC velocities and we combine these new measurements with data from the literature to create a sample of 360 confirmed M104 GCs with reliable radial velocity measurements that include objects out to 24\arcmin~($\sim$14.1$R_e$ or $\sim$64.9 kpc) in galactocentric distance. This is the largest sample of radial velocity measurements used for a kinematic study for the Sombrero to date. Using this sample, we were able to study the kinematics and mass profile of M104 to a larger radial extent than previous studies. The paper is organized as follows: Section 2 covers the acquisition and processing of the data, while the methods used to obtain radial velocity measurements for our target GC candidates are discussed in \S3. Section 4 provides an analysis of the rotation within the GC system and the determination of the mass profile. Section 5 provides a discussion of our results compared to kinematic studies of other mass tracers in M104 and other galaxies. Finally, \S6 summarizes our findings. | \label{section:discussion} \subsection{Comparison of the Red and Blue Sub-Populations to Other Galaxies} \label{section:rbcomp} Galaxies with well-studied GC kinematics are few, and are most commonly giant cluster ellipticals. In spite of this limited sample, galaxy to galaxy comparisons of GC kinematics have begun to provide interesting results. \citet{Hwang08} compared the kinematic properties of GCs in six well-studied giant elliptical galaxies (M60, M87, M49, NGC 1399, NGC 5128, and NGC 4636), and most recently \citet{Pota13} examined the kinematics of GCs in 12 early type galaxies (9 ellipticals and 3 S0s) as part of the SAGES Legacy Unifying Globulars and Galaxies Survey (SLUGGS). Both of these studies found that, for their galaxies, the rotational properties of the GC systems and the GC system subpopulations were highly varied and are likely to depend on the merger history of the individual galaxy. Numerical simulations of dissipationless mergers by \citet{Bekki05} suggest that outside a radius of $\sim$ 20 kpc ($\sim$4.3$R_e$ or $\sim$7.4\arcmin~at the distance of M104) both GC subpopulations should exhibit rotation on the order of 30 - 40 $km s^{-1}$. However, as discussed in Section \ref{section:rotation}, we see no significant rotation in our GC sample for M104 as a whole or in the individual subpopulations. This perhaps suggests a more complex merger history for this galaxy. In addition to rotation, mergers can also impart differences in the overall velocity dispersion profile of the GC system. Another prediction of the \citet{Bekki05} simulation is that the velocity dispersion profiles of the GC systems of galaxies formed by major mergers decrease as a function of radius. In multiple-merger scenarios, they find that their modeled velocity dispersion profiles can become more flattened. As discussed in earlier sections of this paper, the shape of the velocity dispersion profile determined from observational data is sensitive to the selection of member GCs, therefore, it is difficult to determine whether the decreasing shape of the velocity dispersion profile shown in Figure \ref{fig:vdisp} is intrinsic or a result of the GC selection process. Although the shape of the velocity dispersion profile is uncertain, we can still compare the properties of the velocity dispersion for the red and blue subpopulations. 
\citet{Pota13} found observational evidence for a difference in the central velocity dispersions between the subpopulations in the GC systems of their SLUGGS galaxies. They found that, in general, the velocity dispersion profiles for the blue GC subpopulations were higher overall than the velocity dispersion profiles of the red GCs. Figure \ref{fig:vdispcolor} shows the smoothed velocity dispersion profiles for the red and blue GC subpopulations in our M104 sample. Consistent with the results of \citet{Pota13} and the simulations of \citet{Bekki05} we find that the center of the velocity dispersion profile of the blue GCs is roughly 60 $km s^{-1}$ higher than that of the red GCs inside a radius of 10\arcmin($\sim$6$R_e$ or $\sim$27 kpc). \subsection{Comparison to Other Mass Tracers} \label{section:other tracers} The most easily observed kinematic tracers in galaxies are the stars and the gas. Although these tracers are limited in their radial extent, it is useful to compare the results from these types of studies with the results from the GC system since they should trace the same underlying mass distribution. \citet{Kormendy89} measured the rotation curve of the stars and gas in M104 using optical spectra from the Canada-France-Hawaii Telescope and used their results to calculate the mass profile of the galaxy out to roughly 3.5\arcmin. \citet{Bridges07} found that the \citet{Kormendy89} profile was in excellent agreement with their mass profile derived from the globular clusters (see \citealt{Bridges07} Figure 7 and associated discussion). We also find good agreement between the \citet{Kormendy89} mass profile and our updated globular cluster mass profile. Figure \ref{fig:mass_comp} shows the 1-$\sigma$ boundaries of our mass profile out to 8\arcmin~($\sim$4.7$R_e$ or $\sim$21.6 kpc) shown as solid black lines. The 1-$\sigma$ mass profile boundaries for the GCs identified using a flat velocity cut are also shown with solid gray lines. Overplotted on this figure is the mass profile of \citet{Kormendy89}, illustrated by the dashed line. Both of our mass profiles are consistent with the \citet{Kormendy89} profile, although inward of $\sim$2\arcmin~the mass profile derived from the flat GC sample is in slightly better agreement. X-ray emission from hot coronal gas has been predicted by galaxy formation models \citep{White78, White91}, and has been observed in many giant elliptical and S0 galaxies \citep{Forman85}. It has also been found in a few spiral galaxies \citep{Bogdan13, Benson00}. M104 has been shown to possess extended, diffuse x-ray emitting hot gas out to $\sim$20 kpc ($\sim$4.3$R_e$ or $\sim$7.4\arcmin) from the galaxy center \citep{Li07, Li11}. These observations can provide additional estimates of the host galaxy mass. Using measurements of the diffuse x-ray emission of M104 from the Einstein Observatory, \citet{Forman85} estimated a total mass for M104 of 9.5$\times$10$^{11}$ M$_\odot$ at a radius of $\sim$6.6\arcmin~($\sim$3.9$R_e$ or $\sim$17.9 kpc). \citet{Li07} observed diffuse x-ray emission in the Sombrero using Chandra and XMM-Newton. They measured a uniform plasma temperature of 0.6-0.7 keV extending to a radius of 20 kpc~($\sim$4$R_e$ or $\sim$7.4\arcmin) from the galaxy center. Assuming the gas is in virial equilibrium, we calculate a total mass enclosed inside this radius between 6.7$\times$10$^{11}$ and 7.8$\times$10$^{11}$ M$_\odot$. 
The masses determined from the X-ray results of \citet{Forman85} and \citet{Li07} are shown in comparison to our GC mass profile in Figure \ref{fig:mass_comp} as open and filled circles, respectively. We plot the average value for the mass range computed from the \citet{Li07} data, with the full range indicated by the error bars. The mass estimate from the \citet{Li07} x-ray results is in excellent agreement with our GC mass profile; however, the \citet{Forman85} mass estimate falls above our mass profile by roughly 3$\times$10$^{11}$ M$_\odot$ or approximately 5-$\sigma$. It is difficult to judge the consistency between the \citet{Forman85} and other mass determinations due to the absence of a well-determined gas temperature. However, we note that a modest uncertainty on the \citet{Forman85} result of 10\% would place our mass profile within 3-$\sigma$ of this result. | 14 | 3 | 1403.7227 |
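For orientation, the scale of the quoted hot-gas mass follows from a simple isothermal hydrostatic estimate, $M(<r) \simeq -\,k T\, r\,(\mathrm{d}\ln\rho/\mathrm{d}\ln r)/(G \mu m_p)$. In the sketch below the mean molecular weight and the logarithmic density slope are assumed illustrative values, not numbers taken from \citet{Li07} or from the globular-cluster analysis above; the slope of $-1.5$ simply happens to reproduce the order of magnitude of the quoted range.

```python
G, m_p = 6.674e-8, 1.6726e-24            # cgs units
kpc, Msun, keV = 3.086e21, 1.989e33, 1.602e-9

mu = 0.6                                  # assumed mean molecular weight of the ionized gas
dlnrho_dlnr = -1.5                        # assumed logarithmic density slope of the hot halo
r = 20.0 * kpc                            # radius quoted in the text

for kT_keV in (0.6, 0.7):                 # plasma temperature range measured by Li et al. (2007)
    M = -(kT_keV * keV) * r * dlnrho_dlnr / (G * mu * m_p)
    print(f"kT = {kT_keV} keV  ->  M(<20 kpc) ~ {M / Msun:.1e} Msun")
```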
1403 | 1403.3096_arXiv.txt | We present the results of local, vertically stratified, radiation magnetohydrodynamic (MHD) shearing box simulations of magneto-rotational (MRI) turbulence appropriate for the hydrogen ionizing regime of dwarf nova and soft X-ray transient outbursts. We incorporate the frequency-integrated opacities and equation of state for this regime, but neglect non-ideal MHD effects and surface irradiation, and do not impose net vertical magnetic flux. We find two stable thermal equilibrium tracks in the effective temperature versus surface mass density plane, in qualitative agreement with the S-curve picture of the standard disk instability model. We find that the large opacity at temperatures near $10^4$ K, a corollary of the hydrogen ionization transition, triggers strong, intermittent thermal convection on the upper stable branch. This convection strengthens the magnetic turbulent dynamo and greatly enhances the time-averaged value of the stress to thermal pressure ratio $\alpha$, possibly by generating vertical magnetic field that may seed the axisymmetric MRI, and by increasing cooling so that the pressure does not rise in proportion to the turbulent dissipation. These enhanced stress to pressure ratios may alleviate the order of magnitude discrepancy between the $\alpha$-values observationally inferred in the outburst state and those that have been measured from previous local numerical simulations of magnetorotational turbulence that lack net vertical magnetic flux. | The accretion of material through a rotationally supported disk orbiting a central gravitating body is a process of fundamental astrophysical importance. In order for material to move inward through the disk to liberate its gravitational energy, its angular momentum must be extracted, so that the material loses its rotational support against gravity. The fluid stresses responsible for these torques are therefore central to the accretion disk phenomenon. Theoretical models of accretion disks that have been used to fit real data generally parameterize the stresses by a dimensionless parameter $\alpha$, the stress measured in terms of local thermal pressure \citep{Shakura_73}. The most reliable estimates of $\alpha$ come from episodic outbursts in dwarf novae. The outburst cycle in these systems is very successfully modeled by disk instability models (DIMs) \citep[][for a recent review, see \citet{Lasota_01}]{Osaki_74,Hoshi_79,Meyer_81,Cannizzo_82,Faulkner_83,Mineshige_83} as a limit cycle between two stable thermal equilibrium states: in outburst (high mass accretion rate) a hot state in which hydrogen is fully ionized, and in quiescence (low mass accretion rate) a cool state in which hydrogen is largely neutral. The measured outburst time scales give well-determined estimates of $\alpha\sim 0.1$ in the hot, ionized state \citep[e.g.][]{Smak_99}. On the other hand, measured time intervals between outbursts indicate that $\alpha$ in the cool state is an order of magnitude smaller \citep[e.g.][]{Cannizzo_88}. A plausible physical mechanism for the stresses in ionized disks is correlated magnetohydrodynamic (MHD) turbulence stirred by nonlinear development of the magneto-rotational instability (MRI) \citep{Balbus_91}. The MRI grows because magnetic fields in an electrically conducting plasma cause angular momentum exchange between fluid elements that taps the free energy of orbital shear \citep{Balbus_98}. 
However, numerical simulations of this turbulence within local patches of accretion disks so far show a universal value of $\alpha\sim 0.01$ unless net vertical magnetic flux is imposed from the outside \citep{Hawley_95,Hawley_96,Sano_04,Pessah_07}. This value is an order of magnitude smaller than the value suggested by the observations of ionized outbursting disks in dwarf novae \citep{King_07}. It is possible that in real disks, local net flux is created by global linkages \citep{Sorathia_10}. However, the centrality of the hydrogen ionization transition to DIMs of dwarf novae may be a clue to the apparent discrepancy in $\alpha$. A sharp change in ionization can alter the opacity and equation of state (EOS) of a fluid, with dynamical consequences if convection arises. Most previous numerical studies that showed $\alpha\sim 0.01$ assumed isothermal disks. In a recent attempt to understand dwarf nova disks in the framework of MRI turbulence, \citet{Latter_12} first demonstrated the bistability of the disk with an analytic approximate local cooling model, but without vertical stratification and therefore without the possibility of convection; the resultant $\alpha$ was $\sim 0.01$ in the absence of net magnetic flux. To explore the generic consequences of convection in stratified MRI turbulence, \citet{Bodo_12} solved an energy equation with finite thermal diffusivity and a perfect gas EOS; they found that convection enhanced the stress, but a notable change in $\alpha$ was not mentioned. Here we present radiation MHD simulations that fully take into account vertical stratification and realistic thermodynamics to determine the state of MRI turbulence in dwarf nova disks. We include opacities and an EOS that reflect the ionization fraction. The local thermal state is determined by a balance between local dissipation of turbulence and cooling calculated from a solution of the radiative transfer problem and a direct simulation of thermal convection. We consider the case of zero net vertical magnetic field, which, as noted above, results in the lowest possible $\alpha$ values. We assume ideal MHD in order to focus on the effects of opacities and the EOS on the thermal equilibrium and turbulent stresses. Non-ideal effects will likely be very important for the cool state in which hydrogen is mostly neutral \citep{Gammie_98,Sano_02,Sano_03,Kunz_13}. Our simulations are successful in reproducing the two distinct branches of thermal equilibria inferred by the DIM: a hot ionized branch and a cool neutral branch. We measure $\alpha$ in all our simulations and find that its value is significantly enhanced at the low surface brightness end of the upper branch, due to the fact that the high opacities produce intermittent thermal convection, which enhances the time-averaged magnetic stresses in the MRI turbulence relative to the time-averaged thermal pressure. We present these results in this paper, which is organized as follows. In Section \ref{sec:methods}, we describe the numerical method and the initial condition for our radiation MHD simulations. Quantitative results about thermal equilibrium and MRI turbulence in the simulations are presented in Section \ref{sec:results}. We discuss our results in Section \ref{sec:discussion}, and we summarize our conclusions in Section \ref{sec:conclusion}. 
| \label{sec:conclusion} We have successfully identified two distinct stable branches of thermal equilibria in the hydrogen ionization regime of accretion disks: a hot ionized branch and a cool neutral branch. We have measured high values of $\alpha$ on the upper branch that are comparable to those inferred from observations of dwarf nova outbursts, the very systems where $\alpha$ is measured best. The physical mechanism for creating these high $\alpha$ values is specific to the physical conditions of the hydrogen ionization transition that is responsible for these outbursts. That mechanism is thermal convection triggered by the strong dependence of opacity upon temperature. We confirm the finding of \citet{Bodo_12} that convection modifies the MRI dynamo to enhance magnetic stresses, but our more realistic treatment of opacity and thermodynamics yields a larger effect, with a substantial increase in $\alpha$. Convection acts only in a narrow range of temperatures near the ionization transition because that is where the opacity is greatest. Thus the high values of $\alpha$ are restricted to the upper bend in the S-curve. Because the observational inference of high values of $\alpha$ is based on outburst light-curves, our finding that $\alpha$ is especially large near the low surface density end of the upper branch is relevant to the quantitative interpretation of these light-curves. Similarly, when we understand better the stresses in the plasma on the lower branch, where non-ideal MHD effects are important \citep[see, for example,][]{Menou_00}, those results will bear on observational inferences tied to the recurrence times of dwarf novae. | 14 | 3 | 1403.3096 |
1403 | 1403.6216_arXiv.txt | We show that Bogoliubov excited scalar and tensor modes do not alleviate Planckian evolution during inflation if one assumes that $r$ and the Bogoliubov coefficients are approximately scale invariant. We constrain the excitation parameter for the scalar fluctuations, $\beta$, and tensor perturbations, $\tilde{\beta}$, by requiring that there be at least three decades of scale invariance in the scalar and tensor power spectrum. For the scalar fluctuations this is motivated by the observed nearly scale invariant scalar power spectrum. For the tensor fluctuations this assumption may be shown to be valid or invalid by future experiments. | The recent measurement by BICEP2 of the primordial B-mode polarization \cite{Ade:2014xna} indicates that naively the inflaton field traverses Planckian field values during inflation according to the Lyth bound \cite{Lyth:1996im}. There have been attempts to alter the expression for the tensor-to-scalar ratio in a way such that this conclusion no longer holds \cite{Dufaux:2007pt,Cook:2011hg,Senatore:2011sp,Barnaby:2011qe,Barnaby:2012xt,Carney:2012pk,Biagetti:2013kwa,Lello:2013awa,Collins:2014yua}. In particular, there have been investigations into whether excited spectator fields or modified scalar/tensor mode functions may appreciably alleviate the need for Planckian evolution. Any loop correction to the scalar and/or tensor power spectrum will be suppressed by powers of the Planck mass and is therefore not promising for generating an appreciable effect in general. Additionally, any loop corrections to the scalar power spectrum must be small so as not to disrupt the agreement between standard inflationary theory and the observed scalar power spectrum. Therefore, rather than relying on loop corrections, one promising avenue for generating an appreciable modification to the tensor-to-scalar ratio is through modifications of the fluctuation mode functions. We will show that for scale invariant Bogoliubov parameters\footnote{There have previously been investigations into the consequences of one particular form of tensor excitation scale dependence \cite{Ashoorioon:2013eia,Ashoorioon:2014nta}. These studies use different arguments than are presented here. Also, see \cite{Sriramkumar:2004pj} for a discussion on the relationship between trans-Planckian physics and excited initial states.}, excited Bogoliubov states do not appreciably alleviate the need for Planckian evolution. We will denote the reduced Planck mass by $M_P^2=(8\pi G_N)^{-1}$ and physical momenta by $p=(k/a)$. For simplicity we will assume that the tensor-to-scalar ratio $r$ is approximately scale independent over the three to four decades of the observed scalar power spectrum; this assumption may be shown to be invalid by future experiments. | We have shown that modifying the scalar and/or tensor fluctuation mode functions to have a non-zero Bogoliubov excitation parameter does not allow one to escape having Planckian field evolution according to the Lyth bound if one assumes at least three decades of scale independence for the tensor-to-scalar ratio ($r$) and the Bogoliubov excitation parameters ($\beta,\tilde{\beta}$). Further measurements may show that the scale dependence in the tensor spectrum is stronger than in the scalar spectrum implying the scale dependence of $\tilde{\beta}$ is non-negligible. 
An interesting direction for future studies would be to find out what degree of scale dependence is necessary to prevent the presence of Planckian field evolution and finding what physical mechanism generates such scale dependence. | 14 | 3 | 1403.6216 |
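For reference, the standard single-field slow-roll relations against which any Bogoliubov modification of $r$ must be compared are
\[
r \simeq 16\,\epsilon, \qquad \frac{1}{M_P}\left|\frac{\mathrm{d}\phi}{\mathrm{d}N}\right| = \sqrt{2\epsilon} \simeq \sqrt{\frac{r}{8}}, \qquad \Delta\phi \;\gtrsim\; \Delta N\,\sqrt{\frac{r}{8}}\; M_P ,
\]
so that for $r \sim 0.1$ the field excursion becomes super-Planckian once $\Delta N$ exceeds roughly ten e-folds. These expressions are quoted here only as the unexcited baseline, not as the Bogoliubov-modified relation derived in the paper.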
1403 | 1403.6020_arXiv.txt | We report our investigation of the first transiting planet candidate from the YETI project in the young ($\sim$4\,Myr old) open cluster Trumpler\,37. The transit-like signal detected in the lightcurve of the F8V star 2M21385603+5711345 repeats every $1.364894\pm0.000015$ days, and has a depth of $54.5\pm0.8$\,mmag in $R$. Membership in the cluster is supported by its mean radial velocity and location in the color-magnitude diagram, while the Li diagnostic and proper motion are inconclusive in this regard. Follow-up photometric monitoring and adaptive optics imaging allow us to rule out many possible blend scenarios, but our radial-velocity measurements show it to be an eclipsing single-lined spectroscopic binary with a late-type (mid-M) stellar companion, rather than one of planetary nature. The estimated mass of the companion is 0.15--0.44\,$M_{\sun}$. The search for planets around very young stars such as those targeted by the YETI survey remains of critical importance for understanding the early stages of planet formation and evolution. | \label{Sec:intro} The transit technique has been one of the most successful methods for detecting extrasolar planets. It has enabled the discovery of more than 430\footnote{\url{http://exoplanet.eu} as of 2014-02-27} transiting exoplanets to date. The transit light curve enables one to determine the planet-to-star radius ratio and the inclination of the planetary orbit (Seager \& Mall{\'e}n-Ornelas \cite{sea03}), from which the absolute size of the planet can be established provided an estimate of the size of the parent star is available. Inferring the planetary mass usually requires high-precision radial velocity measurements that can be more expensive to obtain in terms of telescope time. These spectroscopic observations yield the minimum mass $M_p \sin i$ of the companion, which, combined with the inclination from the light curve and an estimate of the stellar mass, then enables the true mass $M_p$ of the planet to be calculated. Alternatively, measurements of transit timing variations can also yield the planetary mass in favorable cases. The presence of transit-like signals in the light curve of a star is no guarantee of the planetary nature of the star's companion. The majority of such signals in ground-based transit searches end up being astrophysical false positives, and careful analysis is required to rule them out (see, e.g., Charbonneau et al.\ \cite{cha04}). For example, the transit of a small star in front of a much larger star (of earlier type, or a giant) can produce a transit depth indistinguishable from that of a true planet around a star. These scenarios can usually be discovered by performing low-resolution spectroscopy to classify the primary star. In the low-mass regime of degenerate compact objects the radius is independent of the mass (Guillot \cite{gui99}). Therefore, for a given transit signal, the companion may be a planet, a brown dwarf, or even a low-mass star. Medium- or high-resolution spectroscopy obtained near the quadratures is often sufficient to distinguish these cases, as companions that are brown dwarfs or late-type stars would generally induce easily detectable radial velocity variations on the star. The same spectra also permit the identification of cases of grazing eclipses of binary systems, which can also mimic the transit signals. Another important source of false positives is background eclipsing binaries blended with the foreground star (see, e.g., Torres et al.\ \cite{tor04}). 
The light from the foreground target can reduce the otherwise deep eclipses of the binary to planetary proportions, mimicking a transit signal. In many cases adaptive optics imaging on large telescopes can help to reduce the possibility of such a contaminant at small angular separations. Most transit surveys target field stars, for which age determination is generally challenging, and those searches are often biased towards main-sequence stars with ages of the order of Gyrs. No transiting planets have yet been identified around young stars (ages less than 100\,Myr), with the possible exception of the transit candidate around the weak-lined T~Tauri star CVSO30 in the 8\,Myr old cluster 25\,Ori (van Eyken et al.\ \cite{eyk12}, Barnes et al.\ \cite{bar13}). Only a few studies have searched for transiting planets around young stars. Two examples include the CoRoT satellite, which observed the 3\,Myr old cluster NGC\,2264 for 24 days (Affer et al.\ \cite{aff13}), and the MONITOR project, which is investigating several young clusters with ages in the range 1--200\,Myr (Hodgkin et al.\ \cite{hod06}), staring at each cluster for at least 10 nights. Neither project has reported any close-in planet discoveries to date. This is rather surprising, as one expects planets to form already in the proto-planetary disk, which begins to dissipate a few Myr after star formation (Mamajek \cite{mam09}). Theoretical models are still rather uncertain at young ages as they depend upon the unknown initial conditions for planet formation (e.g., Marley et al.\ \cite{mar07}, Fortney et al.\ \cite{for08}, Spiegel \& Burrows \cite{spi12}). Therefore, obtaining precise mass and radius measurements for planets around stars in young clusters of known age is critically important to test various aspects of current models of planet formation and evolution. The YETI (Young Exoplanet Transit Initiative) network was established precisely to search for transiting planets in young clusters (Neuh\"auser et al.\ \cite{neu11}). The network consists of ground-based telescopes with apertures of 0.4\,m to 2\,m, spread out in longitude across several continents for significantly increased duty cycle and insurance against bad weather. The project is narrowly focused on clusters with ages of 2 to 20\,Myr, including Trumpler\,37, 25\,Ori, IC\,348, Collinder\,69, NGC\,1980, and NGC\,7243. Each of the clusters is monitored for three years at a time. In each year we schedule three YETI campaigns of one to two weeks duration each. In this paper we report on the investigation of the first transiting planet candidate uncovered by the YETI project around the $R = 15$ magnitude star 2M21385603+5711345. This object is located in the area of the 4\,Myr old (Kun, Kiss \& Balog \cite{kkb08}) cluster Trumpler\,37, at a distance of about 870\,pc from the Sun (Contreras et al.\ \cite{con02}). For a summary of the cluster properties we refer the reader to the work of Errmann et al.\ (\cite{err13}), which includes a list of candidate members of Trumpler\,37. The transit candidate studied here is not included in that list, 
| We have performed extensive follow-up observations (photometric monitoring, imaging, and spectroscopy) of the first transiting planet candidate (2M21385603+5711345) from the YETI network, in the young open cluster Trumpler\,37. Membership in the cluster seems likely based on its mean radial velocity and location in the color-magnitude diagram, though other evidence (small proper motion and lack of Li $\lambda$6708 absorption) is inconclusive. Careful analysis of our survey and follow-up observations shows that the candidate is an astrophysical false positive rather than a true planet. We determine the companion to be a late-type (mid-M) star in an eclipsing configuration around the late F primary star. Close visual inspection of the fully reduced and rebinned YETI light curve in Fig.~\ref{Fig:lc_yeti} after combining all telescopes shows a hint of a secondary eclipse at phase 0.5 that would normally be an early warning sign, but that is too subtle to have been noticed earlier in the analysis. \begin{sloppypar} \tolerance 9999 While disappointing, this outcome is not surprising given the fact that all other ground-based transit surveys have experienced very high rates of false positives typically in excess of 80\% (see, e.g., Brown \cite{Brown03}, Konacki et al.\ \cite{Kon03}, O'Donovan et al.\ \cite{Odon06}, Latham et al.\ \cite{Lat09}). Regardless of this result, the search for planets around very young stars remains of critical importance for our understanding of the formation and evolution of exoplanets, and to learn about the properties of these objects at the very early stages. \end{sloppypar} With the YETI network in full operation we continue to monitor several young open clusters as described in Sect.~\ref{Sec:intro}, and follow-up observations for two additional transit candidates are currently underway. | 14 | 3 | 1403.6020 |
1403 | 1403.0430_arXiv.txt | Due to their higher planet-star mass-ratios, M dwarfs are the easiest targets for detection of low-mass planets orbiting nearby stars using Doppler spectroscopy. Furthermore, because of their low masses and luminosities, Doppler measurements enable the detection of low-mass planets in their habitable zones that correspond to closer orbits than for Solar-type stars. We re-analyse literature UVES radial velocities of 41 nearby M dwarfs in a combination with new velocities obtained from publicly available spectra from the HARPS-ESO spectrograph of these stars in an attempt to constrain any low-amplitude Keplerian signals. We apply Bayesian signal detection criteria, together with posterior sampling techniques, in combination with noise models that take into account correlations in the data and obtain estimates for the number of planet candidates in the sample. More generally, we use the estimated detection probability function to calculate the occurrence rate of low-mass planets around nearby M dwarfs. We report eight new planet candidates in the sample (orbiting GJ 27.1, GJ 160.2, GJ 180, GJ 229, GJ 422, and GJ 682), including two new multiplanet systems, and confirm two previously known candidates in the GJ 433 system based on detections of Keplerian signals in the combined UVES and HARPS radial velocity data that cannot be explained by periodic and/or quasiperiodic phenomena related to stellar activities. Finally, we use the estimated detection probability function to calculate the occurrence rate of low-mass planets around nearby M dwarfs. According to our results, M dwarfs are hosts to an abundance of low-mass planets and the occurrence rate of planets less massive than 10 M$_{\oplus}$ is of the order of one planet per star, possibly even greater. Our results also indicate that planets with masses between 3 and 10 M$_{\oplus}$ are common in the stellar habitable zones of M dwarfs with an estimated occurrence rate of 0.21$^{+0.03}_{-0.05}$ planets per star. | In recent years, planets have been discovered around the least massive stars, M dwarfs, in a diversity of different configurations with widely varying orbital properties and masses \citep[e.g.][and references therein]{endl2006,bonfils2013}. For instance, there are several high-multiplicity systems around M dwarfs consisting of only low-mass planets that can be referred to as super-Earths or Neptunes, such as those orbiting GJ 581 \citep{bonfils2005,udry2007,mayor2009}\footnote{We note that the number of planets around GJ 581 is uncertain with different authors reporting different numbers from three to six \citep[see][]{vogt2010,vogt2012,gregory2011,tuomi2011,baluev2012}.}, GJ 667C \citep{anglada2012,anglada2013,delfosse2012}, and GJ 163 \citep{bonfils2013b,tuomi2013c}. Recent precision velocity surveys have also revealed the existence of more massive planetary companions orbiting nearby M dwarfs \citep[e.g.][]{rivera2010,anglada2012b} showing that such companions do exist, but not in abundance \citep{bonfils2013,montet2013}, and are less common than for K, G, and F stars \citep{endl2006}. However, the most interesting planetary companions around these stars are the low-mass ones that orbit their hosts with such separations that, under certain assumptions regarding atmospheric properties, they can be estimated to enable the existence of water in its liquid form on the planetary surfaces \citep[e.g.][]{selsis2007,kopparapu2013}. 
Planets of this type -- sometimes called habitable-zone super-Earths -- are easier to detect around M dwarfs than around more massive stars because the planet-star mass-ratios give rise to signals with sufficiently high amplitudes, and the shorter orbital periods allow for more orbital phases to be sampled in data covering a fixed length of time, enabling their detection \citep[e.g.][]{mayor2009,anglada2013,tuomi2013c}. Recently, accurate estimates for the occurrence rate of planets in the \emph{Kepler} field have been reported in several studies \citep[e.g.][]{howard2012,dressing2013,morton2013}. One of the most interesting features in the \emph{Kepler} sample is that the occurrence rate of planets around stars appears to increase from roughly 0.05 planets per star around F2 stars to 0.3 per star around M0 dwarfs \citep{howard2012}, although the functional form of this relation is far from certain. This increase applies to planets with orbital periods below 50 days because of the available baseline of the \emph{Kepler} data. While \emph{Kepler} will be able to provide occurrence rates for longer orbital periods, possibly up to 200-300 days, radial velocity surveys will be needed to probe the occurrence rate of planets on orbits longer than that. Moreover, unlike planets around more massive K, G, and F stars that have been targeted by the \emph{Kepler} space-telescope in abundance, M dwarfs are not bright enough to be found in comparable numbers in the \emph{Kepler} field, which makes it difficult to estimate the occurrence rates and statistical properties of planets around such stars in detail. According to \citet{dressing2013}, the \emph{Kepler} sample contains 3897 stars with estimated effective temperatures below 4000 K, out of which 64 are planet candidate host stars with a total of 95 candidate planets orbiting them. \citet{dressing2013} concluded that with periods ($P$) less than 50 days, the occurrence rate of planets with radii in the range 0.5 R$_{\oplus} < r_{p} < 4$ R$_{\oplus}$ is 0.90$^{+0.04}_{-0.03}$ planets per star; that the rate for radii in the range 0.5 R$_{\oplus} < r_{p} < 1.4$ R$_{\oplus}$ is 0.51$^{+0.13}_{-0.06}$ planets per star, although this estimate might be underestimated by as much as a factor of two \citep{morton2013}; and that the occurrence rate of planets with $r_{p} > 1.4$ R$_{\oplus}$ decreases as a function of decreasing stellar temperature. Furthermore, the occurrence rate of planets appears to decrease sharply between 2 and 4 R$_{\oplus}$, which is indicative of an overabundance of planets with low radii and therefore low masses \citep{morton2013}. These findings challenge the results obtained using radial velocity surveys that should be able to detect planets with similar statistics, although the comparison with \emph{Kepler's} results is difficult due to the challenges in comparing populations described in terms of planetary radii and minimum masses in the absence of accurate population models for planetary compositions and therefore densities. The estimates based on transits detected by using the \emph{Kepler} telescope might also be contaminated by a false positive rate of $\sim$ 10\% due to astrophysical effects such as stellar binaries in the background \citep{morton2011,fressin2013}. Far fewer planets around M dwarfs are known from radial velocity surveys of such stars \citep[e.g.][who reported nine planet candidates in their sample]{bonfils2013}.
However, the ones that are known are among the richest and the most interesting extrasolar planetary systems in terms of numbers of planets, their orbital spacing and dynamical packedness, and their low masses \citep[e.g.][]{mayor2009,rivera2010,anglada2012b,anglada2013,tuomi2013c}. To a certain extent, this lack of known planets around M dwarfs is due to observational biases: early radial velocity surveys did not target low-mass stars because a lack of photons in the V band made it difficult to obtain observations with the high signal-to-noise ratios needed for high-quality radial velocity measurements. Another reason was that -- based on a sample size of unity -- Solar-type stars were considered more promising hosts to planetary systems. The scarcity of detections is also likely caused by the fact that -- in comparison with stars of the spectral classes F, G, and K -- massive giant planets are not as abundant around M dwarfs \citep{bonfils2013}, and the planets that exist, if they indeed do exist, are likely so small that they induce radial velocity signals that have amplitudes comparable to the current high-precision measurement noise levels, which makes their detection difficult at best. \citet{bonfils2013} reported estimates for the occurrence rates of planetary companions orbiting M dwarfs based on radial velocity measurements obtained by using the \emph{High Accuracy Radial velocity Planet Searcher} (HARPS) spectrograph. According to their results, super-Earths with minimum masses between 1 and 10 M$_{\oplus}$ are abundant around M dwarfs with an occurrence rate of 0.36$^{+0.25}_{-0.10}$ for periods between 1 and 10 days and 0.52$^{+0.50}_{-0.16}$ for periods between 10 and 100 days, respectively. Furthermore, they reported an estimate for the occurrence rate of super-Earths in the habitable zones (HZs) of M dwarfs of 0.41$^{+0.54}_{-0.13}$ planets per star. M dwarfs are the most abundant type of stars in the Solar neighbourhood. Therefore, the occurrence rate of planets around these stars will dominate any general estimates of the occurrence rate of planets. For this reason, we re-analyse the radial velocities obtained using the \emph{Ultraviolet and Visual Echelle Spectrograph} (UVES) at VLT-UT2 of a sample of M dwarfs of \citet{zechmeister2009} using posterior sampling techniques in our Bayesian search for planetary signals. We also extract HARPS radial velocities for these stars from the publicly available spectra in the European Southern Observatory (ESO) archive and analyse the combined UVES and HARPS velocities. The methods are presented in Section \ref{sec:statistical_tools} in detail and we show the results based on combined HARPS and UVES data in Section \ref{sec:UVES}. We present the statistics of the new planet candidates we detect and compare the obtained occurrence rates to other planet surveys targeting M dwarfs in Section \ref{sec:planet_statistics}, describe some of the interesting new planetary systems and the evidence in favour of their existence in greater detail in Section \ref{sec:new_systems}, and discuss the results in Section \ref{sec:discussion}. | \label{sec:discussion} We have presented our analysis of the UVES velocities of a sample of 41 M dwarfs \citep{zechmeister2009}, combined with HARPS precision data obtained from the spectra available in the ESO archive.
As a result, we report the existence of eight new planet candidates around the sample stars (Tables \ref{tab:UVES_signals} and \ref{tab:planet_orbits}) and confirm the existence of the two companions around GJ 433 \citep{delfosse2012} that exceed our conservative probabilistic detection threshold by making the statistical models more than 10$^{4}$ times more probable than models without the corresponding signals. Among the most interesting targets in our sample are GJ 433, GJ 180, and GJ 682, with at least two candidate planets each. We have also presented estimates for the occurrence rate of low-mass planets around M dwarfs (Table \ref{tab:occurrence}) based on the current sample. We find that low-mass planets are very common around M dwarfs in the Solar neighbourhood and that the occurrence rate of planets with masses between 3 and 10 M$_{\oplus}$ is 1.08$^{+2.83}_{-0.72}$ per star. This estimate is likely consistent with that suggested based on the \emph{Kepler} results for a sample of stars with $T_{\rm eff} < 4000$ K \citep{dressing2013,morton2013}, although the comparisons are not easily performed because we could only assess the occurrence rates of companions with periods up to the span of the radial velocity data, i.e. a few thousand days. On the other hand, we confirm the lack of planets with masses above 3 M$_{\oplus}$ on orbits with periods between 1 and 10 days. Such companions to low-mass stars have an occurrence rate of only 0.06$^{+0.11}_{-0.03}$ planets per star based on our sample. There are ten targets in the sample that are also found in the sample of M dwarfs presented in \citet{bonfils2013}: GJ 1, GJ 176, GJ 229, GJ 357, GJ 433, GJ 551, GJ 682, GJ 699, GJ 846, and GJ 849. Out of these ten stars, we found signals in the velocities of GJ 229, GJ 433, and GJ 682. Our results are essentially similar for GJ 433, for which \citet{bonfils2013} reported a signal at 7.4 days and the same group reported another long-period signal when analysing the HARPS data in combination with the UVES data analysed here \citep{delfosse2012}. The planet candidates GJ 229 b, GJ 682 b and c have orbital periods of 471 [459, 493], 17.478 [17.438, 17.540], and 57.32 [56.84, 57.77] days. \citet{bonfils2013} did not report any such periodicities for these stars. We believe the reason is that we obtained HARPS-TERRA velocities from the HARPS spectra that are more precise for M dwarfs \citep{anglada2012c}, combined the HARPS velocities with the UVES ones, which provides more information on the underlying periodic signals regardless of whether the signals can be detected in the two data sets independently or not, and accounted for correlations in the velocity data that could otherwise prevent the detection of low-amplitude signals \citep{baluev2012,tuomi2012c,tuomi2013b}. We have briefly compared our results with those obtained by using the \emph{Kepler} space-telescope \citep[e.g.][]{howard2012,dressing2013} in Section \ref{sec:planet_statistics}. However, such a comparison is not necessarily reliable because the properties of \emph{Kepler's} transiting planet candidates can only be discussed in terms of planetary radii and the radial velocity method can only be used to obtain minimum masses. Because of this, it is not surprising that there are remarkable differences that are unlikely to arise by chance alone.
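Before continuing, a minimal illustration of the Keplerian signals discussed above: for circular orbits the radial-velocity model reduces to a sum of sinusoids, as in the Python sketch below. The amplitudes and epochs are arbitrary illustrative values and only the periods echo the ones quoted in the text; a full analysis would of course allow eccentric orbits and include the correlated-noise model described in this entry.

import numpy as np

def rv_circular(t, period, K, t0=0.0, gamma=0.0):
    # radial velocity (m/s) of a planet on a circular orbit
    return gamma + K * np.sin(2.0 * np.pi * (t - t0) / period)

t = np.linspace(0.0, 500.0, 200)                       # epochs in days
model = rv_circular(t, 17.478, K=2.5) + rv_circular(t, 57.32, K=1.8)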
For instance, \citet{dressing2013} estimated that there are roughly 0.15$^{+0.13}_{-0.06}$ Earth-sized planets (radii between 0.5 and 1.4 R$_{\oplus}$) per star in the habitable zones of cool stars (with $T_{\rm eff} <$ 4000 K) and that the nearest such planet could be expected to be found within 5 pc with 95\% confidence. We calculated a similar estimate for candidates with masses between 3 and 10 M$_{\oplus}$ and obtained an occurrence rate estimate of 0.21$^{+0.03}_{-0.05}$ planets per star that appears to be higher than the estimate of \citet{dressing2013} despite the fact that we cannot assess the occurrence rates of planets with masses below 3 M$_{\oplus}$ because we did not detect any such candidates orbiting the stars in the sample. However, these estimates can only be compared in detail with a range of robust planet composition and evolution models in hand, which is beyond the scope of the current work. According to our results, M dwarfs host systems of low-mass planets at a very high rate and have a high probability of being hosts to super-Earths in their habitable zones. Together with the fact that radial velocity surveys can be used to obtain evidence for Earth-mass planets orbiting such stars, and the fact that M dwarfs are very abundant in the Solar neighbourhood, this makes them primary targets for searches of Earth-like planets, and possibly life, with current and future planet surveys. | 14 | 3 | 1403.0430 |
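For orientation, per-star occurrence rates of the kind compared above are commonly built by weighting each detected planet by the inverse of the survey's detection probability (completeness) in the relevant mass-period bin. The sketch below is a minimal, illustrative version of such an estimator; the completeness values are hypothetical placeholders, not the detection probability function derived in this entry.

import numpy as np

def occurrence_rate(n_stars, completeness_of_detections):
    # inverse-detection-efficiency estimator: each detected planet counts
    # as 1/completeness planets across the full stellar sample
    weights = 1.0 / np.asarray(completeness_of_detections, dtype=float)
    return weights.sum() / n_stars

# three hypothetical detections in one mass-period bin of a 41-star sample
print(occurrence_rate(n_stars=41, completeness_of_detections=[0.6, 0.35, 0.5]))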
1403 | 1403.5812_arXiv.txt | Broadband optical and narrowband \ion{Si}{13} X-ray images of the young Galactic supernova remnant Cassiopeia A (Cas A) obtained over several decades are used to investigate spatial and temporal emission correlations on both large and small angular scales. The data examined consist of optical and near infrared ground-based and {\it Hubble Space Telescope} images taken between 1951 and 2011, and X-ray images from {\it Einstein}, {\it ROSAT}, and {\it Chandra} taken between 1979 and 2013. We find weak spatial correlations between the remnant's X-ray and optical emission features on large scales, but several cases of good optical/X-ray correlations on small scales for features which have brightened due to recent interaction with the reverse shock. We also find instances where: (i) a time delay is observed between the appearance of a feature's optical and X-ray emissions, (ii) displacements of several arcseconds between a feature's X-ray and optical emission peaks and, (iii) regions showing no corresponding X-ray or optical emissions. To explain this behavior, we propose a highly inhomogeneous density model for Cas A's ejecta consisting of small, dense optically emitting knots (n $\sim 10^{2-3}$ cm$^{-3}$) and a much lower density (n $\sim 0.1 - 1$ cm$^{-3}$) diffuse X-ray emitting component often spatially associated with optical emission knots. The X-ray emitting component is sometimes linked to optical clumps through shock induced mass ablation generating trailing material leading to spatially offset X-ray/optical emissions. A range of ejecta densities can also explain the observed X-ray/optical time delays since the remnant's $\approx 5000$ km s$^{-1}$ reverse shock heats dense ejecta clumps to temperatures around $3 \times 10^{4}$ K relatively quickly which then become optically bright while more diffuse ejecta become X-ray bright on longer timescales. Highly inhomogeneous ejecta as proposed here for Cas A may help explain some of the X-ray/optical emission features seen in other young core collapse SN remnants. | With an estimated undecelerated explosion date around 1680 \citep{thor01,fesen06}, Cassiopeia A (Cas A) is one of the youngest known Galactic supernova remnants (SNRs). It is also one of the few historic remnants with a secure supernova (SN) subtype through the detection of optical and infrared light echoes of its initial supernova outburst \citep{krause08,rest08,rest11,Besel12}. Optical spectra of its light echo show Cas A to be the remnant of a core-collapse Type IIb supernova event with an optical spectrum at maximum light similar to SN 1993J in M81. As indicated by the slow dense wind into which Cas A's forward shock is expanding \citep{chevalier03,hwang09}, the Cas~A progenitor was probably a red supergiant with a mass of 15--25M$_{\sun}$ that may have lost much of its hydrogen envelope to a binary interaction \citep{young06,hwang12}. Viewed in X-rays, the remnant consists of a bright, line-emitting shell arising from reverse shocked ejecta rich in O, Si, S, Ar, Ca, and Fe \citep{fabian80, markert83,vink96,hughes00,willingale02,willingale03,hwang03,laming03}. Small knots and filamentary regions of X-ray emitting ejecta have been observed to change in intensity and structure over time, indicating the location of recently shocked, ionizing ejecta \citep{patnaude07}. Exterior to this shell are faint X-ray filaments which mark the current location of the SNR's $\simeq$ 5000 km s$^{-1}$ expanding forward shock \citep{delaney04,patnaude09a}. 
This emission is largely nonthermal but can include faint line emission from shocked circumstellar material \citep[CSM;][]{araya10}. This outer nonthermal X-ray emission is fading with time \citep{patnaude11} while the bulk of the remnant's bright thermal emission arising from shocked ejecta has remained relatively steady over the last few decades. The remnant's optical and infrared emissions trace the location of its denser debris ($\geqq 10^{3}$ cm$^{-3}$; \citealt{Hurford96,Fesen2001}) which in some places is co-spatial with lower density and more diffuse X-ray emitting material. The bulk of Cas~A's optical and near-infrared emission consists of a V$_{\rm r}$ = $-4000$ to $+6000$ km s$^{-1}$ expanding shell of knots, condensations, and filaments which lack any H$\alpha$ emission \citep{ck78,ck79,Reed1995,delaney10,MF2013}. A few dozen semi-stationary condensations known as QSFs which do exhibit strong H$\alpha$ and [\ion{N}{2}] $\lambda\lambda$6548, 6583 emissions with prevalently negative radial velocities of 0 to $-250$ km s$^{-1}$ appear to be pre-SN, circumstellar mass-loss material \citep{vdbD70,vdb71b,vandenbergh85,Reed1995}. In the picture of an inhomogeneous SN debris field with a range of ejecta densities and component dimensions, strong and widespread correlations between low density X-ray emitting material and small dense optically emitting knots are not expected; optical emission arises from gas with electron temperatures of around $\sim$ 30,000 K while X-rays arise from material shock heated to several million degrees K. Indeed, one generally sees only a weak spatial correlation of bright fine scale X-ray and optical features in images of Cas~A taken at similar epochs \citep{laming03}. But there are exceptions. For example, while noting only broad optical/X-ray emission coincidences in the remnant's northern regions in the 1979 {\sl Einstein} image, \cite{fabian80} cited an especially good correspondence between optical, X-ray, and radio emission in the bright optical Filament 1 of \citet{BM54}. However, in places where dense optical emitting knots are embedded within or associated with a much lower density component there might be a close optical/X-ray emission correlation. The resemblance of the remnant's morphology in the optical and Si and S X-ray emission lines is suggestive of at least some spatial coincidence \citep{Hwang00}. Furthermore, indications of ejecta knot mass stripping in Cas A have been reported \citep{Fesenetal2001,Fesen11} and such trailing material, if of the right density and mass, could lead to detectable associated X-ray emission downstream. Lastly, different excitation timescales between optical and X-ray emissions might lead to discernible time lags between the onset of strong optical and strong X-ray emissions. Here we present the results of an investigation into X-ray and optical correlations that may be related to such factors through a comparative survey of Cas A's optical and X-ray emission evolution over the last 30 years. The observations and results are described in $\S$2 and $\S$3, with a modeling analysis described in $\S$4. A summary of our findings and conclusions is given in $\S$5. | We have presented optical observations of Cassiopeia A dating back to 1951 and up through 2011 and compare these data to X-ray observations taken with {\it Einstein}, {\it ROSAT}, and {\it Chandra} in order to investigate spatial correlations between X-ray and optical emissions. 
Due to the large differences in postshock densities and temperatures, the prevailing view has been that there is little correlation between ejecta that emits in X-rays and that which emits optically. However, our study shows that this is not always true. Taking into account the dynamical evolution of shocked ejecta and the relevant X-ray and optical timescales involved in its radiative and hydrodynamical evolution, we do find X-ray/optical correlations in many regions of Cas A. We have identified four cases of correlations and anti-correlations between X-ray and optical emission in the shocked ejecta in Cas A. These are: 1) X-ray and optical emission time delays of years or even several decades where the optical emission for a region or feature shows up prior to its associated X-ray emission, 2) spatial offsets typically of a few arcseconds ($\approx 10^{17}$ cm) between a feature's optical and X-ray emission peaks, 3) regions showing significant optical emission but with no corresponding X-ray emission, and 4) strong X-ray emitting regions having little if any positionally coincident optical emission. To explain these correlations and anti-correlations, we propose a highly inhomogeneous density model for Cas A's ejecta consisting of: 1) small dense knots which rapidly form optical emission following reverse shock front passage, embedded in a more extended and more diffuse lower-density component giving rise to associated X-ray emission but sometimes showing a significant time delay relative to the optical, 2) shock-induced mass ablation off dense ejecta clumps, thereby generating trailing low-density, X-ray emitting material that is positionally offset from the optically emitting knots, 3) smooth and relatively continuous high-density ejecta filaments or shell walls that never reach X-ray emitting temperatures, or conversely, large extended regions consisting mainly of low-density ejecta, both of which lead to X-ray/optical emission anti-correlations. A highly inhomogeneous ejecta model as proposed here for Cas A may also help explain some of the X-ray/optical emission features seen in other young core collapse SN remnants. However, it remains to be determined what dynamical and radiative properties of the explosion mechanism set the relative volumes of a remnant's low and high density ejecta, how these components are distributed and arranged in the expanding debris cloud, or what are the limits to the range of ejecta densities as a function of elemental abundances and expansion velocity \citep[e.g.][]{kifonidis03}. Further studies into some of the properties of Cas~A's interior unshocked ejecta like those reported by \citet{smith09}, \citet{isensee2010}, \citet{Grefen2014} and \citet{MF2014} may give us valuable insights into these issues. | 14 | 3 | 1403.5812 |
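The density dependence invoked in this entry can be illustrated with standard order-of-magnitude shock relations (not the authors' detailed model): a shock transmitted into a clump of density contrast $\chi$ is slowed roughly as $v_c \sim v_b/\sqrt{\chi}$, and the immediate postshock temperature follows the strong-shock formula $T \simeq (3/16)\,\mu m_{\rm H} v^2/k_{\rm B}$. In the Python sketch below the mean molecular weight is set to unity for simplicity (metal-rich ejecta would differ) and the densities only echo the ranges quoted above; the point is that diffuse ejecta are heated to X-ray temperatures while dense knots see a far slower transmitted shock and can then cool radiatively toward the $\sim3\times10^{4}$ K optically emitting regime.

import math

M_H = 1.6726e-24      # g
K_B = 1.3807e-16      # erg/K

def postshock_temperature(v_shock_kms, mu=1.0):
    # strong-shock temperature T = (3/16) mu m_H v^2 / k_B, in K
    v = v_shock_kms * 1.0e5
    return 3.0 / 16.0 * mu * M_H * v * v / K_B

def transmitted_shock_kms(v_blast_kms, density_contrast):
    # approximate speed of the shock driven into a dense clump
    return v_blast_kms / math.sqrt(density_contrast)

v_rev = 5000.0                     # reverse shock speed quoted in the text, km/s
chi = 1.0e3 / 0.5                  # knot density over diffuse-ejecta density
print(postshock_temperature(v_rev))                                # diffuse ejecta
print(postshock_temperature(transmitted_shock_kms(v_rev, chi)))    # dense knots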
1403 | 1403.2329_arXiv.txt | { Recent theoretical works claim that high-mass X-ray binaries could have been important sources of energy feedback into the interstellar and intergalactic media, playing a major role in both the early stages of galaxy formation and the physical state of the intergalactic medium during the reionization epoch. A metallicity dependence of the production rate or luminosity of the sources is a key ingredient generally assumed but not yet probed.} {Our goal is to explore the relation between the X-ray luminosity and star formation rate of galaxies as a possible tracer of a metallicity dependence of the production rates and/or X-ray luminosities of high-mass X-ray binaries, using hydrodynamical cosmological simulations. }{We developed a model to estimate the X-ray luminosities of star forming galaxies based on stellar evolution models which include metallicity dependences. We applied our X-ray binary models to galaxies selected from hydrodynamical cosmological simulations which include chemical evolution of the stellar populations in a self-consistent way. Hence for each simulated galaxy we have a distribution of stellar populations with different ages and chemical abundances, determined by its formation history. This allows us to robustly predict the X-ray luminosity -- star formation rate relation under different hypotheses for the effects of metallicity.} { Our models successfully reproduce the dispersion in the observed relations as an outcome of the combined effects of the mixture of stellar populations with heterogeneous chemical abundances and the metallicity dependence of the X-ray sources. We find that the evolution of the X-ray luminosity as a function of the star formation rate of galaxies could store information on possible metallicity dependences of the high-mass X-ray sources. A non-metallicity dependent model predicts a non-evolving relation while any metallicity dependence should affect the slope and the dispersion as a function of redshift. Our results suggest the characteristics of the X-ray luminosity evolution can be linked to the nature of the metallicity dependence of the production rate or the X-ray luminosity of the stellar sources. By confronting our models with currently available observations of strong star-forming galaxies, we find that only chemistry-dependent models reproduce the observed trend for $z < 4$. However, it is not possible to prove the nature of this dependence yet. } {} | \label{intro} High-mass X-ray binaries (HMXBs) are systems composed of a compact object, which can be a neutron star (NS) or a black hole (BH), and an early-type star. The compact object accretes mass from its companion star, converting gravitational energy into thermal energy, part of which is radiated away in the X-ray band ($\sim 0.1-10\ {\rm keV}$). Since the first high-energy observatories ({\em Einstein}, {\em ROSAT}, {\em ASCA}), these sources have been observed in the Milky Way as well as in nearby galaxies. In the last decade, the higher angular resolution and sensitivity of {\em Chandra} and {\em XMM-Newton} allowed these observatories to detect thousands of HMXBs in the local Universe \citep[][and references therein]{Grimm2003,Fabbiano2006,Mineo2012}. The relation between HMXBs and massive stars makes these sources dominate the X-ray luminosity of star-forming galaxies with high specific star formation rate (sSFR).
\citet{Mineo2012} have compiled a large sample of HMXBs in nearby late-type galaxies, for which they claim that the contamination by other types of sources (i.e. low-mass X-ray binaries ---LMXBs---, background active galactic nuclei) is negligible. Recently, the X-ray emission of a sample of metal-poor blue compact dwarf galaxies was investigated by \citet{Kaaret2011}, and different authors have studied the properties of X-ray emitting star-forming galaxies at high redshift \citep{Cowie2012,Basu2012}. These observations have provided a large amount of data on the properties of HMXB populations, and on the relation of these properties to those of the host galaxies. HMXBs are an important tool for investigating stellar (particularly binary) evolution, and the nature of compact objects. They are also potential star-formation tracers due to their relation to massive stars, and they have been proposed as important sources of stellar energy feedback into the interstellar and intergalactic media \citep{Power2009,Mirabel2011a,Dijkstra2011,Justham2012,Power2013}. One of the key problems in understanding these systems, their evolution, and their influence on the environment is the dependence of the HMXB production and properties on the metallicity of the stellar populations from which they form. Recent stellar evolution models suggest that the number of BHs and NSs produced by a stellar population depends on its metallicity \citep{Georgy2009}. Binary population synthesis models show that the fraction of these compact objects that end up in binary systems with massive companions should also depend on metallicity \citep{Belczynski2004a,Dray2006,Belczynski2008,Belczynski2010a, Linden2010}, because at lower metallicities more systems can survive disruption when the primary BH forms, and also avoid merging in the common-envelope phase. Finally, both models and observations suggest that low-metallicity stars form more massive BHs, which could produce potentially higher-luminosity HMXBs \citep[][and references therein]{Belczynski2010b,Linden2010,Feng2011}. However, the observational evidence for the metallicity dependence of the number and luminosity function of HMXBs is still poor. It is clear that this dependence must be searched for in the properties of the populations of HMXBs in star-forming galaxies. A key observable is the X-ray luminosity $L_{\rm X}$ of such galaxies, which in the local Universe scales with the star formation rate \citep[SFR;][]{Grimm2003,Mineo2012}. This correlation is usually parameterized as $L_{\rm X} = 3.5 \times 10^{40} {\rm erg}\ {\rm s}^{-1}\ f_{\rm X}\ {\rm SFR}/(M_\odot\ {\rm yr}^{-1})$, where the factor $f_{\rm X}$ accounts for possible variations due to the dependence of HMXB properties on metallicity or other physical parameters. \citet{Mineo2012} found that observations of nearby galaxies are consistent with a constant $f_{\rm X} \sim 0.2$, but the correlation shows a large dispersion, which might be due to metallicity effects. \citet{Kaaret2011} measured unusually large $f_{\rm X}$ values for a sample of nearby blue compact dwarf galaxies with low metallicities. Unfortunately, their small sample did not allow them to reach statistically meaningful conclusions about a departure of these galaxies from the standard $L_{\rm X}$--SFR relation of \citet{Grimm2003}. \citet{Cowie2012} investigated this issue using a sample of galaxies at high redshift, for which metallicity effects should be important due to the chemical evolution of the Universe.
They found that $f_{\rm X}$ is at most marginally dependent on redshift; however, the observational uncertainties and the complex dependence of galaxy metallicity on redshift still leave the question open. Using the same X-ray survey, \citet{Basu2012} studied Lyman-break galaxies in the range $z=1.5-8$, finding instead that $f_{\rm X}$ evolves with redshift. They also argue that \citet{Cowie2012} did not correct the galaxy luminosities for dust attenuation, which could prevent them from observing the evolution. A key issue in resolving the problem is to understand how $f_{\rm X}$ is affected by the metallicity dispersion within a galaxy, the correlation of the galaxy mean metallicity and SFR, and the chemical evolution of the Universe, in order to make a proper interpretation of the observational results. An interesting way to explore the metallicity dependence of HMXB populations, and the key issue of the possible evolution of $f_{\rm X}$, is through the combination of binary population synthesis models with a description of the stellar populations in a galaxy. \citet{Belczynski2004b} used this method to develop models that reproduce the emission of specific galaxies, while \citet{Zuo2011} explored the X-ray emission of galaxies at different redshifts using prescriptions for their star formation histories. These authors found a good agreement between the predicted and observed X-ray luminosity to stellar mass ratio in the range $z = 0-4$, but they were not able to reproduce the corresponding X-ray to optical luminosity ratio. A step forward in this approach is to couple binary population synthesis models to scenarios for the formation and evolution of galaxies in a cosmological context. Previous works used population synthesis models which provide a description of the HMXB properties expected from a single, homogeneous parent stellar population, or semi-analytical models where there is a unique mean metallicity for stellar populations born at a certain time in a given galaxy. As we mentioned before, we intend to improve the modeling of HMXBs by providing a more realistic description of the complexity of stellar populations within a galaxy. Here we present a novel scheme to model the HMXB populations of star-forming galaxies, which couples population synthesis results to galaxy catalogues constructed from a hydrodynamical cosmological simulation of structure formation which is part of the Fenix project (Tissera et~al., in prep.). This simulation includes star formation, a multiphase treatment of the interstellar medium, the chemical enrichment of baryons, and the feedback from supernovae in a self-consistent way \citep{Scannapieco2005,Scannapieco2006}, and reproduces global dynamical and chemical properties of galaxies \citep{deRossi2010,deRossi2012,pedrosa2014}. This makes the simulation well suited for the task of investigating the evolution of the $L_{\rm X}$--SFR relation of star forming galaxies, expanding and complementing the results obtained by other methods such as semi-analytical models \citep[e.g.,][]{Fragos2012}. Our scheme is similar to those applied by \citet{Nuza2007}, \citet{Chisari2010}, \citet{Artale2011a}, and \citet{Pellizza2012} to the study of gamma-ray bursts. It includes both the modelling of the intrinsic HMXB populations of galaxies, and the definition of different samples comparable to observations, based on the modelling of selection effects.
This is an important ability of our scheme, as it allows us to make a fair comparison with observations to constrain free parameters and discard incorrect hypotheses. Using our scheme, we develop different models to explore the effects of the dependence of the HMXB population properties on the metallicity of the parent stellar populations, and address the question of the evolution of the $f_{\rm X}$ factor by comparing our predictions to observations of galaxies across time. This paper is organized as follows. In Section~\ref{simu} we briefly present the numerical simulations used to describe the formation and evolution of galaxies, and the construction of galaxy catalogues. In Section~\ref{popsyn} we describe our HMXB model and how it is implemented onto the simulated galaxy catalogues to generate intrinsic populations. In Section~\ref{redshift_g}, we present our results as a function of cosmic time. Finally, in Section~\ref{con}, we discuss our main conclusions. | \label{con} Motivated by recent studies which suggest that both the number of HMXBs and their X-ray luminosity might be higher in low-metallicity stellar systems \citep{Belczynski2004a,Dray2006,Linden2010,Mirabel2011a}, we explored the consequences of these hypotheses for the X-ray emission of star-forming galaxies, and its cosmological evolution. For this purpose, we developed a model to generate HMXBs, which is applied to galaxy catalogues constructed from hydrodynamical cosmological simulations. These simulations include a self-consistent treatment of the chemical evolution of baryons, and consequently, they provide the ages and metallicities of the stellar populations as galaxies form and evolve. As a function of time, each galaxy is described as a mixture of stellar populations with different metallicities and ages. By using our HMXB models we can follow the formation of these systems at different stages of the evolution of the Universe and confront them with observations to constrain the free parameters of the models. We first confront our models with observations of nearby galaxies \citep{Mineo2012} to estimate the free parameters. Then, we apply them to investigate the variations of the HMXB properties across cosmic time. We explore a non-metallicity dependent model and three other ones with metallicity dependences in the production rate and X-ray luminosities. We detect a significant dispersion in the $L_{\rm X}$--SFR relation for our simulated local-Universe galaxies in the range $\sim 0.28-0.44$~dex, comparable to the $\sim 0.4$ dex reported by \citet{Mineo2012}. Hence, our results suggest that the internal metallicity dispersion of galaxies combined with the metallicity dependence of the X-ray sources might provide the physical origin for the observed dispersion in the $L_{\rm X}$--SFR relation. We explored the cosmological evolution of the ratio between the X-ray luminosity of galaxies and their SFR, parametrized by the factor $f_{\rm X}$. The confrontation of our models with the observations of \citet{Basu2012} favours models with metallicity dependence, and rejects those in which metallicity plays a negligible role. However, the nature of this dependence cannot be determined by using these observations. Both a dependence of the HMXB production rate on metallicity (M1) and this dependence combined with a metallicity-dependent HMXB luminosity (M3) fit the data equally well. More precise measurements of the $f_{\rm X}$--SFR relation as a function of redshift would help us to establish its nature.
In fact, three main trends are detected in our chemistry-dependent models, which might be used by observers to test the existence of a chemical dependence in the rate of production or in the luminosities of HMXBs: \begin{itemize} \item At a given redshift, a decrease of $f_{\rm X}$ with the SFR of galaxies is detected due to the correlation between the mean metallicity of galaxies and their SFR. This correlation makes low-SFR galaxies have lower metallicities and hence higher $f_{\rm X}$ than high-SFR ones. Our models predict a weak decrease if a metallicity dependence for the production rates is adopted ($\sim 0.15$ dex) and a stronger one if this dependence is extended to the X-ray luminosities ($\sim 0.25$ dex). \item For galaxies with similar SFRs, $f_{\rm X}$ should decrease with decreasing redshift because galaxies evolve to higher mean metallicities as redshift decreases. Again, the level of decrease is predicted to be directly related to the metallicity dependence: the higher it is, the larger the change with redshift. In our models, $f_{\rm X}$ increases by $\sim 0.5\ {\rm dex}$ between $z \sim 0$ and $z \sim 3.5$ for a metallicity-dependent HMXB rate, and by $\sim 1\ {\rm dex}$ in the same redshift range if metallicity affects both the rates and the X-ray luminosities of HMXBs. \item The $f_{\rm X}$--SFR relation shows a dispersion which reflects the combined effects of the chemical evolution of the stellar populations and the metallicity dependences of the X-ray sources. This dispersion should decrease with increasing redshift since high-redshift galaxies tend to have stellar populations with more homogeneous metallicity distributions. Our findings suggest that the variation of the dispersion with redshift stores information on the nature of the metallicity dependence. \end{itemize} None of these three effects is predicted in a scenario with a negligible metallicity dependence of the properties of HMXBs. Hence the observational measurement of any of them would make a strong case for this dependence. Our results suggest that the evolution of the $f_{\rm X}$--SFR relation should be observed up to high redshift and low SFRs in order to assess the chemical dependence of the properties of HMXB populations. The study of the properties of HMXB populations through cosmic time can unveil their potential contribution to energy feedback, which is expected to play a critical role in the thermal and ionization history of the Universe \citep[e.g.][]{Mirabel2011a,Justham2012,Fragos2013a,Jeon2013}. In a future work we will explore the effects that energy feedback from HMXBs might have on the regulation of star formation in the early Universe. | 14 | 3 | 1403.2329 |
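The parameterization quoted in the introduction of this entry, $L_{\rm X} = 3.5 \times 10^{40}\,{\rm erg\,s^{-1}}\ f_{\rm X}\ {\rm SFR}/(M_\odot\,{\rm yr}^{-1})$, translates directly into a one-line estimate of the expected HMXB luminosity of a galaxy. The sketch below simply evaluates it with the locally calibrated $f_{\rm X}\sim0.2$; the SFR values are arbitrary examples.

def hmxb_luminosity(sfr_msun_per_yr, f_x=0.2):
    # L_X in erg/s from L_X = 3.5e40 * f_X * SFR / (Msun/yr)
    return 3.5e40 * f_x * sfr_msun_per_yr

print(hmxb_luminosity(1.0))    # ~7e39 erg/s for SFR = 1 Msun/yr
print(hmxb_luminosity(10.0))   # a strongly star-forming galaxy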
1403 | 1403.0576_arXiv.txt | Although statistical evidence is not overwhelming, possible support for an approximately 35 million year periodicity in the crater record on Earth could indicate a nonrandom underlying enhancement of meteorite impacts at regular intervals. A proposed explanation in terms of tidal effects on Oort cloud comet perturbations as the Solar System passes through the galactic midplane is hampered by lack of an underlying cause for sufficiently enhanced gravitational effects over a sufficiently short time interval and by the time frame between such possible enhancements. We show that a smooth dark disk in the galactic midplane would address both these issues and create a periodic enhancement of the sort that has potentially been observed. Such a disk is motivated by a novel dark matter component with dissipative cooling that we considered in earlier work. We show how to evaluate the statistical evidence for periodicity by input of appropriate measured priors from the galactic model, justifying or ruling out periodic cratering with more confidence than by evaluating the data without an underlying model. We find that, marginalizing over astrophysical uncertainties, the likelihood ratio for such a model relative to one with a constant cratering rate is 3.0, which moderately favors the dark disk model. Our analysis furthermore yields a posterior distribution that, based on current crater data, singles out a dark matter disk surface density of approximately 10 $M_\odot/{\rm pc}^2$. The geological record thereby motivates a particular model of dark matter that will be probed in the near future. | 14 | 3 | 1403.0576 |
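The model comparison summarized in this abstract can be caricatured with a much simplified likelihood ratio between a periodically modulated and a constant cratering rate. The Python sketch below uses hypothetical crater ages, a fixed phase and modulation amplitude, and none of the astrophysical priors or marginalization that the actual analysis relies on; it only illustrates how such a ratio is formed.

import numpy as np

def log_like(ages, period, phase, amp, t_max):
    # log-likelihood of crater ages under a rate proportional to
    # 1 + amp*cos(2*pi*(t - phase)/period), normalized over [0, t_max];
    # for t_max >> period the normalization is ~t_max (amp = 0 is the uniform model)
    rate = 1.0 + amp * np.cos(2.0 * np.pi * (ages - phase) / period)
    return np.sum(np.log(rate / t_max))

# hypothetical crater ages in Myr (not the compilation used in the paper)
ages = np.array([2., 15., 35., 36., 50., 66., 70., 99., 105., 140., 142., 145.])
ratio = np.exp(log_like(ages, 35.0, 0.0, 0.8, 250.0)
               - log_like(ages, 35.0, 0.0, 0.0, 250.0))
print(ratio)   # likelihood ratio of the periodic over the constant-rate model (toy numbers)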
1403 | 1403.2868_arXiv.txt | We present a class of models where both the primordial inflation and the late-time de Sitter phase are driven by simple phenomenological agegraphic potentials. In this context, a possible new scenario for a smooth exit from inflation to the radiation era is discussed by resorting to the kination (stiff) era but without the inefficient radiation production mechanism of these models. This is done by considering rapidly decreasing expressions for $V(t)$ soon after inflation. We show that the parameters of our models can reproduce the scalar spectral index $n_s$ measured by Planck, in particular for models with concave potentials. Finally, according to the recent BICEP2 data, all our models allow a large amplitude of primordial gravitational waves. | Many experimental observations during the past decade (see \cite{1,q,2}) are in agreement with the hypothesis of a present-day accelerating universe. In the standard $\Lambda$CDM cosmological model, an accelerating universe scenario invokes the presence of the so-called dark energy expressed in terms of a cosmological constant $\Lambda$ representing about $70\%$ of the present matter-energy content of the universe. However, the physical origin of this constant is still obscure. Moreover, the physical mechanism leading to a small $\Lambda$ that begins to dominate only recently still remains mysterious. In view of the fact that a cosmological constant is the simplest solution of the Klein-Gordon equation for a scalar field $\phi$, it seemed reasonable to consider a time-varying field, named the quintessence field, to describe a running cosmological constant driven by some potential $V(\phi)$ (see for example \cite{3,4,5,6}). This quintessence field produces a late-time cosmological constant by means of a mechanism very close to the one leading to primordial inflation, i.e. a slowly rolling scalar field (see for example \cite{7,8,9}). These models admit a tracker solution which partially alleviates the coincidence problem. An unsolved problem in these models is fine-tuning, i.e. the fact that the energy density for $\Lambda$ is so small compared to typical particle physics scales. Moreover, it is unclear how to obtain a smooth transition, after the inflationary epoch, to the radiation era, since the typical density of the quintessence field is at least two orders of magnitude smaller than the background density. Only at recent times should the quintessence field begin to dominate and thus mimic a cosmological constant. As a result, what is practically absent in the literature of modern cosmology is a unified view in which a smooth transition from the inflationary epoch to the radiation era and up to the dark energy era is obtained. An interesting alternative proposal to alleviate this lack is to link primordial inflation to dark energy. Initially \cite{10,11,12}, this has been considered in the context of anthropic selection effects. More recently, in \cite{13} it was shown that primordial quantum fluctuations of an almost massless scalar field during primordial inflation could explain the present quasi-de Sitter phase. Another interesting paper along this line is \cite{14}. Here, the actual accelerating phase is obtained from primordial inflation by using the renormalization group equation to obtain the rate of change of the density of the vacuum energy as dictated by the usual approach of quantum field theory in curved spacetimes. The model also predicts a transition to the radiation era.
However, a description in terms of an effective action together with the dynamics of $\phi$ is still missing. In this paper, we follow this interesting line of research, but use a different approach. In particular, we attempt to obtain a unified description of the whole history of the universe starting from the inflationary epoch by means of phenomenological potentials, initially expressed in terms of the cosmic time $t$. In particular, we are interested in possible alternative mechanisms allowing a graceful exit from inflation together with a transition to the radiation era. This paper is organized as follows. In section 2 we write down the relevant equations together with a presentation of our approach. In section 3 we study a particular model leading to the inflationary epoch. In section 4 we analyze the problem of a smooth exit from inflation, while in section 5 we study the universal tracker de Sitter solution of our models. In section 6 we study the dynamics of our model. In section 7 we analyze our models in light of Planck data and present a study for a more general class of potentials allowing concave potentials. In section 8 we study the possibility of introducing a running cosmological constant. Finally, section 9 is devoted to some conclusions. | This paper is an attempt to obtain a unified description of the universe from the inflationary era up to the late de Sitter phase. In particular, we introduce a physically viable class of phenomenological potentials $V(t)$ that allow us to achieve this unified description. In fact, our models are able to produce an inflationary mechanism both at early and late times. We show that a smooth transition from the inflationary epoch up to the radiation era can be obtained. After recombination, the scalar field $\phi$ generates small corrections to the Hubble flow $H$ that take the form $H\sim \frac{2}{3t}+\frac{const.}{t^3}+o(1)$. Although these corrections are expected to be very small, they could be investigated by future cosmological data. Moreover, a smooth transition to the de Sitter accelerated phase is obtained.\\ The models (\ref{ag1}) have four arbitrary constants, i.e. $k_{\phi}, I_{\phi}, T, k$. Two of them can be fixed by data concerning the present value of the cosmological constant ($\sim k_{\phi}$) and the effective cosmological constant driving the primordial inflation ($\sim I_{\phi}$). Unfortunately, we do not have at our disposal a sound estimate for this ratio from ordinary quantum field theory. The other parameters can be chosen in light of Planck and BICEP2 data.\\ The parameter $T$ represents the characteristic time after which the primordial inflationary de Sitter-like expansion phase is no longer efficient. The models of section $7$ have $F(t)$ instead of $k_{\phi}$. We have also tested our class of models with the recent Planck and BICEP2 data, in particular for the indices $n_s, \epsilon_V, r, \eta_V$. We found that the model with a convex potential can be acceptable if we consider the running spectral index Planck constraints and the more recent BICEP2 data. By considering more general potentials (\ref{ag1}) with $n>1$, we can easily obtain concave potentials with an acceptable range ($<0.12$) for the tensor ratio $r$ in agreement with Planck data. More general potentials can be used provided that $V(t)$ is monotonically decreasing with respect to $t$, nearly constant during inflation, and rapidly decreasing soon after inflation to a small value, i.e.
the cosmological constant, or to a small running cosmological constant, as shown in section $7$. However, irrespective of the modifications of $V(t)$, our study suggests that the definition of $V$ as an agegraphic potential allows us to introduce potentials with a time-varying form in terms of $\phi$. In fact, at the beginning of primordial inflation our potentials (\ref{ag1}) and (\ref{agage2}) have, to a good approximation, a linear expression in terms of $\phi$. There, a well-posed mechanism to start inflation in terms of axion monodromy is given, further motivating the choices (\ref{agage2}) and (\ref{ag1}). However, other higher-order terms also appear during inflation. This is in line with the idea that quantum fluctuations can add further terms to the initial expression for $V(\phi)$ due to the renormalization group equation (to this purpose see \cite{14}). Moreover, the explicit presence of $t$ permits us to follow the timing of the universe just in time to obtain the fundamental transition to the radiation era before nucleosynthesis and up to the dark matter era. Often in the literature these transitions are obtained only asymptotically (see \cite{Chervon}), which is unphysical. In our approach we consider from the outset a simple physically motivated expression for $V$ in terms of the cosmic time $t$, provided that a monotonically decreasing expression for $\phi$ is allowed. For this reason we named the agegraphic potential 'phenomenological'. The exponential form of the potentials (\ref{6}) and (\ref{ag1}) is dictated by the necessity of having an efficient primordial inflationary mechanism together with a rapid decay, allowing a plausible and simple mechanism for a graceful exit from inflation up to matter creation and to a late-time de Sitter phase. In this regard, note that an arbitrary naive choice for $V(t)$ generally does not work. As an extreme example, since the class of potentials (\ref{ag1}) and (\ref{agage2}) rapidly converges to a constant or negligible value, one may be tempted to set a potential of the form $V(t)=I_{\phi}$ for $t<T$ and $V(t)=k_{\phi}$ for $t>T$. Apart from a lack of continuity at $t=T$ that does not permit a smooth transition between the two regimes at $t<T$ and $t>T$, these models predict a strictly Gaussian perturbation, i.e. $\eta_V=\epsilon_V=0, n_s=1$, in complete disagreement with Planck data, which are the cornerstone of any physically viable cosmological model. In this manner we have a possible unification of the two inflationary epochs by a single smooth agegraphic potential expressed in terms of the cosmic time $t$, in such a way that dark energy is a relic of primordial inflation, together with a possible simple graceful exit from inflation up to the radiation era without invoking the usual reheating mechanism or dangerous particle creation due to the gravitational field. It is also interesting to note that in this framework the mechanism chosen for primordial inflation is not plagued by fine-tuning of ${\phi}_0$, since the only condition that leads to an efficient inflationary mechanism is that the characteristic time $T$ is of the order of, or a few orders of magnitude larger than, $t_e$, i.e. the time after which inflation is no longer strictly in a de Sitter phase. Theoretically, we could take ${\phi}_0=0$ and inflation works as well.
Obviously, the mechanism leading up to the radiation era must be further investigated by quantum calculations, by resorting to the post-inflationary non-oscillating scenarios, but read in our framework and with a different mechanism for matter-radiation creation, without encountering the drawbacks of the usual non-reheating cosmologies (see \cite{23}). To this purpose, note that the proposed mechanism leading to nucleosynthesis is not plagued by fine-tuning problems and only requires a negligible remnant of radiation soon after inflation. | 14 | 3 | 1403.2868 |
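The indices $n_s$, $\epsilon_V$, $r$ and $\eta_V$ invoked in the entry above follow from any single-field potential through the standard slow-roll relations $\epsilon_V=\frac{1}{2}M_P^2(V'/V)^2$, $\eta_V=M_P^2 V''/V$, $n_s\simeq 1-6\epsilon_V+2\eta_V$ and $r=16\epsilon_V$. The Python sketch below evaluates them numerically for a generic quartic potential as a stand-in (not the agegraphic $V$ of that paper), in reduced Planck units.

M_P = 1.0   # reduced Planck units

def slow_roll(V, phi, h=1.0e-4):
    # potential slow-roll parameters and observables via finite differences
    V0  = V(phi)
    dV  = (V(phi + h) - V(phi - h)) / (2.0 * h)
    d2V = (V(phi + h) - 2.0 * V0 + V(phi - h)) / h**2
    eps = 0.5 * M_P**2 * (dV / V0)**2
    eta = M_P**2 * d2V / V0
    return eps, eta, 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps   # eps, eta, n_s, r

V = lambda phi: 0.25 * phi**4          # generic quartic stand-in potential
print(slow_roll(V, phi=22.0))          # phi ~ sqrt(8N) M_P at N ~ 60 e-folds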
1403 | 1403.7483_arXiv.txt | We investigate the possibility of using the only known fundamental scalar, the Higgs, as an inflaton with minimal coupling to gravity. The peculiar appearance of a plateau or a false vacuum in the renormalised effective scalar potential suggests that the Higgs might drive inflation. For the case of a false vacuum we use an additional singlet scalar field, motivated by the strong CP problem, and its coupling to the Higgs to lift the barrier allowing for a graceful exit from inflation by mimicking hybrid inflation. We find that this scenario is incompatible with current measurements of the Higgs mass and the QCD coupling constant and conclude that the Higgs can only be the inflaton in more complicated scenarios. | \begin{multicols}{2} A period of exponential expansion in the early Universe solves the horizon, flatness and monopole problems as well as sourcing the seeds of structure formation. The spectrum of scalar perturbations predicted by such an inflationary theory has been measured many times, most recently to impressive accuracy by the Planck satellite \cite{Planck}. The recently reported observation of primordial B-modes in the polarization of the CMB by the BICEP-2 experiment \cite{BICEP} may turn out to be the most convincing evidence of inflation to date. Although the Planck data have taken some steps towards selecting among the various models that can produce inflation, we are still a long way from pinning down the features that the precise microscopic mechanism responsible for inflation would have to have. What is, however, common to almost all models is the presence of a scalar inflaton. The Higgs boson, $h$, discovered by the ATLAS \cite{ATLAS1207} and CMS \cite{CMS1207} collaborations, is the first (seemingly \cite{Ellis1303}) fundamental scalar we have detected. It is therefore natural to ask whether the Higgs can play the role of the inflaton. A naive first answer would be that it cannot because it is well known that for $V(\phi)\simeq \frac{1}{4}\lambda \phi^4 $ the measured spectrum of perturbations requires\footnote{This requirement is to fit the perturbations for $N=60$ e-folds before the end of inflation. This model is also in tension with Planck's $n_S -r$ plane constraints \cite{Planck}, where $n_S$ is the spectral index and $r$ is the tensor-to-scalar ratio.} the quartic coupling $\lambda \simeq 10^{-13}$ whereas the measured Higgs mass requires $\lambda \sim 0.13$. This, however, neglects the effect of quantum corrections. Properly considered, these effects can lead to substantial modifications to the tree-level potential and a significant scale dependence of $\lambda$. For a finely chosen mass of the top quark it is possible, as shown in \cite{Isidori0712}, that the effective Higgs potential develops a flat part at large field values or even a second, local minimum, also called a false vacuum. Remarkably, these features appear at approximately the correct scale to generate the observed perturbations, which suggests that the Higgs does indeed have a role to play in inflation. Recently there has been a lot of interest in using the Higgs as the inflaton in the context of a non-minimal coupling to gravity~\cite{Bezrukov0710,Bezrukov0812,Bezrukov0812a,Barbon0903,Bezrukov0904,Lerner0912,Burgess1002,Bezrukov1008,Giudice1010}.
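As an aside, the mismatch quoted above can be checked with the standard slow-roll normalization of the scalar amplitude: for $V=\frac{1}{4}\lambda\phi^4$ one finds $A_s\simeq\frac{2}{3}\lambda N^3/\pi^2$, so matching the observed $A_s\approx2.2\times10^{-9}$ at $N=60$ indeed requires $\lambda\sim10^{-13}$. A minimal numerical check (the $A_s$ value is the approximate Planck normalization):

import math

def quartic_lambda(A_s=2.2e-9, N=60):
    # invert A_s = (2/3) * lambda * N^3 / pi^2
    return 1.5 * math.pi**2 * A_s / N**3

print(quartic_lambda())   # ~1.5e-13, versus lambda ~ 0.13 from the Higgs mass at the weak scale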
It is worth noting that quantum corrections may reduce the predictiveness of such models \cite{burgess1402} and should be taken into account. Additionally, if the recent measurement by the BICEP collaboration proves to be true then these models will be put under pressure~\cite{Cook1403} (for a possible way out see the recent works~\cite{Bezrukov1403,Hamada1403} that rely on similar tunings of the Higgs potential). Here we don't consider any such coupling and so refer to it as minimal Higgs inflation. In this paper we will investigate how the plateau or the false vacuum could be used to explain the inflationary phase of the universe. To do so we will first look at the situation where there is a plateau in the potential and see whether the Higgs can inflate the universe on its own by slowly rolling down the plateau. The case of a false vacuum in the potential demands a mechanism for a graceful exit from inflation. Therefore, we extend the model and add an additional scalar field, $s$, which can lift the Higgs out of its local minimum. The strong CP-problem motivates the existence of such an additional scalar field and it is worth investigating if such a mechanism can give successful inflation. Our calculation improved upon a previous treatment in~\cite{masina1204} by considering the full 3-loop renormalisation group equation (RGE) improved 2-loop effective potential~\cite{degrassi1205,buttazzo1307}, including the 1-loop RGE's for the new scalar field and its threshold effect at the matching scale. Also, we account for the movement of the Higgs during inflation and further address a degeneracy in the initial depth of the false vacuum. We will see that these improvements can dramatically affect the conclusions. The structure of the paper is as follows. In section 2 we discuss the RGE improved effective potential and attempt to use the resulting plateau for inflation. In section 3 we discuss the possibility of false vacuum inflation which is the main focus of this paper. Finally, in section 4 we present our conclusions. | In this paper we have considered two possible implementations of minimal Higgs inflation. In section 2 we tuned the Higgs potential in such a way that a plateau appears and investigated whether this plateau can be used to inflate the universe via a slow-rolling of the Higgs alone. We considered the full 3-loop RGE improved 2-loop effective potential. A simultaneous fit of the number of e-foldings and the scalar perturbations turned out to be impossible, such that an extension of the Standard Model is necessary, compare figure 2. The most minimal extension was investigated in section 3 where we introduced an additional singlet scalar field $s$ and looked at a hybrid scenario. Such a scalar field is motivated by the strong CP-problem. In this case, the Higgs sits in a local minimum of the potential and $s$ slowly rolls towards the minimum of its potential. The mutual coupling between $s$ and the Higgs field removes the barrier during the rolling of $s$ such that the Higgs can then roll towards its global minimum and successful exit is guaranteed. To ensure a correct treatment, we included the 1-loop RGE's for the new scalar, the threshold effect in the Higgs potential occurring at the mass of the singlet scalar, the movement of the Higgs field during inflation and the degeneracy in the well depth. 
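For intuition only, the scale dependence of $\lambda$ that produces the plateau or false vacuum can be sketched with the dominant one-loop terms of the Standard Model running; the analysis described in this entry uses the full three-loop RGEs, the two-loop effective potential and threshold effects, none of which are reproduced here. In the Python sketch below the strong coupling is frozen and electroweak contributions are dropped, so the output is purely schematic; the weak-scale inputs are indicative values.

import math

def run_lambda(lam, yt, g3=1.16, mu0=173.0, mu1=1.0e18, steps=20000):
    # Euler integration of truncated one-loop RGEs for the Higgs quartic lam
    # and top Yukawa yt, keeping only the dominant terms and a frozen g3
    t0, t1 = math.log(mu0), math.log(mu1)
    dt = (t1 - t0) / steps
    for _ in range(steps):
        beta_lam = (24.0 * lam**2 + 12.0 * lam * yt**2 - 6.0 * yt**4) / (16.0 * math.pi**2)
        beta_yt  = yt * (4.5 * yt**2 - 8.0 * g3**2) / (16.0 * math.pi**2)
        lam += beta_lam * dt
        yt  += beta_yt * dt
    return lam

print(run_lambda(0.13, 0.94))   # schematic value of lambda near the Planck scale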
Our results are summarised in figure 4, where one can see that the sets of parameters giving a good fit to the inflationary observables are clearly excluded by measurements of the Higgs mass and the strong coupling constant. Within standard General Relativity and quantum field theory with minimal couplings between the particles and gravity, it has been shown that one cannot obtain inflation using only the Standard Model Higgs. In this work we show that even with an additional field allowing the Higgs to become the waterfall field of a hybrid inflation model, the coupling between the two fields conspires to prevent good inflationary parameters. Inflation can therefore only be explained using either a more complicated scenario or an entirely separate field, such that the Higgs plays no role in the process. | 14 | 3 | 1403.7483
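As a closing illustration of the mechanism underlying the record above (1403.7483): the plateau or false vacuum comes entirely from the scale dependence of the Higgs quartic coupling. The paper uses the full 3-loop RGE-improved 2-loop potential; the sketch below integrates only the 1-loop Standard Model beta functions, with rounded boundary values at the top mass that are illustrative and not the paper's inputs, which is already enough to see $\lambda$ being driven towards zero at large scales.

\begin{verbatim}
import numpy as np

def betas(y):
    """One-loop SM beta functions; y = [g1, g2, g3, yt, lam], g1 = g' (SM normalisation)."""
    g1, g2, g3, yt, lam = y
    k = 1.0 / (16 * np.pi**2)
    b_g1 = k * (41.0 / 6.0) * g1**3
    b_g2 = -k * (19.0 / 6.0) * g2**3
    b_g3 = -k * 7.0 * g3**3
    b_yt = k * yt * (4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2 - (17.0 / 12.0) * g1**2)
    b_lam = k * (24.0 * lam**2 - 6.0 * yt**4
                 + 0.375 * (2.0 * g2**4 + (g2**2 + g1**2)**2)
                 + lam * (12.0 * yt**2 - 9.0 * g2**2 - 3.0 * g1**2))
    return np.array([b_g1, b_g2, b_g3, b_yt, b_lam])

# Rounded boundary values at mu = Mt ~ 173 GeV (illustrative only, not the paper's 3-loop inputs).
y = np.array([0.359, 0.648, 1.167, 0.937, 0.126])
t, dt, t_end = 0.0, 0.01, np.log(2.4e18 / 173.0)   # t = ln(mu/Mt), run up to ~M_Pl
while t < t_end:
    y = y + dt * betas(y)                          # simple Euler steps suffice for a sketch
    t += dt
    if y[-1] < 0.0:
        print(f"lambda runs negative near mu ~ {173.0 * np.exp(t):.1e} GeV")
        break
else:
    print(f"lambda stays positive; lambda(M_Pl) ~ {y[-1]:.4f}")
\end{verbatim}

Whether and where $\lambda$ crosses zero at this crude level is very sensitive to the top-mass input, which is exactly the fine-tuning exploited in the record above to produce a plateau or a shallow false vacuum.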
1403 | 1403.5486_arXiv.txt | {{\bfseries\scshape Abstract} \\ \par Supersymmetric versions of induced-gravity inflation are formulated within Supergravity (SUGRA) employing two gauge singlet chiral superfields. The proposed superpotential is uniquely determined by applying a continuous $R$ and a discrete $\mathbb{Z}_n$ symmetry. We select two types of logarithmic \Ka s, one associated with a no-scale-type $SU(2,1)/SU(2)\times U(1)_R\times\mathbb{Z}_n$ \Km\ and one more generic. In both cases, imposing a lower bound on the parameter $\ck$ involved in the coupling between the inflaton and the Ricci scalar curvature -- e.g. $\ck\gtrsim 76, 105, 310$ for $n=2,3$ and $6$ respectively --, inflation can be attained even for \sub\ values of the inflaton while the corresponding effective theory respects the perturbative unitarity. In the case of no-scale SUGRA we show that, for every $n$, the inflationary observables remain unchanged and in agreement with the current data while the inflaton mass is predicted to be $3\cdot10^{13}~\GeV$. Beyond no-scale SUGRA the inflationary observables depend mildly on $n$ and crucially on the coefficient involved in the fourth order term of the \Ka\ which mixes the inflaton with the accompanying non-inflaton field. } \\ \\ {\ftn \sf Keywords: Cosmology, Supersymmetric models, Supergravity, Modified Gravity};\\ {\ftn \sf PACS codes: 98.80.Cq, 11.30.Qc, 12.60.Jv, 04.65.+e, 04.50.Kd}\\ \\ \publishedin{{\sl J. Cosmol. Astropart. Phys.} {\bf 08}, {057} (2014)} | % The announcement of the recent PLANCK results \cite{wmap,plin} fuelled increasing interest in inflationary models implemented thanks to a strong enough non-minimal coupling between the inflaton field, $\phi$, and the Ricci scalar curvature, $\rcc$. Indeed, these models predict \cite{plin, defelice13} a (scalar) spectral index $\ns$, tantalizingly close to the value favored by observational data. The existing non-minimally coupled to Gravity inflationary models can be classified into two categories depending whether the non-minimal coupling to $\rcc$ is added into the conventional one, $\mP^2\rcc/2$ -- where $\mP = 2.44\cdot 10^{18}~\GeV$ is the reduced Planck scale -- or it replaces the latter. In the first case the \emph{vacuum expectation value} ({\ftn\sf v.e.v}) of the inflaton after inflation assumes sufficiently low values after inflation, such that a transition to Einstein gravity at low energy to be guarantied. In the second case, however, the term $\mP^2\rcc/2$ is dynamically generated via the v.e.v of the inflaton; these models are, thus, named \cite{induced, higgsflaton} \emph{Induced-Gravity} ({\ftn\sf IG}) inflationary models. Despite the fact that both models of non-Minimal Inflation are quite similar during inflation and may be collectively classified into universal ``attractor'' models \cite{roest}, they exhibit two crucial differences. Namely, in the second category, {\sf\ftn (i)} the \emph{Einstein frame} ({\ftn\sf EF}) inflationary potential develops a singularity at $\phi=0$ and so, inflation is of Starobinsky-type \cite{R2} actually; {\sf\ftn (ii)} The \emph{ultaviolet} ({\ftn\sf UV}) cut-off scale \cite{cutoff,cutof,riotto} of the theory, as it is recently realized \cite{pallis,gian}, can be identified with $\mP$ and, thereby, concerns regarding the naturalness of inflation can be safely eluded. On the other hand, only some \cite{riotto} of the remaining models of nonminimal inflation can be characterized as unitarity safe. 
In a recent paper \cite{pallis} a \emph{supersymmetric} ({\ftn\sf SUSY}) version of IG inflation was, for first time, presented within no-scale \cite{noscale,eno5, eno7} \emph{Supergravity} ({\ftn\sf SUGRA}). A Higgs-like modulus plays there the role of inflaton, in sharp contrast to \cref{eno5} where the inflaton is matter-like. For this reason we call in \cref{pallis} the inflationary model \emph{no-scale modular inflation}. Although any connection with the no-scale SUSY breaking \cite{noscale, eno9} is lost in that setting, we show that the model provides a robust cosmological scenario linking together non-thermal leptogenesis, neutrino physics and a resolution to the $\mu$ problem of the \emph{Minimal SUSY SM} ({\ftn\sf MSSM}). Namely, in \cref{pallis}, we employ a \Ka, $K$, corresponding to a $SU(N,1)/ SU(N)\times U(1)_R\times\mathbb{Z}_2$ symmetric \Km. This symmetry fixes beautifully the form of $K$ up to an holomorphic function $\fk$ which exclusively depends on the inflaton, $\phi$, and its form $\fk\sim\phi^2$ is fixed by imposing a $\mathbb{Z}_2$ discrete symmetry which is also respected by the superpotential $\Whi$. Moreover, the model possesses a continuous $R$ symmetry, which reduces to the well-known $R$-parity of MSSM. Thanks to the strong enough coupling between $\phi$ and $\rcc$, inflation can be attained even for \sub\ values of $\phi$, contrary to other SUSY realizations \cite{eno7,linde,zavalos} of the Starobinsky-type inflation. Most recently a more generic form of $\fk$ has been proposed \cite{gian} at the non-SUSY level. In particular, $\fk$ is specified as $\fk\sim\phi^n$ and it was pointed out that the resulting IG inflationary models exhibit an attractor behavior since the inflationary observables and the mass of the inflaton at the vacuum are independent of the choice of $n$. It would be, thereby, interesting to investigate if this nice feature insists also in the SUSY realizations of these models. This aim gives us the opportunity to generalize our previous analysis \cite{pallis} and investigate the inflationary predictions independently of the post-inflationary cosmological evolution. Namely, we here impose on $\fk$ a discrete $\mathbb{Z}_n$ symmetry with $n\geq2$, and investigate its possible embedding in the standard Poincar\'e SUGRA, without invoking the superconformal formulation -- cf.~\cref{rena}. We discriminate two possible embeddings, one based on a no-scale-type symmetry and one more generic, with the first of these being much more predictive. Namely, while the embedding of IG models in generic SUGRA gives adjustable results as regards the inflationary observables, -- see also \cref{talk} --, no-scale SUGRA predicts independently of $n$ results identical to those obtained in the non-SUSY case. Therefore, no-scale SUGRA consists a natural framework in which such models can be implemented. Below, in Sec.~\ref{fhim}, we describe the generic formulation of IG models within SUGRA. In Sec.~\ref{fhi} we present the basic ingredients of our IG inflationary models, derive the inflationary observables and confront them with observations. We also provide a detailed analysis of the UV behavior of these models in Sec.~\ref{fhi3}. Our conclusions are summarized in Sec.~\ref{con}. Throughout the text, the subscript of type $,\chi$ denotes derivation \emph{with respect to} ({\ftn\sf w.r.t}) the field $\chi$ (e.g., $_{,\chi\chi}=\partial^2/\partial\chi^2$) and charge conjugation is denoted by a star. 
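For orientation, the attractor behaviour referred to above -- and the values $n_s\simeq0.963$, $\alpha_s\simeq-0.00068$, $r\simeq0.0038$ quoted in the conclusions below -- follow from the standard Starobinsky-like large-$N$ expressions $n_s\simeq1-2/N$, $\alpha_s\simeq-2/N^2$ and $r\simeq12/N^2$. A minimal check, with the number of e-folds $N$ taken here as an assumption:

\begin{verbatim}
# Starobinsky-type attractor estimates, leading order in 1/N (N = assumed number of e-folds)
for N in (50, 55, 60):
    n_s, alpha_s, r = 1 - 2.0 / N, -2.0 / N**2, 12.0 / N**2
    print(f"N={N}: n_s ~ {n_s:.4f}, alpha_s ~ {alpha_s:.5f}, r ~ {r:.4f}")
\end{verbatim}

For $N\simeq55$ this gives $n_s\simeq0.964$, $\alpha_s\simeq-0.00066$ and $r\simeq0.004$, in line with the numbers quoted in the conclusions of this record.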
| \label{con} In this work we showed that a wide class of IG inflationary models can be naturally embedded in standard SUGRA. Namely, we considered a superpotential which realize easily the IG idea and can be uniquely determined by imposing two global symmetries -- a continuous $R$ and a discrete $\mathbb{Z}_n$ symmetry -- in conjunction with the requirement that inflation has to occur for \sub\ values of the inflaton. On the other hand, we adopted two forms of \Ka s, one corresponding to the \Km\ $SU(2,1)/SU(2)\times U(1)_R\times\mathbb{Z}_n$, inspired by no-scale SUGRA, and one more generic. In both cases, the tachyonic instability, occurring along the direction of the accompanying non-inflaton field, can be remedied by considering terms up to the fourth order in the \Ka. Thanks to the underlying symmetries the inflaton, $\phi$ appears predominantly as $\phi^n$ in both the super- and \Ka s. In the case of no-scale SUGRA, the inflaton is not mixed with the accompanying non-inflaton field in \Ka. As a consequence, the model predicts results identical to the non-SUSY case independently of the exponent $n$. In particular, we found $\ns\simeq0.963$, $\as\simeq-0.00068$ and $r\simeq0.0038$, which are in excellent agreement with the current data, and $\msn=3\cdot10^{13}~\GeV$. Beyond no-scale SUGRA, all the possible terms up to the forth order in powers of the various fields are included in the \Ka. In this case, we can achieve $\ns$ precisely equal to its central observationally favored value, mildly tuning the coefficient $\ksp$. Furthermore, a weak dependance of the results on $n$ arises with the lower $n$'s being more favored, since the required tuning on $\ksp$ is softer. In both cases a $n$-dependent lower bound on $\ck$ assists us to obtain inflation for \sub\ values of the inflaton, stabilizing thereby our proposal against possible corrections from higher order terms in $\fk$. Furthermore we showed that the one-loop radiative corrections remain subdominant during inflation and the corresponding effective theory is trustable up to $\mP$. Indeed, these models possess a built-in solution into long-standing naturalness problem \cite{cutoff,riotto} which plagued similar models. As demonstrated both in the EF and the JF, this solution relies on the dynamical generation of $\mP$ at the vacuum of the theory. As a bottom line we could say that although no-scale SUGRA has been initially coined as a solution to the problem of SUSY breaking \cite{noscale,eno9} ensuring a vanishing cosmological constant, it is by now recognized -- see also \cite{eno7,zavalos,pallis} -- that it provides a flexible framework for inflationary model building. In fact, no-scale SUGRA is tailor-made for IG (and nonminimal, in general) inflation since the predictive power of this inflationary model in more generic SUGRA incarnations is lost. \subsubsection*{\large\bfseries\scshape Note Added} When this work was under completion, the {\small\sc Bicep2} experiment \cite{gws} announced the detection of B-mode polarization in the cosmic microwave background radiation at large angular scales. If this mode is attributed to the primordial gravity waves predicted by inflation, it implies \cite{gws} $r=0.16^{+0.06}_{-0.05}$ -- after subtraction of a dust model. Combining this result with \sEref{nswmap}{c} we find -- cf. \cref{rcom} -- a simultaneously compatible region $0.06\lesssim r\lesssim0.135$ (at $95\%$ c.l.) 
which, obviously, is not fulfilled by the models presented here, since the predicted $r$ lies one order of magnitude lower -- see \Eref{res} and comments below \Eref{resg3}. However, it is still premature to exclude any inflationary model with $r$ lower than the above limit since the current data are subject to considerable foreground uncertainty -- see e.g. \cref{gws1,gws2}. \begin{acknowledgement} This research was supported by the Generalitat Valenciana under contract PROMETEOII/2013/017. \end{acknowledgement} | 14 | 3 | 1403.5486
1403 | 1403.7951_arXiv.txt | In this paper we present the discussion on the salient points of the computational analysis that are at the basis of the paper \emph{Rotation curves of galaxies by fourth order gravity} \citep{StSc}. The computational and data analysis have been made with the software Mathematica$^\circledR$ and presented at Mathematica Italy 5th User Group meeting (2011, Turin - Italy). | The computational analysis here described is referred to the study of the galactic rotation curve. The theoretical details of the model investigated are omitted here, but fully available on the cited paper \citep{StSc}. The formula under study is $v(r,R,z)=\sqrt{r\frac{\partial}{\partial r}\Phi(r,R,z)}$ where $\Phi(r,R,z)$ is the gravitational potential \begin{widetext} \begin{eqnarray}\label{potential} &&\Phi(r,R,z)\,=\,\frac{4\pi G}{3}\,\biggl[\frac{1}{r}\int_0^\infty dr'\,\rho_{bulge}(r')\,r'\,\biggl(3\,\frac{|r-r'|-r-r'}{2} -\frac{e^{-\mu_1|r-r'|}-e^{-\mu_1(r+r')}}{2\,\mu_1} +2\,\frac{e^{-\mu_2|r-r'|}-e^{-\mu_2(r+r')} }{\mu_2}\biggr)\biggr] \nonumber\\\nonumber\\&& +\frac{4\pi G}{3}\,\biggl[\frac{1}{r}\int_0^{\Xi} dr'\,\rho_{DM}(r')\,r'\,\biggl(3\,\frac{|r-r'|-r-r'}{2} -\frac{e^{-\mu_1|r-r'|}-e^{-\mu_1(r+r')}}{2\,\mu_1} +2\,\frac{e^{-\mu_2|r-r'|}-e^{-\mu_2(r+r')} }{\mu_2}\biggr)\biggr] \nonumber\\\nonumber\\&& -2\,G\,\biggr\{\int_0^\infty dR'\,\sigma_{disc}(R')\,R'\,\biggl(\frac{\mathfrak{K}(\frac{4RR'}{(R+R')^2+z^2})}{\sqrt{(R+R')^2+z^2}} +\frac{\mathfrak{K}(\frac{-4RR'}{(R-R')^2+z^2})}{\sqrt{(R-R')^2+z^2}}\biggr)+\int_0^\infty dR'\,\sigma_{disc}(R')\,R'\, \\\nonumber\\&& \times\int_0^{\pi} d\theta'\frac{1}{3\,\sqrt{(R+R')^2+z^2-4RR'\cos^2\theta'}} \biggl[e^{-\mu_1\sqrt{(R+R')^2+z^2-4RR'\cos^2\theta'}} -4\,e^{-\mu_2\sqrt{(R+R')^2+z^2-4RR'\cos^2\theta'}}\biggr]\biggr\}\nonumber \end{eqnarray} \end{widetext} where $\mathfrak{K}(x)$ is the Elliptic function and $G$ is gravitational constant. We remember that in the potential (\ref{potential}) we can distinguish the contributions of the bulge, the disk and the (eventual) Dark Matter. $r$ is the radial coordinate in the spherical system, while $R$, $z$ are respectively the radial coordinate in the plane of disc and the distance from the plane then we have the geometric relation $r\,=\,\sqrt{R^2+z^2}$. The main item is the choice of models of matter distribution. The more simple model characterizing the shape of galaxy is the following \begin{eqnarray}\label{density_3} \left\{\begin{array}{ll} \rho_{bulge}(r)\,=\,\frac{M_b}{2\,\pi\,{\xi_b}^{3-\gamma}\,\Gamma(\frac{3-\gamma}{2})}\frac{e^{-\frac{r^2}{{\xi_b}^2}}}{r^\gamma}\\\\ \sigma_{disk}(R)\,=\,\frac{M_d}{2\pi\,{\xi_d}^2}\, e^{-\frac{R}{\xi_d}}\\\\ \rho_{DM}(r)\,=\,\frac{\alpha\,M_{DM}}{\pi\,(4-\pi){\xi_{DM}}^3}\,\frac{1}{1+\frac{r^2}{{\xi_{DM}}^2}} \end{array}\right. \end{eqnarray} where $\Gamma(x)$ is the Gamma function, $0\,\leq\,\gamma\,<\,3$ is a free parameter and $0\,\leq\,\alpha\,<\,1$ is the ratio of Dark Matter inside the sphere with radius $\xi_{DM}$ with respect the total Dark Matter $M_{DM}$. Moreove the couples $\xi_b$, $M_b$ and $\xi_d$, $M_d$ are the radius and the mass of the bulge and the disc. The parameters $\mu_1$ and $\mu_2$ are the free parameters in the theory and only by fitting process can be fixed. A sensible item is the choice of distance $\Xi$ on the which we are observing the rotation curve. In fact all models for the Dark Matter component are not limited and we need to cut the upper value of integration in (\ref{potential}). 
A further distinction are the contributions to the potential coming from terms of General Relativity (GR) origin and terms of Forth Order Gravity (FOG) origin. Finally our aim is the numerical evaluation of the rotation curve in the galactic plane \begin{eqnarray}\label{velocity} v(R,R,0)=\sqrt{R\frac{\partial}{\partial R}\Phi(R,R,0)} \end{eqnarray} Our analysis is then organized as follows: in section II we investigate the contribution of these terms on the galactic rotation curve, in section III a data fit between our theoretical curves and the data of the rotation curve of the Milky Way and the galaxy NGC 3190 and in section IV we report the conclusions. | In this paper we presented the salient points in the program we build in the computation of the velocity curves of the Milky Way and the galaxy NGC 3190. In figure \ref{schermata_9} is shown the full code corresponding to the plot of the figure \ref{plot_1_PRD} \citep{StSc}, that is the code for a galaxy whose components are the bulge, the disk and the Dark Matter. The code referring also to the study of the galaxy NGC 3190 is exactly the same with the exclusion of the part of code referring to the bulge. \begin{figure}[t] \centering \includegraphics[scale=0.45]{fig08.eps}\\ \caption{Screen-shot of the full program for the rotation curve of the Milky Way (figure \ref{plot_1_PRD}).} \label{schermata_9} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.8]{fig09.eps}\\ \caption{Plot of the galactic rotation curve by using the full program for Milky Way (figure \ref{schermata_9}). The cases are the following: GR (dashed line), GR$+$DM (dashed and dotted line), FOG (solid line), FOG$+$DM (dotted line). The values of masses are $\mu_1\,=\,10^{-2}\,\text{Kpc}^{-1}$ and $\mu_2\,=\,10^2\,\text{Kpc}^{-1}$ \citep{StSc}} \label{plot_1_PRD} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.8]{fig10.eps}\\ \caption{Superposition of theoretical behaviors GR (dashed line), GR$+$DM (dashed and dotted line), FOG (solid line), FOG$+$DM (dotted line) by using the full program (figure \ref{schermata_9}) on the experimental data for Milky Way. The values of masses are $\mu_1\,=\,10^{-2}\,\text{Kpc}^{-1}$ and $\mu_2\,=\,10^2\,\text{Kpc}^{-1}$ \citep{StSc}.} \label{plot_2_PRD} \end{figure} As it is possible to see from figure \ref{plot_2_PRD} \citep{StSc}, the agreement of our model with the experimental data of the Milky Way is very good. Only for very low values of the distance $R$ the agreement is not perfect. This suggest us that we only need an improvement of the parameters in the code, maintaining the code itself essentially unchanged. | 14 | 3 | 1403.7951 |
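As a minimal numerical companion to the record above (1403.7951), the sketch below evaluates $v(r)=\sqrt{r\,d\Phi/dr}$ for the simplest configuration, a point-like mass, using the Yukawa-corrected weak-field potential usually quoted for this class of fourth-order models, $\Phi(r)=-\frac{GM}{r}\left[1+\frac{1}{3}e^{-\mu_1 r}-\frac{4}{3}e^{-\mu_2 r}\right]$. The total mass is a placeholder, the values of $\mu_1,\mu_2$ are the illustrative ones from the record's figure captions, and the Yukawa coefficients should be checked against the paper before any quantitative use.

\begin{verbatim}
import numpy as np

G = 4.302e-6               # kpc (km/s)^2 / Msun
M = 1.0e11                 # Msun, placeholder total mass
mu1, mu2 = 1.0e-2, 1.0e2   # 1/kpc, illustrative values as in the figure captions

def phi(r):
    """Assumed weak-field point-mass potential of fourth-order gravity (kpc, (km/s)^2)."""
    return -G * M / r * (1.0 + np.exp(-mu1 * r) / 3.0 - 4.0 * np.exp(-mu2 * r) / 3.0)

r = np.linspace(0.5, 30.0, 600)            # kpc
dphi_dr = np.gradient(phi(r), r)           # numerical radial derivative
v = np.sqrt(r * dphi_dr)                   # circular speed, v^2 = r dPhi/dr, in km/s
v_newton = np.sqrt(G * M / r)
for rr, vv, vn in zip(r[::100], v[::100], v_newton[::100]):
    print(f"r = {rr:5.1f} kpc   v_FOG = {vv:6.1f} km/s   v_Newton = {vn:6.1f} km/s")
\end{verbatim}

In the full calculation of the record the same $v=\sqrt{r\,\partial_r\Phi}$ step is applied to the bulge, disc and dark-matter integrals of Eq.~(1) rather than to a point mass; the point-mass toy only shows how the two Yukawa scales $\mu_1^{-1}$ and $\mu_2^{-1}$ reshape the curve relative to the Newtonian one.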
1403 | 1403.5679_arXiv.txt | { Cosmological perturbations of FRW solutions in ghost free massive bigravity, including also a second matter sector, are studied in detail. At early time, we find that sub horizon exponential instabilities are unavoidable and they lead to a premature departure from the perturbative regime of cosmological perturbations.} \begin{document} | Dark Energy is the dominant component of our Universe, if future observations will establish that its equation of state differ from the one of a Cosmological Constant contribution, then we have a case for modifying GR at large distances and massive gravity can be a compelling candidate. Great effort was devoted to extend at the nonlinear level~\cite{Gabadadze:2010, Hassan:2011vm} the seminal work of Fierz and Pauli (FP)~\cite{Fierz:1939ix} and recently a Boulware-Deser (BD) ghost free theory was found~\cite{Gabadadze:2011, HR}. Unfortunately, cosmological solutions of the ghost free dRGT theory are rather problematic: spatially flat homogenous Friedmann-Robertson-Walker (FRW) solutions simply do not exist~\cite{DAmico} and even allowing for open FRW solutions~\cite{open} strong coupling~\cite{tasinato} and ghostlike instabilities~\cite{defelice-prl, defelice} develop. In addition the cutoff of the theory is rather low~\cite{AGS}, namely $\Lambda_3=\left(m^2 \, M_{Pl} \right)^{1/3}$. For a recent review see \cite{de-rham-long,defelice-rev}. A possible way out is to give up Lorentz invariance and requires only rotational invariance~\cite{Rubakov,dub,usweak}. Within the general class of theories which propagate five DoF found in~\cite{uscan,uslong}, in the Lorentz breaking case most of the theories have much safer cutoff $\Lambda_2 =(m \, M_{Pl})^{1/2}\gg \Lambda_3$ and also avoid all of the phenomenological difficulties mentioned above, including the SWtroubles with cosmology~\cite{cosmogen}. Another option is to promote the nondynamical metric entering in the construction of massive gravity theory to a dynamical one~\cite{DAM1,PRLus} entering in the realm of bigravity originally introduced by Isham, Salam and Strathdee~\cite{Isham}. In the bigravity formulation FRW homogenous solutions do exist~\cite{uscosm,hasscosm,russ}, however cosmological perturbations, for modes inside the horizon, start to grow too early and too fast when compared with GR, as a result the linear regime becomes problematic already during the radiation/matter era~\cite{uspert}. The reason of such peculiar behaviour of the scalar perturbations could be {\it naively} traced back to the FRW background solution which is controlled by the parameter $\xi$ (the ratio of the conformal factors of the two metrics) and to the absence of matter coupled to the second metric whose pressure could support inside horizon gravitational perturbations. In presence of only ordinary matter, coupled with the first metric, {\it only} small values of the parameter $\xi$ give an acceptable early time cosmology. The introduction of the second matter component provides other consistent background solutions where the values of $\xi$ can be also of order~1 and, at the same time, provides the necessary pressure support to infall perturbations. So in this paper we will extend our previous analysis to the case where an additional matter sector is minimally coupled to the second metric. Though we do not consider the problem, the second matter sector could be also relevant for dark matter~\cite{Yuk1,Yuk2}. 
The outline of the paper is the following: in section \ref{bi} we review the bigravity formulation of massive gravity and the extension to the case where a second matter sector is present; in section \ref{frwsec} we study FRW solutions and cosmological perturbations are analysed in section \ref{pert-sect}. | \label{con} We studied in detail the dynamics of scalar perturbations in massive bigravity. Beside its theoretical interest, massive gravity could be an interesting alternative to dark energy. As a general ground, the ghost free massive gravity theories can be classified according to the global symmetries of the potential $V$ in the unitary gauge~\cite{uslong}. The ones characterized by Lorentz invariance on flat space have a number of issues once an homogeneous FRW background is implemented . In the bigravity formulation, with a \underline{single matter sector}, things get better and FRW cosmological solutions indeed exist~\cite{russ,uscosm,hasscosm}. However, cosmological perturbations are different from the ones in GR. Already during radiation domination, sub horizon scalar perturbations tend to grow exponentially~\cite{uspert}. The manifestation of such instabilities is rather peculiar. In the sector one, composed by ordinary matter and the metric $g$, their perturbations are very close to the ones of GR. The instability manifests as an exponential sub horizon growth of the field $\E$ and of the second scalar mode $\Phi_2$, one of the Bardeen potentials of $\tilde g$, which quickly invalidate the use of perturbation theory at very early time. This is very different from GR where perturbations become large (power law growth) only when the universe is non relativistic. The emergence of an instability only in the perturbations of the second metric suggests its origin may resides in the matter content asymmetry of the two sectors, since only the physical metric is coupled to matter. Indeed, the only background solutions acceptables have a ratio $\xi=\omega/a$ of the metrics' scale factors such that $\xi \ll 1$. Adding a \underline{second matter sector} sourcing the second metric, opens up the possibility (case ({\bf C})) to have a more symmetric background with $\xi \sim 1$ and one may hope the exponential instability to be absent. Unfortunately, we have shown that this is not the case. Though, the pressure provided by the second matter stabilizes $\Phi_2$ and its dynamics becomes similar to GR, the sub horizon instability persists for $\E$ that represents a purely gravitational extra scalar field. We managed to analyze the perturbations in whole range of $\xi$ compatible with the early Universe evolution (matter and radiation). The cases ({\bf A}) and ({\bf B}) represent regions of very small $\xi$ where only one matter sector dominates, likewise the case with a single matter, and both $\E$ and $\Phi_2$ grow exponentially inside the horizon. When $\rho_1 \gg \rho_2$, the values of the tachyonic mass responsible for that instability does not depend on $w_2$ and actually coincides with the one found in the case where $\rho_2=0$~\cite{uspert}. In region ({\bf C}) both the matter sectors are important. While, the Bardeen potentials $\Phi_{1,\,2}$ are stable, the purely scalar gravitational field $\E= E_1-E_2$ (see Appendix \ref{pert-app}) that involves both metrics has early time instabilities. Finally, the region ({\bf D}), characterized by very large values of $\xi$, already at the level of background, spoils early time standard FRW cosmology. 
Spanning the whole range of $\xi$ compatible with a standard early-time cosmology, when $m^2 M_{pl}^2$ is of the order of the present cosmological constant, the bottom line is that massive bigravity has an intrinsic exponential instability. Looking at the behaviour of the matter contrast, which is the same as in GR, one may speculate that some sort of Vainshtein~\cite{vain} cosmological mechanism could take place, though here the trouble is with the perturbations and not with the background. Even if that happens, the price is steep: perturbation theory will fail both at Solar System and at cosmological scales. \vskip 1cm \no {\Large \bf Acknowledgements} \vskip .5cm \no M.C. thanks A. Emir G\"umr\"uk\c c\"uo\u glu for useful discussions and the {\it Fondazione Angelo Della Riccia} for financial support. L.P. thanks the Cosmology and Astroparticle Physics Group of the {\it University of Geneva} for hospitality and support. D.C. thanks { Negramaro} for their {\it Senza fiato} inspiring song. \begin{appendix} | 14 | 3 | 1403.5679
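A deliberately schematic toy -- not the bigravity perturbation system of the record above -- may help visualise why a tachyonic effective mass for a sub-horizon mode terminates the linear regime so quickly: for $u''+(k^2+m^2a^2)u=0$ with $a\propto\tau$ in conformal time, a wrong-sign $m^2$ turns oscillation into exponential growth as soon as $|m|\,a>k$.

\begin{verbatim}
# Schematic toy only: u'' + (k^2 + m2 * a(tau)^2) u = 0, with a(tau) ~ tau (radiation era).
# m2 > 0 gives oscillation; m2 < 0 (tachyonic) gives exponential growth once |m| a > k.
def evolve(m2, k=10.0, tau0=1.0, tau1=12.0, dt=1e-3):
    u, up, tau = 1.0, 0.0, tau0
    while tau < tau1:
        upp = -(k**2 + m2 * tau**2) * u   # toy mode equation
        up += dt * upp                    # semi-implicit Euler step
        u  += dt * up
        tau += dt
    return abs(u)

for m2, label in [(+4.0, "healthy (m^2 > 0)"), (-4.0, "tachyonic (m^2 < 0)")]:
    print(f"{label:22s} |u|(tau=12) ~ {evolve(m2):.2e}")
\end{verbatim}

The tachyonic case grows by tens of e-folds over a modest range of conformal time, which is the qualitative behaviour found above for the extra gravitational scalar $\E$.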
1403 | 1403.4096_arXiv.txt | {} {The main aim of the present work is to derive an empirical mass-loss (ML) law for Population II stars in first and second ascent red giant branches.} {We used the Spitzer InfraRed Array Camera (IRAC) photometry obtained in the 3.6--8\,\micron range of a carefully chosen sample of 15 Galactic globular clusters spanning the entire metallicity range and sampling the vast zoology of horizontal branch (HB) morphologies. We complemented the IRAC photometry with near-infrared data to build suitable color-magnitude and color-color diagrams and identify mass-losing giant stars.} {We find that while the majority of stars show colors typical of cool giants, some stars show an excess of mid-infrared light that is larger than expected from their photospheric emission and that is plausibly due to dust formation in mass flowing from them. For these stars, we estimate dust and total (gas + dust) ML rates and timescales. We finally calibrate an empirical ML law for Population II red and asymptotic giant branch stars with varying metallicity. We find that at a given red giant branch luminosity only a fraction of the stars are losing mass. From this, we conclude that ML is episodic and is active only a fraction of the time, which we define as the duty cycle. The fraction of mass-losing stars increases by increasing the stellar luminosity and metallicity. The ML rate, as estimated from reasonable assumptions for the gas-to-dust ratio and expansion velocity, depends on metallicity and slowly increases with decreasing metallicity. In contrast, the duty cycle increases with increasing metallicity, with the net result that total ML increases moderately with increasing metallicity, about 0.1\,\msun\ every dex in [Fe/H]. For Population II asymptotic giant branch stars, we estimate a total ML of $\le 0.1$\,\msun, nearly constant with varying metallicity. } {} | Mass loss (ML) affects all stages of stellar evolution and its parametrization remains a vexing problem in any modeling, since satisfactory empirical determinations as well as a comprehensive physical description of the involved processes are still lacking. This is especially true for Population II red giant branch (RGB) and asymptotic giant branch (AGB) stars. The astrophysical impact of ML in Population II giants is huge and affects not only stellar evolution modeling, but also related subjects, like, for example, the UV excess in ellipticals or the interaction between the cool intracluster medium and hot halo gas. There is a great deal of indirect, but quantitative evidence for ML during the RGB evolution, namely the horizontal branch (HB) morphology and the 2nd parameter problem, the pulsational properties of RR Lyrae, the absence of AGB stars significantly brighter than the RGB tip, and the masses of white dwarfs (WDs) in Galactic globular clusters (GCs) \citep[see, e.g.,][]{roo73,ffp75,ffp76,ren77,ffp93,fer98,cru96,han05,kal07,cat09}. On the contrary, there is no empirical ML law directly calibrated on Population II giants with varying metallicity and only a few estimates of ML for giants on the brightest portion of the RGB and AGB exist. As a consequence, ML timescales, driving mechanisms, dependence on stellar parameters, and metallicity are still open issues. There is little theoretical or observational guidance on how to incorporate ML into models. With no better recipe, models of stellar evolution incorporate ML by using analytical ML formulae calibrated on bright Population I giants. 
The first and most used of these is the \citet{rei75a,rei75b} formula, extrapolated toward lower luminosity and introducing a free parameter $\eta$ (typically equal to 0.3) to account for a somewhat less efficient ML along the RGB. A few other formulae, which are variants of the Reimers formula, have been proposed in the subsequent years \citep[see, e.g.,][]{mul78,gol79,jud91}. More recently, \citet{cat00} revised these formulae by using a somewhat larger database of stars than in previous studies, but still amounting to 20--30 giants only, the majority being AGB stars. \citet{sc05} propose a new semi-empirical formula that explicitly includes a dependence from all the stellar parameters. Further advances clearly require empirical estimates of ML rates in low-mass giants along the entire RGB and AGB extension. There are two major diagnostics of ML in giant stars: the detection of outflow motions in the outer regions of the stellar atmosphere or the detection of circumstellar (CS) envelopes at much larger distances from the star. After the pioneering work by \citet{rei75a,rei75b}, the systematic investigation of chromospheric lines in giants stars with possible emission wings started in the 1980s. \citet{gra83,cac83,gra84} measured H$\alpha$ emission in old, bright giants near the RGB tip, members of Galactic globular and open clusters. They found H$\alpha$ emission in a significant fraction of them and, by using the simple recombination model by \citet{cohen76}, they estimated average $ dM/dt \approx 10^{-8} M_{\odot}\,{\rm yr^{-1}}$ ML rates. However, \citet{dup84} and \citet{dup86} argued that the H$\alpha$ wings could naturally arise in a static stellar chromospheres. Other authors \citep[e.g.,][]{peterson81,peterson82,dup92,dup94,lyons96,smith04,cac04,mau06,vie11} investigated the possible presence of profile asymmetries and coreshifts in a large number of chromospheric lines, by means of high resolution spectroscopy over a wide spectral range, from UV (MgII h,k $\lambda$2800 \AA) to optical (CaII K, NaI D, H$\alpha$) and IR (HeI $\lambda$10830.3 \AA). These line asymmetries and coreshifts can be accounted for only by an active chromosphere and/or mass outflow, with typical velocity fields of 10--20 km/s. The difficulty of converting the chromospheric line diagnostics into ML rates is certainly related to modeling uncertainties, for example because of the lack of any detailed knowledge of the structure and excitation mechanism of the wind region. However, it is also clear that the outflow region traced by the chromospheric lines is still too close to the star, to sample the bulk of the mass lost, likely accumulated at larger distances. Hence, the chromospheric line method seems more effective in tracing the region of wind formation and acceleration, rather than most of the outflow. Finally, it must be recalled that even with 8m-class telescopes it is at best expensive and often impossible to obtain high-resolution, high S/N spectra of Population II giants along the entire RGB extension. A CS envelope around a cool giant can be detected by measuring IR dust emission, linear polarization, microwave CO emission and radio OH masers. However, CS envelopes of low-mass giants have intrinsically low surface brightness. Far IR and radio receivers have neither sufficient spatial resolution nor sensitivity to study Population II CS envelopes in dense stellar fields. Linear polarization, intrinsically well below 1\%, is also hardly measurable. 
Hence, array photometry in the 3--20\,\micron region remains the most effective way to detect Population II CS envelopes. Mid-IR observations have the advantage of sampling an outflowing gas fairly far from the star (typically, tens/hundreds stellar radii). Such gas left the star a few decades previously, hence the inferred ML rate is also smoothed over such a timescale. In the late 1980s, the first measurements of dust excess in Galactic GC giants by means of mid-IR photometry from the ground \citep{frog88} and with IRAS \citep{gil88,ori96} became available, although the spatial resolution of these detectors was insufficient to properly resolve most of the stars. A decade later, the Infrared Space Observatory (ISO) satellite allowed new observations, but was still limited in spatial resolution and sensitivity. A few bright AGB stars in 47~Tuc have been measured by \citet{ram01}, finding dust excess in two objects only. Our group performed a deep survey with the ISO Infrared Camera ISOCAM of six massive GCs \citep{ori02}, namely 47~Tuc, NGC~362, $\omega$ Cen, NGC~6388, M~15 and M~54, in the 10\,\micron window. From a combined physical and statistical analysis, our ISOCAM study provided ML rates and frequency for some giants near the tip \citep[see also][]{ori07}. However, the small sample of observed giants and the limited capabilities of ISOCAM allowed us to reach only weak conclusions on the ML dependence on luminosity, metallicity, and HB morphology. The advent of Spitzer with its mid-InfraRed Array Camera (IRAC) has opened a new window in the study of CS envelopes around Population II giants. Indeed, the IRAC bands between 3.6 and 8\,\micron are effective in detecting warm dust with spatial resolution good enough to resolve a large fraction of the GC giants. By using the $(3.6-8)$ Spitzer-IRAC color as a diagnostic, dust excess has been detected around some of the brightest giants in $\omega$ Cen, M~15, NGC~362 and 47~Tuc \citep{boy06,boy08,boy09,boy10}. In Cycle 2 (program ID \#20298), our group was granted 26\,hr of Spitzer-IRAC observing time to map 17 Galactic GCs down to the HB level. We combined Spitzer-IRAC photometry with high-resolution near-IR photometry from the ground and used (K-IRAC) colors as diagnostics of possible circumstellar dust excess. Results for 47~Tuc have been published in \citet{ori07,ori10}, while those for the complex stellar system $\omega$ Cen will be presented in a forthcoming paper. Here we present the photometric analysis for the remaining 15 GCs in our sample, we discuss ML rates and duty cycles in Population~II giants and we derive an empirical law of ML for Population II giants with varying metallicity. | We have inspected the near- and mid-infrared color-magnitude and color-color diagrams of a carefully chosen sample of 15 Galactic GCs spanning the entire metallicity range from about one hundredth up to almost solar and, for a given metallicity, with different HB morphology. All GCs, including the most metal-poor ones, have RGB and AGB giant stars with color excess, plausibly due to dust formation in mass flowing from them. Such dusty giants are detected down to M$_{bol}\le -1.5$ at all metallicities and down to M$_{bol}\approx$0 in the most metal-rich GCs. We find that the fractional number of giants stars with color excess increases towards higher luminosities and metallicities. 
By modeling the mid-infrared color excess of our sample of GC giants, we are able to derive ML rates in a representative sample of Population II RGB and AGB stars with varying metallicity. At a given $M_{\rm bol}$ only a fraction of stars are losing mass\footnote{This evidence is in agreement with the consideration that it is impossible for a low-mass ($\approx 0.8\, M_{\odot}$) giant to lose mass at the estimated rates (see Sect.~\ref{results}) during the entire time of its ascent of the RGB and AGB, simply because it would eject an amount of gas exceeding its total mass.}. From this, we conclude that the ML is episodic. The observed fraction of dusty giants gives the time that the ML is ``on.'' Combining this duty cycle with the ML rates yields the total ML. In the following subsections, we summarize our findings about ML and its possible dependence on the metallicity and HB morphology of the parent cluster. \subsection{Mass loss and metallicity} Our estimates of ML in Population II RGB stars indicate that ML depends only moderately on metallicity. Indeed, ML rates slowly decrease with increasing metallicity, while duty cycles more rapidly increase with increasing metallicity, with the net result that total ML moderately increases with increasing metallicity, about 0.08\,\msun\ every dex in [Fe/H]. By using an indirect method based on the estimate of stellar masses on the HB, \citet{gra10} find a similar dependence of total ML on metallicity. The ML rates in Population II AGB stars show a similar dependence on metallicity as RGB stars, while duty cycles increase more slowly with it (see Sect.~\ref{fnum}). We estimate $\le 0.1$\,\msun\ of total ML on the AGB, nearly constant with varying metallicity. The fact that ML rates in both Population II RGB and AGB stars seem to increase with decreasing metallicity, although rather slowly, would suggest that the outflow cannot be mainly driven by mechanisms involving opacity from metals. \begin{figure}[] \begin{center} \includegraphics[width=9cm]{fig8.jpg} \end{center} \caption{Global histograms of $\Delta$[Log(ML rates)] (${\rm measured-best fit}$) for metal-rich (left panel) and metal intermediate/poor (right panel) GCs, grouped in two subsamples, namely with normal (empty histograms) and extended (gray histograms) HB. } \label{isto} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=9cm]{fig9.jpg} \end{center} \caption{Cumulative distributions of $\Delta$[log(ML rates) (${\rm measured-best fit}$) for metal-rich (gray lines) and metal intermediate/poor (black lines) GCs, grouped in two subsamples, namely with normal (solid lines) and extended (dashed lines) HB.} \label{cum} \end{figure} \subsection{Mass loss rates and HB morphology} The last generation of HST color-magnitude diagrams in the optical \citep[see, e.g.,][]{ric97,dot10} and UV \citep[see, e.g.,][]{fer98,dal13a,dal13b} prove that the HB morphology of GCs is even more complex than previously believed and several 2nd parameters can be invoked \citep[see, e.g.,][]{roo73,ffp75,ffp76,ren77,ffp93,cru96,cat09,dot10,gra10}. Some quantitative investigations of the HB morphology of the massive GCs NGC~2808, NGC~6388, and NGC~6441 HB, were recently performed \citep{bus07,dal08,bro10,dal11}. A significant population of blue, extreme blue, and blue hook HB stars (hereafter BHB, EHB and BHk, respectively) was found. 
For example, in the metal-intermediate GC NGC~2808 \citet{dal11} account for 39\% BHB, 11\% EHB, and 9\% BHk, while in the metal-rich GC NGC~6388 \citet{dal08} account for 15\% BHB, 2\% EHB, and 2\% BHk. In these GCs, HB models with normal He abundance ($Y\approx0.24$) and ML can account for red HB stars. On the contrary the hotter BHB and EHB could be explained by a higher He content. BHk stars are extremely hot HB stars with a significant spread in luminosity, likely due to a delayed, hot He-flash. It has been suggested that these stars could have experienced an enhanced ML during the RGB evolution \citep{cas09,moe07,dal11}, or alternatively, they could have an extremely large He content ($Y>0.5$) \citep{dan08} due to extra mixing processes undergone during their RGB phase. In the following, we briefly explore whether and in which terms our results on ML could eventually provide additional constraints to these working scenarios. We computed the ratio between the measured ML rates in each RGB star and the corresponding best-fit value or equivalently the difference of their logarithmic values. We then constructed global histograms of $\Delta$log(ML rates) for metal-rich ([Fe/H]$>-1.0$) and metal-intermediate/poor ([Fe/H]$\le-1.0$) clusters, grouped in two sub-samples, namely those with normal and extended HB, as shown in Figure~\ref{isto}. The histograms of GCs with normal HBs have Gaussian dispersion $\sigma \approx 0.2$ dex (rich) and $\sigma \approx 0.12$ dex (metal-intermediate/poor). A larger dispersion of $\Delta$log(ML rates) in metal-rich GCs is not surprising, given that these stars have a larger {\it turnoff} mass and a wider range of possible masses in the red part of the HB. The histograms of GCs with extended HBs have Gaussian dispersion similar to those of GCs with normal HBs, but have a tail (that is an excess of stars) toward higher ML rates. Independent of metallicity, the bulk ($>90$\%) of RGB stars in GCs with normal HB have rates within a factor of two from the average value. In GCs with extended HB about 15\% of RGB stars have ML rates in excess by a factor of two (i.e., by 2--3$\sigma$) from the average value. We also computed cumulative distributions $\Delta$[log(ML rates)] for the ML rates, as shown in Figure~\ref{cum}. The cumulative distribution of $\Delta$[log(ML rates)] in metal-rich GCs with normal HB is more bent than the corresponding distribution for metal-intermediate/poor GCs, in agreement with the larger Gaussian dispersion. The cumulative distributions of $\Delta$[log(ML rates)] in GCs with extended HB are also more bent (particularly for metal-poor GCs) and shifted toward higher ML rates, compared to those GCs with normal HBs. The KS-tests give probabilities of $\approx 5$\% (metal-rich) and $<0.1$\% (metal-intermediate/poor) that normal and extended HB distributions be extracted from the same parent population. By comparing the $\approx 15$\% estimated percentage of stars with ML rates in excess by a factor of two from the average values with the HB population ratios in NGC6388 (metal-rich) and NGC2808 (metal intermediate) reported above, we can speculate that: (1) metal-rich GC RGB stars with ML rates within a factor of two from the average value will probably evolve as red clump stars or moderate BHB, depending on their actual ML rate, duty cycle, and He content, while those with the highest ML rates will likely evolve as hot BHB, EHB, or BHs stars. 
It is also possible that those stars with extreme ML rates will move directly to the WD cooling sequence, without experiencing any He-flash; (2) metal-intermediate/poor GC RGB stars with ML rates within a factor of two from the average value can evolve either as red or BHB and EHB stars, depending on their actual ML rate, duty cycle, and He content. Those RGB stars with the highest ML rates (in excess by a factor of two from the average value) can be precursors of the hottest EHB and BHk stars. In practice, for a given temperature on the HB, there can be a certain level of degeneracy between ML and He content, the two parameters being somehow anti-correlated. Indeed, according to evolutionary tracks \citep{pie06} with normal and enhanced He content, for equal age and metallicity, a star with higher He content has a smaller {\it Turnoff} mass compared to a star with normal He, hence the former should need less ML than the latter to reach a given temperature on the HB. | 14 | 3 | 1403.4096 |
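For reference, the Reimers prescription discussed in the introduction of the record above is not written out there; in solar units it is usually quoted as $\dot M\simeq4\times10^{-13}\,\eta\,L\,R/M\;M_\odot\,{\rm yr^{-1}}$. A minimal sketch, with stellar parameters for a near-tip giant inserted purely as placeholders:

\begin{verbatim}
def reimers_mdot(L, R, M, eta=0.3):
    """Reimers (1975) mass-loss rate in Msun/yr; L, R, M in solar units."""
    return 4.0e-13 * eta * L * R / M

# placeholder parameters for a Population II giant near the RGB tip (not fitted values)
print(f"Mdot ~ {reimers_mdot(L=2.0e3, R=1.0e2, M=0.8):.1e} Msun/yr")
\end{verbatim}

With $\eta=0.3$ this gives $\sim3\times10^{-8}\,M_\odot\,{\rm yr^{-1}}$, of the same order as the chromospheric estimates quoted in the introduction, which is why an empirical calibration of the rate, duty cycle and metallicity dependence -- the subject of the record above -- is needed rather than a fixed analytic law.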
1403 | 1403.4119_arXiv.txt | The classical dwarf spheroidals (dSphs) provide a critical test for Modified Newtonian Dynamics (MOND) because they are observable satellite galactic systems with low internal accelerations and low, but periodically varying, external acceleration. This varying external gravitational field is not commonly found acting on systems with low internal acceleration. Using Jeans modelling, Carina in particular has been demonstrated to require a V-band mass-to-light ratio greater than 5, which is the nominal upper limit for an ancient stellar population. We run MOND N-body simulations of a Carina-like dSph orbiting the Milky Way to test if dSphs in MOND are stable to tidal forces over the Hubble time and if those same tidal forces artificially inflate their velocity dispersions and therefore their apparent mass-to-light ratio. We run many simulations with various initial total masses for Carina, and Galactocentric orbits (consistent with proper motions), and compare the simulation line of sight velocity dispersions (losVDs) with the observed losVDs of Walker et al. (2007). We find that the dSphs are stable, but that the tidal forces are not conducive to artificially inflating the losVDs. Furthermore, the range of mass-to-light ratios that best reproduces the observed line of sight velocity dispersions of Carina is 5.3 to 5.7 and circular orbits are preferred to plunging orbits. Therefore, some tension still exists between the required mass-to-light ratio for the Carina dSph in MOND and those expected from stellar population synthesis models. It remains to be seen whether a careful treatment of the binary population or triaxiality might reduce this tension. | \protect\label{sec:intr} The classical dwarf spheroidal galaxies of the Milky Way are eight low surface brightness galaxies that are currently at distances between 60 and 250~kpc. They have total luminosities in the V-band ranging from $L_V\sim4\times10^5$ to $1.7\times10^7\lsun$ (\citealt{mateo98}) and sizes of order a kiloparsec. For comparison, the Milky Way luminosity and size are $L_V\sim 6\times10^{10}\lsun$ (\citealt{mcgaugh08}) and $\sim30~kpc$. Clearly, such puny luminosities within relatively large volumes earns the dwarf spheroidals (dSphs) their low surface brightness moniker and also puts them in a very interesting category since low surface brightness galaxies typically have large dark matter (DM) components. Being spheroidal systems, information about their dynamical mass can be obtained from Jeans modelling of their stellar velocity dispersions (see \citealt{mamon10} for more information). For this reason, \cite{walker07} obtained hundreds of spectra of probable member stars for each of the dSphs, sampled over their full projected areas. Photometrically and spectroscopically identified interloper stars (non members, typically foreground stars) were rejected and each dSph's projected velocity dispersion, as a function of projected radius, was computed. They then performed Jeans modelling of each dSph, which employs the observed stellar surface brightness profile and fits for the unknown DM profile, by comparing modelled with observed projected velocity dispersions. This blatantly showed that the dSphs are some of the most DM dominated (in Newtonian dynamics) galaxies in the Universe. Although the dynamics of the dSphs can be easily explained by the presence of DM, there are other peculiarities related to their phase-space distribution around the Milky Way which makes one question this conclusion. 
The major open questions relating to dSphs are comprehensively reviewed in \cite{walker14}, but we restate them here. First of all, from comparison with cold dark matter (CDM) only cosmological simulations (like those of \citealt{klypin99,moore99}) one would naively expect a greater number of these satellite galaxies within 250~kpc of the Milky Way. Certain authors like \cite{benson02,munoz09,maccio10,lihelmi10} have suggested that this lack of satellites may be due to star formation inefficiencies due to re-ionisation and supernova feedback in these lower mass CDM halos which only enables a fraction of all halos to form stars. However, this fails to address the problem noted by \cite{boylan12} that associating the dSphs with the most massive Milky Way subhalos, as we expect in these models, is incompatible with the relatively low masses and densities of the measured DM halos. The other more pressing concern is that the dSphs are not isotropically distributed around the Milky Way. Rather, they are distributed as a great rotationally-supported disk that is surprisingly thin, with an RMS thickness of 10-30~kpc (see \citealt{metz08} and the detailed review of \citealt{kroupa10}), which is substantially smaller than typical RMS thicknesses in nearby groups of galaxies. If it were an isolated incident, this would be less troubling, but \cite{ibata13} have recently shown a similar structure in the satellite galaxy distribution surrounding the M31 galaxy with an RMS thickness of less than 14.1~kpc (with 99\% confidence) to which half the satellites belong. Furthermore, \cite{chiboucas13} have recently identified a flattened distribution of satellites around M81. These satellite distributions have been shown to be highly unlikely to arise from CDM cosmological simulations, although once in place they could naturally be stable (\citealt{adk11,pawlowski12,deason11,bowden13}). On the other hand, following a merger or a flyby (e.g., \citealt{zhao13}) between two galaxies, with mass ratios between 1:1 and 1:4, the probability of forming such a polar disk of satellites could easily reach 50\% (\citealt{pawlowski12}). Separately, there are observations of dwarf galaxies forming out of the tidal debris produced from a wet galactic merger (\citealt{bournaud07}), which may demonstrate evidence for MOND (\citealt{gentile07a,milgrom07b}). Returning to the dSphs, if they are in fact tidally formed they should not have large DM abundances. Furthermore, they have very little neutral hydrogen (\citealt{mateo98}) and no significant emission from molecular gas. However, these eight classical dSphs do require large DM abundances when interpreted with Newtonian dynamics, and they have a peculiar orbital distribution that may be difficult to explain within the CDM framework. Therefore, it is worth investigating their dynamics in an alternative theory of gravity that can, in principal, be consistent with the merger scenario and the large velocity dispersions without galactic DM. One such alternative is Modified Newtonian Dynamics (MOND; \citealt{milgrom83a} and see \citealt{famaey12} for a thorough review). \cite{brada00b} used a particle-mesh N-body solver to study the influence of the Milky Way on the dSphs. Their work preceded the high quality velocity dispersion data, but demonstrated that there are orbital regions where dSphs can orbit with adiabatic (reversible) changes to their velocity dispersion and density profiles. 
In addition, there are non-adiabatic regions where the rapid change of the external gravitational field of the Milky Way disturbs the density profile at pericentre and this does not recover by the time the dSph returns to apocentre. Finally, there are tidal regions where mass will be stripped from the dSphs at pericentre. Using the data of \cite{walker07}: \cite{angus08} and \cite{serra10} performed Jeans modelling in MOND. There, the goal was to isolate the two free parameters: the mass-to-light ratio of the stellar population and the velocity anisotropy. Velocity anisotropy is the {\it a priori} unknown relationship between the probability of radial and tangential stellar orbits within the dSph. This can also be used as a free parameter in the context of DM halo fitting, but is somewhat redundant given the freedom of possible DM halo profiles. In MOND, it is an essential ingredient to alter the shape of the projected velocity dispersion profile, whereas all the mass-to-light ratio can do is raise or lower the amplitude of the velocity dispersions. \cite{angus08} found that the four dSphs with the highest surface brightness (highest internal gravities) had reasonable mass-to-light ratios, but the other four required mass-to-light ratios that were larger than the expected range of 1 to 5 in the V-band found from stellar population modelling (\citealt{maraston05}). Much simulation work has been done in this vein in the standard paradigm (see e.g. \citealt{kroupa97,klessen03,read06,penarrubia09,klimentowski09}). More specifically, the work of \cite{munoz08} focused on a very similar thesis as ours, which was whether tidally disturbed mass-follows-light models of a DM dominated Carina dSph are consistent with the observed projected surface density and projected velocity dispersion profile. Those authors found that there were indeed combinations of mass and orbital parameters that could faithfully reproduce the Carina dSph. \cite{sanchez07} investigated the likelihood of survival for the dSphs in MOND after successive orbits over a Hubble time. They found that only Sextans was likely to dissolve in less than a few Gyr, but that the deduced dynamical mass-to-light ratios of Ursa Minor and Draco (out of the eight classical dSphs) were too large to be consistent with only the stellar populations. They also showed, based on their current positions, that tidal stirring might be an important consideration for Sextans, Sculptor and Ursa Minor, but not Carina. Other relevant work was carried out by \cite{sanchez10} and \cite{lora13} who looked at the importance of cold kinematic substructures that are found in the Sextans and Ursa Minor dSphs. It was shown their longevity can be used to discriminate between modified gravity and CDM. Given the separation in surface brightness between dSphs that satisfied MOND and those that did not, it was suggested in \cite{angus08} that the latter four dSphs may be subject to tidal forces that produce tidally unbound interloper stars and inflate the velocities of the bound stars. Our aim here is to test this hypothesis by running high resolution MOND N-body simulations of satellite galaxies orbiting the Milky Way and comparing the simulated projected velocity dispersions with the observed ones. Insodoing we also hope to elucidate the zones of possible orbits open to the satellites without being torn to shreds by the Milky Way. This is an essential sanity check for when high accuracy proper motions become available. 
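To put rough numbers on why Carina sits in the interesting MOND regime -- low internal acceleration with a comparable, time-varying external field -- the sketch below solves the spherical MOND relation $g\,\mu(g/a_0)=g_N$ with the ``simple'' interpolating function $\mu(x)=x/(1+x)$; the mass, radius, Milky Way rotation speed and Galactocentric distance are round placeholder values, not the fitted ones used later in this work.

\begin{verbatim}
import numpy as np

G, a0    = 6.674e-11, 1.2e-10          # SI; a0 is the MOND acceleration scale
Msun, pc = 1.989e30, 3.086e16

M, r = 2.5e6 * Msun, 300.0 * pc        # placeholder mass and radius for a Carina-like dSph
gN   = G * M / r**2                    # internal Newtonian acceleration

# 'simple' mu(x) = x/(1+x):  g mu(g/a0) = gN  =>  g^2 - gN g - gN a0 = 0
g = 0.5 * (gN + np.sqrt(gN**2 + 4.0 * gN * a0))

v_MW, D = 1.8e5, 100.0e3 * pc          # placeholder MW rotation speed (m/s) and distance
g_ext   = v_MW**2 / D                  # external field from the Milky Way at ~100 kpc

print(f"g_N   = {gN:.2e} m/s^2  ({gN/a0:.3f} a0) -> deep-MOND regime")
print(f"g     = {g:.2e} m/s^2, boost over Newton ~ {g/gN:.1f}")
print(f"g_ext = {g_ext:.2e} m/s^2 ({g_ext/a0:.2f} a0): same order as g, so the EFE matters")
print(f"characteristic circular speed sqrt(g r) ~ {np.sqrt(g * r) / 1e3:.1f} km/s")
\end{verbatim}

With these placeholders the internal and external accelerations are both a few per cent of $a_0$ and of the same order as each other, which is precisely the regime where the external field effect, and its variation along the orbit, must be handled with full N-body simulations rather than with the isolated MOND formula.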
We focus on the Carina dSph because out of the four least luminous classical dSphs it has a well measured surface brightness profile, large numbers of stellar line of sight velocities for Jeans modelling and relatively accurately measured proper motions. In Section 2 we present the Jeans analysis, in Section 3 we discuss how to incorporate the external field and the setup of our simulations. In Section 4 we compare simulated with observed projected velocity dispersions, in Section 5 we give our results, and finally in Section 6 we draw our conclusions. | Here we have run a suite of MOND N-body simulations of a dSph like Carina with various total masses ($m=$1.32, 2.2, 2.64, 3.08 and 3.96$\times10^6\msun$) and orbital paths around the Milky Way. We have shown that they are stable and long lived on nearly circular orbits at 100~kpc regardless of mass ($\ge m=$1.32$\times10^6\msun$) and even on orbits that plunge to 50~kpc. However, the model most likely to give a good fit to the observed projected velocity dispersions is one with an initial $m=$2.64$\times10^6\msun$, which means a $M/L$ in the range of 5.4 and 5.7 after two orbits ($\sim 5 Gyr$). The more circular the orbit, the less disturbed the internal velocity distribution is. This is important because the observations require substantially negative (tangentially biased) velocity anisotropies. After plunging orbits, the velocity anisotropy becomes slightly more radially biased, reducing agreement with the observations. Considering that a $M/L$ in the range of 5.4 and 5.7 is potentially at odds with stellar populations synthesis models, we considered a model with $m=$2.2$\times10^6\msun$, which after a single orbit corresponds to a $M/L$ between 4.5 and 4.7. This model has a likelihood of matching the observations that is roughly 3.5 times smaller than the model with $M/L$ between 5.4 and 5.7. This range of mass-to-light ratios is slightly above those found from basic Jeans analysis because the isopotential contours are stretched (see e.g. \citealt{milgrom86,zhaot06,wu08}) in the direction away from the Milky Way (which coincides here with our line of sight) due to the external field effect. This leads to a stretching of the dSph along the line of sight, relative to the plane perpendicular, and a reduction of the velocity dispersions. As for the compatibility of different orbits, it would appear that after two orbits with initial $V_y=125~\kms$, the lower masses $m=$2.2 and 2.64$\times10^6\msun$ are not capable of generating a sizable fraction of good fits. $m=$2.2$\times10^6\msun$ would give less than 0.001, $m=$2.64$\times10^6\msun$ less than 0.01, but $m=$3.08$\times10^6\msun$ would produce roughly 0.03. This is because mass has been stripped leaving the true $M/L$ after two orbits to be somewhere between 5.9 and 6.2. Using $m=$2.64$\times10^6\msun$ after only one orbit with $V_y=125~\kms$ gives a fraction of good fits of only 0.015 with a true $M/L$ between 5.1 and 5.4. So the best fit $M/L$ for $V_y=125~\kms$ is likely somewhere between these two limits. However, it will probably still be somewhat less likely than the more circular orbits since the tides adversely affect the velocity anisotropy. For the intermediate orbit with $V_y=150~\kms$, $m=$2.64$\times10^6\msun$ leads to a fraction of 0.04 good fits after three full orbits with a true $M/L$ of $\sim$5.3-5.4. 
Therefore, for $V_y \ge 125~\kms$ the preferred $M/L$ remains fairly constant (5.3-5.7), but obviously on the more plunging orbits mass is more rapidly stripped and thus it is required that the current $M/L$ is in this range, not the initial one. A parallel observation is that the fraction of stripped mass during a period of almost half the age of the Universe is not more than half on any of the simulated orbits. Therefore, it must be the case that the dSph was formed with a mass very close to its current one, and this is likely also true in the CDM paradigm. Although the preferred $M/L$ is between 5.3 and 5.7, there is still a reasonable probability that the $M/L$ is lower than 5. From the various orbits it would seem that, even on a near-circular orbit, $M/L\sim4.8$ is more than three times less likely than the best model (panel (c) of Fig~\ref{fig:ml5}, after one orbit). Panel (a) of Fig~\ref{fig:ml5} suggests that on an orbit with a 50~kpc pericentre, an $M/L\sim4.5$ has an insignificant probability of producing a good fit. A larger sample of stellar line-of-sight velocities might reduce the errors here enough to distinguish between different mass-to-light ratios. Therefore, higher-precision proper motions, larger samples of stars, ultra-precise photometry for the total luminosity, and more sophisticated and reliable stellar population synthesis models, as well as a full-fledged treatment of binaries for dwarf spheroidals, would be enormously useful for future studies. Another factor that should be built into future studies of Carina is the possibility of triaxiality in the 3D stellar distribution. This must be an important factor because all dSph surface brightness distributions are observed to be moderately elliptical (\citealt{irwinhatz}). | 14 | 3 | 1403.4119 |
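The external field effect invoked in the conclusions above can be illustrated with a common one-dimensional estimate (a simplification of the modified Poisson equation actually solved in such simulations, and not necessarily the authors' exact prescription): for a satellite with internal gravity $\mathbf{g}_{\rm int}$ embedded in the Milky Way field $\mathbf{g}_{\rm ext}$, the MOND relation is applied to the total field,
\begin{equation}
\mu\!\left(\frac{|\mathbf{g}_{\rm int}+\mathbf{g}_{\rm ext}|}{a_0}\right)\left(\mathbf{g}_{\rm int}+\mathbf{g}_{\rm ext}\right)=\mathbf{g}^{N}_{\rm int}+\mathbf{g}^{N}_{\rm ext}, \qquad a_0\simeq1.2\times10^{-10}\,{\rm m\,s^{-2}},
\end{equation}
where $\mu$ is the MOND interpolating function and the superscript $N$ denotes the Newtonian fields. When $g_{\rm int}\ll g_{\rm ext}\ll a_0$, the internal dynamics is quasi-Newtonian with an effective gravitational constant $G_{\rm eff}\simeq G/\mu(g_{\rm ext}/a_0)$, and the isopotentials are mildly stretched along the external field direction, which is the origin of the line-of-sight elongation and lowered velocity dispersions described above.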
1403 | 1403.5868_arXiv.txt | HARPS and {\it Kepler} results indicate that half of solar-type stars host planets with periods $P<100$~d and masses $M<30$~$M_{\oplus}$. These super Earth systems are compact and dynamically cold. Here we investigate the stability of the super Earth system around the K-dwarf HD40307. It could host up to six planets, with one in the habitable zone. We analyse the system's stability using numerical simulations from initial conditions within the observational uncertainties. The most stable solution deviates 3.1$\sigma$ from the published value, with planets e and f not in resonance and planets b and c apsidally aligned. We study the habitability of the outer planet through the yearly-averaged insolation and black-body temperature at the pole. Both undergo large variations because of its high eccentricity and are much more intense than on Earth. The insolation variations are precession dominated with periods of 40~kyr and 102~kyr for precession and obliquity if the rotation period is 3~d. A rotation period of about 1.5~d could cause extreme obliquity variations because of capture in a Cassini state. For faster rotation rates the periods converge to 10~kyr and 20~kyr. The large uncertainty in the precession period does not change the overall outcome. | Since the discovery of the first extrasolar planet in 1995 (Mayor \& Queloz, 1995) there has been a surge in research in planetary science and in the detection of new planets, with the HARPS survey (Mayor et al., 2003) and NASA's {\it Kepler} mission leading the field. The high number of detected and candidate planets allows for statistical studies and several trends have emerged, which are shared among both the HARPS and Kepler data (Figuera et al., 2012). Some of these include: \begin{itemize} \item Approximately half of all solar-type stars contain planets with a projected mass $m_p \sin I <30$~Earth masses ($M_\oplus$) (Borucki et al., 2011; Mayor et al., 2011; Chiang \& Laughlin, 2013), where $m_p$ is the planet's mass, and $I$ is the angle between the planet's orbit and the observer. \item Planets with a short orbital period tend to be of low ($<30$~$M_\oplus$) mass (Mayor et al., 2011; Batalha et al., 2013). Most of these have radii $R \in [1,4]$~Earth radii ($R_\oplus$) and masses between Earth's and Neptune's. These planets are often referred to as super Earths. In contrast, hot Jupiters are rare (Mayor et al., 2011). \item The number of planets increases with decreasing mass and/or radius (Howard et al., 2010; Howard et al., 2012) and approximately 23\% of stars have Earth-like close-in planets with periods $P<50$~days (d). \item Approximately 73\% of low-mass planets with periods shorter than 100~d reside in multiple systems (Mayor et al., 2011; Fang \& Margot, 2012), with only 26\% of these multiples containing a gas giant (Mayor et al., 2011). This suggests that super Earths form in clusters close to the parent star and are isolated from potential giant planets in the system. \item Systems of multiple super Earths on short periods tend to be compact (Fang \& Margot, 2012; Chiang \& Laughlin, 2013) and have low ($<3^\circ$) mutual inclinations (Fang \& Margot, 2012; Tremaine \& Dong, 2012) and most likely also low ($<0.2$) eccentricities (Mayor et al., 2011; Wu \& Lithwick, 2013). \item The period distribution is more or less random with some excesses just slight of the 3:2 and 2:1 mean motion resonances (Fabrycky et al., 2012). 
The near-resonance of some pairs has been attributed to tidal decay (Batygin \& Morbidelli, 2013; Lithwick \& Wu, 2012), though Petrovich et al. (2013) proposed an alternative scenario based on planet growth. \item Although still actively debated, the period distribution of exoplanets with short ($P<200$~d) periods suggests an in-situ formation scenario (Raymond et al., 2008; Hansen \& Murray, 2012; Chiang \& Laughlin, 2013) rather than formation farther out followed by migration (Lopez et al., 2012; Kley \& Nelson, 2012; Rein, 2012). An intermediate scenario in which planetary embryos migrate inwards followed by a giant impact stage (Ida \& Lin, 2010) may also work. \item The mutual spacing of most of these super Earths is between 5 and 30 Hill radii (Lissauer et al., 2011; Fabrycky et al., 2012), which encompasses the spacings of the Solar System's giant planets ($\sim$12 Hill radii) and terrestrial planets ($\sim$40 Hill radii). However, their proximity to the star requires their orbits to be dynamically cold to prevent orbit crossing. \item The Kepler catalogue contains a few confirmed super Earth planets in the habitable zone of their parent stars, with a further 20 candidates (Batalha et al., 2013). The habitable zone (HZ) is the region where radiation received by the planet from the star is enough for it to sustain liquid water under sufficient atmospheric pressure (Kasting et al., 1993; Kaltenegger \& Sasselov, 2011). \end{itemize} Thus, it seems the super Earth population resembles the regular satellite populations of the giant planets: both show a typical mass ratio of $m_p/M_* \sim 10^{-4.5}$ and regularly spaced, dynamically cold orbits. Some are far enough out to be in the habitable zone.\\ The regularity of the orbits and tight spacing provide a formidable challenge to theorists of planet formation and dynamicists alike. If the best-determined orbits are not entirely circular, determining whether these systems are dynamically stable while remaining within the observational uncertainties is challenging. The aim of this study is to analyse the stability of one such compact super Earth system: HD 40307. This system is interesting because one planet, HD 40307 g, may be in the habitable zone. Therefore we also study how the dynamics of the whole system affects the long-term habitability of planet g.\\ The term `habitable' encompasses many things, however, and thus we focus only on the long-term variations in the insolation caused by the dynamics of the whole system and the solid body response of planet g. On Earth, in addition to stellar activity, geological activity and temperature regulation through the carbonate-silicate cycle (Williams \& Kasting, 1997), the long-term climate is driven externally by the Milankovi\'{c} cycles (Milankovi\'{c}, 1941). Earth's orbit is perturbed by other planets, causing quasi-periodic variations in eccentricity and inclination on a time scale of 100~kyr. The Earth's obliquity is also affected and oscillates on a time scale of 41~kyr (e.g. Laskar et al., 1993). The combined effects of the variations in eccentricity, obliquity and precession angle constitute the Milankovi\'{c} cycles. These periods of the Milankovi\'{c} cycles are much shorter than the relaxation time of the carbonate-silicate feedback mechanism.\\ The perturbations of other planets cause these changes and so influence the insolation accordingly, driving the ice ages on the Earth (Imbrie \& Imbrie, 1980).
Small variations in eccentricity and obliquity most likely yield stable and favourable conditions for habitability (Atobe et al., 2004; Brasser et al., 2013).\\ We want to know what the dynamical properties of compact, close-in super Earth systems are and how the dynamics affects the habitability of potentially habitable planets. This paper is a proof of concept of how to determine the dynamical stability of a compact super Earth system, and of how the long-term insolation variation of any planets in the habitable zone depends on the dynamics of the whole system. Other effects, such as those captured by a general circulation model (GCM) of the planet, the effects of atmospheric heat transport and buffering, ice-albedo feedback, and carbon dioxide cloud formation, are not a part of this study but will be part of future projects.\\ This paper is organised as follows. The next section contains an overview of the HD 40307 planetary system. In Section 3 we describe our numerical methods. In Section 4 we summarise the theory of how orbital perturbations affect the obliquity and the black-body equilibrium temperature of the planet. This is followed by our results in Section 5. Section 6 focuses on the long-term climate cycles and Section 7 is reserved for a discussion. We present a summary and conclusions in Section 8. | We investigated the dynamical stability of the HD 40307 planetary system with the aid of numerical simulations. Once a stable solution was found, it was used to determine the long-term insolation variation of planet g, which is situated in the habitable zone of the star. We found that the most stable orbital solution of the whole system requires a 2.6$\sigma$ increase in the period of planet e. This places planet e outside of a 3:2 mean-motion resonance with planet f. It further requires a reduction in its eccentricity of 1.3$\sigma$.\\ The high eccentricity of planet b is the result of forcing from the other planets, mostly from planet c. Its own eccentricity eigenmode is most likely damped by tides and thus it is in apsidal alignment with planet c. The most stable configuration of the system requires some further reduction in the eccentricities of planets b and d. It is 3.1$\sigma$ from the nominal solution with a reduced $\chi^2 = 4.36$.\\ The Milankovi\'{c} cycles on planet g manifest themselves with periods similar to those on Earth, but the polar black-body temperature variations are much more intense than on Earth because of planet g's high eccentricity. For this reason we cautiously conclude that planet g may not be very habitable at high latitudes when the obliquity is low, thereby reducing its overall habitability. The high eccentricity could cause regular, intense ice ages and severe ocean level changes on a wet planet such as Earth, and the regular disappearance and reappearance of dry polar ice caps such as on Mars. While the periodicities are uncertain by factors of a few, the variation in the insolation is not, and thus the overall conclusion remains the same. If planet g formed with a fast rotation through a giant impact stage, the rotation likely would have remained fast, causing rapid precession and short periods between ice ages, as well as reducing heat transport from warmer to cooler regions (Williams, 1988). | 14 | 3 | 1403.5868 |
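For orientation, the sensitivity of the yearly-averaged polar insolation to obliquity and eccentricity discussed in the row above follows from the standard annual-mean expression (e.g. Ward, 1974), quoted here as a generic reference rather than the exact prescription used by those authors:
\begin{equation}
\langle S\rangle_{\rm pole}=\frac{S_0}{\pi}\,\frac{\sin\varepsilon}{\sqrt{1-e^2}}, \qquad S_0=\frac{L_*}{4\pi a^2},
\end{equation}
where $\varepsilon$ is the obliquity, $e$ the orbital eccentricity, $a$ the semimajor axis and $L_*$ the stellar luminosity. For Earth ($S_0\simeq1361$~W~m$^{-2}$, $\varepsilon\simeq23.4^\circ$, $e\simeq0.017$) this gives $\langle S\rangle_{\rm pole}\simeq172$~W~m$^{-2}$; oscillations in obliquity and a large eccentricity therefore modulate the polar insolation directly, which is why the high eccentricity of planet g leads to much stronger variations than on Earth.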
1403 | 1403.7789_arXiv.txt | In this paper we contrasted two cosmological perturbation theory formalisms, the \textit{1+3 covariant gauge invariant} and the \textit{gauge invariant}, by comparing the gauge invariant variables associated with the magnetic field that are defined in each approach. In the first part we give an introduction to each formalism assuming the presence of a magnetic field. We found that the gauge invariant quantities defined in the 1+3 covariant approach are related to spatial variations of the magnetic field (as defined in the gauge invariant formalism) between two nearby fundamental observers. This relation was computed by choosing the comoving gauge in the gauge invariant approach in a magnetized universe. Furthermore, we have derived the gauge transformations of the electromagnetic potentials in the gauge invariant approach, and Maxwell's equations have been written in terms of these potentials. | Cosmological perturbation theory has become a standard tool in modern cosmology, used to understand the formation of large-scale structure in the universe and to calculate the fluctuations in the Cosmic Microwave Background (CMB) \cite{Padmanabhan}. The first treatment of perturbation theory within General Relativity was developed by Lifshitz \cite{lifshitz}, where the evolution of structures in a perturbed Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) universe was addressed in the synchronous gauge. Later, a covariant approach to perturbation theory was formulated by Hawking \cite{Hawking} and followed by Olson \cite{Olson}, in which perturbations of the curvature were studied rather than perturbations of the metric variables. Then, based on early works by Gerlach and Sengupta \cite{Gerlach}, Bardeen \cite{bardeen} introduced a fully gauge invariant approach at first order in cosmological perturbation theory. In his work he built a set of gauge invariant quantities related to density perturbations, commonly known as the Bardeen potentials (see also Kodama \& Sasaki \cite{kodama} for an extensive review). \\ However, alternative representations of the previous formalisms kept appearing because of the gauge problem \cite{sach}. This issue arises in cosmological perturbation theory because the splitting of all metric and matter variables into a homogeneous and isotropic background space-time plus small deviations from that background is not unique. Basically, perturbations in any quantity are defined by choosing a correspondence between a fiducial background space-time and the physical universe. But, given the general covariance of perturbation theory, which states that there is no preferred correspondence between these space-times\footnote{The only restriction is that the perturbation be small with respect to its value in the background; even so, this does not specify the map in a unique way.}, a freedom in how to identify points between the two manifolds appears \cite{nakamura1}. This arbitrariness generates a residual degree of freedom, which implies that the perturbation variables might not have a physical interpretation. \\ Following the research mentioned above, two main formalisms have been developed to study the evolution of the matter variables and to deal with the gauge problem; both will be reviewed in this paper. The first is known as the \textit{1+3 covariant gauge invariant} approach, presented by Ellis \& Bruni \cite{ellis}. This approach is based on earlier works of Hawking and Stewart \& Walker \cite{steward1}.
The idea is to define variables covariantly in such a way that they vanish in the background; therefore, they can be considered gauge invariant under gauge transformations, in accordance with the Stewart-Walker lemma \cite{steward2}. In the 1+3 covariant gauge invariant approach, the gauge-invariant variables remove the gauge ambiguities and acquire a physical interpretation. Since the covariant variables do not assume linearization, exact equations are found for their evolution. The second approach considers perturbations of arbitrary order from a geometrical perspective; it has been discussed in depth by Bardeen \cite{bardeen}, Kodama \& Sasaki \cite{kodama}, Mukhanov, Feldman \& Brandenberger \cite{mukhanov}, and Bruni \cite{bruni1}, and it is known as the \textit{gauge invariant} approach. Here, perturbations are decomposed into the so-called scalar, vector and tensor parts, and the gauge invariant variables are found using the gauge transformations and the Stewart-Walker lemma. The gauge transformations are generated by arbitrary vector fields, defined on the background spacetime and associated with a one-parameter family of diffeomorphisms. This approach allows one to find the conditions for the gauge invariance of any tensor field, although at high order it sometimes becomes unclear. As an alternative description of the latter approach, it is important to mention the work done by Nakamura \cite{nakamura2}, who splits the metric perturbations into a gauge invariant and a gauge variant part, so that the evolution equations can be written in terms of gauge invariant quantities.\\ Given the importance and the advantages of these two approaches, it is necessary to find equivalences between them. Some authors have compared different formalisms: for example, \cite{bruni2} discussed the invariant quantities found by Bardeen in relation to the ones built in the 1+3 covariant gauge invariant approach in a specific coordinate system, the authors of \cite{vitenti} found a way to reformulate the Bardeen approach in a covariant scenario, and the authors of \cite{malik1} contrasted the non-linear approach described by Malik et al. \cite{malik2} with Nakamura's approach. \\ The purpose of this paper is to present a way of contrasting the approaches mentioned above. To this aim, we follow the methodology used by \cite{bruni2} and \cite{malik3}, where a comparison of the gauge invariant quantities built in each approach is made. However, we address the treatment in the context of cosmological magnetic fields, where cosmological perturbation theory has played an important role in explaining the origin of the magnetic fields in galaxies and clusters from a weak cosmological magnetic field generated before the recombination era. This means that magnetic fields can leave imprints of their influence on the evolution of the universe, whether in Nucleosynthesis or in the CMB anisotropies \cite{grasso,javier,tina1}. In fact, the study of primordial magnetic fields will offer a qualitative window onto the very early universe \cite{giovanini}. Cosmological perturbation models permeated by a large-scale primordial magnetic field have been widely studied by Tsagas \cite{tsagasa,tsagasb,barrow} and Ellis \cite{carguese}, where the complete system of equations, which shows a direct coupling between the Maxwell and the Einstein fields, was found, and gauge invariant variables for magnetic fields were built in the framework of the 1+3 covariant approach.
Furthermore, in previous works we have obtained a set of equations which describe the evolution of cosmological magnetic fields up to second order in the gauge invariant approach, together with the respective gauge transformations of the fields, which are important for building the gauge invariant magnetic variables \cite{hortua}. Therefore, by studying in detail the magnetic gauge invariant quantities in each of the formalisms, we can find equivalences between them. In addition, we have built the gauge invariant electromagnetic four-potentials, and the Maxwell equations are written in terms of these potentials. \\ The outline of the paper is as follows: in sections 2 and 3, the 1+3 covariant and gauge invariant formalisms are reviewed and the key gauge-invariant variables are defined. In section 4, we introduce the electromagnetic four-potentials in perturbation theory using the gauge invariant formalism; the gauge transformations are also deduced and the Maxwell equations are written in terms of the potentials. Section 5 shows the equivalence between the 1+3 covariant and gauge invariant formalisms, studying in detail the gauge invariant quantities and discussing the physical meaning of these variables. The last section is devoted to a discussion of the main results.\\ We use Greek indices $\mu, \nu, ..$ for spacetime coordinates and Roman indices $i, j,..$ for purely spatial coordinates. We also adopt units where the speed of light $c=1$ and a metric signature $(-,+,+,+)$. | Relativistic perturbation theory has been an important tool in theoretical cosmology to link scenarios of the early universe with cosmological data such as the CMB fluctuations. However, there is an issue in the treatment of this theory, which is called the gauge problem. Due to general covariance, a gauge degree of freedom arises in cosmological perturbation theory. If the correspondence between the real and the background space-time is not completely specified, the evolution of the variables will contain unphysical modes. Different approaches have been developed to overcome this problem, among them the 1+3 covariant gauge invariant and the gauge invariant approaches, which were studied in the present paper. Following some results shown in \cite{bruni2,bruni3,baker} and \cite{malik3}, we have contrasted these formalisms by comparing the gauge invariant variables defined in each case. Using a magnetic scenario, we have shown a strong relation between both formalisms; indeed, we found that the gauge invariant quantity defined in the 1+3 covariant approach is related to spatial variations of the magnetic field energy density (a variable defined in the gauge invariant formalism) between two nearby fundamental observers, as noted in equations (\ref{equvalence1}), (\ref{equvalence2}) and (\ref{vecta11}). Moreover, we have also derived the gauge transformations of the electromagnetic potentials, equations (\ref{phitrans}) and (\ref{Atrans}), which are relevant for the study of the evolution of primordial magnetic fields in scenarios such as inflation or later phase transitions. With this description of the electromagnetic potentials, we have expressed Maxwell's equations in terms of the potentials, finding again an important coupling with the gravitational potentials. | 14 | 3 | 1403.7789 |
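As a generic reference for the gauge-transformation statements in the row above (written in standard notation rather than that paper's conventions), at first order a perturbation $\delta T$ of a background tensor field $T_0$ transforms under a gauge transformation generated by a vector field $\xi^\mu$ as
\begin{equation}
\delta T \longrightarrow \delta T + \mathcal{L}_{\xi} T_0 ,
\end{equation}
where $\mathcal{L}_{\xi}$ is the Lie derivative along $\xi^\mu$. By the Stewart-Walker lemma, $\delta T$ is gauge invariant if and only if $\mathcal{L}_{\xi} T_0=0$ for arbitrary $\xi^\mu$, i.e. if $T_0$ vanishes, is a constant scalar, or is built from products of Kronecker deltas. This is why quantities that vanish in the FLRW background, such as suitably defined magnetic field variables, can serve as gauge invariant variables in both formalisms.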
1403 | 1403.5751_arXiv.txt | We investigate in detail the 21 May 2004 flare using simultaneous observations of the {\it Nobeyama Radioheliograph}, {\it Nobeyama Radiopolarimeters}, {\it Reuven Ramaty High Energy Solar Spectroscopic Imager} (RHESSI) and {\it Solar and Heliospheric Observatory} (SOHO). The flare images in different spectral ranges reveal the presence of a well-defined single flaring loop in this event. We have simulated the gyrosynchrotron microwave emission using the recently developed interactive IDL tool GX Simulator. By comparing the simulation results with the observations, we have deduced the spatial and spectral properties of the non-thermal electron distribution. The microwave emission has been found to be produced by the high-energy electrons ($>100$ keV) with a relatively hard spectrum ($\delta\simeq 2$); the electrons were strongly concentrated near the loop top. At the same time, the number of high-energy electrons near the footpoints was too low to be detected in the RHESSI images and spatially unresolved data. The SOHO {\it Extreme-ultraviolet Imaging Telescope} images and the low-frequency microwave spectra suggest the presence of an extended ``envelope'' of the loop with lower magnetic field. Most likely, the energetic electron distribution in the considered flare reflects the localized (near the loop top) particle acceleration (injection) process accompanied by trapping and scattering. | Energetic electrons play a key role in solar flares and therefore knowing their distributions is highly important for better understanding the flare mechanisms and verifying the flare models. A lot of information ({\it e.g.}, the electron spectra, energetics, spatial distribution) can be inferred from the hard X-ray observations. However, at energies above $\sim 50$ keV, we usually see only hard X-ray emission from the footpoints of the flaring loops, which are located far from the particle acceleration sites; the electron number and spectra in the footpoints thus could be strongly affected by the propagation and trapping processes. At relatively low energies (up to a few tens of keV), the electrons in the corona can also be studied using X-ray imaging (see, {\it e.g.}, \opencite{kon11}; \opencite{guo12}; \opencite{jef13}). In particular, the observations by \inlinecite{kon11}, \inlinecite{bia11}, and \inlinecite{bia12} have indicated that the particle propagation in the flaring loops could be strongly affected by magnetohydrodynamic (MHD) turbulence. At higher energies ($\gtrsim 100$ keV), the coronal X-ray emission is usually too weak, so it can be observed only occasionally --- in partially occulted events where the bright footpoints are not visible \cite{kru08b,kru08a,kru10}. On the other hand, the high-energy electrons in the solar corona can be studied using radio observations, because they produce intense gyrosynchrotron emission in the microwave range. Diagnosing the energetic electrons (and other parameters of solar flares) from the microwave emission meets two main difficulties. Firstly, we need well-calibrated observations with high spatial, temporal and spectral resolutions. Secondly, the emission mechanism (even for the incoherent gyrosynchrotron radiation) is rather complicated and depends on many parameters. Therefore, recovering the emission source parameters from microwave observations is a nontrivial task, which generally requires 3D simulations. 
Such simulations have been performed by, {\it e.g.}, \inlinecite{pre92}, \inlinecite{kuc93}, \inlinecite{wan95}, \inlinecite{bas98}, \inlinecite{nin00}, \inlinecite{lee00}, and \inlinecite{tza08}. However, until now the number of such studies has been limited because precise gyrosynchrotron simulations for realistic 3D configurations tend to be very time-consuming. An important step in improving the simulation tools was the development of the ``fast gyrosynchrotron codes'' \cite{fle10} that allow to compute the gyrosynchrotron emission parameters with high speed and accuracy. These codes have been used in the interactive IDL tool \texttt{GS Simulator} for 3D simulations of gyrosynchrotron emission from model magnetic tubes with a dipole magnetic field \cite{kuz11}. The next iteration of this simulation tool, \texttt{GX Simulator}, uses realistic magnetic field configurations based on the extrapolation of observed photospheric magnetograms \cite{nit11a,nit11b,nit12}, which enables us to perform a quantitative comparison between the observations and the simulation results. By varying the model parameters and analyzing the simulation results, we can choose the set of parameters that provides the best fit to the observations. Since the automatic forward-fitting algorithms have not been implemented yet, the described diagnosing method can be effectively applied only to the events with the simplest structure. In this work, we analyze the observations and perform simulations for the flare on 21 May 2004, in which the observations with different instruments indicate the presence of a well-resolved single flaring loop. The main purpose of the work is to reconstruct the spatial distribution of the energetic electrons along the loop and to determine the electron energy spectra in the corona. The observations are summarized in Section \ref{Observations}. In Section \ref{Simulations}, we present the 3D simulations of the microwave emission. The implications of the obtained results are discussed in Section \ref{Discussion}. The conclusions are drawn in Section \ref{Conclusion}. \begin{figure} \centerline{\includegraphics{FigLC.eps}} \caption{NoRP microwave (top) and RHESSI hard X-ray (bottom) lightcurves of 21 May 2004 flare.} \label{FigLC} \end{figure} | We have shown that spatially-resolved microwave observations together with 3D simulations can be an effective tool for diagnosing the energetic electrons in solar flares. By using the IDL program \texttt{GX Simulator}, varying the model parameters and comparing the simulation results with observations, it is possible to reconstruct the spatial distributions of energetic electrons in flaring loops and to estimate their energy spectrum and total number. For the 21 May 2004 flare, we have achieved a good agreement between the simulated microwave data and both the spatially resolved and unresolved observations. On the other hand, the described diagnosing method still requires some additional data, besides the microwave observations --- namely, a 3D model of the magnetic field in the corona. Currently, this field is obtained using extrapolation of a photospheric magnetogram and therefore the simulation/diagnosing results are dependent on the extrapolation method used. 
However, we anticipate that this problem will be solved soon by using simultaneous multiwavelength imaging observations in the radio/microwave range with new or upgraded instruments (such as the {\it Chinese Solar Radioheliograph}, {\it Upgraded Siberian Solar Radio Telescope} and {\it Expanded Owens Valley Solar Array}); we expect that the new observations will enable us not only to compare and verify different magnetic field extrapolation techniques, but also to perform independent measurements of the coronal magnetic field. We have found that in the analyzed flare (21 May 2004), the energetic electrons were concentrated near the loop top. It seems that the energetic electron population consisted of two components: a strongly peaked (near the loop top) component and a more homogeneous ``background''; this spatial distribution might be formed due to a combination of the processes of particle acceleration, trapping and scattering. The X-ray emission at high energies ($>100$ keV) was below the detection level, despite a relatively large total number of high-energy electrons; this contradiction can be explained by the fact that most of the energetic electrons are trapped in the coronal part of the flaring loop, where they do not produce significant X-ray emission. The microwave and EUV observations also indicate that, besides the main flaring loop, the active region might contain a more extended gyrosynchrotron emission source filled with energetic electrons but with a relatively low magnetic field. \begin{acks} This work was supported in part by the Russian Foundation of Basic Research (grants 12-02-00173, 12-02-91161, 13-02-10009 and 13-02-90472), by the Marie Curie International Research Staff Exchange Scheme "Radiosun" (PEOPLE-2011-IRSES-295272) and by an STFC consolidated grant. The authors thank Natasha Jeffrey for help in improving the manuscript. \end{acks} | 14 | 3 | 1403.5751 |
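For reference, the spectral index $\delta$ quoted in the row above refers to the standard power-law parametrization of the non-thermal electron energy distribution; the normalization below is generic and illustrative rather than the exact convention adopted by those authors:
\begin{equation}
N(E)\,{\rm d}E=(\delta-1)\,\frac{N_{\rm tot}}{E_{\rm min}}\left(\frac{E}{E_{\rm min}}\right)^{-\delta}{\rm d}E, \qquad E\ge E_{\rm min},
\end{equation}
so that $\int_{E_{\rm min}}^{\infty}N(E)\,{\rm d}E=N_{\rm tot}$ for $\delta>1$. A hard spectrum (small $\delta$, here $\delta\simeq2$) places relatively more electrons at the mildly relativistic energies ($\gtrsim100$ keV) that dominate the microwave gyrosynchrotron emission, which is why the radio data constrain this population even when the corresponding hard X-ray emission is undetectable.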
1403 | 1403.5567_arXiv.txt | We investigate large-scale structure formation of collisionless dark matter in the phase space description based on the Vlasov (or collisionless Boltzmann) equation whose nonlinearity is induced solely by gravitational interaction according to the Poisson equation. Determining the time-evolution of density and peculiar velocity demands solving the full Vlasov hierarchy for the moments of the phase space distribution function. In the presence of long-range interaction no consistent truncation of the hierarchy is known apart from the pressureless fluid (dust) model which is incapable of describing virialization due to the occurrence of shell-crossing singularities and the inability to generate vorticity and higher cumulants like velocity dispersion. Our goal is to find a simple ansatz for the phase space distribution function that approximates the full Vlasov distribution function without pathologies in a controlled way and therefore can serve as theoretical N-body double and as a replacement for the dust model. We argue that the coarse-grained Wigner probability distribution obtained from a wave function fulfilling the Schr\"odinger-Poisson equation (SPE) is the sought-after function. We show that its evolution equation approximates the Vlasov equation and therefore also the dust fluid equations before shell-crossing, but cures the shell-crossing singularities and is able to describe regions of multi-streaming and virialization. This feature was already employed in cosmological simulations of large-scale structure formation by Widrow \& Kaiser (1993). The coarse-grained Wigner ansatz allows to calculate all higher moments from density and velocity analytically, thereby incorporating nonzero higher cumulants in a self-consistent manner. On this basis we are able to show that the Schr\"odinger method (ScM) automatically closes the corresponding hierarchy such that it suffices to solve the SPE in order to directly determine density and velocity and all higher cumulants. | The standard model of large-scale structure (LSS) formation and halo formation is based on collisionless cold dark matter (CDM), a yet unknown particle species that for purposes of LSS and larger halos can be assumed to interact only gravitationally and to be cold or initially single-streaming. We are therefore interested in the dynamics of a large collection of identical point particles that via gravitational instability evolve from initially small density perturbations into eventually bound structures, like halos that are distributed along the loosely bound LSS composed of superclusters, sheets, and filaments \cite{P80,SWF05,TSM13}. All these structures depend on cosmological parameters, in particular the background energy density of CDM and the cosmological constant. We therefore require accurate modelling and theoretical understanding of CDM dynamics to extract those cosmological parameters from observations. While the shape of the LSS can be reasonably well described by modelling the CDM as a pressureless fluid (dust), it necessarily fails at small scales where multiple streams form. Multi-streaming is especially important for halo formation -- virialization, but already affects LSS and its observation in redshift-space. On sub-Hubble scales and for non-relativistic velocities the Newtonian limit of the Einstein equations is sufficient to describe the time evolution of structures within the universe \cite{CZ11,GW12,KUH14}. 
Furthermore the large number of particles under consideration suppresses collisions such that the phase space dynamics is only affected by the smooth Newtonian potential \cite{G68}. Therefore the time-evolution of the phase space distribution function $f(t,\v{x},\v{p})$ is governed by the Vlasov (or collisionless Boltzmann) equation whose nonlinearity is induced by the gravitational force obtained from the Poisson equation sourced by $\int \vol{3}{p}\!f(t,\v{x},\v{p})$. Even though this model seems to be quite simple from a conceptual point of view, no general solution is known and one usually has to resort to N-body simulations which tackle the problem of solving the dynamical equations numerically, see \cite{T02,SWF05,SWF06,SW09,AHK12,HAK13}. From the analytical point of view, different methods to describe LSS formation based on the dust model have been developed. The dust model describes CDM as a pressureless fluid using hydrodynamic equations \cite{P80}, and is studied especially in the context of perturbation theory. Among them the two most commonly used methods are the Eulerian framework describing the dynamics of density and velocity fields, see \cite{B02}, and the Lagrangian description following the field of trajectories of particles \cite{B94}. The dust model is an exact solution to the Vlasov equation which describes absolutely cold dark matter and works quite well in the linear and quasi-linear regime of LSS formation. But the dust model not only fails to catch the dynamics when multiple streams occur in the N-body dynamics, but actually runs into so called shell-crossing singularities or caustics forming at the smallest scales. One might therefore say that the dust model is UV-incomplete.\\ A possibility to circumvent the formation of singularities and to restore agreement with simulations in the weakly nonlinear regime is to introduce an artificial viscosity term in the pressureless fluid equations which is effective only in regions where the dust evolution would predict a singularity. This phenomenological model proposed in \cite{G89} is known as adhesion approximation and was shown to be able to reproduce the skeleton of the cosmic web in \cite{WG90}. However, such ad-hoc constructions remain quite unsatisfying from a conceptual point of view; for example the size of formed structures directly depends on the viscosity parameter rather then the initial conditions and it is unclear how well the Vlasov equation is approximated. A more general reasoning was pursued in the direction of coarse-grained perturbation theory which led to models that were argued to incorporate adhesive features. When the dynamical evolution of a many-body system is described by means of a continuous phase space distribution one has to consider coarse-grained or macroscopic quantities, thereby neglecting detailed information about the microscopic degrees of freedom. Although at a first glance this might seem inconvenient, it is indeed an advantageous point of view, especially when comparing to data inferred from observations or simulations, that are fundamentally coarse-grained. Therefore the dynamical evolution of smoothed density and velocity fields relevant for cosmological structure formation has been under investigation, see for example \cite{D00,BD05}, where it was argued that coarse-graining may lead automatically to adhesive behavior. 
Furthermore it was shown in \cite{P12} that for averaged fields the correspondence between the occurence of velocity dispersion and multi-streaming phenomena due to shell-crossing breaks down. This is due to the fact that the coarse-graining introduces a nonzero velocity dispersion between the particles within each coarse-graining cell which mimics microscopic velocity dispersion connected to genuine multi-streaming.\\ Solving the Vlasov equation is equivalent to solving the infinite coupled hierarchy of equations for the cumulants of the distribution function $f$ with respect to momentum $\v{p}$. This means that in order to determine the time evolution of the zeroth and first cumulants, related to density and velocity, all higher cumulants starting with velocity dispersion are relevant, see \cite{PS09}. Only neglecting them entirely is consistent \cite{PS09}; in this case one is lead to the popular dust model \cite{P80}. Gravity is the dominant force on cosmological scales and in the early stages of gravitational instability matter is distributed very smoothly with nearly single-valued velocities. Therefore the dust model has proven quite successful in describing the evolution as long as the collective motion of particles is well-described by this coherent flow. However, as soon as the density contrast becomes non-linear, multiple streams form and become relevant in the Vlasov dynamics while caustics -- called `shell-crossing' singularities -- are developed indicating that the underlying approximations are no longer justified and the model looses its predictability. The problem of developing singularities and failure of being a good description afterwards, also occurs in the first order Lagrangian solution, called Zel'dovich approximation \cite{Z70}, which is the exact solution in the plane-parallel collapse studied in Sec.\,\ref{sec:numerics}. The Schr\"odinger method (ScM), originally proposed in \cite{WK93,DW96} as numerical technique for following the evolution of CDM, models CDM as a complex scalar field obeying the coupled Schr\"odinger-Poisson equations (SPE) \cite{SC02,SC06,G95} in which $\hbar$ merely is a free parameter that can be chosen at will and determines the phase space resolution. The ScM is comprised of two parts; (1) solving the SPE with desired initial conditions and (2) taking the Husimi transform \cite{H40} to construct a phase space distribution from the wave function. The correspondence between distribution functions in classical mechanics and phase space representations of quantum mechanics has been investigated in detail by \cite{T89}, both analytically as well as by means of numerical examples. It turned out that the Wigner function, obtained from a wave function fulfilling the SPE, corresponds poorly to classical dynamics. In contrast, the coarse-grained Wigner or Husimi distribution was shown to be indeed a good model for coarse-grained classical mechanics \cite{T89, WK93}. \\ The SPE can be seen as the non-relativistic limit of the Klein-Gordon-Einstein equations \cite{W97, GG12}. From this perspective the physical interpretation (if $\hbar$ takes the value of the Planck constant) is that CDM is actually a non-interacting and non-relativistic Bose-Einstein condensate in which case the SPE can be interpreted as a special Gross-Pitaevskii equation, see \cite{R13} for a review. In plasma and solid state physics as well as mathematical physics the equation is known as Choquard equation \cite{L77, AS01}. 
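For reference, the Schr\"odinger-Poisson system referred to throughout this row can be written, in a generic non-expanding form (the comoving version used by the authors carries additional factors of the scale factor that are omitted here), as
\begin{equation}
i\hbar\,\partial_t\psi=-\frac{\hbar^2}{2m}\nabla^2\psi+mV\psi, \qquad \nabla^2V=4\pi G\left(m|\psi|^2-\bar\rho\right),
\end{equation}
with $\hbar$ acting as a free resolution parameter rather than Planck's constant. Writing $\psi=\sqrt{n}\,e^{i\phi/\hbar}$ (Madelung form) identifies $n=|\psi|^2$ with the density and $\nabla\phi/m$ with a velocity field, which makes the close correspondence with the dust fluid, and the origin of the $\hbar^2$ `quantum pressure' correction, explicit.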
In the context of gravitational state reduction this equation, denoted the Schr\"odinger-Newton equation, was studied e.g. in \cite{MPT98}. There have also been investigations of the connection between general fluid dynamics and wave mechanics \cite{M27, S80}. The similarity between the SPE and the dust model has also been employed in the context of wave mechanics. There the so-called free-particle approximation (based on the free-particle Schr\"odinger equation, see \cite{Th11}) was shown to closely resemble the Zel'dovich approximation \cite{SC02, SC06} while avoiding singularities. In some works a modified SPE system with an added quantum pressure term was considered \cite{JLH09,TWJ11}, which is then equivalent to the usual fluid system. Clearly this approach is not advantageous, since the fluid description is known to break down at shell-crossing. This led to the claim in \cite{JLH09} that the Schr\"odinger method also breaks down. In \cite{SK02} perturbation theory based on the SPE in the limit $\hbar \rightarrow 0$ was considered, where it was emphasized that shell-crossing singularities are avoided. However, their calculations assumed $\hbar =0$ identically, which leads to results equivalent to standard perturbation theory (SPT) based on a dust fluid, without solving the shell-crossing problem. That the ScM is a viable model for cosmological structure formation, and in particular capable of describing multi-streaming, was demonstrated by means of numerical examples in \cite{WK93,DW96, SBRB13}. However, the bulk of these investigations was aimed at replacing N-body simulations by a numerical solution to the SPE. Therefore the methods applied therein are unsuitable and inconvenient for the genuinely analytical approach we want to establish. In \cite{WK93,DW96} a superposition of $N$ Gaussian wave packets was used as the initial wave function, thereby closely resembling the $N$ particles in an N-body simulation. In \cite{SBRB13} CDM was modeled by $N$ wave functions coupled via the Poisson equation. We will study the case of a single wave function on an expanding background with nearly cold initial conditions. The result suggests that the ScM is indeed a substantially better-suited analytical tool for studying CDM dynamics than the dust model: in the single-stream regime they stay arbitrarily close to each other, but while dust fails and stops when multi-streaming should occur, the Schr\"odinger wave function continues without any pathologies and behaves like multi-streaming CDM when interpreted in a coarse-grained sense. Although it was already observed in \cite{SC02} that the wave function does not run into singularities, it was claimed that it still cannot describe multi-streaming or virialization. Indeed, our numerical example, closely resembling that of \cite{SC02} but generalised to an expanding background, proves the contrary. Fig\,\ref{fig:phaseplot} shows the dynamics of the Husimi function $f_{\rm H}$ using the ScM: the density remains finite at shell-crossing, $f_{\rm H}$ forms multi-stream regions and ultimately virializes. None of these features, which are necessary for a full description of LSS and halo formation, is accessible with the dust model. \paragraph*{Goal} The aim of this paper is to present the Schr\"odinger method, already investigated in the context of cosmological simulations, as a theoretical N-body double for the phase space distribution function $f$.
We show that the phase space density $f_{\rm H}$ obtained from the ScM solves the Vlasov equation approximately but in a controlled manner. We demonstrate that $f_{\rm H}$ closes the hierarchy of moments automatically and yet allows for multi-streaming and virialization. We give explicit analytic expressions for the higher order non-vanishing cumulants, like velocity dispersion, in terms of the wave function and in terms of the macroscopic physical density and velocity fields. This constitutes a new approach to tackling the closure problem of the Vlasov hierarchy, apart from truncation or restricting oneself to the dust model and its limitations. We shed light on the physical interpretation by means of a numerical study of pancake formation. In summary this means that the ScM models CDM in a well-behaved manner, with initial conditions and single-stream dynamics arbitrarily close to dust. Unlike dust, the ScM captures all relevant physics for describing CDM dynamics even in the deeply nonlinear regime and does not fail on the smallest scales, therefore providing a UV-completion of dust. \paragraph*{Structure} This paper is organized as follows: In Sec.\,\ref{sec:PS-CDM} we review the phase space description of cold dark matter and explain how one is led to the Vlasov equation on an expanding background. After introducing the dust model we re-derive the coarse-grained Vlasov equation. We then introduce the Wigner function as an ansatz for the phase space distribution and explain its connection to the dust model. We derive the corresponding Wigner-Vlasov equation as well as its coarse-grained version and discuss their relations to the Vlasov equation and the coarse-grained Vlasov equation, respectively. In Sec.\,\ref{sec:Hierarchy} we determine the moments of the three different phase space distributions -- the dust model, the Wigner function and the coarse-grained Wigner or Husimi distribution. In Sec.\,\ref{sec:numerics} we investigate the pancake collapse to illustrate that the dynamics of the complex scalar field is free from the pathologies of the dust fluid and therefore serves both as a theoretical N-body double and as a UV completion of dust. On this basis we explain how the closure of the hierarchy of moments can be achieved and finally discuss the implications. In Sec.\,\ref{sec:prospects} we make suggestions about possible future research based on the ScM and conclude in Sec.\,\ref{sec:conclusion}. \vspace{-1cm} \begin{center} \begin{figure*}[!] \includegraphics[width=0.90\textwidth]{phaseplot}\\ \caption{Collapse of a pancake (plane-parallel) density profile on an Einstein-de Sitter background as seen in phase space using the ScM. \textit{blue} contours: Phase space density $f_{\rm H}$ calculated from Eqs.\,(\ref{schrPoissEqFRW}, \ref{Husimi}) at four moments in time. \textit{red dotted} line: the Zel'dovich solution of Eq.\,\eqref{ZeldoPancake} is the exact dust solution, valid until $a=1$. Only the first panel of the four characteristic moments can be described by dust. Shell-crossing (2nd panel), multi-streaming (3rd panel) and virialisation (4th panel) are accessible with the ScM but not with dust. That the dynamics corresponds to CDM is proven in Sec.\,\ref{subsection:husimivlasovmap}.
How to obtain cumulants without constructing $f_{\rm H}$ is shown in Sec.\,\ref{HusHierarchy}.} \label{fig:phaseplot} \end{figure*} \end{center} | \label{sec:conclusion} We started with the coupled nonlinear Vlasov-Poisson system \eqref{VlasovPoissonEq} for the phase space distribution function $f$, which is relevant for the LSS formation of CDM particles that interact only by means of the gravitational potential. Inspired by the Schr\"odinger method (ScM) proposed in \cite{WK93} for numerical simulations, we aimed at employing its ability to describe the effects of multi-streaming while including recent studies regarding coarse-grained descriptions of CDM and their implications investigated in \cite{P11,D00}. \\ Following closely \cite{WK93}, we introduced a complex field $\psi$ whose time-evolution is governed by the Schr\"odinger-Poisson equation (SPE) \eqref{schrPoissEqFRW} and constructed the coarse-grained Wigner probability distribution $\bar f_\W$ according to \eqref{fcgWigner} from this wave function. We derived that the time-evolution of $\bar f_\W$ is determined by Eq.\,\eqref{cgWignerVlasov}, which is in good correspondence with the one governed by the coarse-grained Vlasov equation \eqref{cgVlasovEq}. Using a numerical toy example we showed how the ScM is able to regularize shell-crossing singularities and allows us to follow the dynamics into the fully nonlinear regime. Furthermore we showed how higher order cumulants \eqref{momentscgw}, like velocity dispersion, can be calculated directly from the wave function, and that a vorticity is generated by the coarse-graining procedure. This means that it suffices to solve the SPE \eqref{schrPoissEqFRW}, express the result obtained for $\psi$ in Madelung form $\sqrt n \exp\left(i\phi/\hbar\right)$, and then simply coarse-grain $n$ and $n \v{\nabla} \phi$ to obtain the physical density $\bar n$ and momentum $m \bar n \bar{\v{u}}$, respectively. In a similar fashion all higher cumulants \eqref{cumulants} following from \eqref{genfuncgW} can be obtained from a solution to the SPE \eqref{schrPoissEqFRW}. We derived the corresponding closed-form fluid-like equations \eqref{FluidcgW} for the smooth density field $\bar n$ and the mass-weighted velocity $\bar{\v{u}}$. This is only possible because the `quantum pressure' term proportional to $\hbar^2$ resolves shell-crossing singularities already on the microscopic level. We showed that solving the macroscopic equations \eqref{FluidcgW} means closing the hierarchy for the moments of $\bar f_\W$ without truncating the cumulant hierarchy, thereby proposing a different approach to the closure problem than truncation in terms of cumulants. Indeed, all higher cumulants can be written in terms of $\bar n$ and $\bar{\v{u}}$. | 14 | 3 | 1403.5567 |
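As a generic reminder of the construction summarized in the row above (schematic conventions; the precise definitions are those of Eqs.\,\eqref{fcgWigner} and \eqref{Husimi} in that paper), the Husimi, or coarse-grained Wigner, distribution is obtained from the wave function by Gaussian smoothing of the Wigner function in phase space,
\begin{equation}
f_{\rm W}(\mathbf{x},\mathbf{p})=\int\frac{{\rm d}^3x'}{(2\pi\hbar)^3}\,\psi^*\!\left(\mathbf{x}+\tfrac{\mathbf{x}'}{2}\right)\psi\!\left(\mathbf{x}-\tfrac{\mathbf{x}'}{2}\right)e^{i\mathbf{p}\cdot\mathbf{x}'/\hbar}, \qquad f_{\rm H}=f_{\rm W}\ast G_{\sigma_x}\!\ast G_{\sigma_p},
\end{equation}
where $G_{\sigma_x}$ and $G_{\sigma_p}$ are Gaussians of widths $\sigma_x$ and $\sigma_p$ with $\sigma_x\sigma_p\ge\hbar/2$; the smoothing renders $f_{\rm H}$ non-negative, so that it can be read as a coarse-grained phase space density whose moments give the density, velocity and higher cumulants discussed above.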
1403 | 1403.5084_arXiv.txt | Models of planet formation have shown that giant planets have a large impact on the number, masses and orbits of terrestrial planets that form. In addition, they play an important role in delivering volatiles from material that formed exterior to the snow-line (the region in the disk beyond which water ice can condense) to the inner region of the disk where terrestrial planets can maintain liquid water on their surfaces. We present simulations of the late stages of terrestrial planet formation from a disk of protoplanets around a solar-type star, and we include a massive planet (from 1 \mearth to 1 \mjup) in Jupiter's orbit at $\sim$5.2 AU in all but one set of simulations. Two initial disk models are examined with the same mass distribution and total initial water content, but with different distributions of water content. We compare the accretion rates and final water mass fraction of the planets that form. Remarkably, all of the planets that formed in our simulations without giant planets were water-rich, showing that giant planet companions are $\it{not}$ required to deliver volatiles to terrestrial planets in the habitable zone. In contrast, an outer planet at least several times the mass of Earth may be needed to clear distant regions from debris truncating the epoch of frequent large impacts. Observations of exoplanets from radial velocity surveys suggest that outer Jupiter-like planets may be scarce, therefore the results presented here suggest the number of habitable planets that reside in our galaxy may be more than previously thought. | Oceans cover more than two-thirds of the surface of our planet, and water and other volatile compounds are the dominant constituents of living organisms on Earth. Yet by cosmic standards, Earth is highly deficient in volatiles. The condensed component of a solar composition mixture that is cool enough for all of the H$_2$O to be in solid form is $\gtrsim$ 50\% ice by mass. In contrast, the Earth's oceans and other near-surface reservoirs represent only 0.03\% of our planet's mass, with several times this amount of water thought to lie in the mantle \citep{Marty.Yokochi:2006}. Nonetheless, the Earth was able to accrete enough water and other volatile constituents such as carbon and nitrogen to support life as we know it. Understanding where these volatiles originated from, and how they made their way to Earth, is important in determining the likelihood that life exists beyond our Solar System. The region of the disk where Earth now resides was too hot for water ice to have condensed during Earth's formation, therefore the bulk of Earth's water must have originated from other reservoirs. The leading theories for the origin of Earth's water have focused on icy comets and water-rich asteroids as the main sources. Comparisons of the isotopic deuterium to hydrogen (D/H) ratio measured in Earth's oceans, atmospheres and present-day mantle \citep{Lecuyer.etal:1998} to the D/H ratio measured in meteorites and comet spectra have provided valuable clues used to constrain these theories. The contribution of water from a bombardment of comets is thought to be limited to $\lesssim$10\% of the Earth's crustal water \citep{Morbidelli.etal:2000, Morbidelli.etal:2012,Marty:2012}. These limitations were partially based on the D/H ratios measured from several comet samples that were found to be more than double the D/H ratio of water found on Earth \citep{Balsiger.etal:1995, Bockel.etal:1998, Eberhardt.etal:1995, Meier.etal:1998}. 
The recent discovery of comet Hartley 2 with an Earth-like D/H ratio \citep{Hartogh.etal:2011} sparked new interest into comets as a viable source of Earth's water. However, \citet{Alexander.etal:2012} points out that Earth would have accreted entire comets, not just cometary ice, which have compositions that are more deuterium-rich than water due to the large amounts of organic material. Despite these developments, a more serious problem that remains is that the collision probability of comets with Earth is very low \citep{Morbidelli.etal:2000}, most likely too low to provide the bulk of water collected by Earth. The best explanation for the origin of Earth's water is that water-rich chondritic planetary embryos formed in the outer asteroid region and were perturbed inwards during Earth's formation \citep{Morbidelli.etal:2000}. Geochemical data from meteorite samples have uncovered a rough correlation between water content and the heliocentric distance in the disk from which they are thought to have originated. Enstatite and ordinary chondrites from the inner asteroid belt ($\sim$1.8 -- 2.8 AU) tend to be dry \citep{Fornasier.etal:2008, Binzel.etal:1996}, whereas carbonaceous chondrites from the middle and outer asteroid belt (approximately 2.5 -- 4 AU) are relatively water rich (up to $\sim$20\% in mass) \citep{Mason:1963}. The D/H ratio in Earth's oceans is nearly identical to the mean D/H ratio found in carbonaceous chondrites \citep{Dauphas.etal:2000}, supporting the outer asteroid region as the primary source of Earth's water. In addition, the relative abundances of hydrogen, carbon, and noble gases in Earth have been found to be roughly chondritic \citep{Marty:2012}. For our dynamical analysis herein, we adopt the model that chondritic material from the outer asteroid region is the principal source of Earth's volatiles. Giant planets, which likely form prior to the epoch of terrestrial planet formation that we model herein \citep{Lissauer:1987, Lissauer.etal:2009,Movshovitz.etal:2010}, have long been thought to be a crucial factor in inducing the radial mixing among growing protoplanets that is needed for water delivery. Numerous simulations of the accretion of planetesimals from volatile-rich regions of protoplanetary disks by terrestrial planets have been performed \citep{Raymond.etal:2004,Raymond.etal:2005a,Raymond.etal:2005b,Raymond.etal:2006a,Raymond.etal:2006b,Raymond.etal:2007,Raymond.etal:2007b,Raymond:2006}. These studies included various different disk surface density profiles, star masses, and the (in many cases large) influences that varying these parameters have on water content of the planets that are ultimately formed. Giant planets exterior to the terrestrial region around low mass stars were found to reduce accretion timescales, leading to typically drier terrestrial planets \citep{Raymond.etal:2005a}. High resolution (large initial number of planetesimals) simulations found that giant planet eccentricities have a large influence on the amount of water delivered to terrestrial planets \citep{Obrien.etal:2006, Raymond.etal:2009}. Simulations that include a solar-type stellar companion show similar results \citep{Haghighipour.etal:2007}, as expected, although simulations of accretion in binary star systems are limited due to the much larger parameter space (stellar masses and orbits) available. 
We investigate herein the effects of different-sized planets in Jupiter's orbit on the accretion of volatiles by terrestrial planets in or near the habitable zone of a Sun-like star during the late stages of planet formation. We first study the case of planet formation around an isolated star with no distant companions perturbing the system. We then perform analogous simulations but include a planet of mass 1 \mearth, 10 \mearth or 1 \mjup on an initial orbit comparable to Jupiter's current orbit (semimajor axis \semimajC = 5.2 AU and \eccC = 0.05). We examine two additional variations of systems that include a Jupiter-like planet: Jupiter on a circular orbit (\eccC = 0) and Jupiter with \eccC = 0.05 with the addition of a Saturn-like planet (\semimajC = 9.5 AU and \eccC = 0.05). Our $N$-body simulations begin at the epoch of planet formation in which planetesimals and planetary embryos have already formed in a disk, as have any giant planets included, and the gas in the disk has been dispersed. We examine two different models for the water distribution in the disk and follow the accretion evolution of the bodies for 1 Gyr. Our approach employs moderate-resolution simulations that have sufficiently modest computational requirements to allow us to perform enough simulations to disentangle effects of the companion body from stochastic variations that are an important aspect of terrestrial planet growth. Although these models cannot provide ab initio estimates of the water accreted by terrestrial planets, models of this type are well suited for comparing the relative amounts of water accreted by terrestrial planets with different outer planet companions perturbing the system. Our results are presented in a manner that allows for the incorporation of any model of the distribution of volatiles within the disk, provided these volatiles don't substantially alter the mean densities of the bodies. The next section describes our initial conditions and numerical model. Section 3 presents the results of our accretion simulations and the volatile inventory of the final planetary systems that form, and we summarize our results in Section 4. | We have performed 33 simulations of the late stage of planet formation with and without outer planet companions perturbing the disk. Although giant planets can have a profound effect on the types of planetary systems that form, we found that they are $\it{not}$ required to provide the radial mixing needed for volatile material from beyond the snow line to accrete onto terrestrial planets in the habitable zone. We present two water mass fraction disk models, but we provide in the electronic edition of this article a full table of all final planets and their composite embryos and planetesimals (and their initial semimajor axes) to allow for the incorporation of any distribution of \water in the initial disk. Table \ref{tbl-2} provides a sample of the electronic table. Results can also be scaled for different stellar types with the formulae presented in \citet{Quintana.etal:2006}. Can we conclude that most terrestrial planets formed around isolated stars are likely habitable? Not quite -- a more serious problem for the habitability of terrestrial planets in systems lacking giant planets is that small bodies persist beyond 2 AU for far longer. This allows the tail of the accretionary epoch to extend well beyond that within our Solar System. 
Impacts of objects an order of magnitude less massive than the planetesimals in our simulations (and therefore probably much more numerous in a realistic protoplanetary disk) are still so large that their accretion onto an Earth-like planet would produce environmental damage probably sufficient to wipe out all life as we know it \citep{Zahnle.Sleep:1997}. Thus, without giant planets, devastating impacts might well persist for billions of years, rendering Earth-like planets unsuitable for all but perhaps the simplest and most rapidly formed life. | 14 | 3 | 1403.5084 |
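As an illustrative aside on how the electronic table described above can be used, the short sketch below assigns a water mass fraction to each final planet by summing over its composite embryos and planetesimals under an assumed radial water distribution. The column layout, the function names, and the step-function values are assumptions made here purely for demonstration; any preferred volatile distribution can be substituted.
\begin{verbatim}
# Sketch: recompute a final planet's water mass fraction from the
# electronic table, under an assumed (hypothetical) water distribution.
def water_mass_fraction(a0_AU):
    # placeholder step function versus initial semimajor axis [AU]
    if a0_AU < 2.0:
        return 1e-5
    elif a0_AU < 2.5:
        return 1e-3
    return 5e-2

def planet_water_fraction(components):
    # components: list of (mass, a0_AU) for every embryo/planetesimal
    # accreted by one final planet, as listed in the electronic table
    total = sum(m for m, _ in components)
    water = sum(m * water_mass_fraction(a0) for m, a0 in components)
    return water / total

# example with made-up accreted bodies (masses in Earth masses)
print(planet_water_fraction([(0.5, 1.0), (0.4, 2.2), (0.1, 2.7)]))
\end{verbatim}
Swapping in a different water_mass_fraction(a0) is all that is needed to explore alternative initial volatile distributions.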
1403 | 1403.5421_arXiv.txt | The China Dark Matter Experiment reports results on light WIMP dark matter searches at the China Jinping Underground Laboratory with a germanium detector array with a total mass of 20~g. The physics threshold achieved is 177 eVee (``ee" represents electron equivalent energy) at 50\% signal efficiency. With 0.784 kg-days of data, exclusion region on spin-independent coupling with the nucleon is derived, improving over our earlier bounds at WIMP mass less than 4.6 GeV. | Compelling evidence from astroparticle physics and cosmology indicates that dark matter constitutes about 27\% of the energy density of our Universe \cite{Beringer2012,*PlanckCollaboration2013a}. Weakly interacting massive particles (WIMPs, denoted by $\chi$) are the leading candidate for cold dark matter \cite{Kelso2012}. It is expected that WIMPs would interact with normal matter through elastic scattering. Direct detection of WIMPs has been attempted with different detector technologies \cite{Lewin1996}. The anomalous excess of unidentified events at low energy with the DAMA \cite{DAMACollaboration2011,*Bernabei2010}, CoGeNT \cite{Aalseth2011,*Aalseth2013,*CoGeNT_AM_2014}, CRESST-II \cite{Angloher2012} and CDMS (Si) \cite{CDMS_Si_PRL_2013} data has been interpreted as signatures of light WIMPs. They are however inconsistent with the null results from XENON \cite{Aprile2012a}, TEXONO \cite{TEXONO_2013,*TEXONO_BS_2014}, CDMSlite \cite{CDMSLite_2014_PRL}, LUX \cite{LUX_2013_PRL}, SuperCDMS \cite{SuperCDMS_2014_PRL} and CDEX-1 \cite{CDEX_1kg_2013,*CDEX_1kg_hardware,*CDEX_1kg_2014_arxiv} experiments. It is crucial to continue probing WIMPs with lower mass achievable by available techniques. Our earlier measurements \cite{CDEX_1kg_2013,*CDEX_1kg_hardware,*CDEX_1kg_2014_arxiv} have provided the first results on low-mass WIMPs from the China Dark Matter Experiment phase I (CDEX-1). With a 994 g point-contact germanium detector, an energy threshold of 400 eVee was achieved. The experiment was performed at the China Jinping Underground Laboratory (CJPL) \cite{CDEX_introduction}, which was inaugurated at the end of 2010. With a rock overburden of more than 2400 m giving rise to a measured muon flux of 61.7 y$^{-1}\cdot$m$^{-2}$ \cite{WuYC2013}, CJPL provides an ideal location for low-background experiments. We report final results of the ``CDEX-0" experiment at CJPL, which is based on a pilot measurement with an existing prototype Ge detector with sub-keV energy threshold at a few gram modular mass. The experimental setup, candidate event selection procedures and constraints on WIMP-nucleon spin-independent elastic scattering are discussed in the subsequent sections. \begin{figure}[h] \includegraphics[width=1.0\linewidth]{cdex0_20g_facility.png} \caption{\label{fig:facility} Schematic diagram of the experimental setup which includes the germanium detector array and NaI(Tl) anti-Compton detector, as well as the enclosing OFHC Cu shielding. The entire structure is placed inside a passive shielding system described in Ref.\cite{CDEX_introduction}.} \end{figure} \begin{figure*}[htb] \includegraphics[width=1.0\linewidth]{cdex0_20g_DAQ.pdf}% \caption{\label{fig:daq} Schematic diagram of the electronics and the DAQ system of the germanium array and the NaI(Tl) detector.} \end{figure*} \begin{figure}[htb] \includegraphics[width=1.0\linewidth]{cdex0_20g_calibrate.pdf} \caption{\label{fig:calibrate} Calibration line relating the optimal Q measurements from SA$_{6}$ with the known energies from X-ray sources. 
The error bars are smaller than the data point size. The energy difference between the energy derived from the calibration and the real energy are depicted in the inset.} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\linewidth]{cdex0_20g_AC_cut.pdf} \caption{\label{fig:AC_cut} Scatter plots of the difference between Ge and NaI(Tl) timing versus measured energy along with AC$^{-}$ selection and rejected parameter space.} \end{figure} \begin{figure}[htb] \includegraphics[width=1.0\linewidth]{cdex0_20g_time_cut.pdf} \caption{\label{fig:time_cuts} (a) Raw signal from the reset preamplifier along with the timing of reset inhibit and a typical physics event. (b) The distributions of T$_{\text{-}}$ for random trigger events (blue) and background events (black) are shown as well as the rejected parameter space. (c) The reset period cut and its rejected parameter space are displayed. } \end{figure} \begin{figure}[htb] \includegraphics[width=1.0\linewidth]{cdex0_20g_pulse_para.pdf} \caption{\label{fig:pulse_para} Pulse shape parameters for PN Selection: Ped is the average of the first 200 time bins; MIN and MAX are the minima and maxima of the pulses, respectively, t$_{\text{MAX}}$ is the location of the maxima relative to the trigger instant and PW characterizes the pulse width. Energy is defined by the area SA$^{\text{Q}}_{6}$.} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\linewidth]{cdex0_20g_PNi.pdf} \caption{\label{fig:PNi_cut} Energy-independent PN$_{\text{i}}$ cut on Ped of SA$_{6}$. } \end{figure} \begin{figure*}[htb] \includegraphics[width=1.0\linewidth]{cdex0_20g_PNd.png} \caption{\label{fig:PNd_cuts} The energy-dependent PN$_{\text{d}}$ cuts: (a) MIN cut, (b) t$_{\text{MAX}}$ cut, (c) MAX cut and (d) PW cut, based on the parameters defined in Fig.~\ref{fig:pulse_para}.} \end{figure*} \begin{figure}[htbp] \includegraphics[width=0.9\linewidth]{PNn_cut_work.pdf} \caption{\label{fig:PN_work} The source events at 130-200 eVee versus the relative temporal distance between ULEGe triggers and AC signals are shown. The substantial value of survival efficiency at the coincidence time demonstrates the effectiveness of the MAX cut.} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\linewidth]{cdex0_20g_MAX_trig.pdf} \caption{\label{fig:MAX_trig} (a) MAX cut efficiencies with two different fitting functions. (b) Trigger efficiencies derived from the source AC$^{+}$ events.} \end{figure} \begin{figure}[htb] \includegraphics[width=1.0\linewidth]{cdex0_20g_combined_eff.pdf} \caption{\label{fig:combined_eff} The combined efficiencies in the low energy range and in an extended energy range are depicted respectively. In the latter one the error bars are smaller than the data point size.} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\linewidth]{cdex0_20g_spec.pdf} \caption{\label{fig:spec} Measured energy spectra of 20g-ULEGe, showing the raw spectra and those at the different stages of the analysis. The inset figure shows the low-energy spectrum after subtraction of a flat background due to high-energy $\gamma$ rays, superimposed with the predicted spectra for 3 GeV WIMPs with ${\sigma}^{\text{SI}}_{\chi{\text{N}}} = 2 \times 10 ^{-39}$ cm$^2$ and ${\sigma}^{\text{SI}}_{\chi{\text{N}}} = 5 \times 10 ^{-39}$ cm$^2$. } \end{figure} | The results presented in this article correspond to the first completed program of the pilot experiment at the new underground facility CJPL. 
Improved constraints are derived with a conventional Ge detector with good threshold response but a modular target mass of only a few grams. Novel p-type point-contact germanium detectors were developed in the past few years \cite{Luke1989,*P.S.Barbeau2007}, offering sub-keV energy thresholds with kg-scale targets such that the background level per unit mass is greatly reduced due to self-attenuation effects. Dark matter experiments with this detector technique are being pursued at CJPL \cite{CDEX_1kg_2013,*CDEX_1kg_hardware,*CDEX_1kg_2014_arxiv} and elsewhere \cite{Aalseth2011,*Aalseth2013,TEXONO_2013,*TEXONO_BS_2014}. The projected sensitivity for the realistic benchmark of a 100 eVee threshold at a background level of 1~kg$^{-1}$ keV$^{-1}$ day$^{-1}$ with a 10 kg-year exposure is overlaid in Fig.~\ref{fig:ex-plot}. | 14 | 3 | 1403.5421
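As a schematic illustration of how the pulse-shape parameters defined in the figure captions above (Ped, MIN, MAX, t$_{\text{MAX}}$ and PW) could be extracted from a digitized trace, a minimal sketch is given below. The pedestal window follows the stated definition (first 200 time bins), while the width estimate and any integration window are assumptions made here and need not match the actual analysis.
\begin{verbatim}
import numpy as np

def pulse_shape_parameters(trace, trigger_index, n_pedestal=200):
    """Toy extraction of Ped, MAX, MIN, t_MAX and a simple width PW
    from a digitized waveform (1D numpy array)."""
    ped = float(trace[:n_pedestal].mean())
    pulse = trace - ped                      # pedestal-subtracted waveform
    i_max = int(np.argmax(pulse))
    return {
        "Ped": ped,
        "MAX": float(pulse.max()),
        "MIN": float(pulse.min()),
        "t_MAX": i_max - trigger_index,      # in time bins
        # PW taken here as the number of bins above half of the maximum
        "PW": int(np.count_nonzero(pulse > 0.5 * pulse.max())),
    }
\end{verbatim}
Cuts such as those shown in the PN$_{\text{d}}$ figures would then be applied in the plane of these parameters versus energy.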
1403 | 1403.0032_arXiv.txt | Characterizing the ejecta in young supernova remnants is a requisite step towards a better understanding of stellar evolution. In Cassiopeia~A the density and total mass remaining in the unshocked ejecta are important parameters for modeling its explosion and subsequent evolution. Low frequency ($<$100~MHz) radio observations of sufficient angular resolution offer a unique probe of unshocked ejecta revealed via free-free absorption against the synchrotron emitting shell. We have used the Very Large Array plus Pie Town Link extension to probe this cool, ionized absorber at 9$\arcsec$ and 18\farcs5 resolution at 74~MHz. Together with higher frequency data we estimate an electron density of 4.2~cm$^{-3}$ and a total mass of 0.39~M$_{\sun}$ with uncertainties of a factor of $\sim$2. This is a significant improvement over the 100~cm$^{-3}$ upper limit offered by infrared [\ion{S}{3}] line ratios from the \emph{Spitzer Space Telescope}. Our estimates are sensitive to a number of factors including temperature and geometry. However using reasonable values for each, our unshocked mass estimate agrees with predictions from dynamical models. We also consider the presence, or absence, of cold iron- and carbon-rich ejecta and how these affect our calculations. Finally we reconcile the intrinsic absorption from unshocked ejecta with the turnover in Cas~A's integrated spectrum documented decades ago at much lower frequencies. These and other recent observations below 100~MHz confirm that spatially resolved thermal absorption, when extended to lower frequencies and higher resolution, will offer a powerful new tool for low frequency astrophysics. | \subsection{General Background on Cas~A} Cassiopeia~A (Cas~A; 3C 461, G111.7-2.1) is the 2nd-youngest-known supernova remnant (SNR) and, at a distance of 3.4 kpc \citep{rhf95}, it lies just beyond the Perseus Arm of the Galaxy. With the discovery of light echoes from the explosion, we now know that Cas~A resulted from a type IIb explosion \citep{kbu08}. Cas~A is one of the strongest synchrotron radio-emitting objects in the sky and has been observed extensively with the Very Large Array (VLA) since its commissioning in 1980. The morphology of Cas~A is quite complex with structure distributed over a variety of spatial scales. The terminology used to describe some of these structures was coined in some of the earliest papers describing the optical and resolved radio images \citep{vd70,ren65}. The most prominent feature is the almost circular ``Bright Ring'' at a radius of $\approx100\arcsec$ which is generally regarded as marking the location of ejecta that have interacted with the reverse shock \citep[see e.g.][]{mfc04}. A fainter ``plateau'' of radio emission is seen out to a radius of $\approx150\arcsec$. To the northeast, where the shell becomes broken, is the ``jet'' and extending in the opposite direction to the southwest is the counter-jet. The jet and counter-jet do not represent outflow in the classical sense. Instead they describe locations where the fastest-moving ejecta are observed well beyond the plateau and farthest from the explosion center \citep[see e.g.][]{hf08}. To the southeast, iron-rich ejecta extend beyond the Bright Ring and into the plateau. The jets and extended iron-rich structure are likely the result of an asymmetric explosion of the progenitor \citep[see e.g.][]{hrb00,hl12}. The light echo data also indicate an asymmetric explosion \citep{rfs11}. 
\subsection{Unshocked Ejecta} In addition to the shocked ejecta described above, there is a class of ejecta still interior to the reverse shock in Cas~A. These ``unshocked ejecta'' were discovered via absorption of low frequency ($<$100~MHz) radio emission \citep{kpd95} and are also seen to radiate in the infrared in the emission lines of [\ion{O}{4}], [\ion{S}{3}], [\ion{S}{4}], and [\ion{Si}{2}] \citep{err06}. The term ``unshocked'' is somewhat of a misnomer because all of the ejecta were originally shocked by the passage of the blast wave through the star. However, the ejecta cooled during the subsequent expansion of the SNR. What we consider to be shocked ejecta today are those ejecta that have crossed through the reverse shock, with the term ``unshocked ejecta'' referring to those ejecta that are still interior to the reverse shock. Thus the simple cartoon of Cas~A's structure is that of cold ejecta in the interior of a roughly spherical shell composed of shocked gas that radiates strongly in multiple bands. At low frequencies, the radio emission from the far side of the shocked shell is absorbed by the cold, unshocked ejecta in the interior. In \S\ref{sec:geometry} we provide a more thorough description of the geometry assumed for our analysis. The infrared emission from the unshocked ejecta in Cas~A occurs because they have been photoionized. An analysis of the unshocked ejecta observed in the supernova remnant SN1006, based on calculations of photoionization cross-sections and Bethe parameters, showed that the unshocked ejecta are photoionized by two primary sources \citep{hf88}. Ambient ultraviolet starlight is all that is necessary to photoionize \ion{Si}{1} and \ion{Fe}{1} since their ionization potentials are below the Lyman limit. For ions with ionization potentials above the Lyman limit, the ultraviolet and soft X-ray radiation field of the shocked ejecta in SN1006 is such that species up to \ion{O}{3} (54.9~eV), \ion{Si}{4} (45.1~eV), and \ion{Fe}{4} (54.8~eV) can be photoionized. A similar analysis was performed for Cas~A assuming photoionization equilibrium, abundances appropriate for a core-collapse SNR, a thermal bremsstrahlung spectrum for the shocked ejecta, and utilizing a simple one-dimensional hydrodynamic model to follow Cas~A's evolution \citep{e09}. This simplified model predicts that [\ion{Si}{2}] (34.8\micron) and [\ion{O}{4}] (25.9\micron) should be the dominant infrared lines in the unshocked ejecta, directly in line with observations \citep{err06}. In addition, \citet{e09} predicts strong [\ion{O}{3}] (88.4\micron), which we will show in \S\ref{sec:composition} is present in spectra from the \emph{Infrared Space Observatory} \citep[\emph{ISO},][]{upc97,ds10}. Cas~A was spectrally mapped with the \emph{Spitzer} Space Telescope and Doppler shifts were measured, which allowed a 3D mapping of the ejecta distribution, including the unshocked component \citep{drs10,ird10}. We now know based on the \emph{Spitzer} data that the ejecta are organized into a ``thick disk'' structure, tilted at $\sim70\degr$ from the line-of-sight, providing further evidence that the explosion, or subsequent evolution of the SNR prior to the reverse shock encounter, must have been asymmetric. An upper limit of 100~cm$^{-3}$ was determined for the electron density of the unshocked ejecta based on infrared [\ion{S}{3}] line ratios, but the actual density is likely much lower \citep{e09,srd09}.
The absorption seen in the low frequency radio observations provides a means to probe the density and mass of the unshocked ejecta because the free-free optical depth ($\tau_{\nu}$) is related to emission measure and thus density. \citet{kpd95} attempted to determine the total mass of the unshocked ejecta, but due to using a $\tau_{\nu}$ appropriate for a hydrogenic gas and a temperature that was too high, arrived at 19M$_{\sun}$, which is unreasonably large considering that the total ejecta mass is likely only 2-4~M$_{\sun}$ \citep{hl12}. Given the role that the ejecta play in the evolution of SNRs, it is important to provide an accurate census of the total mass present, thus prompting a new look at the low frequency absorption analysis of \citet{kpd95}. \subsection{Low Frequencies on the VLA} \subsubsection{Low Frequencies on the Legacy VLA} The upgraded Karl G. Jansky Very Large Array (VLA) primarily accesses frequencies above 1~GHz through its broadband Cassegrain focus systems. Its predecessor, hereafter the ``legacy'' VLA, also accessed two relatively narrow bands below 1~GHz through its primary focus systems \citep{kpe93,kle07}. These included the ``P~band'' and ``4~band'' systems operating at 330~MHz (1990-2009) and 74~MHz (1998-2009), respectively. Both systems provided sub-arcminute resolution imaging and were widely used over their lifetimes. These systems were removed during the VLA upgrade and have been only recently replaced with a new ``Low Band'' receiving system \citep{ckh11}. The first call for proposals using the 330 MHz band of this new system was issued by NRAO in February, 2013. \subsubsection{Pie Town Link} The legacy 74~MHz and 330~MHz VLA systems achieved their maximum angular resolution of $\sim20\arcsec$ and $\sim6\arcsec$ in the A~configuration (maximum baseline $\sim$36~km), respectively. An 8-antenna prototype of the 74~MHz system was used to observe Cas~A \citep{kpd95} early on, adding to the body of VLA work extending from 330~MHz to higher frequencies. As the resolution was still relatively poor compared to shorter wavelengths, NRL and NRAO added a 74~MHz feed system to the Pie Town antenna\footnote{The Pie Town antenna already had a permanent 330~MHz feed as part of the VLBA.}, utilizing an optical fiber link connecting the innermost Very Large Baseline Array (VLBA) antenna to the legacy VLA\footnote{The optical fiber link was experimental and is no longer active.}. With a maximum baseline of 73~km, this improved the angular resolution at 74~MHz and 330~MHz by a factor of two to approximately 9$\arcsec$ and 3$\arcsec$ respectively. Cas~A was thereafter reobserved with the full (27-antenna) 74~MHz and the 330~MHz legacy systems using the Pie Town link capability in August 2003. \subsection{This Paper} In this paper, we report on legacy VLA observations in 1997-1998 at frequencies of 5~GHz, 1.4~GHz, 330~MHz, and 74~MHz, and on the Pie Town link, observations at frequencies of 330~MHz and 74~MHz in 2003. We use our derived images to determine the degree of free-free absorption present, applying the appropriate formula for when the composition of the gas is not dominated by hydrogen. Finally, we calculate the density and mass of the unshocked ejecta and discuss the implications of our results. | We have imaged Cas~A from 5~GHz to 74~MHz in all four configurations of the Legacy VLA with follow-up observations at 74 and 330~MHz with the legacy VLA+PT link. 
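For orientation, the quoted resolutions follow from a simple diffraction estimate (given here only as an illustrative aside; the synthesized beam actually achieved depends on the uv coverage and the weighting adopted in imaging):
\begin{equation}
\theta \simeq \frac{\lambda}{B_{\rm max}} \approx \frac{4.05~{\rm m}}{73~{\rm km}} \simeq 11\arcsec \;\; (74~{\rm MHz}), \qquad \theta \simeq 2.6\arcsec \;\; (330~{\rm MHz}),
\end{equation}
in reasonable agreement with the $\sim$9$\arcsec$ and $\sim$3$\arcsec$ values quoted above for the Pie Town link.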
Our spatially resolved spectral index maps confirm the interior spectral flattening measured earlier, but at higher signal-to-noise and resolution. Comparison with \emph{Spitzer} infrared spectra confirms the earlier hypothesis that the spectral flattening is due to thermal absorption by cool, unshocked ejecta photoionized by X-ray radiation from Cas~A's reverse shock. We use the spectral flattening to measure the free-free optical depth. Next, using priors of electron temperature, atomic number, and electron to ion ratios, we derive an emission measure from the measured optical depth. With an assumed geometry, informed from three-dimensional modeling based on higher frequency studies, we use the emission measure to place constraints on both the density and total mass of the unshocked ejecta. We consider modest, physically plausible variations in both our priors and the assumed geometry, and find that the effect on the total mass is relatively modest, varying by a factor of about two. Furthermore, our derived total mass is consistent with recent model predictions \citep{hl12}. After accounting for the relative ages of Cas~A and SN1006, our derived mass density is much higher than found in SN1006, not unexpected since Cas~A (Type IIb) and SN1006 (Type Ia) emerged from two fundamentally different supernova explosion types. However, if there is a systematic difference in unshocked ejecta density for core collapse vs. Type Ia SNRs, low frequency radio data can be used to test this hypothesis. Finally, we consider the contribution of the intrinsic thermal absorption to the known turnover of Cas~A's integrated spectrum at much lower frequencies. We find that the intrinsic thermal absorption from the unshocked ejecta, combined with extrinsic absorption from a known, patchy distribution of low density ISM gas, are completely consistent with the low frequency turnover. The promise of the emerging instruments is expanding the population of SNRs, young and old, that can be probed for intrinsic and extrinsic thermal absorption and shock acceleration variations beyond pathologically bright sources like Cas~A. More generally, the seemingly ubiquitous detection of resolved thermal absorption by the 74~MHz legacy VLA against the Galactic background \citep{nhr06} and towards, discrete non thermal sources \citep[e.g. see][]{llk01,blk05,cdb07} confirms the phenomena will continue to emerge as a powerful tool for low frequency astrophysics. | 14 | 3 | 1403.0032 |
1403 | 1403.6939_arXiv.txt | The galactic Cepheid S Muscae has recently been added to the important list of Cepheids linked to open clusters, in this case the sparse young cluster ASCC 69. Low-mass members of a young cluster are expected to have rapid rotation and X-ray activity, making X-ray emission an excellent way to discriminate them from old field stars. We have made an XMM-Newton observation centered on S Mus and identified (Table 1) a population of X-ray sources whose near-IR 2MASS counterparts lie at locations in the J, (J-K) color-magnitude diagram consistent with cluster membership at the distance of S Mus. Their median energy and X-ray luminosity are consistent with young cluster members as distinct from field stars. These strengthen the association of S Mus with the young cluster, making it a potential Leavitt Law (Period-Luminosity relation) calibrator. | Galactic open clusters are an important means of calibrating the absolute magnitude of Cepheids (An, Terndrup, and Pinsonneault 2007; Feast and Walker 1987; Turner and Burke 2002). Recently Anderson, Eyer, Mowlavi (2013, hereafter AEM) made an all-sky survey of possible linkages between Cepheids and parent clusters based on position, velocity, distance, abundance and age. They found a highly probable connection between the Cepheid S Mus and the sparse cluster ASCC 69 = [KPR2005] 69 (Kharchenko, et al. 2005) The decrease in X-ray activity in low mass stars as they age and spin down is well known (Pallavicini et al., 1981). This means that X-ray activity provides an excellent discriminant between young stars and the old field population. Physical companions of Cepheids must be young, and hence X-ray bright. We have used this approach to confirm possible resolved companions of Cepheids (Evans, et al. 2013; Evans, et al. 2014 in preparation) identified in a Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) survey of 69 bright Cepheids. In this study, we use X-rays to identify low mass members of the cluster. Identification of low mass stars as cluster members is usually plagued by contamination of old field stars of similar colors, which limits the value of this mass range in, for instance, determining the distance to the cluster and studying the cluster population. | An XMM-Newton observation centered on the Cepheid S Mus identifies a concentration of 19 X-ray sources with 2MASS magnitudes appropriate for a 30 Myr cluster at the distance of the Cepheid. These are low mass stars which are likely cluster members, supporting the identity of the sparse cluster ASCC 69. The paper demonstrates the value of using X-ray observations to identify young X-ray active low-mass cluster candidates from a large number of older stars in the field. Confirmation of the cluster strengthens the association of S Mus with the cluster. | 14 | 3 | 1403.6939 |
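As a brief aside on the luminosity comparison mentioned above, converting a detected X-ray flux into a luminosity for comparison with young cluster members requires only the adopted distance; the sketch below leaves the distance as a free parameter and uses placeholder numbers rather than measured values.
\begin{verbatim}
import math

PC_IN_CM = 3.086e18

def x_ray_luminosity(flux_cgs, distance_pc):
    """L_X = 4 pi d^2 F_X, with F_X in erg/s/cm^2 and d in parsec."""
    d_cm = distance_pc * PC_IN_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# placeholder flux and distance, for illustration only
print("%.2e erg/s" % x_ray_luminosity(1e-14, 800.0))
\end{verbatim}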
1403 | 1403.8115_arXiv.txt | % We present nanoscale explosives as a novel type of dark matter detector and study the ignition properties. When a Weakly Interacting Massive Particle WIMP from the Galactic Halo elastically scatters off of a nucleus in the detector, the small amount of energy deposited can trigger an explosion. For specificity, this paper focuses on a type of two-component explosive known as a nanothermite, consisting of a metal and an oxide in close proximity. When the two components interact they undergo a rapid exothermic reaction --- an explosion. As a specific example, we consider metal nanoparticles of 5 nm radius embedded in an oxide. One cell contains more than a few million nanoparticles, and a large number of cells adds up to a total of 1 kg detector mass. A WIMP interacts with a metal nucleus of the nanoparticles, depositing enough energy to initiate a reaction at the interface between the two layers. When one nanoparticle explodes it initiates a chain reaction throughout the cell. A number of possible thermite materials are studied. Excellent background rejection can be achieved because of the nanoscale granularity of the detector: whereas a WIMP will cause a single cell to explode, backgrounds will instead set off multiple cells. If the detector operates at room temperature, we find that WIMPs with masses above 100 GeV (or for some materials above 1 TeV) could be detected; they deposit enough energy ($>$10 keV) to cause an explosion. When operating cryogenically at liquid nitrogen or liquid helium temperatures, the nano explosive WIMP detector can detect energy deposits as low as 0.5 keV, making the nano explosive detector more sensitive to very light $<$10 GeV WIMPs, better than other dark matter detectors. | Introduction} The majority of the mass in the Universe is known to consist of dark matter (DM) of unknown composition. Identifying the nature of this dark matter is one of the outstanding problems in physics and astrophysics. Leading candidates for this dark matter are Weakly Interacting Massive Particles (WIMPs), a generic class of particles that includes the lightest supersymmetric particle. These particles undergo weak interactions and their expected masses range from 1~GeV to 10~TeV. Many WIMPs, if present in thermal equilibrium in the early universe, annihilate with one another, leaving behind a relic density found to be roughly the correct value. Furthermore, recent interest in low mass WIMPs lead us to mention Asymmetric Dark Matter models, which naturally predict light WIMPs \cite{Zurek}. Thirty years ago, Refs. \cite{Drukier:1983gj, Goodman:1984dc} first proposed the idea of detecting weakly interacting particles, including neutrinos and WIMPs, via coherent scattering with nuclei. Soon after \cite{DFS} computed detection rates in the context of a Galactic Halo of WIMPs. This work also showed that the count rate in WIMP direct detection experiments will experience an annual modulation \cite{DFS,Freese:1987wu} as a result of the motion of the Earth around the Sun. Then development of ultra-pure Ge detectors permitted the first limits on WIMPs \cite{Ahlen:1987mn}. Since that time, a multitude of experimental efforts to detect WIMPs has been underway, with some of them currently claiming detection. The basic goal of direct detection experiments is to measure the energy deposited when weakly interacting particles scatter off of nuclei in the detector, depositing small amounts of energy, e.g. 1-10 keV, in the nucleus. 
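The kinematics behind these numbers can be made explicit (an illustrative aside; the velocity and target nuclei below are chosen only as examples). For a WIMP of mass $m_{\chi}$ and velocity $v$ scattering elastically off a nucleus of mass $m_N$, the maximum recoil energy is
\begin{equation}
E_R^{\rm max}=\frac{2\,\mu_N^2 v^2}{m_N}, \qquad \mu_N=\frac{m_{\chi} m_N}{m_{\chi}+m_N},
\end{equation}
which for $v \simeq 230$ km s$^{-1}$ gives roughly 20 keV for a 100 GeV WIMP on aluminum but below $\sim$1 keV for a 10 GeV WIMP on a heavy nucleus such as tantalum, illustrating why sub-keV thresholds are needed to reach the light-WIMP regime.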
A recent review of the basic calculations of dark matter detection, with an emphasis on annual modulation, may be found in \cite{Freesereview}. Numerous collaborations worldwide have been searching for WIMPs using a variety of techniques to detect the nuclear recoil. In this paper we elaborate on a novel mechanism for direct detection of WIMPs using explosives \cite{Andrzej}. The small amount of energy deposited in the nucleus by the WIMP scattering event can be enough to trigger an explosion. The registration of such an explosion then indicates that a WIMP/nucleon scattering event took place. In our search for appropriate explosive materials, we realized a key limitation, which we named ``Greg's rule." Everything on the surface of the Earth, including the conventional chemical explosives, has been constantly bombarded by ionizing particles coming from trace amounts of naturally occurring radioactive materials and cosmic radiation. Since conventional explosives can be stored in large quantities for extended periods of time (without blowing up), we may conclude that all the conventional explosives that are currently being used in commercial or military applications cannot be used in DM detection applications. This does not imply that there are no explosives that can be detonated by a single highly ionizing particle. If one were to synthesize such a material it would be highly unstable and would mysteriously explode. We need to be ``contrarians" and test such ``unsafe" explosives, which were discovered but rejected in prior R\&D. Luckily there are two directions to pursue. First, the chemical explosive, nitrogen triiodide (NI$_3$), has been studied and can be ignited by a single highly ionizing particle (e.g. an $\alpha$-particle) \cite{old_stuff}. Future work on using NI$_3$ for DM detectors will be interesting. In this paper we instead study the second approach, nanothermites. Thermites have been used for more than 100 years to obtain bursts of very high temperatures in small volumes, typically a few cm$^3$. Thermites are two-component explosives, consisting of a metal and either an oxide or a halide. These two components are stable when kept separated from one another, but when they are brought together they undergo a rapid exothermic reaction --- an explosion. The classical examples are \begin{eqnarray} \text{Al}_2 + \text{Fe}_2\text{O}_3 & \rightarrow & \text{Al}_2\text{O}_3 + 2 \text{Fe} + 851.5\ {\rm kJ/mole},\label{thermite} \\ \text{Al}_2 + \text{WO}_3 & \rightarrow & \text{Al}_2\text{O}_3 + \text{W} + 832.0\ {\rm kJ/mole}.\\ \nonumber \end{eqnarray} One advantage of thermites is the impressive number of elements that can be used. Classic implementations of thermites use micron-scale (1 to 10 microns) granulation, but in recent years nano-sized granules of high explosives have been increasingly used \cite{nano_thermites}. These nano-thermites make interesting dark matter detectors. When a WIMP strikes the metal layer, the metal may heat up sufficiently to overcome the chemical energy barrier between the metal and metal-oxide. An explosion results. Nanoexplosive dark matter detectors have several advantages: \begin{enumerate} \item They can operate at room temperature; \item Low energy threshold of 0.5 keV, allowing for study of low mass $<10$ GeV WIMPs; \item Flexibility of materials: One may choose from a variety of elements with high atomic mass ({\it e.g.} Tl or Ta) to maximize the spin-independent scattering rate.
Given a variety of materials one can also extract information about the mass and cross section of the WIMPs; \item One can also select materials with high nuclear spin to maximize spin-dependent interaction rate; \item Signal is amplified by the chain reaction of explosions; \item Excellent background rejection due to physical granularity of the detector. Because the cells containing the nanoparticles are less than a micron in size, the detector has the resolution to differentiate between WIMP nuclear recoils, which only interact with one cell of our detectors, and other backgrounds (such as $\alpha$-particles, $\beta$-particles and $\gamma$-rays) which travel through many cells. Thus, if the background has enough energy to cause the ignition of one cell, then it would ignite multiple cells. In the section $\textbf{Backgrounds}$, the typical ranges ( $\gtrsim 10$ $\mu$m) of $\alpha$ and $\beta$ particles are shown. \item Depending on the specifics of the detector design, the possibility of directional sensitivity with nanometer tracking; this possibility will be studied in future papers. \end{enumerate} To allow for specific calculation we study oxide-based nano-thermites, which consists of metal spheres with a radius of 5 nm embedded in an oxide. Motivated by their optical, magnetic and electronic applications, metal nanoparticles have been synthesized using both liquid and gas phase methods \cite{Oushing}\cite{Kruis}. In situations where the metal nanoparticles are susceptible to oxidation, the nanoparticles can be coated by a thin layer of an inert metal \cite{O'Conner}. To form a nano-thermite the metal particles must be mixed by an appropriate gel of oxide \cite{Nano-wire}. Alternatively, the oxide can be replaced by an appropriate halide \cite{Andrzej}. Enough energy deposit in the metal sphere heats it up to the point where there is an explosion beginning at the interface of the two materials at the edge of the nanoparticle. As a specific design, we imagine constructing a ``cell" which consists of $\sim 10^6$ metal nanoparticles embedded in an oxide. A full detector will need many of these cells; e.g. to obtain 1 kg of target material (the metal) there will be $\sim 10^{14}$ cells. A WIMP hitting the target will cause only one of these cells to explode. \begin{figure}[h!] \includegraphics[width=1.1\linewidth]{detectorfinal1.jpg} \caption{This figure depicts a schematic view of the nano-thermite detector studied. An array of cells of length $0.5 \ \mu$m is embedded into an insulator, which thermally decouples the cells from each other. Each cell contains more than a few million metal nanoparticles embedded into a metal-oxide. Two different images are depicted at the bottom of the figure: (a )shows the design model used for all calculations, and (b) represents a more realistic depiction of the nano-thermite detector. The dissimilarity between both images is the addition of a passivation layer in image (b). A passivation layer is a metal-oxide coating placed around the nanoparticle in order to prevent oxygen molecules interacting with the metal. An oxidized metal will not react chemically with a metal-oxide, since it is no longer favorable to gain oxygen atoms. Thus, an oxidized metal will not produce a thermite reaction. The passivation layer covering the metal nanoparticle would be required in the synthesis of the detector; since it would prevent oxidation of the metal nanoparticle during construction of the detector (i.e. before embedding the nanoparticle into the cell). 
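As a quick consistency check on these numbers (a sketch with handbook densities; the per-cell count is the nominal value quoted above), the following estimates the mass of a single 5 nm nanoparticle and the number of cells needed to reach 1 kg of target metal.
\begin{verbatim}
import math

R_NP = 5e-9          # nanoparticle radius [m]
N_PER_CELL = 1e6     # nominal number of nanoparticles per cell
TARGET_MASS = 1.0    # total target metal mass [kg]

def cells_for_target(density_kg_m3):
    v_np = 4.0 / 3.0 * math.pi * R_NP**3      # nanoparticle volume [m^3]
    m_np = density_kg_m3 * v_np               # nanoparticle mass [kg]
    n_np = TARGET_MASS / m_np                 # nanoparticles needed for 1 kg
    return m_np, n_np, n_np / N_PER_CELL      # mass, count, number of cells

for metal, rho in [("Al", 2.70e3), ("Ta", 1.66e4)]:
    m_np, n_np, n_cells = cells_for_target(rho)
    print("%s: m_np=%.1e kg, N_np=%.1e, N_cells=%.1e"
          % (metal, m_np, n_np, n_cells))
\end{verbatim}
For a dense metal such as Ta this gives of order $10^{14}$ cells, in line with the figure quoted above; for lighter metals the required number of cells is several times larger.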
As well, in some differing implementations, the metal-oxide of the cell could be comprised of mixed nano-wires \cite{Nano-wire}, which would produce a larger temperature increase due to a higher effective thermal resistance between the oxide and the metal. Image (a) represents a simplified design model, which enabled analytic results in later sections.} \label{detectorfinal} \end{figure} More precisely, when a WIMP elastically scatters with a metal nucleus and deposits energy to the metal, then that energy is converted into a temperature increase. If the temperature increase is big enough to overcome the potential barrier of the thermite reaction, then the metal will react with the surrounding oxidizer exothermically. In the design using metal nanoparticles, after the first thermite reaction of one nanoparticle occurs, the exothermic heat produced by the thermite will heat up the other metal nanoparticles within the 0.5 $\mu$m cell; thus creating a chain reaction which amplifies the signal to a measurable effect. Utilizing Eq \ref{thermite} as an example, the amplification factor for the signal is on the order of $10^4$-$10^5$. The detection of the cell explosions could be made by sensitive microphones or spectroscopic studies of the debris. Figure \ref{detectorfinal} shows a schematic representation of the nano-thermite detector studied in this paper. On top, the first picture of Figure \ref{detectorfinal} shows an array of cells embedded into an insulating material. The insulator is used to thermally decouple the cells; so that the reaction within a cell does not cause the explosion of neighboring cells. The length of each cell is taken to be $0.5 \ \mu$m. The spatial scale of the cells enable us to distinguish background from WIMP/nucleus collisions. Backgrounds composed of $\alpha$, $\beta$ and $\gamma$ particles will traverse multiple cells; whereas a recoiled ion from a WIMP/nucleus collision will only interact with a single cell. The middle picture in the figure is a magnified view of an individual cell. Inside each cell there will be more than a few million nanoparticles. The nanoparticles, represented by the white circles, are embedded into the metal-oxide, shown as the black background. Finally, the bottom pictures of Figure \ref{detectorfinal} depicts an enlarged section of the cell surrounding a single nanoparticle of radius $5$ nm. There are two pictures at the bottom. Image (a) shows the simplified model used to make all the calculations in the sections $\mathbf{Temperature\ Increase}$ and $\mathbf{Results}$. In contrast, image (b) depicts a more realistic design for the nano-thermite detector. A thin passivation layer is placed around the metal to prevent oxidation of the metal during the construction of the detector (i.e. before embedding the metal nanoparticle into the cell). The passivation layer is a metal-oxide coating placed around the nanoparticle in order to prevent oxygen molecules interacting with the metal. An oxidized metal will not react chemically with a metal-oxide, since it is no longer favorable to gain oxygen atoms. Thus, an oxidized metal will not produce a thermite reaction. However, the passivating barrier is lowered if the metal nanoparticle or the passivation layer melts due to the temperature increase. In a realistic scenario, the synthesis of the nanoparticle embedded into the oxide would require a passivation layer. 
It should be noted that the addition of an extra layer between the metal and the oxide of the cell would produce an additional thermal resistance at the interfaces. This thermal resistance would cause the metal to hold in heat; and thus, increase the temperature increase yield after a WIMP/nucleus collision, when compared to the results presented in this paper. As well, in some differing implementations, the metal-oxide of the cell could be comprised of mixed nano-wires \cite{Nano-wire}, which would produce a larger temperature increase due to a higher effective thermal resistance between the oxide and the metal. As explained in the section $\mathbf{Temperature\ Increase}$, the temperature increase is calculated utilizing the design model of image (a) (i.e. no passivation layer) and zero thermal resistance between the oxide and the metal nanoparticle. Thus, our calculations are conservative and underestimate the temperature increase sourced by an elastic collision between a WIMP and a metal nucleus. More generally, many other detector designs may be possible, such as two parallel layers of the two components. This latter design would allow determination of the direction from which the WIMP came, as only WIMPs headed first into the metal (rather than first into the oxide) would initiate an explosion. The goal of this paper is to study the ignition of the explosion when a WIMP hits the metal nanoparticle. A parallel paper \cite{Andrzej} studies the nano boom dark matter detectors more generally, including methods of detection and readout of the explosion; alternate explosives other than thermites; and other aspects of the problem. In this paper we begin by reviewing the relevant particle and astrophysics of direct detection, and then turn to the viability of a nanothermite detector for WIMPs. For our calculations we consider WIMP masses of $m_{\chi}=10,100\ \text{and\ }1000$ GeV. | Summary} We have studied the ignition properties of nanoscale explosives as a novel type of dark matter detector. Other design concepts may be employed for the nanothermite dark matter detector, which could obtain lower energy thresholds and/or measure directionality of the recoiling nucleus sourced by a WIMP/nucleus interaction. We focused on two-component nanothermite explosives consisting of a metal and an oxide. As a specific example, we considered metal nanoparticles of 5 nm radius embedded in a gel of oxide, with millions of these nanoparticles constituting one ``cell" isolated from other cells. A large number of cells adds up to a total of 1 kg detector mass. A WIMP striking a metal nucleus in the nanoparticle, deposits energy that may be enough to initiate a reaction at the interface between the two layers. We calculated the temperature increase of a metal nanoparticle due to a WIMP interaction and compared it to the ignition temperature required for the nanoparticle to explode. We computed the range of the nuclear recoil using the Lindhard formula; if the recoiling nucleus did not stop inside the nanoparticle, we considered only the fraction of the energy that was deposited inside the metal nanoparticle itself. This energy fraction was then converted to a temperature increase. We needed to know how long the nanoparticle remained hot in order to determine whether an explosion was set off. This timescale was obtained from the heat transfer equation. All assumptions made during the calculations were chosen in the spirit of being as conservative as possible. 
We then compared the temperature increase to the ignition temperature required to set off a nanothermite explosion. This ignition temperature varies for different thermite materials, and was computed by requiring two conditions to be met: (i) for each of the thermites we considered, there should be no spontaneous combustion of any of the metal nanoparticles for at least a time period of one year, and (ii) the temperature increase from a WIMP interaction must be sufficiently high to overcome an activation barrier and allow the thermite reaction to proceed. We searched through a variety of thermite materials to find those whose temperature increases from WIMP interactions would exceed their ignition temperatures for an explosion. We found aluminum, ytterbium, thallium and tantalum to be particularly suited to discover WIMPs via the explosion they would induce. We note that our model assumed that both the metal and oxide interact as solids. However, if the metal changes physical state into a gas due to a correspondingly high temperature increase, then the nano-thermite reaction rate may drastically increase. Excellent background rejection can be achieved because of the nanoscale granularity of the detector. The WIMP makes only one cell explode: the chain reaction initiated by one exploding metal nanoparticle is restricted to nanoparticles within only one cell, which is thermally isolated from neighboring cells by an insulating material. The range of the $\alpha$-particles on the other hand is longer than size of one cell, on the order of $10\ \mu$m, and therefore makes approximately 20 or more cells explode at once. Thus the nanoscale granularity is key for background rejection. Betas and gammas, on the other hand, rarely set off an explosion at all. We found a number of thermites that would serve as efficient WIMP detectors. Using a single model, we found that if the detector operates at room temperature, WIMPs with masses above 100 GeV (or for some materials above 1 TeV) could be detected; they deposit enough energy ($>$10 keV) to cause an explosion. When operating cryogenically at liquid nitrogen or liquid helium temperatures, the nano explosive WIMP detector can detect energy deposits as low as 0.5 keV, making the nano explosive detector sensitive to very light $<$10 GeV WIMPs. Even with the conservative model presented in this paper, our calculations suggest that oxide-based nano-thermites would work as a dark matter detector. We look forward to experiments which will establish accurately the minimal energy deposition by a recoiling nucleus necessary for a nano-thermite combustion. \\ | 14 | 3 | 1403.8115 |
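To give a rough feeling for the temperature increases involved, a zeroth-order sketch is shown below; it assumes the full deposit thermalizes uniformly in the nanoparticle with no heat loss and no escaping recoil, so it is deliberately simpler than the calculation summarized above, and the material constants are standard handbook values for aluminum.
\begin{verbatim}
import math

def delta_T(E_keV, radius_m=5e-9, rho=2.70e3, c_p=900.0):
    """Adiabatic temperature rise of a metal nanosphere absorbing E_keV.
    Defaults approximate aluminum (rho in kg/m^3, c_p in J/kg/K)."""
    E_joule = E_keV * 1.602e-16
    mass = rho * (4.0 / 3.0) * math.pi * radius_m**3
    return E_joule / (mass * c_p)

print(delta_T(10.0))   # ~1.3e3 K for a 10 keV deposit in a 5 nm Al sphere
\end{verbatim}
In this idealized limit a deposit of a few keV already heats a 5 nm grain by hundreds of kelvin, consistent with the keV-scale thresholds discussed above; the full treatment, which follows the recoil range and the heat flow into the oxide, necessarily yields smaller increases.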
1403 | 1403.4082_arXiv.txt | {The recent literature suggests that an evolutionary dichotomy exists for early-type galaxies (Es and S0s, ETGs) whereby their central photometric structure ({\it cuspy} versus {\it core} central luminosity profiles), and figure of rotation (fast (FR) vs. slow (SR) rotators), are determined by whether they formed by ``wet'' or ``dry'' mergers.} {We consider whether the mid infrared (MIR) properties of ETGs, with their sensitivity to accretion processes in particular in the last few Gyr (on average $z\lesssim$0.2), can put further constraints on this picture.} {We investigate a sample of 49 ETGs for which nuclear MIR properties and detailed photometrical and kinematical classifications are available from the recent literature.} {In the stellar light {\it cuspy/core} ETGs show a dichotomy that is mainly driven by their luminosity. However in the MIR, the brightest {\it core} ETGs show evidence that accretions have triggered both AGN and star formation activity in the recent past, challenging a ``dry'' merger scenario. In contrast, we do find, in the Virgo and Fornax clusters, that {\it cuspy} ETGs, fainter than M$_{K_s}=-24$, are predominantly passively evolving in the same epoch, while, in low density environments, they tend to be more active.\\ A significant and statistically similar fraction of both FR (38$^{+18}_{-11}$\%) and SR (50$^{+34}_{-21}$\%) shows PAH features in their MIR spectra. Ionized and molecular gas are also frequently detected. Recent star formation episodes are then a common phenomenon in both kinematical classes, even in those dominated by AGN activity, suggesting a similar evolutionary path in the last few Gyr.} {MIR spectra suggest that the photometric segregation between {\it cuspy} and {\it core} nuclei and the dynamical segregation between FR and SR must have originated before $z\sim$0.2).} \keywords {Galaxies: elliptical and lenticular, cD -- Infrared: galaxies -- Galaxies: fundamental parameters -- Galaxies: formation -- Galaxies: evolution} | A relatively large fraction of ETGs at high-redshift show clear evidence of interaction and/or merger morphologies and active star formation \citep[e.g.][]{Treu05} supporting the model view these galaxies are produced by a halo merger process \citep[see e.g.][and reference therein]{Mihos04,Cox08,Khochfar11,DeLucia11}. A further element for high-redshift formation scenarios comes from their measured [$\alpha$/Fe] ratios, encoding information about the time-scale of star formation. In massive ETGs this ratio has super-solar values, suggesting that they formed on relatively short time-scales \citep[see e.g][]{Chiosi98,Granato04,Thomas05,Annibali07,Clemens09}. \citet{Annibali07} estimated that a fair upper limit to the recent {\it rejuvenation} episodes is $\sim$25\% of the total galaxy mass but that they are typically much less intense than that \citep[see e.g. the {\it Spitzer}-IRS study of NGC 4435 by][]{Panuzzo07}. However, rejuvenation signatures in ETGs are often detected, not only in the galaxy nucleus, but also in the disk, rings and even in galaxy outskirts, as clearly shown by GALEX \citep[e.g][]{Rampazzo07,Marino09,Salim10,Thilker10,Marino11}, so that the different phases of galaxy assembly/evolution, and their link to morphological and kinematical signatures, are vivaciously debated. The merger process may involve either relatively few (major) or multiple (minor) events during the galaxy assembly. 
Furthermore, it may or may not include dissipation (and star formation), two possibilities often called ``wet" and ``dry'' mergers, respectively \citep[see e.g.][]{vanDokkum05}. Other mechanisms, however, like conversion of late-type galaxies into ETGs by environmental effects, like strangulation, ram-pressure etc. \citep[e.g.][]{Boselli06}, and by energy feedback from supernovae may also be important \citep[e.g.][]{Kormendy09}. Two {\it observable} quantities are thought to distinguish ETGs produced by ``wet'' and ``dry'' mergers. The first, mainly fruit of high resolution observations with the {\it Hubble} Space Telescope and of high precision photometric analyses, is the presence of either a {\it cusp} or a {\it core} in the inner galaxy luminosity profile \citep{Lauer91,Lauer92,Cote06,Turner12}. In contrast to {\it cuspy} profiles, the surface brightness in {\it core} profiles becomes shallower as $r \rightarrow 0$. The same concept is considered by \citet{Kormendy09} who divide ETGs into {\it cuspy--core} and {\it core--less}, depending on whether the luminosity profile {\it misses light} or has an {\it extra-light} component with respect to the extrapolation of the Sersic's law at small radii. \citet{Kormendy09} suggest that {\it cuspy--core} nuclei have been scoured by binary black holes (BHs) during (the last) dissipationless, ``dry'', major merger. In contrast, {\it core--less} nuclei originate from ``wet'' mergers. Analogously, \citet{Cote06} and \citet{Turner12} found an extra stellar nucleus in the profile decomposition of ETGs in their Virgo (ACSVCS) and Fornax (ACSFCS) surveys in addition to simple Sersic profiles, in particular in low-luminosity(/mass) ETGs. They proposed that the most important mechanism for the assembly of a stellar nucleus is the infall of star clusters through dynamical friction, while for more luminous(/massive) galaxies a ``wet scenario'' (gas accretion by mergers/accretions and tidal torques) dominates. The second observable quantity is the kinematical class. The class is defined by a parameter describing the specific baryonic angular momentum defined as follows, $\lambda_r$=$\langle r|V|\rangle/\langle r \sqrt{V^2 + \sigma^2}\rangle$, where $r$ is the galacto-centric distance, $V$ and $\sigma$ are luminosity weighted averages of the rotation velocity and velocity dispersion over a two-dimensional kinematical field. The measure refers to the inner part of the galaxy, typically of the order or less than 1 effective radius, $r_e$, i.e. significantly larger than the regions where cusps and cores are detected. $\lambda_r$ divides ETGs into the two classes of fast (FR) and slow (SR) rotators \citep[][and reference therein]{Emsellem11}. FR are by far the majority of ETGs (86$\pm$2\% in the ATLAS$^{3D}$ survey). SR represent massive ETGs that might have suffered from significant merging without being able to rebuild a fast rotating component. \citet[][]{Khochfar11} find that the underlying physical reason for the different growth histories is the slowing down, and ultimately complete shut-down, of gas cooling in massive, SR galaxies. On average, the last gas-rich major merger interaction in SR happens at $z > 1.5$, followed by a series of minor mergers which build-up the outer layers of the remnant, i.e. do not feed the inner part of the galaxy. FRs in the models of \citet{Khochfar11} have different formation paths. 
The majority (78\%) have bulge-to-total stellar mass ratios (B/T) larger than 0.5 and manage to grow stellar discs due to continued gas cooling as a results of frequent minor mergers. The remaining 22\% live in high--density environments and consist of low B/T galaxies with gas fractions below 15\%, that have exhausted their cold gas reservoir and have no hot halo from which gas can cool. Summarizing, a dissipative merging and/or a gas accretion episode from interacting companions, could be the way for the galaxy to rebuild a fast-rotating disk-like component. SR and FR basically correspond to the paradigms of ``dry'' vs. ``wet'' accretions/mergers respectively. Recently, \citet{Lauer12} attempted to unite the structural and kinematical views, claiming that they are the two aspects of the same process. Using the specific angular momentum $\lambda_{r_e/2}$, computed from the 2D kinematics within half the effective radius by \citet{Emsellem11}, \citet{Lauer12} showed that {\it core} galaxies have rotation amplitudes $\lambda_{r_e/2} \leq 0.25$ while all galaxies with $\lambda_{r_e/2} > 0.25$ and ellipticity $\epsilon_{r_e/2} > 0.2$ lack cores. Some FR have a core profile but they argue that both figure of rotation and the central structure of ETGs should be used together to separate systems that appear to have formed from ``wet'' and ``dry'' mergers. \citet{Krajnovic13b} show, however, that there is a genuine population of FR with cores. They suggest that the cores of both FR and SR are made of old stars and are found in galaxies typically lacking molecular and atomic gas, with few exceptions. For the sake of simplicity throughout the paper, we will call {\it core} ETGs those galaxies for which the luminosity profile shallows out as $r \rightarrow 0$ (i.e. cuspy-core in \citet{Kormendy09}, core in \citet{Lauer12}, non-nucleated in \citet{Cote06,Turner12}). We will refer to {\it cuspy} ETGs as those which present an extra central luminosity component (i.e. core-less in \citet{Kormendy09}, power-law + intermediate in \citet{Lauer12}, nucleated in \citet{Cote06,Turner12}) with respect to a fit of a Sersic model. Depending on the accurate surface brightness profile decomposition performed by the above authors, rarely the {\it cuspy} versus {\it core} classification given by different authors for the same ETG is discrepant. This note aims to contribute to the debate on the origin of {\it core}/{\it cuspy} and FR/SR ETGs, and the connection to the ``wet'' vs. ``dry'' merger hypotheses, using mid-infrared (MIR) spectra of well-studied ETGs. The paper is organized as follows. In \S~2 we briefly describe how {\it Spitzer}-IRS spectra trace the recent few Gyr evolution in ETGs. We present the MIR vs. the {\it cuspy}/{\it core} nuclear properties of ETGs \citep{Kormendy09,Cote06,Turner12,Lauer12} in \S~3, and vs. the FR/SR kinematical classes \citep{Emsellem11} in \S~4. Conclusions are presented in \S~5. 
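For reference, the $\lambda_r$ parameter defined above is evaluated in practice from binned two-dimensional maps; a minimal sketch of the flux-weighted discretized form is given below (the choice of aperture, binning and the FR/SR dividing line are not specified here and would follow the ATLAS$^{3D}$ prescriptions).
\begin{verbatim}
import numpy as np

def lambda_R(flux, radius, vel, sigma):
    """Discretized spin parameter from 2D maps:
    sum(F_i R_i |V_i|) / sum(F_i R_i sqrt(V_i^2 + sigma_i^2)),
    with all inputs given per spatial bin inside the chosen aperture."""
    flux, radius = np.asarray(flux, float), np.asarray(radius, float)
    vel, sigma = np.asarray(vel, float), np.asarray(sigma, float)
    num = np.sum(flux * radius * np.abs(vel))
    den = np.sum(flux * radius * np.sqrt(vel**2 + sigma**2))
    return num / den
\end{verbatim}
Galaxies with low $\lambda_r$ at a given ellipticity are then classed as slow rotators, as in the classification adopted above.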
\begin{table*} \begin{minipage}{180mm} \caption{The Virgo sample in \citet{Kormendy09} with {\it Spitzer}-IRS MIR class} \begin{tabular}{llccccccc} \hline Galaxy & RSA & n$_{tot}$ & \% & $M_{K_s}$ & MIR & Kinematical & Kinematical and Morphological & Dust\\ & Type & & light& & class & class & peculiarities &\\ & & & & & & & & \\ \hline core & & & missing light & & & & \\ \hline NGC 4472 & E1/S0$_1$(1) & 5.99$^{+0.31}_{-0.29}$ & -0.50$\pm$0.05 & -25.73 & 1 &SR & CR s-s (1) & Y\\ NGC 4486 & E0 & 11.84$^{+1.79}_{-1.19}$ & -4.20$\pm$1.00 & -25.31 & 4& SR & SC(2), jet & Y\\ NGC 4649 & S0$_1$(2) & 5.36$^{+0.38}_{-0.32}$ & -1.05$\pm$0.07 & -25.35 & 1 & FR & asym. rot. curve (6) & N\\ NGC 4365 & E3 & 7.11$^{+0.40}_{-0.35}$ & -0.63$\pm$0.07 & -25.19 & 0 & SR & Faint SW fan (4) & N \\ NGC 4374 & E1 & 7.98$^{+0.71}_{-0.56}$ & -1.52$\pm$0.05 & -25.13 & 2 & SR &V$\approx$0 (3); SC (2) & Y \\ NGC 4261 & E3 & 7.49$^{+0.82}_{-0.60}$ & -1.84$\pm$0.05 & -25.24 & 4& SR & NW tidal arm, faint SE fan (4) & Y\\ NGC 4382 & S0$_1$(3) pec & 6.12$^{+0.31}_{-0.27}$ & -0.18$\pm$0.06 & -25.13 & 1 &FR& MC(2), shells (7) & Y \\ NGC 4636 & E0/S0$_1$(6) & 5.65$^{+0.48}_{-0.28}$ & -0.22$\pm$0.04 & -24.42 & 2& SR & gas irr. motion (a) & Y\\ NGC 4552 & S0$_1$(0) & 9.22$^{+1.13}_{-0.83}$ & -1.23$\pm$0.09 & -24.31 & 2 & SR & KDC (2) shells (4) & Y \\ & & & & & & & \\ \hline cuspy & & & extra light & & & & \\ \hline & & & & & & & \\ NGC 4621 & E5 &5.36$^{+0.30}_{-0.28}$ & 0.27$\pm$0.06 & -24.13 & 0 & FR& KDC (2) & N\\ NGC 4473 & E5 & 4.00$^{+0.18}_{-0.16}$ & 8.80$\pm$1.00 & -23.77 & 0 & FR & MC (2) &N \\ NGC 4478 & E2 & 2.07$^{+0.08}_{-0.07}$ & 1.12$\pm$0.15 & -23.15 & 0&FR & \dots & N\\ NGC 4570 & S0$_1$(7)/E7 & 3.69$\pm$0.50 & \dots & -23.49 & 0 & FR & MC (2) & N\\ NGC 4660 & E5 & 4.43$\pm$0.38 & \dots & -22.69 & 0 & FR & MC (2) & N\\ NGC 4564 & E6 & 4.69$\pm$0.20 & \dots & -23.09 & 0 & FR & SC (2) & N\\ \hline \hline \end{tabular} \label{tab1} Notes: {\it Core} and {\it cuspy} galaxies correspond to cuspy-core and core-less, respectively, in \citet{Kormendy09}. The Sersic's index, $n_{tot}$, and the percentage of extra light are taken from \citet{Kormendy09}. The $M_{K_s}$ absolute magnitude and the MIR class is obtained from \citet{Brown11} and \citet{Rampazzo13}. Kinematical classes are from \citet{Emsellem11}. Kinematical and morphological peculiarities are coded as follows: {CR s-s}: counter rotation stars vs. stars (1) \citet{Corsini98}; KDC indicates a kinematically decoupled component, not necessarily counter-rotation; MC multiple components; SC single component \citet[][]{Krajnovic08} (2). A description of the kinematic and morphological peculiarities of the galaxies and full references are reported in: (3) \citet[][]{Emsellem04}; (4) \citet{Tal09}; (5) \citet{MC83}; (6) \citet{Pinkney03}; (7) \citet{Kormendy09}. Dust properties (Y if present) are taken from \citet{Cote06} and Table B3 in \citet{Rampazzo13}. \end{minipage} \end{table*} | This note investigates whether the photometric segregation between {\it cuspy} and {\it core} nuclei and/or the dynamical segregation between fast and slow rotators can be attributed to formation via ``wet'' or ``dry'' mergers. We explore the question by comparing the MIR spectral characteristics with their {\it cuspy}/{\it core} morphology \citep[][]{Kormendy09,Lauer12,Cote06,Turner12,Krajnovic13a} and FR/SR characterization (ATLAS$^{3D}$). We use {\it Spitzer}-IRS spectra and MIR classes discussed in \citet{Panuzzo11} and \citet{Rampazzo13}. 
{\it These spectra are sensitive to the recent few Gyr ($z\lesssim0.2$) evolution of the ETGs.} We find the following: \begin{itemize} \item{With the exception of NGC~4365, which is passively evolving, MIR spectra of all the bright {\it core} ETGs in the \citet{Kormendy09} sample show nebular emission lines, and PAH features are detected in 5 out of 9 objects. These types of nuclei should have recently accreted gas-rich material. If such objects formed via ``dry'' mergers, the process was completed before $z\sim0.2$ and ``wet'' accretions have happened since. AGN feedback does not prevent a late star formation episode in the bright Es NGC~4374, NGC~4636 and NGC~4552. The few (6) faint, {\it cuspy} ETGs in the \citet{Kormendy09} sample all show passively evolving spectra, irrespective of their magnitude.} \item{MIR spectra of the total {\it cuspy/core} sample (44 ETGs) confirm that ETGs fainter than M$_{K_s}$=$-24$ mag, mostly {\it cuspy}, are predominantly passively evolving. This fact is particularly significant in Virgo and Fornax, where 82$^{+18}_{-10}$\% are {\it cuspy}, {\it the majority also FRs}. \citet{Kormendy09} noticed that {\it cuspy} ETGs have disky (positive $a_4/a$ Fourier coefficient) isophotes in the nuclear region, a structure that suggests some form of dissipation during the formation. The passive MIR spectra suggest that either this infall was sterile, i.e. without star formation, or happened at $z\gtrsim0.2$, so as to leave no trace in the present MIR spectra. The models of \citet{Khochfar11} suggest that a fraction of FR lying in high--density environments have a residual gas fraction below 15\%, i.e. they have exhausted their cold gas reservoir and have no hot halo from which gas can cool. Counterparts in low density environments (Figure~\ref{fig2}, right panel) show the tendency to be more gas rich and hence more active.} \item{A significant fraction of both FR (38$^{+18}_{-11}$\%) and SR (50$^{+34}_{-21}$\%) shows PAH features in their MIR spectra. Ionized and molecular gas are also commonly detected. Recent star formation episodes are not a rare phenomenon in either FR or SR, even in those dominated by AGN activity \citep[see also][]{Annibali10}. Recently, observing HI rich ETGs, \citet{Serra14} found that SRs are detected as often, host as much H I, and have a similar rate of H I discs/rings as FRs.} \end{itemize} Despite the expectation that the signature of a ``wet'' or ``dry'' merger is strongest in the galaxy nucleus, the nuclear MIR spectra do not clearly link the {\it core} versus {\it cuspy} morphology and the FR versus SR kinematical class to these alternative formation scenarios. Within the last few Gyr, the dichotomy emerges only at the two extremes of the ETG luminosity: the brightest {\it core} ETGs, mostly SR, and the faint {\it cuspy} ETGs, mostly FR (in the Virgo and Fornax clusters), separate into mostly active and mostly passive objects, respectively. This result, however, is in contrast to what is expected for {\it core}-SR versus {\it cuspy}-FR, i.e. the ``dry'' versus ``wet'' accretion scenarios. The obvious possibility is that these photometric and kinematical classes are signatures generated by the two different evolutionary scenarios at $z\gtrsim0.2$ \citep[see e.g.][]{Khochfar11}, so that they do not affect the MIR spectra. On the other hand, adopting the traditional E/S0 morphological subdivision, \citet{Rampazzo13} found that Es are significantly more passive than S0s in the same epoch.
FR/SR classes may smooth away differences between Es and S0s, since a large fraction of Es fall into the FR class. At the same time, recent observations tend to emphasize the complexity of ETGs when their study is extended to large radii \citep[see also][]{Serra14}. \citet{Arnold14} recently obtained extended kinematics, out to 2--4 $r_e$, for 22 ETGs in the RSA. They find that only SRs remain slowly rotating in their halos, while the specific angular momentum of ETGs classified as FR within 1 $r_e$ may change dramatically at larger radii. \citet{Arnold14} suggest that the traditional E/S0 classification better accounts for the observed kinematics up to large radii and, likely, for their complex evolutionary scenario. | 14 | 3 | 1403.4082
1403 | 1403.2567_arXiv.txt | We relate the observed hemispherical anisotropy in the cosmic microwave radiation data to an inhomogeneous power spectrum model. The hemispherical anisotropy can be parameterized in terms of the dipole modulation model. This model leads to correlations between spherical harmonic coefficients corresponding to multipoles, $l$ and $l+1$. We extract the $l$ dependence of the dipole modulation amplitude, $A$, by making a fit to the WMAP and PLANCK CMBR data. We propose an inhomogeneous power spectrum model and show that it also leads to correlations between multipoles, $l$ and $l+1$. The model parameters are determined by making a fit to the data. The spectral index of the inhomogeneous power spectrum is found to be consistent with zero. | The cosmic microwave background radiation (CMBR) shows a hemispherical power asymmetry with excess power in the southern ecliptic hemisphere compared to northern ecliptic hemisphere \cite{Eriksen2004, Eriksen2007,Erickcek2008,Hansen2009,Hoftuft2009,Paci2013,Planck2013a, Schmidt2013,Akrami2014}. The signal is seen both in WMAP and PLANCK data and indicates a potential violation of the cosmological principle. The hemispherical anisotropy can be parametrized phenomenologically by the dipole modulation model \cite{Gordon2005,Gordon2007,Prunet2005,Bennett2011} of the CMBR temperature field, which is given by, \begin{equation} {\bigtriangleup T}(\hat n) = f(\hat n) \left(1+A \hat \lambda \cdot \hat n \right)\,, \label{eq:dipole_mod} \end{equation} where $f(\hat n)$ is an intrinsically isotropic and Gaussian random field and, $A$ is the amplitude of modulation along the direction $\hat \lambda$. Taking the preferred direction along the z-axis, we have $\hat\lambda\cdot \hat n=\cos\theta$. Using the WMAP five year data, the dipole amplitude for $l \le 64$ was found to be $A=0.072\pm0.022$ and the dipole direction, $(l,b) =(224^o,-22^o)\pm24^o$, in the galactic coordinate system \cite{Hoftuft2009}. The PLANCK results \cite{Planck2013a} confirmed this anisotropy with a significance of $3\sigma$ confidence level. A dipole amplitude, $A=0.073\pm0.010$, in the direction of $(l,b)=(217^o,-20^o)\pm15^o$ was found in PLANCK's SMICA map, which is also seen (nearly with same amplitude and direction) in other PLANCK provided clean CMB maps viz., NILC, SEVEM and COMMANDER-RULER maps. Hence the results obtained by WMAP and PLANCK observations are consistent with one another. There were indications that the hemispherical anisotropy might extend to multipoles higher than $64$ \cite{Hoftuft2009,Hansen2009}, however, the effect is found to be absent beyond $l\sim 500$ \cite{Donoghue2005,Hanson2009}. The large scale structure surveys also do not show any evidence for this anisotropy \cite{Hirata2009,Fernandez2013}. This suggests that any model which attempts to explain these observations should display a scale dependent power \cite{Erickcek2009} which should lead to a negligible effect at high$-l$. There also exist other observations which indicate a potential violation of the cosmological principle \cite{Jain1999,Hutsemekers1998,Costa2004, Ralston2004,Schwarz2004,Singal2011,Tiwari2013}. 
Many theoretical models, which aim to explain the observed large scale anisotropy, have been proposed \cite{Berera2004,ACW2007,Boehmer2008,Jaffe2006,Koivisto2006,Land2006, Bridges2007,Campanelli2007,Ghosh2007,Pontezen2007,Koivisto2008,Kahniashvili2008, Carroll2010,Watanabe2010,Chang2013a,Anupam2013a,Anupam2013b,Cai2013,Liu2013, Chang2013b,Chang2013c,Aluri13,Mcdonald2014,Ghosh2014,Panda14}. It has also been suggested that this anisotropy may not really be in disagreement with the inflationary Big Bang cosmology, which may have a phase of anisotropic and inhomogeneous expansion at very early time. The anisotropic modes, generated during this early phase may later re-enter the horizon \cite{Aluri2012,Pranati2013a} and lead to the observed signals of anisotropy. In a recent paper \cite{Pranati2013b}, we showed that the dipole modulation model, given in Eq. \ref{eq:dipole_mod}, leads to several implications for CMBR. The CMBR temperature field may be decomposed as, \begin{equation} {\bigtriangleup T}(\hat n) = \sum_{lm}a_{lm}Y_{lm}(\hat n)\,. \end{equation} If we assume statistical isotropy, the spherical harmonic coefficients must satisfy, \begin{equation} \langle{a_{lm}a^*_{l'm'}}\rangle_{iso} = C_{l}\delta_{ll'}\delta_{mm'}\,, \label{eq:corr_iso} \end{equation} where, $C_l$ is the angular power spectrum. However in the presence of dipole modulation, statistical isotropy is violated and one finds \cite{Pranati2013b}, \begin{equation} \langle{a_{lm}a^*_{l'm'}}\rangle = \langle{a_{lm}a^*_{l'm'}}\rangle_{iso} +\langle{a_{lm}a^*_{l'm'}}\rangle_{dm}\,, \label{eq:corrdm} \end{equation} where, $\langle{a_{lm}a^*_{l'm'}}\rangle_{iso}$ is the correlation given in Eq. \ref{eq:corr_iso} and the anisotropic \emph{dipole modulation} term can be expressed as, \begin{equation} \langle{a_{lm}a^*_{l'm'}}\rangle_{dm} = A\left(C_{l'}+ C_l\right)\xi^{0}_{lm;l'm'}\,. \label{eq:corr_aniso} \end{equation} Here, $\xi^{0}_{lm;l'm'}$ is given by, \begin{eqnarray} \xi^{0}_{lm;l'm'} & \equiv &\int d\Omega Y_l^{m*}(\hat n)Y_{l'}^{m'}(\hat n)\cos{\theta} \nonumber\\ &= &\delta_{m',m}\Bigg[\sqrt{\frac{(l-m+1)(l+m+1)}{{(2l+1)}{(2l+3)}}}\delta_{l',l+1} \nonumber\\ && +\sqrt{\frac{(l-m)(l+m)}{{(2l+1)}{(2l-1)}}}\delta_{l',l-1}\Bigg]\,. \label{eq:xillprime} \end{eqnarray} Hence the modes corresponding to the multipoles, $l$ and $l+1$ are correlated. We thus define a correlation function \cite{Pranati2013b}, \begin{equation} C_{l,l+1} = \frac{l(l+1)}{2l+1}\sum_{m = -l}^{l} a_{lm}a^*_{l+1,m}\,. \label{eq:corrl_l+1} \end{equation} Here the factor $(2l+1)$ arises in the denominator in order to obtain an average value of the correlation for a particular $l$. Furthermore we multiply by $l(l+1)$ since for low $l$ the power $l(l+1) C_l$ is approximately independent of $l$. Using Eq. \ref{eq:corr_aniso} we deduce that with this factor the correlation $C_{l,l+1}$ would be roughly equal for different $l$ values. Analogously, the signal of hemispherical asymmetry is also observed in the variable $l(l+1) C_l$ \cite{Eriksen2004,Eriksen2007}. We define the statistic, $S_H(L)$, by summing over a range of multipoles, \begin{equation} S_H(L) = \sum_{l = l_{min}}^{L} C_{l,l+1} \,. \label{eq:SH} \end{equation} We point out that if the factor $l(l+1)$ was not inserted in Eq. \ref{eq:corrl_l+1} then the statistic will be dominated by a few low $l$ multipoles. The final value of the data statistic is obtained by maximizing it over the direction parameters. 
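Purely as an illustration (this is not the authors' analysis code), the statistic of Eqs.~(\ref{eq:corrl_l+1}) and~(\ref{eq:SH}) can be evaluated with a few lines of Python once the complex $a_{lm}$ are available; here we assume, hypothetically, that they are stored in a dense array indexed as \texttt{alm[l, m + lmax]} for $m=-l,\dots,l$.
\begin{verbatim}
import numpy as np

def C_l_lp1(alm, l):
    """Eq. (eq:corrl_l+1): (l(l+1)/(2l+1)) * sum_m a_{l m} a*_{l+1 m}.
    alm[l, m + lmax] holds the complex coefficients (assumed layout)."""
    lmax = (alm.shape[1] - 1) // 2
    m = np.arange(-l, l + 1)
    s = np.sum(alm[l, m + lmax] * np.conj(alm[l + 1, m + lmax]))
    return l * (l + 1) / (2 * l + 1) * s

def S_H(alm, lmin=2, L=64):
    """Eq. (eq:SH): sum of C_{l,l+1} over lmin <= l <= L."""
    return np.sum([C_l_lp1(alm, l) for l in range(lmin, L + 1)])

# toy usage: statistically isotropic (random) coefficients give S_H ~ 0 on average
rng = np.random.default_rng(0)
lmax = 65
alm = (rng.normal(size=(lmax + 1, 2 * lmax + 1))
       + 1j * rng.normal(size=(lmax + 1, 2 * lmax + 1)))
print(S_H(alm).real)
\end{verbatim}
A dipole-modulated sky, by contrast, would yield a systematically non-zero value once the direction parameters are chosen appropriately, which is the property exploited below.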
We make a search over the direction parameters in order to maximize the value of the statistic, $S_H(L)$. The resulting statistic is labelled as $S_{H}^{data}$. The corresponding direction gives us the preferred direction, $\hat \lambda$, defined in Eq. \ref{eq:dipole_mod}, which corresponds to the choice of z-axis. In this paper, our objective is twofold: \begin{enumerate} \item We first update our results in Ref.~\cite{Pranati2013b} for the estimate of the effective value of the dipole modulation parameter, $A$, as a function of $l$. The multipole dependence of $A$ in Ref.~\cite{Pranati2013b} was extracted using the dipole power of the temperature squared field. In the current paper we instead use the statistic, $S_H(L)$, which is a much more sensitive probe of $A$ than the temperature-squared power. In particular, this statistic leads to a higher significance of the signal of dipole modulation. Furthermore, in Ref.~\cite{Pranati2013b} we used COM-MASK-gal-07 for the PLANCK data. In the present paper we use the $KQ85$ mask for WMAP's nine year ILC map and the more reliable CMB-union mask, also known as the U73 mask, for SMICA. \item We also propose a general inhomogeneous primordial power spectrum model. This is presented in the next section, where we argue that the inhomogeneous contribution to the power spectrum in any model must reduce to ours, as long as it is small. The assumption of small inhomogeneity is reasonable and also supported by our fit to the data. An inhomogeneous power model is expected to induce an anisotropy in the CMBR as well as in other cosmological observations. We show that our model leads to correlations among the spherical harmonic coefficients similar to those implied by the dipole modulation model. This allows us to relate the inhomogeneous primordial power spectrum to the dipole modulated temperature field and hence to the hemispherical anisotropy. We determine the primordial power spectrum which leads to the hemispherical power asymmetry observed in the CMB temperature data. \end{enumerate} The observed anisotropy is parametrized by the two point correlations given by Eq. \ref{eq:corr_aniso}, or equivalently the statistic, $S_H(L)$. We first determine the data statistic, $S_{H}^{data}$, as defined above after Eq. \ref{eq:SH}. We next determine the amplitude, $A$, of the dipole modulation model given by Eq. \ref{eq:dipole_mod}, by making a fit to the data. We perform this analysis for the entire multipole range $l=2-64$ and also, independently, over the three multipole ranges, $l=2-22, 23-43, 44-64$. This allows us to determine the variation of $S_H^{data}$, and hence of the dipole modulation amplitude, $A$, with the multipole bin. We determine the value of $A$ by simulations, as explained in section \ref{sec:dataanalysis}. The upper limit on $l$ ($l\le 64$) is imposed since the amplitude of the dipole modulation from hemispherical analyses has been studied most thoroughly only in this range \cite{Hoftuft2009,Planck2013a}. As discussed in the next section, the theoretical analysis is also simplest for low-$l$ modes. In this case we can make a simple approximation for the transfer function. At higher $l$ the transfer function is not as simple, and the computation of the correlation, $\langle a_{lm} a^*_{l'm'} \rangle$, in the presence of inhomogeneities and/or anisotropies is significantly more complicated. Furthermore, as we shall see, the contribution due to detector noise is negligible in the range $l\le 64$.
This leads to considerable simplification in our analysis. Simulations at higher $l$ also require much higher computation time. Finally, as we have discussed above, the signal of dipole modulation is expected to die down for values of $l$ beyond a few hundred \cite{Donoghue2005,Hanson2009,Hirata2009,Fernandez2013}. We hope to extend our analysis to higher values of $l$ in a future publication. We finally determine the inhomogeneous power spectrum by making a fit to the data statistic, $S_{H}^{data}$. This calculation is also performed over the entire multipole range, $l=2-64$, and then repeated over the three multipole bins mentioned above. Hence we determine the magnitude as well as the wave number, $k$, dependence of the power spectrum. The possibility of inhomogeneous power has been considered earlier, for example, in Ref.~\cite{Erickcek2008,Hirata2009,Carroll2010,Gao2011}. However our analysis and results differ from these earlier papers. | We have extended the results obtained in a recent paper \cite{Pranati2013b}, which showed that the dipole modulation model leads to correlations among spherical harmonic multipoles, $a_{lm}$ and $a^*_{l'm'}$ with $m'=m$ and $l'=l+1$. In that paper we defined a statistic, $S_H$, which provides a measure of this correlation in a chosen multipole range. By making a fit to this statistic in three multipole bins, $l=2-22,23-43$ and $l=44-64$, we find that the effective dipole modulation parameter $A$ slowly decreases with the multipole, $l$. We propose an inhomogeneous power spectrum model and show that it also leads to a correlation among different multipoles corresponding to $m'=m$ and $l'=l+1$, as in the case of the dipole modulation model. The inhomogeneous power spectrum is parameterized by the function, $g(k)$. We first fit the data by assuming that $g(k)$ follows a purely scale invariant power spectrum, with $\alpha =0$ in Eq. \ref{eq:powergk}. We determine the value of $g_0$ by making a fit over the entire multipole range, $2-64$ for WILC9 and also SMICA . The best fit value for WILC9 is found to be $g_0 = 0.087\pm0.028$ and in case of SMICA the value is $g_0=0.077\pm0.020$. We next make a fit over the three multipole bins, as in the case of the dipole modulation model. Again setting $\alpha=0$, we obtain, $g_0=0.075\pm 0.020$, with $\chi^2=0.50$ and $g_0 = 0.070\pm 0.023$, with $\chi^2=0.22$ for WILC9 and SMICA respectively. Hence this provides a good fit to data and implies that the value of $\alpha$ is consistent with zero. Allowing the parameter $\alpha$ to vary, we find that the one sigma limit of $\alpha$ for WILC9 and SMICA is $-0.34<\alpha<0.40$ and $-0.24<\alpha<0.39$ respectively. | 14 | 3 | 1403.2567 |
1403 | 1403.7204_arXiv.txt | {The globular cluster (GC) M15 (NGC 7078) is host to at least eight pulsars and two low mass X-ray binaries (LMXBs) one of which is also visible in the radio regime. Here we present the results of a multi-epoch global very long baseline interferometry (VLBI) campaign aiming at i) measuring the proper motion of the known compact radio sources, ii) finding and classifying thus far undetected compact radio sources in the GC, and iii) detecting a signature of the putative intermediate mass black hole (IMBH) proposed to reside at the core of M15. We measure the sky motion in right ascension ($\mu_{\alpha}$) and declination ($\mu_{\delta}$) of the pulsars M15A and M15C and of the LMXB AC211 to be $(\mu_{\alpha},\,\mu_{\delta})_{\text{M15A}}=(-0.54\pm0.14,\,-4.33\pm0.25)\,$mas$\,$yr$^{-1}$, $(\mu_{\alpha},\,\mu_{\delta})_{\text{M15C}}=(-0.75\pm0.09,\,-3.52\pm0.13)\,$mas$\,$yr$^{-1}$, and $(\mu_{\alpha},\,\mu_{\delta})_{\text{AC211}}=(-0.46\pm0.08,\,-4.31\pm0.20)\,$mas$\,$yr$^{-1}$, respectively. Based on these measurements we estimate the global proper motion of M15 to be $(\mu_{\alpha},\,\mu_{\delta})=(-0.58\pm0.18,\,-4.05\pm0.34)\,$mas$\,$yr$^{-1}$. We detect two previously known but unclassified compact sources within our field of view. Our observations indicate that one them is of extragalactic origin while the other one is a foreground source, quite likely an LMXB. The double neutron star system M15C became fainter during the observations, disappeared for one year and is now observable again---an effect possibly caused by geodetic precession. The LMXB AC211 shows a double lobed structure in one of the observations indicative of an outburst during this campaign. With the inclusion of the last two of a total of seven observations we confirm the upper mass limit for a putative IMBH to be M$_{\bullet}<500$ M$_{\odot}$.} | \begin{table*}[t] \caption{\label{tab:obs-details}Details of the observations} \centering{}% \begin{tabular}{ccccccc} \noalign{\vskip-0.3cm} \hline\hline & & & & & & \tabularnewline \noalign{\vskip-0.2cm} & & & & \multicolumn{2}{c}{rms {[}$\mu$Jy$\,$beam$^{-1}${]}} & beam size \tabularnewline Epoch & Date & MJD & Array & dirty{*} & cleaned & {[}mas x mas{]}\tabularnewline \hline \noalign{\vskip\doublerulesep} 1 & 10 Nov 2009 & 55146 & JbWbEfOnMcTrNtArGb & 5.1 & 3.1 & 3.3 x 6.4\tabularnewline 2 & 07 Mar 2010 & 55263 & JbWbEfOnMcTrNtGb & 8.5 & 5.4 & 2.3 x 30.9\tabularnewline 3 & 05 Jun 2010 & 55352 & JbWbEfOnMcTrNtArGb & 6.7 & 4.6 & 3.0 x 6.6\tabularnewline 4 & 02 Nov 2010 & 55503 & JbWbEfOnMcTrGb & 11.2 & 7.4 & 2.1 x 26.2\tabularnewline 5 & 27 Feb 2011 & 55620 & JbWbEfOnMcTrArGb & 4.9 & 3.1 & 3.3 x 6.9\tabularnewline 6 & 11 Jun 2011 & 55723 & WbEfOnMcTrArGb & 5.8 & 3.8 & 2.3 x 6.2\tabularnewline 7 & 05 Nov 2011 & 55871 & JbWbEfOnMcTrArGb & 5.2 & 3.3 & 3.1 x 7.0\tabularnewline \hline\hline & & & & & & \tabularnewline \noalign{\vskip-0.3cm} \multicolumn{7}{l}{{\tiny {*} applying natural weights without any cleaning}}\tabularnewline \end{tabular} \end{table*} Pulsars, typically searched for and detected with single dish radio telescopes, are rapidly rotating, highly magnetized neutron stars (NSs). Their spin axis and magnetic field axis -- along which relativistic charged particles are accelerated emitting cyclotron radiation -- are misaligned giving rise to the pulsar phenomenon. Being very stable rotators, pulsars are used as accurate clocks to measure their intrinsic parameters such as rotation period $P$, spin down rate $\dot{P}$, and position. 
The fastest pulsars, the so-called millisecond pulsars (MSPs, $P<30\,$ms), are the most stable rotators, allowing for very accurate tests of theories of gravity \citep[e.g. ][]{antoniadis2013,freire12}. Roughly one half of all MSPs have been found in globular clusters% \footnote{For a compilation of all globular cluster pulsars see the webpage by Paulo Freire: http://www.naic.edu/\textasciitilde{}pfreire/GCpsr.html % } (GCs), where the frequency of stellar encounters is high, favoring the evolution of normal pulsars to MSPs through angular momentum and mass transfer in a binary system \citep[e.g. ][]{bhattacharya91}. In total, about 6\% of the 2302 currently known pulsars (as listed on the ATNF webpage% \footnote{http://www.atnf.csiro.au/research/pulsar/psrcat/, accessed November 18 2013% }, \citealp{Manchester05}) reside in GCs, with Terzan 5 and 47 Tuc leading the field with 34 and 23 confirmed pulsars, respectively, all but one being MSPs. Despite their rotational stability, disentangling all parameters of GC pulsars through pulsar timing is sometimes difficult due to the presence of the gravitational field of the GC. In those cases, model independent measurements of intrinsic pulsar parameters such as the parallax, $\pi$, and proper motion, $\mu$, can improve the overall timing solution. The ideal way to measure $\pi$ and $\mu$ purely based on geometry is through radio interferometric observations. Here we report on multi-epoch global very long baseline interferometry (VLBI) observations of the core region ($\sim\,$4 arcmin) of the GC M15 (NGC 7078). This GC is one of the oldest (13.2 Gyr, \citealt{mcnamara04}) and most metal poor ({[}Fe/H{]}$=-2.40$, \citealt{sneden97}) GCs known to reside in the Galaxy. It is host to eight known pulsars (four of them being MSPs), one of which is in a binary system with another neutron star (PSR B2127+11C, \citealt{anderson90}, \citealt{anderson1993}). Four of the other seven pulsars are located in close proximity to the cluster core (within $<4.5\,$arcsec $=0.2\,$pc at the distance $d=10.3\pm0.4\,$kpc, \citealt{vandenbosch06}), making them ideal candidates to study cluster dynamics. In the same region, two low mass X-ray binaries (LMXBs, thought of as progenitors to MSPs, e.g. \citealp{tauris06} and references therein) have been reported (\citealt{Giacconi74,Auriere84,white01}). One of them, 4U 2129+12 (AC211), is also detectable as a compact source in the radio regime. This relatively high concentration of compact objects that have been or currently are in a binary system is already indicative of the high stellar density within the core region of M15. In fact, the observed central brightness peak and the stellar velocity dispersion profile gave rise to speculations that M15 could host an intermediate mass black hole (IMBH, e.g. \citealt{newell76}). The predicted IMBH mass $\mbh=1700^{+2700}_{-1700}$ \citep{gerssen03} has, however, been ruled out by \citet{kirsten2012}. Alternatively, a collection of $\sim1600$ dark remnants such as stellar mass black holes, NSs, and white dwarfs in the central region of M15 could drive the cluster dynamics (\citealt{baumgardt03,mcnamara03,murphy11}). Based on the 1.5 GHz radio luminosity and assuming a minimum pulsar luminosity of $2\,\mu$Jy, \citet{sun02} estimate that M15 could host up to $\sim300$ pulsars beaming towards Earth.
In this project, we accurately measure the proper motion of all compact objects detectable within our field of view and monitor their variability. Apart from the eight pulsars and the LMXB AC211, two further compact radio sources were reported previously by \citet{machin90} and \citet{knapp96}. Those authors could, however, put no tight constraints on those sources' (non-) association with the cluster. Furthermore, we look for previously undetected compact objects within the observed region that might turn out to be pulsars. The double neutron star system M15C has shown a number of unusual glitches which need to be fitted with a number of parameters that are highly covariant with fits for the proper motion. In particular, the measurement of the orbital period decay caused by the emission of gravitational waves is influenced by an acceleration in the cluster potential and by a contribution due to a transverse motion (``Shklovskii effect'', see \citealp{lorimer05}). Thus determining the transverse motion of the pulsar will allow a better measurement of the line of sight acceleration of the system within the cluster potential. Once the proper motion is determined independently of any model, the covariances in the fits to the timing model can be removed, improving the measurement of all relativistic parameters. M15A is very close to the core and has a negative period derivative, which implies it is accelerating at a fast rate in the cluster's potential. This acceleration rate has now been shown to vary with time \citep{jacoby06}. The detailed variation is of great interest to investigate the gravitational potential in the cluster center, but if we have only the timing it must be disentangled from the proper motion signal. Therefore, an independent estimate of the proper motion of the pulsar will allow a much less ambiguous interpretation of the variation of the acceleration of this pulsar. In the following we will first describe the data taking and data reduction process in section \ref{sec:Observations-and-data}. The data analysis strategy and the results are the subject of section \ref{sec:Results} while section \ref{sec:Discussion} deals with the discussion of the implications of these results. The main findings of this project are briefly summarized in section \ref{sec:Conclusions}. | } We observed the massive globular cluster M15 in a multi-epoch global VLBI campaign in seven observations covering a time span of two years. In our observations we clearly detect five compact radio sources, namely the pulsar M15A, the double neutron star system M15C, the LMXB AC211, and two unclassified sources S1 and S2. Except for M15C (which was only detected in epochs 1, 3, and 7), all sources were detected in all seven epochs. From our proper motion measurements (Table \ref{tab:Details-of-the-astrometric-fits}) and the variability of M15C, AC211, and S2 we conclude: \begin{itemize} \item The projected global proper motion of M15 is $(\mu_{\alpha},\,\mu_{\delta})=(-0.58\pm0.18,\,-4.05\pm0.34)\,$mas$\,$yr$^{-1}$, \item M15A and AC211 have a maximal transverse peculiar velocity $v_{trans}^{max}=66\,$km$\,$s$^{-1}$ within the cluster, \item In epoch 3, the morphology of the LMXB AC211 is not point like but shows a double lobed structure instead. 
It is quite likely that the source had an outburst shortly before the observations in epoch 3, \item M15C has a transverse velocity of at most $39\,$km$\,$s$^{-1}$ moving towards the north in the cluster, \item The observed 2.5\% phase shift in the pulse profile points to geodetic precession as a possible explanation for the disappearance and reappearance of M15C during the observations, \item S1 is of extragalactic origin, most probably a background quasar, \item S2 is a Galactic foreground source at a distance $d=2.2_{-0.3}^{+0.5}\,$kpc moving at a transverse velocity $v_{t}^{\text{S2}}=26_{-4}^{+5}\,$km$\,$s$^{-1}$ with respect to the LSR, \item The flux density of S2 is variable by a factor of a few on the time scale of a few months. The spectrum seems to be flat indicative of a LMXB. There is, however, no known X-ray source within about 1 arcmin of the radio position of the source. \end{itemize} The proper motions measured here will be important for the analysis of the timing data from M15A and M15C (Ridolfi et al., in preparation). Our model-independent measurement of the proper motion of the pulsar M15A will allow a much less ambiguous interpretation of the variation of the acceleration of this pulsar in the cluster potential. Equally, in the case of M15C, with timing only the glitch signal will be entangled with the proper motion signal. Our measurement of the proper motion will allow an unambiguous study of the rotational behavior of the pulsar. Similar to the first five observations \citep{kirsten2012}, in epochs 6 and 7 we do not detect any significant emission from a putative IMBH within the central 0.6 arcsec of the core region of M15. Excluding any variability of a central object on the time scale of two months to two years, we reconfirm the $3\sigma$ upper limit for the proposed central IMBH mass of M$_{\bullet}=500\,\text{M}_{\odot}$ . | 14 | 3 | 1403.7204 |
1403 | 1403.7518_arXiv.txt | The discovery of primordial tensor perturbations by the BICEP2 experiment~\cite{BICEP2} would be an important step in fundamental physics, if it is confirmed, since it would prove the existence of quantum gravitational radiation. The BICEP2 result would demonstrate simultaneously the reality of gravitational waves, whose existence had previously only been inferred indirectly from binary pulsars~\cite{Hulse-Taylor}, and quantization of the gravitational field. The existence of such tensor perturbations is a generic prediction of inflationary cosmological models~\cite{oliverev,alreview,encyclopedia}, and the BICEP2 result is strong evidence in favour of such models, the `smoking graviton', as it were. Moreover, different inflationary models predict different magnitudes for the tensor perturbations, and the BICEP2 measurement~\cite{BICEP2} of the tensor-to-scalar ratio $r$ discriminates powerfully between models, favouring those with a large energy density $V \sim (2 \times 10^{16}~{\rm GeV})^4$. As such, it disfavours strongly the Starobinsky $R + R^2$ proposal~\cite{Staro,MC,Staro2} and similar models, such as Higgs inflation~\cite{HI} and some avatars of supergravity models~\cite{ENO6,ENO7,KLno-scale,WB,FKR,fklp,AHM,pallis}. That said, the BICEP2 result is in some tension with previous experiments such as the WMAP~\cite{WMAP} and Planck satellites~\cite{Planck}, which established upper limits on $r$ and seemed to favour very small values. We are not qualified to comment on the relative merits of these different experiments, which may be reconciled if the scalar spectral index runs fast, but for the purposes of this paper we take at face value the BICEP2 measurement of $r$~\cite{BICEP2} while retaining the measurements of the tilt in the scalar spectrum, $n_s$, found by the previous experiments~\cite{WMAP,Planck}, with which BICEP2 is consistent. Planck and previous experiments were in some tension with the single-field power-law inflationary potentials of the form $\mu^{4-n} \phi^n$ where $\mu$ is a generic mass parameter. Among models with $n \ge 2$, that might be related directly to models with fundamental scalar fields $\phi$, models with $n = 2$ provided the least poor fits to previous data. However, even such quadratic models were barely compatible with the Planck results at the 95\% CL~\cite{Planck}. Quadratic models \cite{m2} are, in some sense, the simplest, since just such a single form of the potential could describe dynamics throughout the inflationary epoch and the subsequent field oscillations, unlike monomial potentials of the form $\phi^n: n \ne 2$, which would require modification at small $\phi$ in order to accommodate a particle interpretation. Moreover, there are motivated particle models that would yield a quadratic potential, e.g., for the scalar supersymmetric partner of a singlet (right-handed) neutrino in a Type-I seesaw model of neutrino masses~\cite{ERY}. Such a model would make direct contact with particle physics, and the decays of sneutrino inflatons could naturally yield a cosmological baryon asymmetry via leptogenesis. Such a scenario would be a step towards a physical model of inflation. In this paper we first set the scene by revisiting simple slow-roll inflationary models based on single-field monomial potentials of the form $\mu^{4-n} \phi^n$ in light of the BICEP2 result~\cite{BICEP2}. 
We derive and explore the validity of a general consistency condition on monomial models: \begin{equation} r \; = \; 8 \left( 1 - n_s - \frac{1}{N} \right) \, , \label{consistency} \end{equation} where $N$ is the number of e-folds of inflation. This consistency condition is comfortably satisfied for the value $r = 0.16^{+0.06}_{-0.05}$ (after dust subtraction) indicated by BICEP2~\cite{BICEP2}, and the values $n_s = 0.960 \pm 0.008$ and $N = 50 \pm 10$ consistent with this and other experiments~\cite{WMAP,Planck}. The consistency condition (\ref{consistency}) is independent of the monomial power index $n$, but in the quadratic case $n = 2$ one finds for $N = 50$ that $n_s = 0.960$ and $r = 0.16$, in perfect agreement with the data. On the other hand, an $n = 4$ potential would have $\delta \chi^2 \sim 8$, as we discuss later. Global supersymmetry accommodates very naturally~\cite{ENOT} a single-field $\phi^2$ model, one example being the sneutrino model~\cite{ERY} mentioned above. However, one should embed such a model in the framework of supergravity~\cite{SUGRA}. The first attempt at constructing an inflationary model in $N=1$ supergravity proposed a generic form for the superpotential for a single inflaton~\cite{nost}, the simplest example being $W = m^2 (1-\Phi)^2$~\cite{hrr}. However, these models relied on an accidental cancellation between contributions to the inflaton mass~\cite{lw}. Such cancellations are absent in generic supergravity models, which typically yield effective potentials with higher powers of the inflaton field~\cite{eta,alreview,encyclopedia}. These problems can be alleviated either by employing a shift symmetry in the inflaton direction \cite{kyy} or through no-scale supergravity~\cite{no-scale,LN,EENOS}. Since no-scale supergravity arises as the effective field theory of compactified string theory~\cite{Witten}, and is an attractive framework for sub-Planckian physics~\cite{ENO8}, this is an appealing route towards embedding quadratic inflation in a more complete theory. The bulk of this paper explores possibilities for obtaining a quadratic inflaton potential in the context of supergravity. After briefly reviewing models that invoke a shift symmetry, we turn our focus to no-scale supergravity models. We distinguish two classes of such models, which are differentiated by how the moduli in the theory obtain their vevs. We give an explicit example that incorporates supersymmetry breaking and a simple quadratic inflationary potential embedded in no-scale supergravity with a stabilized K\"ahler modulus. | We have shown that the BICEP2 data on $r$ and the available data on $n_s$ are consistent (\ref{consistency}) with a simple power-law, monomial, single-field model of inflation, and that $V = m^2 \phi^2/2$ is the power-law that best fits the available data (\ref{nvalues}). The required value of $m \simeq 2 \times 10^{13}$~GeV and the small value of the quartic coupling required for the quadratic potential is to be a good approximation when $\phi \simeq \sqrt{200} M_{Pl}$ during inflation are technically natural in a supersymmetric model~\cite{ENOT}. Moreover, it is attractive to identify the inflaton with a singlet (right-handed) sneutrino, since this value of $m$ lies within the range favoured in Type-I seesaw models of neutrino masses. It is natural to embed quadratic (sneutrino) inflation within a supergravity framework, and we have given examples how this may be done in the context of both minimal and no-scale supergravity. 
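For reference (these are the standard slow-roll expressions, not a new result of this work), a monomial potential $V \propto \phi^n$ gives, to leading order in slow roll,
\begin{equation}
n_s \simeq 1 - \frac{n+2}{2N} \, , \qquad r \simeq \frac{4n}{N} \, ,
\end{equation}
so that $r = 8\left(1 - n_s - 1/N\right)$ independently of $n$, as in (\ref{consistency}); for the quadratic case $n=2$ with $N=50$ one recovers $n_s \simeq 0.96$ and $r \simeq 0.16$, the values quoted above.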
Nevertheless, we would like to reiterate that the BICEP2 measurement of $r$ is in tension with the Planck upper limit on $r$, and emphasize that our choice here to discard the latter and explore the implications of the former is somewhat arbitrary. In our ignorance, we have no opinion how the tension between the two experiments will be resolved. If it is resolved in favour of Planck, Starobinsky-like models would return to favour, which can easily be accommodated in the no-scale supergravity framework, in particular, with a relatively simple superpotential such as (\ref{oldW}). Alternatively, if the resolution favours BICEP2, as we have shown in this paper, the simplest possible $m^2\phi^2/2$ potential would be favoured, which offers a very attractive connection to particle physics if the inflaton is identified as a sneutrino. As we have shown, such a model could also be accommodated within a no-scale supergravity framework, though at the expense of a more complicated superpotential such as (\ref{funnyW}) or (\ref{modW}). Models with values of $r$ intermediate between the ranges favoured by Planck and BICEP2 can also be constructed within the no-scale framework. A final caveat is that all our analysis is within the slow-roll inflationary paradigm, whereas the resolution of the tension between Planck and BICEP2 might require going beyond this framework, e.g., to accommodate large running of the scalar spectral index, a stimulating possibility that lies beyond the scope of this work. | 14 | 3 | 1403.7518 |
|
1403 | 1403.6202_arXiv.txt | We perform Differential Emission Measure (DEM) analysis of an M7.7 flare that occurred on 2012 July 19 and was well observed by the Atmospheric Imaging Assembly (AIA) aboard the \emph{Solar Dynamic Observatory}. Using the observational data with unprecedented high temporal and spatial resolution from six AIA coronal passbands, we calculate the DEM of the flare and derive the time series of maps of DEM-weighted temperature and emission measure (EM). It is found that, during the flare, the highest EM region is located in the flare loop top with a value varying between $\sim$ $8.4\times10^{28}$ $\mathrm{cm}^{-5}$ and $\sim$ $2.5\times10^{30}$ $\mathrm{cm}^{-5}$. The temperature there rises from $\sim$ 8 MK at about 04:40 UT (the initial rise phase) to a maximum value of $\sim$ 13 MK at about 05:20 UT (the hard X-ray peak). Moreover, we find a hot region that is above the flare loop top with a temperature even up to $\sim$16 MK. We also analyze the DEM properties of the reconnection site. The temperature and density there are not as high as that in the loop top and the flux rope, indicating that the main heating may not take place inside the reconnection site. In the end, we examine the dynamic behavior of the flare loops. Along the flare loop, both the temperature and the EM are the highest in the loop top and gradually decrease towards the footpoints. In the northern footpoint, an upward force appears with a biggest value in the impulsive phase, which we conjecture originates from chromospheric evaporation. | A solar flare is one of the most violent eruptive phenomena in the solar corona. It is generally accepted that the energy released during the flare is pre-stored in magnetic field and magnetic reconnection plays an essential role in converting magnetic energy into various energy forms like thermal energy of the plasma, kinetic energy of accelerated particles, and emissions in almost all wavelengths. In the past decades, a standard flare model (CSHKP; \citealt{Carmichael1964a, Sturrock1966a, Hirayama1974a, Kopp1976a}) has been established. It can explain many observational properties of flares such as two separating ribbons, formation of a cusp-shaped structure, etc. Even though the standard model has made a great achievement, the detailed process of flare energy release is still not clear, especially how and where the magnetic energy is most effectively converted to other forms of energy.\par When a flare occurs, the flare atmosphere experiences different heating processes such as Joule heating (e.g. \citealt{Spicer1981b, Spicer1981a, Holman1985a}), shock heating (e.g. \citealt{Petschek1964a, Tsuneta1997a}), electron (e.g. \citealt{Fletcher1995a, Fletcher1996a, Fletcher1998a}) or proton (e.g. \citealt{Voitenko1995a, Voitenko1996a, Voitenko1999a}) beam heating, radiative backwarming (e.g. \citealt{Hudson1972a, Metcalf1990a, Metcalf1990b, Ding1996a}), and inductive current heating (e.g. \citealt{Melrose1995a, Melrose1997a}). These heating processes work with different efficiencies in different locations of the flare such as the reconnection site, flare loop top, and footpoints. Until now, to clarify the specific heating processes in flares is still an open question. 
Therefore, a quantitative assessment of the structure and evolution (particularly the temperature and density) of the flare region are critical to determine when, where and which heating processes are taking place.\par Previous studies on temperature and density of flares are mainly based on X-ray spectral observations (e.g. \citealt{Milkey1971a, Horan1971a, Dere1974a, Dere1977a, Cheng1977a, Landini1979a, Duijveman1983a, Denton1984a, Bornmann1985a}). \cite{Kahler1970a} fitted a thermal model using the X-ray data from OGO-5, and obtained the evolutions of temperature and emission measure (EM) in a flare. The temperatures they derived vary from about 5 MK to more than 10 MK and peak earlier than the EM. \cite{Dere1979a} used the line ratio method to derive the temperature and electron density, and found that when the temperature of flares rises from 1 to 10 MK, the electron density also rises from $1.0\times10^{10}$ to $5.0\times10^{11}$ $\mathrm{cm}^{-3}$. \cite{Doschek1981a} also used the line ratio method to obtain the electron temperatures of two M-class flares. Meanwhile, they derived the electron density using the O \uppercase\expandafter{\romannumeral7} lines. For both flares, they found the peak temperature and peak density are 18 MK and $10^{12}$ $\mathrm{cm}^{-3}$, respectively. Statistical researches were done by \cite{Feldman1996b, Feldman1996a}, who showed that the temperature and the volume emission measure range from 4 to 25 MK and from $10^{46}$ to $10^{50}$ $\mathrm{cm}^{-3}$, respectively, for a sample of more than 860 (A2 to X2 class) flares. \par With the launch of Yohkoh, the multi-wavelength imaging observations make it possible to derive the two-dimensional temperature and EM maps of flares. For example, \cite{McTiernan1993a} obtained maps of temperature and EM of flares for the first time even though with a low spatial resolution. Results with higher resolution were then derived through the data from RHESSI (e.g. \citealt{Li2007a}), Hinode (e.g. \citealt{Reeves2009a, Winebarger2011a, Hahn2011a, Graham2013a}), and \emph{Solar Dynamics Observatory} (\emph{SDO}) (e.g. \citealt{Hannah2012a, Aschwanden2013a, Plowman2013a}). The time evolution of DEM in different regions of a flare was studied by \cite{Battaglia2012a} using the \emph{SDO}/Atmospheric Imaging Assembly (\emph{SDO}/AIA) data.\par In this paper, we investigate the structure and evolution of the flare on 2012 July 19 using the \emph{SDO}/AIA data with unprecedented temporal and spatial resolution. We derive quantitatively the temperature and EM through the DEM analysis, which provide critical information to understand the energy and heating processes of flares. An overview of the flare is presented in Section 2. The data reduction and the DEM method are introduced in Section 3. The results are shown in Section 4, which is followed by a summary and conclusion in Section 5. \par | In this work, we analyze an M7.7 limb flare on 19 July 2012 using the high resolution EUV data observed by \emph{SDO}/AIA. By applying a DEM method, we obtain the quantitative distributions of temperature and EM of the flare region including the flare loop and the reconnection site. The main results are summarized as follows.\par \begin{enumerate} \item{At the beginning of the flare, a significant amount of hot plasma ($\sim$ 5 MK) appeared in the top of the flare loop. 
As the reconnection continues, some heating mechanisms (such as turbulence or plasma waves) further heat the plasma to a higher temperature of $\ge$ 10 MK in the outflow regions, consistent with the results of \cite{LiuWei2013a}.} \item{Along the flare loop, the temperature and the EM are the highest in the loop top. From the loop top to the footpoints, the temperature and the EM decrease monotonically in the initial phase. However, the EM in the northern footpoint has a rapid increase during the impulsive phase, which is regarded as evidence of chromospheric evaporation. As a result, the net force exerted on the plasma in the northern footpoint changes its direction from downward to upward. Meanwhile, the chromospheric evaporation in the southern footpoint is weak, probably due to the asymmetry of the magnetic topology.} \item{The cusp-shaped structure above the flare loop in the gradual phase is a high-temperature and high-density structure. Above that, there is an elongated structure, probably corresponding to the current sheet. Across the current sheet, there exists a sharp change of temperature and EM, in particular at the northern side, which is probably a signature of a slow MHD shock.} \end{enumerate}\par The above studies and results on the spatially resolved DEM provide an example and clues in learning where and when the energy is released during a flare. In the future, we need to study more events toward a better understanding of the flare energetics and thermal dynamics. | 14 | 3 | 1403.6202 |
1403 | 1403.6728_arXiv.txt | {The measurement of the Rossiter-McLaughlin effect for transiting exoplanets places constraints on the orientation of the orbital axis with respect to the stellar spin axis, which can shed light on the mechanisms shaping the orbital configuration of planetary systems. Here we present the interesting case of the Saturn-mass planet HAT-P-18b, which orbits one of the coolest stars for which the Rossiter-McLaughlin effect has been measured so far. We acquired a spectroscopic time-series, spanning a full transit, with the HARPS-N spectrograph mounted at the TNG telescope. The very precise radial velocity measurements delivered by the HARPS-N pipeline were used to measure the Rossiter-McLaughlin effect. Complementary new photometric observations of another full transit were also analysed to obtain an independent determination of the star and planet parameters. We find that HAT-P-18b lies on a counter-rotating orbit, the sky-projected angle between the stellar spin axis and the planet orbital axis being $\lambda=132\pm15$ deg. By joint modelling of the radial velocity and photometric data we obtain new determinations of the star ($M_\star=0.770 \pm 0.027$ M$_\odot$; $R_\star=0.717 \pm 0.026$ R$_\odot$; $V\sin{I_\star}=1.58 \pm 0.18$ \kms) and planet ($M_{\rm p}=0.196 \pm 0.008$ M$_{\rm J}$; $R_{\rm p}=0.947 \pm 0.044$ R$_{\rm J}$) parameters. Our spectra provide for the host star an effective temperature $T_{\rm eff}=4870 \pm 50$ K, a surface gravity of $\log g_\star=4.57 \pm 0.07 $ \cmss, and an iron abundance of [Fe/H] = $ 0.10 \pm 0.06$. HAT-P-18b is one of the few planets known to transit a star with $T_{\rm eff}\lesssim6250$ K on a retrograde orbit. Objects such as HAT-P-18b (low planet mass and/or relatively long orbital period) most likely have a weak tidal coupling with their parent stars, therefore their orbits preserve any original misalignment. As such, they are ideal targets to study the causes of orbital evolution in cool main-sequence stars.} | \label{Sec:intro} The number of known extrasolar planets has recently passed the milestone of one thousand. While many discovery surveys are still ongoing, the characterization of known extrasolar planetary systems is gaining ever more attention. Transiting extrasolar planets (TEPs) are especially interesting as they allow for the direct determination of fundamental parameters such as planetary mass and radius \citep{2012MNRAS.426.1291S}. Moreover, observations of secondary eclipses put constraints on the planet albedo and brightness temperature, while transmission spectroscopy can be used to probe molecular and atomic features in the planet atmospheres. Another possibility offered by TEPs is to study the Rossiter-McLaughlin (RM) effect, which is an anomaly in the radial velocity orbital trend that occurs when the planet moves across the stellar photospheric disc (see \citealt{2011ApJ...742...69H} and references therein). The measurement of the RM effect permits the determination of the angle $\lambda$, the projection on the sky plane of the misalignment angle $\Theta$ between the stellar spin axis and the planet orbital axis. The knowledge of $\lambda$ can give insight into the mechanisms of formation and orbital migration of exoplanets (\citealt{2011Natur.473..187N}; \citealt{2008ApJ...678..498N}; \citealt{2011ApJ...735..109W}). 
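As a rough order-of-magnitude aside (not a statement taken from this paper), the expected amplitude of the RM radial-velocity anomaly can be estimated from the classical approximation
\begin{equation}
\Delta V_{\rm RM} \simeq \left(\frac{R_{\rm p}}{R_\star}\right)^{2} \sqrt{1-b^{2}} \; V\sin{I_\star} \, ,
\end{equation}
where $b$ is the transit impact parameter; with $R_{\rm p}/R_\star \approx 0.13$ and $V\sin{I_\star} \approx 1.6$ km\,s$^{-1}$ as quoted in the abstract, and neglecting the impact-parameter factor, this gives $\Delta V_{\rm RM} \sim 30$ m\,s$^{-1}$, a signal well within reach of HARPS-N radial-velocity precision.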
In the context of GAPS, a long-term observational programme with HARPS-N at TNG (\citealt{2013A&A...554A..28C}, hereafter Paper I; \citealt{2013A&A...554A..29D}), we are carrying out a sub-programme aimed at measuring the RM effect in a sample of TEP host stars. We plan to explore a wide assortment of stellar temperatures, ages, and masses, as well as diverse orbital (period, eccentricity) and physical (mass, radius) planet properties. In this paper, we report on the measurement of the RM effect for the \object{HAT-P-18} transiting system \citep{2011ApJ...726...52H}. \object{HAT-P-18b} is a Saturn-mass planet orbiting a K2 dwarf star with a period $P\sim5.5$ days. \citet{2011ApJ...726...52H} (hereafter H11) pointed out that with a density $\rho_{\rm p}\sim0.25$ g cm$^{-3}$, HAT-P-18b is not expected to have a significant heavy element core, according to the planetary models by \citet{2007ApJ...659.1661F}. | \label{Sec:concl} We have found that the Saturn-mass planet hosted by HAT-P-18, a K2 dwarf star with $T_{\rm eff} = 4870 \pm 50$ K, lies on a retrograde orbit. We discussed how the existence of such object fits in the context of the current alternative theories of giant planet orbital migration. HAT-P-18b scores a point in favour of gravitational N-body (N$\geqslant$3) interactions, while migration in the proto-planetary disc seems unable to explain its existence. HAT-P-18b, which is one of the very few planets around cool stars found to be on a retrograde orbit, also allows setting constraints on the efficiency of tidal interactions in obliquity damping. | 14 | 3 | 1403.6728 |
1403 | 1403.3082_arXiv.txt | We present Spitzer, NIR and millimeter observations of the massive star forming regions W5-east, S235, S252, S254-S258 and NGC7538. Spitzer data is combined with near-IR observations to identify and classify the young population while \coa~and \cob~observations are used to examine the parental molecular cloud. We detect in total 3021 young stellar objects (YSOs). Of those, 539 are classified as Class~I, and 1186 as Class~II sources. YSOs are distributed in groups surrounded by a more scattered population. Class I sources are more hierarchically organized than Class II and associated with the most dense molecular material. We identify in total 41 embedded clusters containing between 52 and 73\% of the YSOs. Clusters are in general non-virialized, turbulent and have star formation efficiencies between 5 and 50\%. We compare the physical properties of embedded clusters harboring massive stars (MEC) and low-mass embedded clusters (LEC) and find that both groups follow similar correlations where the MEC are an extrapolation of the LEC. The mean separation between MEC members is smaller compared to the cluster Jeans length than for LEC members. These results are in agreement with a scenario where stars are formed in hierarchically distributed dusty filaments where fragmentation is mainly driven by turbulence for the more massive clusters. We find several young OB-type stars having IR-excess emission which may be due to the presence of an accretion disk. | Embedded clusters are truly stellar nurseries, more than 90\% of the stars in our Galaxy are formed in such associations \citep{zin07}. Since they are young (with ages of less than $2-3$~Myr), they still contain the imprints of the parental molecular cloud. Moreover, the wide range of number stars \citep[10 to $10^4$,][]{lad03} and high density of members \citep[more than 20~stars$/$pc$^{-2}$,][]{lad03} makes embedded clusters perfect laboratories to study cluster dynamics, stellar evolution and star formation theories. Among embedded clusters, those harboring massive stars (hereafter massive embedded clusters) are particularly important since both the formation of massive stars and the impact of massive stars feedback on the other cluster members and the parental molecular cloud are still not well understood. Massive stars begin hydrogen burning while they are still accreting material and the strong stellar winds and ultraviolet (UV) photons emitted will eventually stop the accretion before the star reaches its final mass. In addition, the emitted UV photons ionize the surrounding cloud and create an expanding \hii~region that disrupts and compress the natal molecular cloud. The feedback effects of massive stars over, for example, their disk life-times (in case they have disk) and/or over other cluster members are unclear. It is also unclear under what conditions the \hii~regions and shock waves will either destroy the molecular cloud or trigger star formation \citep[there are several examples showing molecular gas that has been swept up by expanding \hii~regions and that contains young stars, eg.][]{deh08,cha08a,wan11}. This feedback into the interstellar medium is absent in the case of low-mass stars and it may play an important role in the star formation rate and evolution of the Galaxy. 
Of the three models proposed to explain the formation of massive stars \citep*[competitive accretion in a protocluster environment, monolithic collapse in turbulent cores and stellar collisions and mergers in very dense systems,][]{bon04,mck03,bon08}, competitive accretion and turbulent cores are somehow a scaled-up version of low-mass star formation. Competitive accretion requires that massive stars form at the center of the cluster (known as primordial mass segregation). This has been observed in some young clusters. However, it can be also achieved by the dynamical interaction between the cluster members and the gas they are embedded in \citep[e.g.][]{cha10}. The turbulent core model proposes that density enhancements created by turbulent motions allows the high-accretion rates necessary for high-mass stars to form. This requires massive cores highly turbulent which have been lately reported \citep[e.g.][]{her12}. In addition, rotating toroid-like structures and outflows (which are star formation indicators via gravitational collapse) have been observed for sources with masses up to 25 \msun~\citep[eg.][]{ces05,gar07}. However, there is still no evidence of disks in O-type stars. This suggests that coalescence may be an alternative theory of formation for stars with masses of more than 30 \msun~\citep{zin07}. However, the high star densities necessary for coalescence to occur have not been yet reported. Since embedded clusters are located deep inside their natal molecular cloud, they can be observed only at infrared (IR), and millimeter wavelengths. In the last years, several authors have studied embedded clusters using a combination of Spitzer-IRAC \citep{faz04,all04} and near-IR (NIR) data, which is proven to be a powerful tool to identify and classify YSOs in regions of star formation \citep*[eg. Ophiuchus, Serpens, Perseus, Taurus and NGC1333,][]{gut08,win07,sch08}, most of them in the low-mass range. \citet{sch08} studied the spatial distribution of different class YSOs in embedded clusters and found that they mostly evolve from a hierarchical to a more centrally concentrated distribution. \citet{gut09} analyzed 36 low-mass embedded clusters and found that YSOs are likely formed by Jeans fragmentation of parsec-scale clumps, in agreement with the accretion scenario. Massive embedded clusters, on the other hand, have been more elusive to scrutinize mainly due to two reasons; a) they are less abundant than low-mass clusters and hence usually located at several kpc from the Sun and, b) the early stages of massive star formation last only a few million years. Because of this, only a few massive embedded clusters have been evenly studied until date using a combination of near-IR and Spitzer data \citep[eg.][]{koe08,cha08a,kir08,dew11,ojh11}. Those studies have been carried out by several authors using different data sets and analysis. As a consequence, their results are difficult to compare between each other and the available statistics is still poor. We present an homogeneous Spitzer-IRAC, NIR and molecular data study on the young stellar population in five high-mass star forming regions: W5-east, S235, S252, S254-S258 and NGC7538. Our study aims to address the following questions: What are the physical properties of YSOs in massive embedded clusters? Are those properties similar to the low-mass case? What are the implications of those properties on the massive star formation scenario? 
Also, we provide a set of physical quantities with a reasonable statistical weight that will help to constrain theoretical models of star formation and cluster dynamics. Region S254-S258 was presented by \citet{cha08a}. In this work, we use their results for a more recent distance estimate derived from trigonometric parallax of methanol masers \citep[1.6 kpc,][]{ryg10}. The position and distance to the studied regions are summarized in Table~\ref{table_sources}. A brief description of the regions follows \citep[see][for a description of region S254-S258]{cha08a}. In \S~\ref{section_observations3} we explain our observations and the data reduction process. Results, including the identification of YSOs, the analysis of their spatial distribution and the study of the molecular cloud structure, are presented in \S~\ref{section_results3}. In \S~\ref{section_discussion3} we discuss and compare the physical properties of YSOs for low-mass and massive embedded clusters. Our conclusions are presented in \S~\ref{section_conclusions3}. The estimation of background contamination, a comparison with previous observations, the non-detection estimates and the Gaussian decomposition of the molecular spectra are explained in Appendices~\ref{background} through~\ref{section_gaussian}.
\begin{table}
\caption{List of observed regions}
\label{table_sources}
\centering
\begin{tabular}{lccc}
\hline
Name & RA (J2000) & Dec (J2000) & D$_{\odot}$ \\
 & hh mm ss & dd mm ss & [kpc] \\
\hline
W5-east & 03 01 31.20 & 60 29 13.0 & 2.0 \\
S235 & 05 40 52.00 & 35 42 20.0 & 1.8 \\
S252 & 06 09 04.70 & 20 35 09.0 & 2.1 \\
S254-S258 & 06 12 46.00 & 18 00 38.0 & 1.6 \\
NGC7538 & 23 13 42.00 & 61 30 10.0 & 2.7 \\
\hline
\end{tabular}
\end{table} | We performed a multi-wavelength study of five regions of massive star formation: W5-east, S235, S252, S254-S258 and NGC7538. Spitzer-IRAC/MIPS and NIR observations were used to classify the stellar population, while the molecular content was studied using \coa, \cob~observations and extinction maps. We found in total 3021 YSOs, including 539 Class I and 1186 Class II. A minimum spanning tree algorithm was used to identify YSO clusters based on the characteristic separation of their members. A total of 41 embedded clusters were found, 15 of which had not been identified before. The Class I sources are spatially correlated with the most dense molecular material. They are also located in regions with higher YSO surface density and are distributed more hierarchically than Class II. All this agrees well with the picture where stars are formed in dense and fractally arranged dust filaments. Then, dynamical interactions rearrange the YSOs into a more centrally condensed distribution. We find that the mean separation between cluster members is smaller than the cluster Jeans length in most cases. This difference is more evident in the case of the MEC. In addition, the \cob~line width of the clusters' associated molecular material shows that the clusters are turbulent. This agrees with a scenario in which fragmentation is likely driven by turbulence, though magnetic fields may also play an important role as a support against gravity. Between 30 and 50\% of the total number of YSOs are not included in clusters. This percentage of scattered population depends on the cluster finding algorithm used. We propose that between 10 and 20\% of the scattered population in the studied regions corresponds, indeed, to cluster members.
We compared the physical properties of embedded clusters associated with high-mass stars and clusters with no evidence of harboring massive stars. We find no systematic differences in the correlations derived for both samples. In all cases, the MEC seem to be an extrapolation of the LEC. The correlation between the clusters' dense mass and the number of cluster members is investigated. We find that this correlation is close to linear, in agreement with previous findings from \citet{lad10}. We also find that the star formation efficiency is rather constant across the cluster mass range. On average, the estimated SFE agrees well with the mass factor between the core mass function and the initial mass function. The spatial distribution of Class I and Class II sources in clusters AFGL4029 and G138.15+1.69 suggests sequential star formation that moves in the same direction as the ionization front. This is in agreement with previous estimates of the YSO ages around these clusters and supports the hypothesis that both clusters were created in a triggered star formation scenario \citep[see][and references therein]{cha11}. We classify a total of 24 OB-type stars as either Class I or Class II sources. The IR excess emitted by these sources may be due to a dusty structure around them. These sources are good candidates for future high-resolution studies aimed at searching for disks in high-mass stars. The presence of a rotating structure around massive stars suggests that they form via accretion. | 14 | 3 | 1403.3082
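A minimal Python sketch of the minimum-spanning-tree grouping step summarised above: build the MST of the projected YSO positions, cut branches longer than a chosen break length, and keep groups above a minimum membership. The break length and the minimum number of members used here are illustrative assumptions, not the values adopted in the paper.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_groups(xy_pc, break_length_pc=0.5, min_members=10):
    """Group YSOs by cutting MST branches longer than break_length_pc."""
    dist = squareform(pdist(xy_pc))              # pairwise separations (pc)
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > break_length_pc] = 0.0             # cut the long branches
    n_groups, labels = connected_components(mst != 0, directed=False)
    sizes = np.bincount(labels, minlength=n_groups)
    return [np.flatnonzero(labels == g)          # member indices per group
            for g in range(n_groups) if sizes[g] >= min_members]

Stars that end up in groups below the membership threshold would be counted as the scattered population discussed above.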
1403 | 1403.3561_arXiv.txt | We have used Washington photometry for 90 star cluster candidates of small angular size -- typically $\sim$ 11$\arcsec$ in radius -- distributed within nine selected regions in the inner disc of the Large Magellanic Cloud (LMC) to disentangle whether they are genuine physical systems, and to estimate the ages for the confirmed clusters. In order to avoid a misleading interpretation of the cluster colour-magnitude diagrams (CMDs), we applied a subtraction procedure to statistically clean them from field star contamination. Out of the 90 candidate clusters studied, 61 of them turned out to be genuine physical systems, whereas the remaining ones were classified as possible non-clusters since their CMDs and/or the distribution of stars in the respective fields do not resemble those of stellar aggregates. We statistically show that $\sim$ (13 $\pm$ 6)$\%$ of the catalogued clusters in the inner disc could be possible non-clusters, independently of their deprojected distances. We derived the ages for the confirmed clusters from the fit of theoretical isochrones to the cleaned cluster CMDs. The derived ages fall in the range 7.8 $\le$ log($t$) $\le$ 9.2. Finally, we built cluster frequencies for the different studied regions and found that there exists some spatial variation of the LMC CF throughout the inner disc. In particular, the innermost field contains a handful of clusters older than $\sim$ 2 Gyr, while the wider spread between different CFs has taken place during the most recent 50 Myr of the galaxy lifetime. | Star clusters have long been key objects for reconstructing the formation and the dynamical and chemical evolution of galaxies. As far as the Large Magellanic Cloud (LMC) is concerned, the study of its star cluster population has allowed us to learn about its spread in metallicity at the very early epoch \citep{betal96}; the existence of a relatively important gap in its age distribution \citep{getal97}; the evidence of vigorous star cluster formation episodes \citep{p11a}; the complexity of the cluster formation rate during the last million years \citep{dgetal13}, etc. Although somewhat inhomogeneous and clearly still incomplete, the astrophysical properties estimated for a significant number of LMC star clusters have been the starting point to address galactic global issues such as the age-metallicity relationship and the cluster formation rate, among others. Therefore, it is of great importance to enlarge the number of objects confirmed as genuine star clusters and to estimate their fundamental parameters. With the aim of providing mainly age and metallicity estimates for an increasing number of LMC star clusters, we have continued a long-term observational program carried out at Cerro Tololo Interamerican Observatory (CTIO) using different telescopes in conjunction with CCD cameras and the Washington photometric filters \citep{c76}. A total of 61 clusters have been observed and their fundamental parameters estimated \citep[see, e.g.][]{getal97,petal99,petal02,getal03,petal03a,petal03b,petal09,petal11}. More recently, we took advantage of a wealth of available images at the National Optical Astronomy Observatory (NOAO) Science Data Management (SDM) Archives\footnote{http://www.noao.edu/sdm/archives.php}, obtained at the CTIO 4-m Blanco telescope with the Mosaic II camera attached (36$\times$36 arcmin$^2$ field with an 8K$\times$8K CCD detector array, scale 0.274$\arcsec$/pixel) and the Washington filters.
From the whole volume of observed LMC fields \citep[see][]{petal12} we identified 206 clusters previously catalogued by \citet[hereafter B08]{betal08}, and studied 107 of them in some detail \citep{p11a,p12a}. In this work, we conclude the series of studies of unknown or poorly known LMC clusters with available Washington photometry by analysing the remaining objects in the aforementioned sample. The paper is organised as follows: Washington $C,T_1$ data are presented in Section 2. The star cluster sample is described in Section 3, while the cleaning of the colour-magnitude diagrams (CMDs) and the estimation of the cluster ages are presented in Sections 4 and 5, respectively. We discuss the results and build cluster frequencies in Section 6. Finally, conclusions of this analysis are given in Section 7. | The astrophysical properties of LMC star clusters have been the starting point to address galactic global issues such as the age-metallicity relationship and the cluster formation rate, among others. However, the number of clusters with estimated properties is still far less than half of the catalogued clusters. Therefore, it is of great importance to enlarge the number of objects confirmed as genuine star clusters and to estimate their fundamental parameters. We have used Washington photometry for 90 star cluster candidates of small angular size, typically $\sim$ 11$\arcsec$ in radius, distributed within nine fields (36$\times$36 arcmin$^2$) located in the LMC inner disc to estimate their ages and to build their local cluster frequencies. Some clusters are projected towards relatively crowded fields. In order to ensure a meaningful interpretation of the observed cluster CMDs, we applied a subtraction procedure to statistically clean them from the field star contamination. Thus, we could disentangle cluster features from those belonging to their surrounding fields. The employed technique makes use of variable cells in order to reproduce the field CMD as closely as possible. Out of the 90 cluster candidates studied, 61 turned out to be genuine physical systems, whereas the remaining ones were classified as possible non-clusters since their CMDs and/or the distribution of stars in the respective fields do not resemble those of stellar aggregates. We statistically show that $\sim$ (13 $\pm$ 6)$\%$ of the catalogued clusters distributed within the inner disc (distance from the LMC $\le$ 4$\degr$) could be possible non-clusters, independently of their deprojected distances. We derived the ages for the confirmed clusters from the fit of theoretical isochrones computed for the Washington system to the cleaned cluster CMDs. When adjusting a subset of isochrones we took into account the LMC distance modulus and the individual star cluster colour excesses. The derived ages fall in the range 7.8 $\le$ log($t$) $\le$ 9.2. Finally, we built CFs from ages available in the literature and from the ages estimated in this work, which encompass 90$\%$ of the whole sample of catalogued clusters located in the studied LMC regions. We found that there exists some spatial variation of the LMC CF throughout the inner disc. In particular, the innermost field (deprojected distance $\sim$ 0.56$\degr$) contains a handful of clusters older than $\sim$ 2 Gyr, while the wider spread between different CFs has taken place during the most recent 50 Myr of the galaxy lifetime. | 14 | 3 | 1403.3561
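The statistical CMD cleaning summarised above can be illustrated with a much simplified, fixed-grid version in Python: count field stars in colour-magnitude cells, scale by the field-to-cluster area ratio, and randomly discard that many cluster stars per cell. The paper's actual procedure uses cells of variable size, so this is only a sketch of the idea, and the bin numbers and area scaling are assumptions.

import numpy as np

def clean_cmd(cl_col, cl_mag, fd_col, fd_mag, area_ratio=1.0,
              nbins=(25, 25), seed=0):
    """Boolean mask of cluster stars kept after subtracting, cell by cell,
    the number of stars expected from the scaled field CMD."""
    rng = np.random.default_rng(seed)
    h_field, cedges, medges = np.histogram2d(fd_col, fd_mag, bins=nbins)
    h_field *= area_ratio                        # scale field to cluster area
    ci = np.clip(np.digitize(cl_col, cedges) - 1, 0, nbins[0] - 1)
    mi = np.clip(np.digitize(cl_mag, medges) - 1, 0, nbins[1] - 1)
    keep = np.ones(cl_col.size, dtype=bool)
    for i in range(nbins[0]):
        for j in range(nbins[1]):
            idx = np.flatnonzero((ci == i) & (mi == j))
            n_out = min(idx.size, int(round(h_field[i, j])))
            if n_out:
                keep[rng.choice(idx, n_out, replace=False)] = False
    return keep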
1403 | 1403.4946_arXiv.txt | We present the adaptable \textsc{Muesli} code for investigating dynamics and erosion processes of globular clusters (GCs) in galaxies. \textsc{Muesli} follows the orbits of individual clusters and applies internal and external dissolution processes to them. Orbit integration is based on the self-consistent field method in combination with a time-transformed leapfrog scheme, allowing us to handle velocity-dependent forces like triaxial dynamical friction. In a first application, the erosion of globular cluster systems (GCSs) in elliptical galaxies is investigated. Observations show that massive ellipticals have rich, radially extended GCSs, while some compact dwarf ellipticals contain no GCs at all. For several representative examples, spanning the full mass scale of observed elliptical galaxies, we quantify the influence of radial anisotropy, galactic density profiles, SMBHs, and dynamical friction on the GC erosion rate. We find that GC number density profiles are centrally flattened in less than a Hubble time, naturally explaining observed cored GC distributions. The erosion rate depends primarily on a galaxy's mass, half-mass radius and radial anisotropy. The fraction of eroded GCs is nearly 100\% in compact, M~32 like galaxies and lowest in extended and massive galaxies. Finally, we uncover the existence of a violent \textit{tidal disruption dominated phase} which is important for the rapid build-up of halo stars. | \label{sec:intro} Globular clusters (GCs) are among the oldest objects in galaxies. They can provide a wealth of information on the formation and evolution histories of their host galaxies as well as on cosmological structure formation \citep{1978ApJ...225..357S, 2010MNRAS.406.2000M, 2013ApJ...772...82H}. This paper is a first step in connecting and understanding properties of GC systems, their host galaxies and their central supermassive black holes. Surveys of elliptical galaxies show radial GC profiles to be less concentrated when compared to the galactic stellar light profiles (\citealt{1979ARA&A..17..241H, 1996ApJ...467..126F, 1999AJ....117.2398M, 2009A&A...507..183C} and references therein). An impressive example is the 10 kpc core in the spatial distribution of GCs in NGC~4874, one of the two dominant elliptical galaxies inside the Coma cluster \citep{2011ApJ...730...23P}. Two competing scenarios attempt to explain these cored distributions. In one scenario, the core originated from processes operating at the onset of galaxy formation in the very early universe \citep{1986AJ.....91..822H, 1993ASPC...48..472H}. The other scenario assumes that GCs were formed co-evally with the field stars with a cuspy distribution, i.e. comparable to the galactic stellar light profile. In this second scenario, the observed cores in the GC distributions are caused by the subsequent erosion and destruction of globular clusters in the nucleus of the galaxy itself \citep{1993ApJ...415..616C, 1998A&A...330..480B, 2000MNRAS.318..841V, 2003ApJ...593..760V}. It is this scenario we would like to shed light on with this study. However, taking all the relevant processes that affect the GC erosion rates in elliptical galaxies into account is numerically challenging. 
This is due to the fact that there are several internal and external processes acting simultaneously on the dissolution of globular clusters, such as two-body relaxation, stellar mass loss and tidal shocks \citep{1997ApJ...474..223G, 1997MNRAS.289..898V, 2001ApJ...561..751F, 2003MNRAS.340..227B, 2006MNRAS.371..793G}. In this study, we present a new code named \textsc{Muesli} to investigate several processes that dominate cluster erosion: (i) tidal shocks on eccentric GC orbits and relaxation driven dissolution, and their dependence on the anisotropy profile of the GC population, (ii) tidal destruction of GCs due to a central super-massive black hole (iii) stellar evolution and (iv) orbital decay through dynamical friction. That is: (i) GCs lose mass when stars get beyond the limiting Jacobi radius, $r_{J}$, and become unbound to the cluster. Two-body relaxation will cause any GC to dissolve with time. The dissolution time depends on the mass and extent of the GC as well as the strength of the tidal field \citep{2003MNRAS.340..227B}. GCs on very eccentric orbits are particularly susceptible for disintegration within a few orbits owing to the strong tidal forces near the galactic center. In radially biased velocity distributions, large fractions of orbits are occupied by such eccentric, i.e. low angular momentum orbits, and the overall destruction rate of globular clusters is strongly enhanced over the isotropic case. The same holds for triaxial galaxies when globular clusters move on box orbits \citep{1989MNRAS.241..849O, 1997MNRAS.292..808C, 2005MNRAS.356..899C}. (ii) The gradient of the potential which is relevant for the destruction of GCs is increased by the presence of supermassive black holes. SMBHs are commonly found in the cores of luminous galaxies \citep{1998AJ....115.2285M, 2007ApJ...662..808L} and the connection between SMBHs and globular clusters is of particular interest. \citet{2010ApJ...720..516B} and \citet{2011MNRAS.410.2347H} found empirical relations between the total number of GCs and the mass of the central black hole. The origin of this linear $\mbh-N_{\scriptsize{\mbox{GC}}}$ relation is under debate. See \cite{2013arXiv1312.5187H} for a most recent version of the $\mbh-N_{\scriptsize{\mbox{GC}}}$ relation and comparison to other globular cluster/host galaxy relations. There is some evidence that $\mbh$ and $N_{\scriptsize{\mbox{GC}}}$ are indirectly coupled over the properties of their host galaxies \citep{2012AJ....144..154R}, however a direct causal link cannot be ruled out owing to the difficulty of studying the growth of SMBHs from accreted cluster debris. (iii) Another effect is mass loss by stellar evolution (SEV). SEV decreases the globular cluster mass most significantly during an initial phase of roughly $100$ Myr. In this period, O and B stars lose most of their mass through stellar winds and supernovae (e.g.~\citealt{2008sse..book.....D}). Over a Hubble time, a stellar population loses about 30-40\% of its mass due to stellar evolution \citep{2003MNRAS.340..227B} (iv) Finally, massive objects like globular clusters lose energy and angular momentum due to dynamical friction (DF) when migrating through an entity of background particles. GCs will gradually approach the center of the galaxy where they are destroyed efficiently as described above. 
In low luminosity spheroids ($L\approx10^{10}\lsun$) decaying GCs might also merge together and contribute to the growth of nuclear star clusters \citep{1975ApJ...196..407T, 2011ApJ...729...35A, 2013ApJ...763...62A, 2013arXiv1308.0021G}. Among other quantities, the efficiency of DF depends on the departure of the host galaxy from spherical symmetry \citep{2004MNRAS.349..747P}, and becomes largest for low angular momentum orbits \citep{1992MNRAS.254..466P, 2005MNRAS.356..899C}. Like a real muesli, our \underline{Mu}lti-Purpose \underline{E}lliptical Galaxy \underline{S}CF + Time-Transformed \underline{L}eapfrog \underline{I}ntegrator (\textsc{Muesli}) consists of several well chosen ingredients. \textsc{Muesli} has a high flexibility and is designed for computing GC orbits and erosion rates in live galaxies. It can handle spherical, axisymmetric and triaxial galaxies with arbitrary density profiles, velocity distributions and central SMBH masses for which no analytical distribution functions exist. Since the potential of the galaxy is computed self-consistently, the code can handle time evolving potentials due to e.g. the interaction of the galaxy and a central black hole \citep{1998ApJ...498..625M} or even non-virialised structures. \textsc{Muesli} is designed to constrain the field-star and GC formation efficiencies in the early universe. This can be done by relating the computational outcomes with observations of the GC specific frequency, $S_{N}$, which is the number of observed globular clusters normalized to to total mass/luminosity of the host galaxy \citep{2010MNRAS.406.1967G, 2013ApJ...772...82H, 2013arXiv1307.6563W}. The U-shaped $S_{N}$ distribution, being highest for the least massive and most massive galaxies, traces the impact of feedback processes operating in different galactic environments. However, the quantitative examination of these processes requires knowledge about the total fraction of GCs eroded over time. In this first paper, we provide detailed information about the code and about $N$-body model generation, and we show results from the code testing. We apply our code to erosion processes of GCs inside spherical galaxies with Hernquist and S\'{e}rsic profiles, isotropic and radially biased velocity distributions and central SMBHs. This is done for four representative galaxies. These galaxies cover a wide range of masses ($M_{\scriptsize{\mbox{GAL}}} \approx\unit{10^{9}-10^{12}}{\msun}$), sizes ($R_{e}\approx\unit{10^{2}-10^{4}}{\mathrm{pc}}$) and central SMBH masses ($\mbh\approx \unit{10^{6}-10^{10}}{\msun}$). Erosion rates in axisymmetric and triaxial galaxies, as well as nuclear star cluster and SMBH growth processes by cluster debris are reserved for later publications. The present paper is organized as follows. The \textsc{Muesli} code and the dynamics governing globular cluster dissolution and disruption processes are specified in \S~\ref{sec:method}. At the end of this section we introduce the initial conditions of the GCs and discuss the generation of the underlying galaxy models. Results are presented in \S~\ref{sec:results}, followed by a critical discussion (\S~\ref{sec:critical_discussion}). The main findings are summarized in \S~\ref{sec:conclusion}. Extensive tests of the code are carried out in the Appendix \S~\ref{sec:testing}. | \label{sec:conclusion} We developed a versatile code, named \textsc{Muesli}, designed to investigate the dynamics and evolution of globular cluster systems in elliptical galaxies. 
It uses the self-consistent field method (SCF) with a time-transformed leapfrog scheme to integrate orbits of field stars and GCs. In this way, velocity-dependent forces like dynamical friction and post-Newtonian effects of central massive black holes can be handled accurately. In order to be able to treat spherical galaxies with anisotropic velocity distributions (as well as non-spherical galaxies), the code uses the ellipsoidal generalization of Chandrasekhar's dynamical friction formula \citep{1992MNRAS.254..466P}. The advantage of \textsc{Muesli} lies in its flexibility to evaluate the impact of complex physical processes on the erosion rates of globular clusters (GC) in evolving galaxies. In a first application, we have investigated if flat central cores in GC distributions around massive elliptical galaxies result from tidal disruption events (TDEs) and cluster dissolution processes through relaxation. Furthermore, we explored the question if the strong tidal field within the compact dwarf galaxy M~32 is responsible for lack of GCs in this galaxy. We used a power-law distribution for the GC masses, and set the initial phase-space distribution of the GCs equal to the stellar phase-space distribution of the host galaxy. The rapid phase of gas expulsion was ignored with the exception of one model. We assumed two cluster dissolution channels: (i) A slightly modified version of relaxation driven mass loss in tidal fields (which also handles SEV) from \cite{2003MNRAS.340..227B} was implemented. Once a cluster mass becomes less than $m_{\scriptsize{\mbox{GC}}}=\unit{100}{\msun}$, it is assumed to be dissolved by relaxation. Additionally (ii), we identified a tidal disruption criterion in terms of the ratio of cluster half-mass radius, $r_H$, to Jacobi radius, $r_J$, in that no cluster was able to survive for a significant amount of time, when the ratio $x = r_H/r_J$ passed a threshold of $x=0.5$. The condition for globular cluster disruption in tidal fields was calibrated by means of direct $N$-body experiments. For this purpose, we used the star cluster code \textsc{Nbody6} to compute the evolution of massive clusters on various orbits within the tidal field of a host galaxy. We found that, after 10 Gyr of evolution, all computed GC systems show signs of central flattening with the central core size depending in a non-trivial way on the mass, scale and anisotropy profile of the host galaxy and threshold GC mass. Galaxies with highly radially biased velocity distributions lose a significant fraction of clusters also at large galactocentric radii. As a result the cores, in their central density profiles are less pronounced than in galaxies with isotropic distributions. The primary factors which determine the disruption rate of GCs are the half-mass radius and mass of the galaxy and the initial degree of radial anisotropy of the GC system. For host galaxies with an isotropic velocity distribution, the fraction of disrupted globular clusters is nearly 100\% in very compact, M~32-like dwarf galaxies. The rate is lowest in the most massive and extended galaxies (50\%) like NGC~4889. The arithmetic mean radius, $\overline{R_{D}}$, where most GC destruction occurred during the last 10 billion years, is roughly equal to the (3D) half-light radius $R_{H}$ in compact dwarf ellipticals and drops to $0.15R_{H}$ in massive elliptical like NGC~4889. 
An isotropic initial velocity distribution is mostly preserved at large radius ($R>R_{H}$), while the GC velocity profile close to the galactic center become less radial or even tangentially biased. Different degrees of initial radial anisotropy may be the reason for a considerable scatter in the total number of GCs around more massive elliptical galaxies (see Table~\ref{GCDIS}). In compact M~32-like galaxy models with radial anisotropy no single GC survived. The influence of dynamical friction on the overall GC erosion rate in massive elliptical galaxies is insignificant as long as the initial cluster mass function follows a power law distribution with slope $\beta=2$. However, DF yields a small contribution in compact dwarf ellipticals like M~32. Secondary effects like the density profile or the presence of a central massive black hole manifest their influence only in the most massive and extended galaxies. An ultramassive black hole with a mass above ten billion solar masses inside a galaxy like NGC~4889 has a considerable impact on tidal disruption processes. Its presence increases the total fraction of destroyed GCs during the violent phase of tidal disruptions by 2\% to 5\% in absolute terms. We also found that globular cluster erosion processes result in a bell shaped GC mass function and a nearly constant relation between GC mean mass and galactocentric distances as long as the galaxies are not too extended and radially biased. Observations of bell-shaped GC mass functions in extended galaxies may indicate that their GC populations were formed in more compact building blocks of these galaxies, which later merged to form the present-day host. Finally, our results show a strong chronological aspect in the evolution of globular cluster systems. That is, most tidal disruptions occur at early times, on dynamical timescales of the host galaxy. Hence, we call this a \textit{tidal disruption dominated phase} in the evolution of globular cluster systems. Our simulations strongly suggest that the number of GCs in most galaxies was much higher at their formation. Therefore, depending on the fraction of stars in a galaxy which were born in globular clusters, the debris of the disrupted clusters should constitute a significant amount of a galaxy's field population. In the extreme case that all stars in galaxies were born in globular clusters, our study would imply that larger galaxies like NGC 4889 have to be the merger product of many smaller galaxies and/or that the progenitor galaxies were initially much more compact because otherwise 10-50\% of its stellar mass would still have to be locked up in globular clusters (Fig.~\ref{Erosionplot}). Given the fact that only about 0.1\% of all stars seem to be locked up in globular clusters nowadays, our study prefers building blocks of galaxies in the early universe to either have a small fraction of stars being born in very massive globular clusters, or being relatively compact like M\,32, or having highly radially biased GC distributions. Interestingly, we predict the field population coming from disrupted GCs to have complementary orbital properties to the phase space distribution of the surviving clusters. Moreover, we predict the centrally cored GC distributions around SMBHs to be tangentially biased, and thus parts of the field star population to have a pronounced radially biased component from cluster debris. 
The diffusion of this cluster debris in phase space (in combination with gravitational focussing relevant for unbounded matter) might therefore contribute to the rapid growth of SMBHs in the early universe through the refilling of the black hole loss cone. To which degree will be subject to a future study. | 14 | 3 | 1403.4946 |
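The tidal-disruption criterion quoted above (clusters do not survive once r_H/r_J exceeds about 0.5) can be evaluated with the standard circular-orbit estimate of the Jacobi radius. The Python sketch below uses that textbook approximation; MUESLI itself computes the tidal field self-consistently, so the numbers are illustrative only.

def jacobi_radius_pc(m_gc_msun, r_gal_pc, m_enclosed_msun):
    # circular-orbit approximation: r_J = R * (m_GC / (3 M(<R)))**(1/3)
    return r_gal_pc * (m_gc_msun / (3.0 * m_enclosed_msun)) ** (1.0 / 3.0)

def tidally_disrupted(r_half_pc, m_gc_msun, r_gal_pc, m_enclosed_msun,
                      x_crit=0.5):
    # disruption once x = r_H / r_J exceeds the calibrated threshold of 0.5
    r_j = jacobi_radius_pc(m_gc_msun, r_gal_pc, m_enclosed_msun)
    return r_half_pc / r_j > x_crit

# e.g. a 1e5 Msun cluster with r_H = 10 pc, orbiting 100 pc from the centre
# of a galaxy with 1e9 Msun enclosed mass: r_J ~ 3 pc, so it is disrupted
print(tidally_disrupted(10.0, 1.0e5, 100.0, 1.0e9))   # True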
9804 | astro-ph9804073_arXiv.txt | We introduce a statistical quantity, known as the $K$ function, related to the integral of the two--point correlation function. It gives us straightforward information about the scale where clustering dominates and the scale at which homogeneity is reached. We evaluate the correlation dimension, $D_2$, as the local slope of the log--log plot of the $K$ function. We apply this statistic to several stochastic point fields, to three numerical simulations describing the distribution of clusters and finally to real galaxy redshift surveys. Four different galaxy catalogues have been analysed using this technique: the Center for Astrophysics I, the Perseus--Pisces redshift surveys (these two lying in our local neighbourhood), the Stromlo--APM and the 1.2 Jy {\it IRAS} redshift surveys (these two encompassing a larger volume). In all cases, this cumulant quantity shows the fingerprint of the transition to homogeneity. The reliability of the estimates is clearly demonstrated by the results from controllable point sets, such as the segment Cox processes. In the cluster distribution models, as well as in the real galaxy catalogues, we never see long plateaus when plotting $D_2$ as a function of the scale, leaving no hope for unbounded fractal distributions. | The standard cosmology is based on the assumption that the Universe must be homogeneous on very large scales. Several pieces of evidence support this assumption: the homogeneity and isotropy of the microwave background radiation \cite{cobe} and some aspects of the large scale distribution of matter \cite{peb89} seem to strongly advocate uniformity on scales bigger than about $200 \, h^{-1}$ Mpc (where $H_0= 100 h$ km s$^{-1}$ Mpc$^{-1}$). However the presence of very large features in the galaxy distribution like the Bootes void \cite{kir81} or the Great Wall \cite{gel89} which span a scale length of the order of $100 \, h^{-1}$ Mpc calls the actual scale of homogeneity into question. Moreover other authors consider the assumption of homogeneity just a theoretical prejudice not necessarily supported by the observational evidence quoted above. They defend the alternative idea of an unbounded fractal cosmology \cite{col92}. Guzzo (1997) argues against this interpretation on the basis of a careful handling of the data. The spatial two--point correlation function is the statistical tool mainly used to describe the clustering in the Universe (Peebles 1980, 1993). However, because of the integral constraint \cite{p80}, one cannot estimate it at very large distances from the currently available redshift surveys. In order to study clustering in the regime where it is not very strong, we have only two possibilities: either we extend the size of the redshift catalogues or we use alternative statistical descriptors. The approach described in this paper points in the latter direction. In the same line, other authors \cite{fis93,par94,tad96} have tried to measure the power--spectrum on large scales directly from galaxy catalogues. Einasto \& Gramman (1993) studied the transition to homogeneity by means of the power--spectrum and found a relation between the correlation transition scale and the spectral transition scale (turnover in $P(k)$). We introduce the quantity called $K(r)$, which is related to the correlation function $\xi(r)$. The novelty of our approach lies essentially in the fact that we shall use a cumulant quantity instead of a differential quantity such as $\xi(r)$. 
Although for a point process the functions $\xi(r)$ and $K(r)$ are well defined, what we measure from the galaxy catalogues are just estimators of those functions. One of our main claims is that the estimators for $K(r)$ are more reliable than the most currently used estimators for $\xi(r)$ and that makes its use recommendable (especially in three-dimensional processes and at large scales) despite its somewhat less informative character. | We should like also to comment briefly on the relation of $K$ with the correlation function $\xi(r)$. Both play their role in the analysis of the point pattern and, as Stoyan \& Stoyan (1996) say, their relation is similar to that between the distribution function and the probability density function in classical statistics. The use of a cumulative quantity such as $K$ avoids binning in distance, which is often a source of arbitrariness for $\xi$ \cite{rip92}. Let us explain why $\xi$ does suffer from the hindrance of splitting the information into disjoint bins. When one estimates $\xi(r)$ in $[r,r+dr]$, it is assumed that within that bin the correlation function is constant, and since this is obviously not true, the larger the bin the larger the error, but we cannot make arbitrarily small the size $dr$ of the bin, because in that case we would not find any pairs. In other words, $\xi(r)$ has an additional source of bias, not present in $K$, due to the smoothing caused by averaging over pairs of points close to but not exactly $r$ units apart of each other (Stein 1996). The correlation length ($r_0| \xi(r_0)=1$) is just the scale at which the density of galaxies is, on average, twice the mean number density. At smaller scales the pair correlations are due to non--linear perturbations, but homogeneity is not reached till $\xi(r_{\rm \tiny hom}) \sim 0$. The main interest of $K$ is that it permits us to study clustering precisely in that \lq\lq difficult'' range where $r_0 < r < r_{\rm \tiny hom}$, which cannot be reached by $\xi$ because in this range the errors on the estimates of $\xi$ are comparable with their values, while the difference $K-K_{\rm \tiny Pois}$ is still meaningful. As a concluding remark, we want to stress that an unbiased estimator of a quantity related with the correlation integral, known as the $K$ function, has been applied to cosmological simulations and galaxy samples. This function, extensively used in the field of spatial statistics, provides a nice measure of clustering. The border correction used here does not waste any data points and does not introduce spurious homogeneization, giving reliability to the evaluation of this function at large scales. Through the slope of $K$ we are able to calculate $D_2$, which is an indicator of a possible fractal behaviour of the point process at a given scale range. The clear physical meaning of $K$ and $D_2$ helps us easily interpret the clustering properties of different models of structure formation at different scales. Regarding the analysis of the galaxy redshift surveys, we have seen that the estimator of the $K$ function is robust in the sense that it does not depend on the shape of the study region and provides us with reliable information about the point patterns over a wide range of scales. The behaviour of the local dimension $D_2$ for the real galaxy samples is particularly interesting to proponents of various fractal models of large--scale structure. 
If a constancy of $D_2$ with the scale is a necessary condition for having a fractal point pattern (although it should not be sufficient as we have seen with the Cox process [see also Stoyan (1994) for more examples]), it is a neat conclusion of our analysis that the galaxy distribution does not even hold the necessary condition. The analysis presented here will provide a conclusive test to discover the scale at which the distribution of the matter in the Universe is really homogeneous when applied, in the near future, to the bigger and deeper galaxy catalogues which will be soon ready for common use. \subsection*{ACKNOWLEDGEMENTS} This work has been partially supported by an EC Human Capital and Mobility network (contract ERB CHRX-CT93-0129) and by the Spanish DGES project n. PB96-0707. We thank prof. Stoyan for bringing the Cox model to our attention and for useful conversations and comments. We thank R. Croft, S. Paredes, R. Trasarti--Battistoni and R. van de Weygaert for kindly allowing us to use their samples and programs, as well as T. Buchert, J. Schmalzing, M. Stein and specially M. Kerscher for very interesting discussions and comments. The authors want to thank the anonymous referee for his/her valuable comments and suggestions. | 98 | 4 | astro-ph9804073_arXiv.txt |
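For concreteness, a naive estimator of the K function and of the local correlation dimension D_2 discussed above takes only a few lines of Python. This sketch omits the border correction that the paper stresses is essential for reliable estimates at large scales, so it only shows the definitions at work.

import numpy as np
from scipy.spatial import cKDTree

def k_function(points, radii, volume):
    """K(r) = V/(N(N-1)) * number of ordered pairs with separation <= r
    (no border correction)."""
    n = len(points)
    tree = cKDTree(points)
    pair_counts = tree.count_neighbors(tree, radii) - n   # drop self-pairs
    return volume * pair_counts / (n * (n - 1.0))

def local_dimension(radii, k_values):
    """D_2(r) as the local slope of log K(r) versus log r
    (requires at least one pair within every radius)."""
    return np.gradient(np.log(k_values), np.log(radii))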
9804 | gr-qc9804034_arXiv.txt | Primordial black holes may form in the early Universe, for example from the collapse of large amplitude density perturbations predicted in some inflationary models. Light black holes undergo Hawking evaporation, the energy injection from which is constrained both at the epoch of nucleosynthesis and at the present. The failure as yet to unambiguously detect primordial black holes places important constraints. In this article, we are particularly concerned with the dependence of these constraints on the model for the complete cosmological history, from the time of formation to the present. Black holes presently give the strongest constraint on the spectral index $n$ of density perturbations, though this constraint does require $n$ to be constant over a very wide range of scales. | Black holes are tenacious objects, and any which form in the very early Universe are able to survive until the present, unless their Hawking evaporation is important. The lifetime of an evaporating black hole is given by \begin{equation} \frac{\tau}{10^{17} \, {\rm sec}} \simeq \left( \frac{M}{10^{15} \, {\rm grams}} \right)^3 \,. \end{equation} From this we learn that a black hole of initial mass $M \sim 10^{15}$g will evaporate at the present epoch, while for significantly heavier black holes Hawking evaporation is negligible. Another mass worthy of consideration is $M \sim 10^{9}$g, which leads to evaporation around the time of nucleosynthesis, which is well enough understood to tolerate only modest interference from black hole evaporation by-products. Several mechanisms have been proposed which might lead to black hole formation; the simplest is collapse from large-amplitude, short-wavelength density perturbations. They will form with approximately the horizon mass, which in a radiation-dominated era is given by \begin{equation} \label{hormass} M_{{\rm HOR}} \simeq 10^{18} \, {\rm g} \, \left( \frac{10^7 \, {\rm GeV}}{T} \right)^2 \,, \end{equation} where $T$ is the ambient temperature. This tells us that any black holes for which evaporation is important must have formed during very early stages of the Universe's evolution. In particular, formation corresponds to times much earlier than nucleosynthesis (energy scale of abut $1\,$MeV), which is the earliest time that we have any secure knowledge concerning the evolution of the Universe. Any modelling of the evolution of the Universe before one second is speculative, and especially above the electro-weak symmetry breaking scale (about $100 \,$GeV) many possibilities exist. Note also that although we believe we understand the relevant physics up to the electro-weak scale, the cosmology between that scale and nucleosynthesis could still be modified, say by some massive but long-lived particle. In this article we will consider the standard cosmology and two alternatives \cite{gl,glr}. We define the mass fraction of black holes as \begin{equation} \beta \equiv \frac{\rho_{{\rm pbh}}}{\rho_{{\rm tot}}} \,, \end{equation} and will use subscript `i' to denote the initial values. In fact, we will normally prefer to use \begin{equation} \alpha \equiv \frac{\rho_{{\rm pbh}}}{\rho_{{\rm tot}}-\rho_{{\rm pbh}}} = \frac{\beta}{1-\beta} \,, \end{equation} which is the ratio of the black hole energy density to the energy density of everything else. Black holes typically offer very strong constraints because after formation the black hole energy density redshifts away as non-relativistic matter (apart from the extra losses through evaporation). 
In the standard cosmology the Universe is radiation dominated at these times, and so the energy density in black holes grows, relative to the total, proportionally to the scale factor $a$. As interesting black holes form so early, this factor can be extremely large, and so typically the initial black hole mass fraction is constrained to be very small. The constraints on evaporating black holes are well known, and we summarize them in Table~\ref{massfrac}. This table shows the allowed mass fractions at the time of evaporation. An additional, optional, constraint can be imposed if one imagines that black hole evaporation leaves a relic particle, as these relics must then not over-dominate the mass density of the present Universe \cite{BCL:PBH}. For black holes massive enough to have negligible evaporation, the mass density constraint is the only important one (though in certain mass ranges there are also microlensing limits which are somewhat stronger). \begin{table}[t] \caption[massfrac]{\label{massfrac} Limits on the mass fraction of black holes at evaporation.} \begin{tabular}{|c|c|c|} \hline \hline Constraint & Range & Reason \\ \hline $\alpha_{\rm{evap}} < 0.04$ & $10^{9}$ g $< M < 10^{13}$ g & Entropy per baryon\\ & & at nucleosynthesis \cite{var:ent} \\ \hline $\alpha_{\rm{evap}} < 10^{-26} \frac{M}{m_{{\rm Pl}}}$ & $M \simeq 5\times10^{14}$~g & $\gamma$ rays from current\\ & & explosions \cite{var:gam} \\ \hline $\alpha_{\rm{evap}} < 6\times10^{-10} \left( \frac{M}{m_{{\rm Pl}}}\right)^{1/2}$ & $10^{9}$~g $ < M <10^{11}$~g & n$\bar{\rm{n}}$ production \\ & & at nucleosynthesis \cite{var:neu} \\ \hline $\alpha_{\rm{evap}} < 5\times10^{-29} \left( \frac{M}{m_{{\rm Pl}}}\right)^{3/2}$ & $10^{10}$~g $< M < 10^{11}$~g & Deuterium destruction \cite{lin:deu} \\ \hline $\alpha_{\rm{evap}} < 1\times10^{-59}\left( \frac{M}{m_{{\rm Pl}}}\right)^{7/2}$ & $10^{11}$~g $< M < 10^{13}$~g & Helium-4 spallation \cite{var:he4}\\ \hline \end{tabular} \vspace*{2pc} \end{table} We will study three different cosmological histories in this paper, all of which are currently observationally viable. The first, which we call the standard cosmology, is the minimal scenario. It begins at some early time with cosmological inflation, which is necessary in order to produce the density perturbations which will later collapse to form black holes. Inflation ends, through the preheating/reheating transition (which we will take to be short), giving way to a period of radiation domination. Radiation domination is essential when the Universe is one second old, in order for successful nucleosynthesis to proceed. Finally, radiation domination gives way to matter domination, at a redshift $z_{{\rm eq}} = 24\,000\,\Omega_0 h^2$ where $\Omega_0$ and $h$ have their usual meanings, to give our present Universe. The two modified scenarios replace part of the long radiation-dominated era between the end of inflation and nucleosynthesis. The first possibility is that there is a brief second period of inflation, known as thermal inflation \cite{ls}. Such a period is unable to generate significant new density perturbations, but may be desirable in helping to alleviate some relic abundance problems not solved by the usual period of high-energy inflation. The second possibility is a period of matter-domination brought on by a long-lived massive particle, whose eventual decays restore radiation domination before nucleosynthesis. 
For definiteness, we shall take the long-lived particles to be the moduli fields of superstring theory, though the results apply for any non-relativistic decaying particle. | Although black hole constraints are an established part of modern cosmology, they are sensitive to the entire cosmological evolution. In the standard cosmology, a power-law spectrum is constrained to $n < 1.25$, presently the strongest observational constraint on $n$ from any source. Alternative cosmological histories can weaken this to $n < 1.30$, and worst-case non-gaussianity \cite{BP} can weaken this by another 0.05 or so, though hybrid models giving constant $n$ give gaussian perturbations. Finally, we note that while the impact of the cosmological history on the density perturbation constraint is quite modest due to the exponential dependence of the formation rate, the change can be much more significant for other formation mechanisms, such as cosmic strings where the black hole formation rate is a power-law of the mass per unit length $G\mu$. After all, the permitted initial mass density of black holes does increase by many orders of magnitude in these alternative cosmological models. | 98 | 4 | gr-qc9804034_arXiv.txt |
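The two order-of-magnitude relations quoted in the introduction above translate directly into numbers; the short Python sketch below evaluates the Hawking lifetime scaling and inverts the horizon-mass relation for the formation temperature. Nothing beyond those two quoted scalings is assumed.

def lifetime_s(mass_g):
    # tau / 1e17 s ~ (M / 1e15 g)^3
    return 1.0e17 * (mass_g / 1.0e15) ** 3

def formation_temperature_gev(mass_g):
    # invert M_HOR ~ 1e18 g * (1e7 GeV / T)^2
    return 1.0e7 * (1.0e18 / mass_g) ** 0.5

for m in (1.0e9, 1.0e15):
    print("M = %.0e g: tau ~ %.0e s, formed at T ~ %.0e GeV"
          % (m, lifetime_s(m), formation_temperature_gev(m)))
# M = 1e9 g evaporates after ~0.1 s (the nucleosynthesis era);
# M = 1e15 g survives ~1e17 s, i.e. roughly the present epoch.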
9804 | astro-ph9804245_arXiv.txt | We describe a method for the extraction of spectra from high dispersion objective prism plates. Our method is a catalogue driven plate solution approach, making use of the Right Ascension and Declination coordinates for the target objects. In contrast to existing methods of photographic plate reduction, we digitize the entire plate and extract spectra off-line. This approach has the advantages that it can be applied to CCD objective prism images, and spectra can be re-extracted (or additional spectra extracted) without having to re-scan the plate. After a brief initial interactive period, the subsequent reduction procedure is completely automatic, resulting in fully-reduced, wavelength justified spectra. We also discuss a method of removing stellar continua using a combination of non-linear filtering algorithms. The method described is used to extract over 12,000 spectra from a set of 92 objective prism plates. These spectra are used in an associated project to develop automated spectral classifiers based on neural networks. | The MK classification of stellar spectra (Morgan, Keenan \& Kellman 1943\nocite{morgan_43a}) has been an important tool in the workshop of stellar and galactic astronomers for more than a century. While improvements in astrophysical hardware have enabled the rapid observation of digital spectra, our ability to efficiently analyze and classify spectra has not kept pace. Traditional visual classification methods are clearly not feasible for large spectral surveys. In response to this, we have been working on a project to develop automated spectral classifiers (von Hippel et~al.\ 1994; Bailer-Jones 1996; Bailer-Jones et~al.\ 1997, 1998). These classifiers, which are based on supervised artificial neural networks, can rapidly classify large numbers of digital spectra. The development of these classification techniques has required a large, representative set of previously classified spectra. The most suitable data has been the spectra from the Michigan Spectral Survey (Houk 1994)\nocite{houk_94a} and the accompanying MK spectral type and luminosity class classifications listed in the {\it Michigan Henry Draper} (MHD) catalogue (Houk \& Cowley 1975; Houk 1978, 1982; Houk \& Smith-Moore 1988). \nocite{houk_75a}\nocite{houk_78a}\nocite{houk_82a}\nocite{houk_88a} This paper describes the data reduction techniques we developed to extract and process these spectra. | \begin{figure} \centerline{ \psfig{figure=fig9.eps,width=0.5\textwidth,angle=0} } \caption{Distribution of spectral types for each luminosity class. The dotted line represent giants (III), the dashed line subgiants (IV) and the solid line dwarfs (V).} \label{dist_B} \end{figure} This paper has described a method for extracting spectra from objective prism images. The method has been developed for the reduction of a set of photographic objective prism plates, but because the spectral extraction and processing takes place entirely in software using the complete digitized plate, it can equally well be applied to CCD objective prism images. The extraction process is driven by a set of catalogue Right Ascension and Declination positions, so a direct image of each field is not required. After an initial interactive period taking one or two minutes, the subsequent reduction is automatic, taking approximately one hour on a modest-sized SUN Sparc IPX to process a single plate (i.e.\ extract about 150 spectra). 
The reduction method described in this paper has been used to extract a set of over 12,000 high-quality spectra. From this, a subset of over 5,000 normal spectra was selected which had reliable two-dimensional (spectral type and luminosity class) classifications listed in the MHD catalogue. The frequency distribution of the various stellar classes in this set is shown in Figure~\ref{dist_B}. This data set is used in accompanying papers to produce automated systems for classifying and physically parametrizing stellar spectra (Bailer-Jones et~al.\ 1997, 1998). In the interests of extending spectral classification to more distant stellar populations, spectra of stars fainter than B $\sim 12$ are required. This could be achieved with a CCD objective prism survey. Although the technique described can only extract objects with known Right Ascension and Declination coordinates, the HST Guide Star Catalogue (e.g.\ Lasker et al.\ 1990)\nocite{lasker_90a}, which lists 19 million objects brighter than 16$^{th}$ magnitude, could be used as a driver for extraction. However, Bailer-Jones (unpublished, 1996) has also modified the method to extract unwidened spectra from CCD objective prism images in the absence of any coordinates, using an algorithm to locate local flux peaks. The method can be applied to spectra at different spectral resolutions and wavelength coverages, provided a suitable line exists for the second plate solution. | 98 | 4 | astro-ph9804245_arXiv.txt |
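The continuum removal summarised above relies on non-linear filters; a minimal Python sketch with a single broad median filter conveys the idea, since a running median tracks the continuum while being robust to narrow spectral lines. The filter width here is arbitrary, and the paper's actual combination of filters may well differ.

import numpy as np
from scipy.ndimage import median_filter

def continuum_subtract(flux, width_pixels=51):
    """Divide out a running-median continuum estimate."""
    continuum = median_filter(flux, size=width_pixels, mode='nearest')
    return flux / np.clip(continuum, 1e-30, None) - 1.0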
9804 | astro-ph9804135_arXiv.txt | We have undertaken near-continuous monitoring of the Seyfert 1 galaxy NGC 7469 in the X-ray with \xte\ over a $\sim 30$~d baseline. The source shows strong variability with a root-mean-square (rms) amplitude of $\sim 16$~per cent, and peak--to--peak variations of a factor of order 2. Simultaneous data over this period were obtained in the ultraviolet (UV) using \iue, making this the most intensive UV/X-ray variability campaign performed for any active galaxy. Comparison of the continuum light curves reveals very similar amplitudes of variability, but different variability characteristics, with the X-rays showing much more rapid variations. The data are not strongly correlated at zero lag. The largest absolute value of the correlation coefficient occurs for an anticorrelation between the two bands, with the X-ray variations leading the UV by $\sim 4$~d. The largest positive correlation is for the ultraviolet to lead the X-rays by $\sim 4$~d. Neither option appears to be compatible with any simple interband transfer function. The peak positive correlation at $\sim 4$~d occurs because the more prominent peaks in the UV light curve appear to lead those in the X-rays by this amount. However, the minima of the light curves are near-simultaneous. These observations provide new constraints on theoretical models of the central regions of active galactic nuclei. Models in which the observed UV emission is produced solely by re-radiation of absorbed X-rays are ruled out by our data, as are those in which the X-rays are produced solely by Compton upscattering of the observed UV component by a constant distribution of particles. New or more complex models must be sought to explain the data. We require at least two variability mechanisms, which have no simple relationship. We briefly explore means by which these observations could be reconciled with theoretical models. | The origin of the continuum emission of Active Galactic Nuclei (AGN) -- which covers an extremely broad band -- is not well understood. In a number of high-luminosity sources, it appears that this emission peaks in the ultraviolet (UV), the so-called ``Big Blue Bump'' (Shields 1978; Malkan \& Sargent 1982). Strong, and apparently non-thermal, X-ray flux is also a persistent property of AGN (e.g., Marshall \etal\ 1981). The X-ray emission covers a wide band from at least 0.1--100~keV, and can be described by a power-law form (Mushotzky, Done \& Pounds 1993). The Big Blue Bump is often identified as the thermal output of an accretion disk (henceforth the accretion disk model; e.g., Shakura \& Sunyaev 1973). Heat is generated by viscous dissipation in the disk, which then radiates in the optical/UV regime for black hole masses typical of AGN (e.g., Sun \& Malkan 1989). An alternative origin for the UV continuum emission has been suggested by both observational and theoretical considerations. Guilbert \& Rees (1988) postulated that the UV need not be internally generated in the accretion disk, but could arise via absorption and thermal re-emission (hereafter referred to as ``thermal reprocessing'') of X-rays in optically thick gas close to the central engine. The material -- which could be, but does not necessarily have to be, the disk -- would imprint features on the X-ray spectra (e.g., Lightman \& White 1988; George \& Fabian 1991; Matt, Perola \& Piro 1991).
Such features have been found (e.g., Nandra \& Pounds 1994) and suggest that approximately half of the incident X-rays are absorbed in the optically thick material. Spectroscopic observations of strong gravitational and Doppler effects in the iron K$\alpha$ line profiles of Seyfert galaxies (e.g., Tanaka \etal\ 1995; Nandra \etal\ 1997) suggest that this material lies extremely close to the central black hole and is probably in the form of a disk (Fabian \etal\ 1995). The bulk of the continuum photons absorbed in the gas should then be re-emitted at the characteristic thermal temperature of the material. For dense gas close to the central engine, and particularly for ``standard'' accretion disks, this should be in the optical/UV. Most models for the X--ray continuum of AGN are based on the idea that lower-energy photons are Compton scattered by a population of hot electrons and/or pairs (which we refer to as ``upscattering'' models e.g., Sunyaev \& Titarchuk 1980; Svensson 1983; Guilbert, Fabian \& Rees 1983). The seed photons are often assumed to be those in the blue bump. Specific models differ primarily in their assumptions about, e.g., the geometry of the system (e.g. Haardt \& Maraschi 1991, 1993; Haardt, Maraschi \& Ghisellini 1994; Stern \etal\ 1995), the question of whether the electron population has a thermal or non-thermal distribution, the importance of pairs (e.g., Zdziarski \etal\ 1990, 1994). These models have been successful in explaining various observations. The goal underlying the exploration of the models is the discovery of the process responsible for the generation of the copious energy output of AGN. While the case for accreting supermassive black holes is becoming compelling, the method by which the rest-mass energy of the material is converted into radiative energy is still highly uncertain. Some specific questions which remain about the emission mechanisms include: \begin{enumerate} \item{How important is viscous dissipation in the generation of the UV?} \item{What proportion of the UV arises via thermal reprocessing of X--rays?} \item{What is the seed population for upscattering into the X--rays?} \item{What mechanism accelerates the particles which up-scatter these seed photons?} \end{enumerate} A powerful way of investigating these questions is by variability campaigns. These have already reaped rich rewards in the study of AGN emission lines via ``reverberation mapping'' (e.g. Peterson 1993, Netzer \& Peterson 1997, and references therein). These emission line campaigns, however, also had strong implications for the generation of the continua, which we shall discuss below. The models discussed above all imply strong connections between the continuum emission in different bands. For example, the accretion disk emission could cover an extremely broad band, depending on the temperature profile of the disk. The thermal reprocessing model predicts that the X-rays should be generating UV emission. The upscattering model suggests the converse. By observing the variability in these bands, therefore, we can make inferences as to which of the various processes is in operation and to what degree. In particular, simultaneous X-ray/UV data should be the most revealing. A number of AGN have been monitored simultaneously at optical/UV and X-ray energies. Leaving aside blazars, the best-studied sources are NGC~4151, NGC~5548 and NGC 4051. In the first two objects, there is evidence for a correlation between the two bands. 
The best-sampled (and therefore most reliable) case is NGC~4151, in which the 1455~\AA\ and 2--10~keV flux appears to correlate well on all time scales from hours to a year (Perola \etal\ 1986; Edelson \etal\ 1996). In NGC~5548, the flux in the two bands is also well correlated on time scales from days to 1 year (Clavel \etal\ 1992). In both sources however, the correlation appears to break down during one very large UV outburst. NGC~4051 shows different behavior, in that the X-ray emission showed large-amplitude (factor $\sim 2$) variability, while the optical emission remained steady to within a few per cent when observed over a $\sim 2$~d baseline (Done \etal\ 1990). For completeness, we also mention the results obtained for other non-blazar AGN, though their significance is marginal due to the small number of simultaneous observations and/or the short duration of the campaigns. In Fairall~9, the slow decline of the 2--10~keV flux mimics the secular fading of the UV and optical continuum from 1978 to 1985 (Morini \etal\ 1986). The UV-optical versus X-ray flux correlation seems to hold in NGC~4593 (Santos-Ll\'{e}o \etal\ 1995), whereas in MCG-8-11-11 (Treves \etal\ 1990), 3C120 (Maraschi \etal\ 1991) and 3C~273 (Courvoisier \etal\ 1990) the two wavebands appear to be independent of each other. These previous attempts at determining the relationship between the components have obviously left some ambiguity. This is perhaps not surprising as generally the sampling of the light curves has been rather poor. In order to provide an improved dataset, a campaign of near-continuous \xte\ and \iue\ monitoring of NGC 7469 was undertaken over a $\sim 1$~month baseline. The results of the campaign in terms of the relationship of the X-ray and UV variability are the subject of this paper. We have effectively divided the paper into two halves. Sections 2-4 discuss the observational results exclusively, which are then summarized in Section 5. Section 6 then investigates the implications of the observational results within the framework of the models discussed above and suggest possible ways of reconciling the data with models. | We have investigated the relationship between the X-ray and UV emission in NGC 7469 on time scales of hours-weeks. The poor correlation between the X-ray and UV light curves at zero lag may be considered a surprising result because, as mentioned in \S1, some previous experiments suggested a good correlation, and little if any time lag between the variations in the two bands (e.g. NGC 5548, NGC 4151). No other AGN, however, has been monitored as intensively and for such a long duration. Variability information has important implications for the physical mechanisms responsible for the production of the X-ray and UV emission in AGN. If the flux changes in two bands are correlated, this suggests some causal link between them. A time lag between the bands then shows which component drives the other. If X-ray variations lead those in the UV, this is strongly suggestive of thermal reprocessing. If the opposite is observed, this strongly favors upscattering. With a simple transfer function this delay should be similar for all ``events'' in the light curve. No such simple behavior was observed during our campaign and the interpretation is less straightforward. Our data require modifications to the simplest ideas about the emission mechanisms in the UV and X-ray. 
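For concreteness, by a ``simple transfer function'' we mean here nothing more than the standard linear-response form (the notation below is schematic and introduces no quantities derived from our data): $$ F_{\rm resp}(t) = \int_0^{\infty} \Psi(\tau)\, F_{\rm driv}(t-\tau)\, d\tau , $$ where $F_{\rm driv}$ and $F_{\rm resp}$ are the driving and responding continuum light curves and $\Psi(\tau)$ is the transfer function. In such a description the centroid of $\Psi$ fixes the interband lag, while its width both smooths the responding light curve and suppresses its amplitude on short time scales; this is the behaviour appealed to in the discussion below.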
\subsection{Model ingredients} It is widely accepted that the radiative energy from AGN originates as the rest-mass energy of accreting material. The conversion process must include a mechanism to accelerate the particles which produce the X-rays as they are the highest-energy photons to carry an appreciable fraction of the luminosity. In principle, the remainder of the AGN spectrum could be produced by thermal reprocessing following absorption of some fraction of this X-ray continuum. Viscous dissipation in the accretion flow, however, could dominate the observed UV emission. In many scenarios there is also a radiative connection between the UV and X-ray emission regions: X-rays can be produced by upscattering of UV seed photons and UV emission can be produced by re-radiation of absorbed X-rays. In these circumstances, a substantial number of factors can affect the observed variability: \begin{enumerate} \item Changes in the physical properties (e.g. optical depth, temperature, geometry) of the particle distribution which produces the X-rays \item Instabilities in the accretion flow \item The geometry and size of the X-ray and UV emission regions \item Anisotropy of the radiation fields, which might include relativistic effects close to the black hole \item The temperature distribution of the absorbing medium \item The importance of feedback, in which variations in each band affect the other \item Changes in occulting/absorbing media \end{enumerate} \subsection{Implications of the NGC 7469 data} Our data have a number of implications for the processes which produce the UV--to--X-ray emissions of NGC 7469 and the observed variability, which we now discuss. First we consider our observations in the context of historical data and the spectral energy distribution of NGC 7469. \subsubsection{The broad-band perspective} The mean X-ray and UV fluxes observed for NGC 7469 during our campaign are very much typical of the respective historical means for this source. The mean flux in the 2--10 keV band, based on historical observations over the period 1979-1993, is $\sim 3 \times 10^{-11}$\,erg cm$^{-2}$ s$^{-1}$. Furthermore, the range of historical variability is very similar to that observed during our campaign. This suggests that we sampled a large fraction of source variability in NGC 7469 during our one-month campaign, although we observe no obvious flattening of the PDS (Fig.~\ref{fig:pds}) at the lowest frequencies. Chapman, Geller and Huchra (1985) derived a mean value of $4.6 \times 10^{-14}$\,erg~cm$^{-2}$~s$^{-1}$~\AA$^{-1}$ for the 1430--1460~\AA\ continuum flux of NGC~7469 from 10 \iue\ observations in 1979--1982. Similarly, Edelson, Pike and Krolik (1990), reported a mean value of $4.8 \times 10^{-14}$\,erg~cm$^{-2}$~s$^{-1}$~\AA$^{-1}$ for the continuum flux at 1450~\AA\ (rest wavelength) from 16 \iue\ observations in 1979--1985. With a mean UV flux at 1485~\AA\ (observed wavelength) of $4.0 \times 10^{-14}$\,erg~cm$^{-2}$~s$^{-1}$~\AA$^{-1}$ (W97), NGC~7469 was thus neither particularly faint nor exceptionally bright during our campaign. The optical--to--X-ray spectral index of NGC~7469, $\alpha_{ox}$~=~1.22, is not significantly different from that of, e.g., NGC~5548 ($\alpha_{ox}$~=~1.25), or the mean index for Seyfert galaxies (Kriss, Canizares and Ricker 1980). The spectral energy distribution of NGC 7469 is shown in Fig.~\ref{fig:sed}. The only real peculiarity of NGC~7469 is the presence of a circumnuclear starburst ring within $1.\!''5$ of its nucleus. 
Genzel \etal\ (1995) estimate that the starburst accounts for two-thirds of the source bolometric luminosity and it may dominate the IR emission. However, it only contributes 4 percent to the observed X-ray flux (Perez-Olea and Colina 1996). The starburst should be invariant on the time scales sampled here and have no effect on the X--ray/UV variability. \subsubsection{The X--ray continuum} The presence of rapid variations in the X-rays which are not seen in the UV implies that the particle distribution responsible for the X-rays is variable. The X-ray emission mechanism is highly uncertain, but as mentioned in the introduction, many models have concentrated on Compton upscattering of seed UV photons by a population of hot electrons and/or pairs (e.g. Haardt \& Maraschi 1991, 1993). Our observations show that if the observed UV photons are the seed population, then the rapid variations of the X-rays do not arise from variations in the seed. Either the optical depth, temperature or geometry of the upscattering region must be changing. In the latter case, changes in the distribution of active regions in the X-ray source, or kinematic effects can produce variability (e.g., Abramowicz \etal\ 1991; Haardt, Maraschi \& Ghisellini 1997). Longer-term variations are also observed in the X-ray flux and these could also arise from processes such as those just mentioned. The fact that the power spectrum shows no obvious features or break is supportive of this interpretation. The fluctuations have a similar amplitude to those in the UV and this suggests a connection between the bands. This is intriguing. One possibility is that the long time-scale variability in the X--rays is due to changes in the UV seed population. In the simplest such interpretation - where the upscattering region is point-like and lies in the line of sight to a point-like seed source we expect a 1:1 correlation between the two bands with no lag. This is ruled out by our data, although we note that the minima appear to be very close in time. In more complex geometries, there may be time lags which will be in the sense that the X-ray variations follow those in the UV. The delay of $\sim 4$~d between the peaks is superficial evidence in favor of upscattering. In that model, however, we expect the lag to be very short, being dependent primarily on the light travel time between the regions, modified by geometrical factors. A lag of $\sim 4$~d, the only plausible conclusion here, seems rather long to be associated with these processes. In addition, we are unable to envisage a purely geometrical modification of a linear process which accounts for {\it both} the relationship between the maxima and that of the minima. \subsubsection{The UV continuum} We now consider the alternative that the X-ray emission drives the UV in the thermal reprocessing scenario. The luminosities in the X-ray and UV bands are similar (Fig.~\ref{fig:sed}) and thermal reprocessing can therefore be energetically important. As stated above, the 2-10 keV observed flux of NGC 7469 at our epoch was $3.4\times 10^{-11}$\,erg cm$^{-2}$ s$^{-1}$. However, the X-ray emission of NGC 7469 covers a far wider band than this, with significant emission being observed down to $\sim 0.1$~keV with ROSAT (Brandt \etal\ 1993) and most likely up to at least 100~keV as seen by OSSE (Zdziarski \etal\ 1995; Gondek \etal\ 1996). 
Estimates of the underlying photon index of the continuum are in the range $1.9-2.0$ after accounting for the effects of Compton reflection (Piro \etal\ 1990; Nandra \& Pounds 1994). We estimate the mean X--ray luminosity of NGC 7469 at our epoch to be $1.8\times 10^{44}$\,erg s$^{-1}$ in the 0.1-100~keV band. In the canonical thermal--reprocessing scenario about half of this luminosity should be absorbed in the accretion disk or other material. After estimating that fraction which is Compton scattered rather than absorbed (George \& Fabian 1991) we conclude that the presence of the iron emission line and reflection hump in NGC 7469 indicate that a luminosity of $\sim8\times 10^{43}$\,erg s$^{-1}$ of the X-ray emission of NGC 7469 is reprocessed and re-emerges as thermal emission. Let us now suppose that all of this luminosity emerges in a single black body (which represents the narrowest physically-realistic spectrum) peaking close to 1315\AA\ ($kT \sim 2$~eV). Such a blackbody is almost sufficient to account for the continuum at 1315\AA\ (Figure~\ref{fig:sed}). Therefore, it is energetically possible that reprocessed X-rays produce some of the observed UV continuum and its variations. As in the case of upscattering, the simplest thermal reprocessing models predict a strong positive correlation between the bands, with any time lags being in the sense that the X-ray variations lead those in the UV. No such lag is observed. We do find that the strongest {\it anti}--correlation of the datasets occurs for the X-rays leading the UV, but the interpretation of such a result is far from obvious and we do not comment on it further. Even if aliasing has caused us to ``miss'' a positive correlation with a long ($\sim 14$~d) lag between the UV and X--rays, any lag longer than a day or so is very difficult to explain. Any single transfer function which related the two bands would smooth the light curve of the responding band and reduce its amplitude. We observe, however, that the amplitudes of variability on long time scales are very similar. We therefore reject such a possibility. In this case it is even more difficult to envisage a complex geometry which can reproduce the light curves. It therefore seems highly unlikely that any substantial proportion of the 1315\AA\ continuum of NGC 7469 arises from thermal reprocessing unless, for example, there is substantial anisotropy of the X-ray emission. W97 found that the variations at longer UV wavelengths followed those at shorter wavelengths, but with a time lag of a fraction of a day. Collier \etal\ (1998) have demonstrated that this trend continues into the optical, and Peterson \etal\ (1998) find this trend to be significant at no less than the 97~per cent confidence level. This, together with the rapidity of the variations in NGC 7469, is most easily explicable in terms of the thermal reprocessing hypothesis. However, our data essentially rule out models in which all the observed optical/UV flux is re-radiated X-ray emission. The optical/UV variability therefore requires either intra-band reprocessing, which is difficult from an energetics standpoint, or some other model. Should we therefore conclude that the UV/optical continuum in NGC 7469 arises from direct emission by the accretion disk? Our data offer no direct constraints on accretion disk models, as no explicit relationship between the X--ray and UV emissions is predicted by those models. 
Nonetheless, the fact that thermal reprocessing is strongly disfavored by our data has profound implications for the disk models. The rapid and wavelength-coherent variations in the optical and UV flux of AGN is difficult to reconcile with a standard $\alpha$-disk (e.g. Krolik \etal\ 1991; Molendi, Maraschi \& Stella 1992). Prior to our observations, it was conceivable that thermal reprocessing was responsible for these rapid variations. There is now a clear need for a revision of accretion disk theory to account for these wavelength-independent variations without resorting to reprocessing. \subsubsection{The extreme ultraviolet (EUV) continuum and UV emission lines} Given the presence of a typical iron K$\alpha$ line and reflection hump in this source (Piro \etal\ 1990; Nandra \& Pounds 1994) we are left with the question of where the putative reprocessed X-ray flux is emitted. One possibility is that the thermal reprocessing occurs in a molecular torus (Ghisellini, Haardt \& Matt 1994; Krolik, Madau \& Zycki 1994), in which case it might emerge in the infrared. Such an hypothesis would predict a narrow iron K$\alpha$ line in the X-ray spectrum, whereas in many Seyfert galaxies these lines are extremely broad. The case of NGC 7469 is unclear, with Guainazzi \etal\ (1994) finding no evidence for a broad component and Nandra \etal\ (1997) finding marginal evidence. A conclusive determination requires a longer exposure with \asca, but it seems highly likely that the iron K$\alpha$ line and Compton hump in Seyfert 1 galaxies in general are produced extremely close to the central black hole (e.g., Nandra \etal\ 1997). We would therefore expect the reprocessed emission to emerge at a higher energy. As shown above, however, the thermally reprocessed X-rays only make a strong contribution to the observed optical and UV wavebands if the emission is strongly peaked at those wavelengths. It seems more likely that the emission covers a range of temperatures, in which case the reprocessed flux would be difficult to detect when spread over a wide band. Alternatively, it could peak in the (unobserved) EUV band. Figure~\ref{fig:sed} shows that this can indeed be the case. A blackbody of luminosity $8\times 10^{43}$\,erg s$^{-1}$ contributes less than 5~per cent of the flux at 1315\,\AA\ as long as $kT>12$~eV. Intensity variations in such a component would be undetectable with \iue. Similarly, the \asca\ spectrum constrains $kT<60$~eV. If the spectral form is broader than a single blackbody, the range of allowed temperatures is correspondingly wider. Interestingly, Brandt \etal\ (1993) reported evidence for a soft excess in the \ros/PSPC data which can be modeled as a blackbody of $kT\sim 110$~eV and a luminosity of $10^{43}$\,erg s$^{-1}$. This can be identified with the high energy tail of such a broad, reprocessed component. The major UV emission lines are excited by unobservable EUV photons. An extrapolation of the X-ray spectrum observed by \asca\ (George \etal\ 1998) and of the UV spectrum into that band indicate roughly equal contributions at energies at which the lines are excited. With the two components being poorly correlated at zero lag, it is therefore difficult to determine which will be the dominant EUV component at any given time. We have suggested above that there may even be a third contributor to the EUV, the reprocessed X-rays. In other words, the shape of the ionizing continuum changes with time. 
This effect could account for certain difficulties which have been encountered in explaining the emission line responses to the observed UV continuum in reverberation mapping experiments. Our observations suggest that the unseen EUV continuum is not directly related to the observed \iue\ flux, which therefore cannot be assumed to be a perfect representation of the continuum driving the line emission. It is also interesting to note that the emission-line light curves show long term trends which are not apparent in the continuum bands. This is most clearly demonstrated by Fig.~\ref{fig:renorm_lc}, which shows the X-ray and UV continuum light curves, together with those of the Ly$\alpha$ and C{\sc iv} emission lines. These have all been renormalized to the $F_{\rm var}$ value and thus the y-axes crudely represent the number of standard deviations from the mean. Both emission lines clearly show a long-term reduction in their flux which is not seen in either continuum. \subsection{Steps towards a new model} In the light of the above, it is clear that new or more complex models must be sought to explain the data which have been obtained thus far, and particularly those described in this paper. Here we suggest some ways in which our new data might be reconciled with the existing paradigm by modification or extension. We emphasize that such a discussion is incomplete and {\it ad-hoc}. As we have stated above, it seems most likely that the X--ray flux which is absorbed when the iron K$\alpha$ line is being generated emerges in a relatively weak, broad component that may peak in the EUV/soft X-ray band. The emission in this band may well provide the crucial connection between the higher and lower-energy components. A reasonable interpretation of the longer-timescale variability observed in our light curve is that the UV emission leads that in the X-rays, but with a variable lag. This suggests that the dominant source of variations is in the seed population of an upscattering model. We do, however, bear in mind the caveat that the particle distribution of the upscattering medium must also be variable, to produce the most rapid variations. To explain the ``variable'' time lag, we suggest that there are multiple ``seed'' populations, which dominate at different times. In particular we suggest that the main source of 1315\AA\ photons is located at a distance of $\sim 4$~light days from the X--ray source and that they are the dominant seed population when the source is in a high flux state, thus introducing a 4~d ``lag'' between the X-rays and UV. When the 1315\AA\ flux is observed to decline, however, this allows other emitting regions to dominate the seed distribution. In particular we suggest that at these times EUV/soft X-ray photons are the dominant seed population. They arise from closer in and are therefore observed to have little or no lag with the X-rays. As might be apparent from the above discussion, the primary X--rays, reprocessed X-rays and primary UV might well exist in a rather fine balance in the typical AGN. Future observational data on other objects of similar quality to that presented here, and preferably including the far-UV and soft X-ray bands, will be necessary for further progress and to establish the generality of the phenomena explored here. | 98 | 4 | astro-ph9804135_arXiv.txt |
9804 | astro-ph9804303_arXiv.txt | We report a tentative detection with the IRAM 30m telescope of the LiH molecule in absorption in front of the lensed quasar B0218+357. We have searched for the $J = 0 \rightarrow 1$ rotational line of lithium hydride at 444 GHz (redshifted to 263 GHz). The line, if detected, is optically thin, very narrow, and corresponds to a column density of N(LiH) = 1.6 10$^{12}$ cm$^{-2}$ for an assumed excitation temperature of 15 K, or a relative abundance LiH/H$_2 \sim$ 3 10$^{-12}$. We discuss the implications of this result. | Primordial molecules are thought to play a fundamental role in the early Universe, when stellar nucleosynthesis has not yet enriched the interstellar medium. After the decoupling of matter and radiation, the molecular radiative processes, and the formation of H$_2$, HD and LiH contribute significantly to the thermal evolution of the medium (e.g. Puy et al 1993, Haiman, Rees \& Loeb 1996). Even at the present time, it would be essential to detect such primordial molecules, to trace H$_2$ in the low-metallicity regions (e.g. Pfenniger \& Combes 1994, Combes \& Pfenniger 1997). Unfortunately, the first transition of HD is at very high frequency (2.7 THz), and the first LiH line, although only at 444 GHz, is not accessible from the ground at $z=0$ due to H$_2$O atmospheric absorption. This has to wait the launching of a submillimeter satellite. Although the Li abundance is low (10$^{-10}$-10$^{-9}$), the observation of the LiH molecule in the cold interstellar medium looks promising, because it has a large dipole moment, $\mu = 5.9$ Debye (Lawrence et al.~1963), and the first rotational level is at $\approx 21\,\rm K$ above the ground level, the corresponding wavelength is $0.67\,\rm mm$ (Pearson \& Gordy 1969; Rothstein 1969). The line frequencies in the submillimeter and far-infrared domain have been recently determined with high precision in the laboratory (Plummer et al 1984, Bellini et al 1994). Because of the great astrophysical interest of this molecule (e.g. Puy et al 1993), an attempt has been made to detect LiH at very high redshifts ($z \sim 200$) with the IRAM 30m telescope (de Bernardis et al 1993). It has been proposed that the LiH molecules could smooth the primary CBR (Cosmic Background Radiation) anisotropies, due to resonant scattering, or create secondary anisotropies, and they could be the best way to detect primordial clouds as they turn-around from expansion (Maoli et al 1996, but see also Stancil et al 1996, Bougleux \& Galli 1997). There has recently been some controversy about the abundance of LiH. The computations of Lepp \& Shull (1984) estimated the LiH/H$_2$ abundance ratio in primordial diffuse clouds to be as high as 10$^{-6.5}$. With H$_2$/H $\sim$ 10$^{-6}$, the primordial LiH/H ratio is $\sim$10$^{-12.5}$. More recently, Stancil et al. (1996) computed an LiH/H abundance of $< 10^{-15}$ in the postrecombination epoch, since quantum mechanical computations now predict the rate coefficient for LiH formation through radiative association to be 3 orders of magnitude smaller than previously thought from semi-classical methods (Dalgarno et al 1996). In very dense clouds, however, three-body association reactions must be taken into account, and a significant fraction of all lithium will turn into molecules. Complete conversion due to this process requires gas densities of the order $\sim 10^9$\,cm$^{-3}$, rarely found in the general ISM. 
However, taking other processes into account, such as dust grain formation, an upper limit to the LiH abundance is the complete conversion of all Li into molecular form, with LiH/H$_2$ $\la 10^{-10}-10^{-9}$. With a LiH column density of 10$^{12}$\,cm$^{-2}$, or N(H$_2$)$= 10^{22}$\,cm$^{-2}$, the optical depth of the LiH line will reach $\sim$1, in cold clouds of velocity dispersion of $2\,\rm km\,s^{-1}$. The line should then be easily detectable in dense dark clouds in the present interstellar medium (like Orion where the column density reaches 10$^{23}$-10$^{24}$ cm$^{-2}$). This is a fundamental step towards understanding LiH molecule formation, in order to interpret future results on primordial clouds, although the primordial abundance of Li could be increased by about a factor of 10 in stellar nucleosynthesis (e.g. Reeves 1994). Once the Li abundance is known as a function of redshift, it could be possible to derive its true primordial abundance, a key factor in testing Big Bang nucleosynthesis (either homogeneous or not). Up to now, due to atmospheric opacity, no astrophysical LiH line has been detected, and the abundance of LiH in the ISM is unknown. The atmosphere would allow the detection of the isotopic molecule LiD (its fundamental rotational line is at 251 GHz), but it has not been seen because of the low D/H ratio and the insufficient optical depth expected from LiH \footnote{The LiD line at 251 GHz is not covered in the 247-263 GHz survey of Orion by Blake et al 1986, but was observed at the McDonald 5m-telescope, Texas, see Lovas 1992; we have ourselves checked with the SEST telescope that no line is detected towards Sagittarius-B2 at this frequency. The 3$\sigma$ upper limit to the LiD column density towards SgrB2 is $1 \times 10^{11}$\,cm$^{-2}$.}. Another method to avoid atmospheric absorption lines is to observe a remote object, for which the lines are redshifted into an atmospheric window. Here we report on the first absorption search for a LiH line at high redshift: the redshift allows us to overcome the opacity of the Earth's atmosphere, and thanks to the absorption technique we benefit from an excellent spatial resolution, equal to the angular size of the B0218+357 quasar core, of the order of 1 milli-arcsec (Patnaik et al 1995). At the distance of the absorber (redshift $z=0.68466$, giving an angular size distance of 1089 Mpc, for $H_0$=75 km/s/Mpc and $q_0$=0.5), this corresponds to 5 pc. We expect a detectable LiH signal, since the H$_2$ column density is estimated to be N(H$_2$)$ = 5 \times 10^{23}$ cm$^{-2}$. Menten \& Reid (1996) derive an N(H$_2$) value ten times lower than this, using the H$_2$CO($2_{11}-2_{12}$) transition at 8.6\,GHz. At this low frequency the structure and extent of the background continuum source may be considerably larger than at 100--200 GHz, and the source covering factor smaller. This means that their estimate of the column density is a lower limit. \section { Observations } The observations were made with the IRAM 30m telescope at Pico Veleta near Granada, Spain. They were carried out in four observing runs, in December 1996, March, July and December 1997. Table 1 displays the observational parameters. We observed at 263 GHz with an SiS receiver tuned in single sideband (SSB). The SSB receiver temperature varied between 400 and 450K, the system temperature was 600-1400K depending on weather conditions, and the sideband rejection ratio was 10dB (the image frequency is at 271.5 GHz, in a region where the atmospheric opacity increases rapidly due to water vapour).
We used a 512x1MHz filterbank and an autocorrelator backend, with 0.3 km/s resolution. We present here only the 1MHz resolution spectra, smoothed to 2.3 km/s channels, to improve the signal to noise. \begin{table} \begin{flushleft} \caption[]{ Parameters for the tentative LiH line } \begin{tabular}{lccl} \hline \multicolumn{1}{l}{J$_u$--J$_l$ } & \multicolumn{1}{c}{1--0 } & \multicolumn{1}{c}{} \\ $\nu_{lab}$ GHz & 443.953 & \\ $\nu_{obs}$ GHz & 263.527 & \\ Forward eff. & 0.86 & \\ Beam eff. & 0.32 & \\ T$_A^*$ & 7 mK & depth of absorption line \\ T$_{\rm cont}$ & 15 mK & \\ FWHM & 3.2 km/s & \\ $\sigma$ & 1.8 mK & noise rms with $\Delta v$ 2.3 km/s \\ \hline \end{tabular} \, \\ \vskip 2truemm $\alpha$(1950) = 02h 18m 04.1s \\ $\delta$(1950) = 35$^\circ$ 42\amin \, 32\asec \\ \end{flushleft} \end{table} The observations were done using a nutating subreflector with a 1' beamthrow in azimuth. We calibrated the temperature scale every 10 minutes by a chopper wheel on an ambient temperature load, and on liquid nitrogen. Pointing was checked on broadband continuum sources, and was accurate to 3\asec \, rms. The frequency tuning and sideband rejection ratios were checked by observing molecular lines towards Orion, DR21 and IRC+10216. We integrated in total for 85 hours on the 263 GHz line, and obtained a noise rms level of 1.8 mK in the T$_A^*$ antenna temperature scale, with a velocity resolution of 2.3 km/s. The forward and beam efficiencies at the observed frequency are displayed in Table 1. The continuum level was estimated by observing in a rapid on--off mode using a special continuum backend. The switch frequency of the subreflector was increased from 0.5 Hz to 2 Hz. \begin{figure} \psfig{figure=Ad212_f1.ps,bbllx=3cm,bblly=65mm,bburx=11cm,bbury=195mm,width=8cm} \caption[]{ Spectrum of LiH in its fundamental line (1--0) at 444 GHz, redshifted to 263 GHz, in absorption towards B0218+357, compared to the highly optically thick CO(2--1) line previously detected. The tentative LiH line is shifted from the center by about 5 km/s, but still lies within the CO(2--1) velocity range. Its width is compatible with what is expected from an optically thin line. Spectra have been normalised to the absorbed continuum level and the velocity resolution is 2.3 km/s } \label{lih_f1} \end{figure} | Figure \ref{lih_f1} presents our LiH spectrum, compared to that of CO(2--1) previously detected with the IRAM 30m-telescope (Wiklind \& Combes 1995, Combes \& Wiklind 1995). There is only a tentative detection of LiH at $\sim$ 3 $\sigma$. The line is very narrow, but is compatible with what is expected from an optically thin line. The CO(2--1) is highly optically thick, with $\tau \sim$ 1500. This optical depth is determined from the detection of C$^{18}$O(2--1), which is moderately thick, and the non--detection of C$^{17}$O(2--1). The center of the tentative line is shifted by 5 km/s from the average center of other lines detected towards B0218+357. This shift cannot be attributed to uncertainties of the line frequency, since it has been measured in the laboratory (e.g. Bellini et al 1994), and the error is at most 0.24 km/s at 3$\sigma$, once redshifted. But the scatter of the line centers is $\sim$ 3 km/s, and the width of most of the lines is $\sim$ 15 km/s (cf Wiklind \& Combes 1998). The velocity shift is therefore insufficient to reject the line as real.
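For reference, the observed frequency listed in Table 1 is simple arithmetic on the laboratory frequency and the absorber redshift quoted above (no new measurement is involved): $$ \nu_{obs} = {{\nu_{lab}} \over {1+z_{abs}}} = {{443.953~{\rm GHz}} \over {1.68466}} \simeq 263.53~{\rm GHz} . $$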
Combining our own continuum data with that of lower frequencies (obtained from the NASA Extragalactic Database NED), we have previously found that the continuum spectra of B0218+357 can be fitted with a power law of slope --0.25 (Combes \& Wiklind 1997). This would imply a continuum level of 15.5 mK at 263 GHz, which is in accord with the measured level. Since only 70\% of the continuum is covered by molecular gas, the continuum level to be used for our LiH observations amounts to 11 mK. \smallskip We can write the general formula for the total column density of the LiH molecule, observed in absorption between the levels $l \rightarrow u$ with an optical depth $\tau$ at the center of the observed line of width $\Delta v$ at half-power: $$ N_{LiH} = {{8\pi}\over{c^3}} f(T_x) {{\nu^3 \tau \Delta v} \over {g_u A_u} } $$ where $\nu$ is the frequency of the transition, $g_u$ the statistical weight of the upper level ($= 2 J_u+1$), $A_u$ the Einstein coefficient of the transition, $T_x$ the excitation temperature, and $$ f(T_x) = {{Q(T_x) exp(E_l/kT_x)} \over { 1 - exp(-h\nu/kT_x)}} $$ where $Q(T_x)$ is the partition function. For the sake of simplicity, we adopt the hypothesis of restricted Thermodynamical Equilibrium conditions, i.e. that the excitation temperature is the same throughout the LiH ladder. Since the line is not heavily saturated, but its optical thickness reaches $\tau$ = 1.3 at the center of the line, we have derived directly from the spectrum, through a Gaussian fit of the opacity, the integrated $\tau \Delta v$ = 3.64 km/s. From the formulae above, and assuming an excitation temperature of $T_x$ = 15 K (see Table 2 for variation of this quantity), we derive a total LiH column density of 1.6 10$^{12}$ cm$^{-2}$ towards B0218+357. Compared to our previously derived H$_2$ column density of 5 10$^{23}$ cm$^{-2}$, this gives a relative abundance of LiH/H$_2$ $\sim$ 3 10$^{-12}$. Note that there is a possible systematic uncertainty associated with this measurement, due to the velocity difference between the maximum opacity of the CO, HCO$^+$ and other lines and that of LiH. \begin{table} \begin{flushleft} \caption[]{Derived LiH column density } \begin{tabular}{lccccc} \hline & & & & & \\ \multicolumn{1}{c}{$T_x$ } & \multicolumn{1}{c}{(K)} & \multicolumn{1}{c}{5 } & \multicolumn{1}{c}{10 } & \multicolumn{1}{c}{15 } & \multicolumn{1}{c}{20 } \\ & & & & & \\ \hline & & & & & \\ N(LiH) & (10$^{12}$ cm$^{-2}$) & 0.4 & 0.9 & 1.6 & 2.4 \\ & & & & & \\ LiH/H$_2$ & (10$^{-12}$) & 0.8 & 1.8 & 3.2 & 5 \\ & & & & & \\ \hline \end{tabular} \end{flushleft} \end{table} \smallskip To interpret this result, comparison should be made with the atomic species. First, it is likely that the molecular cloud on the line of sight is dense and dark, and all the hydrogen is molecular, f(H$_2$) = 0.5. The Li abundance (main isotope $^7$Li) at $z=0.68466$ (i.e. 5--10 Gyr ago) can be estimated at Li/H $\sim 10^{-9}$, since its abundance in the ISM increases with time. The primordial Li abundance must be similar to that in metal deficient unevolved Population II stars, Li/H = 1-2 10$^{-10}$ (Spite \& Spite 1982), but Li could be depleted at the stellar surface by internal mixing. In meteorites and unevolved, unmixed Pop I stars, Li/H $\sim 10^{-9}$, representative of the Li abundance some 4 Gyr ago. The present abundance in the ISM is estimated to be around 3 10$^{-9}$ (Lemoine et al 1993). We therefore deduce LiH/Li $\sim$ 1.5 10$^{-3}$.
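The arithmetic behind this estimate can be written out explicitly, using only the numbers adopted above: $$ {{\rm LiH} \over {\rm H_2}} = {{N({\rm LiH})} \over {N({\rm H_2})}} = {{1.6~10^{12}} \over {5~10^{23}}} \simeq 3~10^{-12} , $$ and, with essentially all the hydrogen in molecular form so that N(H) = 2 N(H$_2$), $$ {{\rm LiH} \over {\rm Li}} = {{N({\rm LiH})/2N({\rm H_2})} \over {\rm Li/H}} \simeq {{1.6~10^{-12}} \over {10^{-9}}} \simeq 1.6~10^{-3} , $$ consistent, within rounding, with the value quoted above.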
The uncertainties associated with the derived abundances are large, but the low LiH/Li ratio seems to exclude complete transformation of Li into LiH, as would be expected in very dense clouds (e.g. Stancil et al 1996, although the Li chemistry is not yet completely understood in dark clouds). However, it is likely that the cloud is clumpy, and in some of the more diffuse parts, LiH is photodissociated (e.g. Kirby \& Dalgarno 1978). Also, some regions of the cloud could have a higher excitation temperature, in which case our computation under-estimates the LiH abundance (although the absorption technique preferentially selects cold gas, and the black-body temperature at the redshift of the absorbing molecules is $T_{bg}$ = 4.6 K). The present observations suggest that the detection of LiH in emission towards dense clouds in the Milky Way should be easy with a submillimeter satellite, provided that the spatial resolution is sufficient to avoid dilution of the dense clumps. It is also interesting to observe the rarer molecule $^6$LiH, which in some clouds might be of the same order of abundance as the main isotopic species. Through optical absorption lines, Lemoine et al (1995) find $^7$Li/$^6$Li = 8.6 and 1.4 towards two velocity components in $\zeta$-Oph. Since $^6$Li is formed only in negligible amounts in the Big Bang, this ratio indicates that cosmic ray spallation has significantly increased the Li abundances. \vspace{0.25cm} | 98 | 4 | astro-ph9804303_arXiv.txt |
9804 | astro-ph9804186_arXiv.txt | We analysed 13 archival \R\ PSPC and HRI observations which included the position of a newly discovered 59\,s X--ray pulsar in the Small Magellanic Cloud, 1SAX J0054.9--7226 = \src. The source was detected three times between 1991 and 1996 at a luminosity level ranging from $\sim$8$\times$10$^{34}$--4$\times$10$^{35}$ erg s$^{-1}$ (0.1--2.4 keV). Highly significant pulsations at 59.072\,s were detected during the 1991 October 8--9 observation. The \R\ period, together with those measured by \RXTE\ and \BSAX, yields a period derivative of \.P = --0.016 s yr$^{-1}$. A much more accurate source position (10$^{\prime\prime}$ uncertainty) was obtained through the \R\ HRI detection in 1996 April, restricting the likely counterpart of 1SAX J0054.9--7226 = \src\ to three m$_V$ $>$ 15.5 stars. | On 1998 January 20, during a \RXTE\ observation in the direction of the Small Magellanic Cloud (SMC), a previously unknown X--ray source, namely XTE J0055--724, was detected at a flux level (2--10 keV) of $\sim$ 6.0 $\times$ 10$^{-11}$ erg s$^{-1}$ cm$^{-2}$. The source showed pulsations at a period of $\sim$59\,s (Marshall \& Lochner 1998a). A previous \RXTE\ observation of the same field performed on 1998 January 12 failed to detect the source. In response to these findings, simultaneous \BSAX\ and \RXTE\ observations of a region including the \RXTE\ error circle ($\sim$10$^{\prime}$ radius) of XTE J0055--724 were carried out on 1998 January 28. The results of these observations are reported elsewhere (Santangelo \etal 1998a; Marshall \etal 1998b). Thanks to the spatial capabilities of the imaging X--ray concentrators on board \BSAX, an improved position ($\sim$40$^{''}$ radius) was obtained for the pulsating source, named 1SAX J0054.9--7226 (Santangelo \etal 1998a,b). The new \BSAX\ error circle contains only the previously classified \R\ and \E\ X--ray sources 1WGA J0054.9--7226 and \src, which are likely the same source. In the following we adopt the earliest source name, i.e. \E's. \src\ is a variable X--ray source in the SMC, which was already considered a candidate High Mass X--ray Binary by Wang \& Wu (1992; source \#35), Bruhweiler et al. (1987; source \#9) and by White \etal (1994; in the WGACAT), based on its high spectral hardness. In this letter we report on the results of the analysis of the Position Sensitive Proportional Counter (PSPC) and High Resolution Imager (HRI) observations from the \R\ public archive. | \src\ was detected three times between 1991 and 1996 in the \R\ archival data. Highly significant pulsations at a period of 59.072\,s were detected on 1991 October 8--9. These findings, together with the \BSAX\ results, yield a mean period derivative of $\sim$--0.016\,s yr$^{-1}$ between 1991 and 1998. \begin{figure}[tbh] \centerline{\psfig{figure=59s_dss.ps,width=8.cm,height=8.cm}} \caption{ESO plate including the position of \src. The X--ray error circles obtained from different instruments and satellites are shown} \end{figure} In one case a spectral analysis could be performed. The spectrum was found to be consistent with a relatively flat, weakly absorbed power--law model that is typical of accreting X--ray pulsars in this energy range. The 0.1--2.4 keV luminosity of \src\ as observed with \R\ ranges between $\sim$4.2$\times$10$^{35}$ erg s$^{-1}$ (1991 October 8--9) and $\sim$8.5$\times$10$^{34}$ erg s$^{-1}$ (1996 April 26 -- June 10).
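For orientation, and purely as a restatement of the two \R\ luminosities just quoted, the archival detections by themselves already span a factor of about five: $$ {{4.2\times 10^{35}~{\rm erg~s^{-1}}} \over {8.5\times 10^{34}~{\rm erg~s^{-1}}}} \simeq 5 . $$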
Moreover \RXTE\ detected \src\ at a luminosity level of $\sim$3$\times$10$^{37}$ erg s$^{-1}$ in the 2--10 keV energy band. Extrapolating the luminosity measured by \RXTE\ on 1998 January 20 to the \R\ energy range, a 0.1--2.4 keV luminosity of $\sim$2.5$\times$10$^{36}$ erg s$^{-1}$ is derived, implying a pronounced long--term variability of \src\ (a factor of $>$30). This indicates that the source is probably a transient X--ray pulsar in a high--mass binary containing a Be star. A position accurate to 10$^{\prime\prime}$ was obtained thanks to a \R\ HRI observation during which the source was detected (1996 April; 0.1--2.4 keV luminosity of $\sim$8.5$\times$10$^{34}$ erg s$^{-1}$). The \R\ HRI error circle contains only three stars with m$_V$ $>$ 15.5 in the ESO plates, which include the likely optical counterpart of \src\ (see Fig.\,3). Assuming B--V = --0.2 and a distance modulus of 19 mag, these optical counterpart candidates are consistent with main sequence A9 -- B2 stars. We note that a similar spectral--type star (B1.5Ve; m$_V$ = 16) is the companion of the nearby X--ray source SMC X--2. Future optical follow--up observations of these candidates should determine the counterpart of \src\ and its probable Be star X--ray transient nature. Optical and/or infrared brightening of the counterpart during activity will allow further X--ray triggers and studies. | 98 | 4 | astro-ph9804186_arXiv.txt |
9804 | astro-ph9804009_arXiv.txt | The interstellar cloud surrounding the solar system regulates the galactic environment of the Sun, and determines the boundary conditions of the heliosphere. Both the Sun and interstellar clouds move through space, so these boundary conditions change with time. Data and theoretical models now support densities in the cloud surrounding the solar system of n(H$^{\circ}$)=0.22$\pm$0.06 cm$^{-3}$, and n(e$^{-}$)$\sim$0.1 cm$^{-3}$, with larger values allowed for n(H$^{\circ}$) by radiative transfer considerations. Ulysses and Extreme Ultraviolet Explorer satellite He$^{\circ}$ data yield a cloud temperature of {\mbox 6,400 K}. Nearby interstellar gas appears to be structured and inhomogeneous. The interstellar gas in the Local Fluff cloud complex exhibits elemental abundance patterns in which refractory elements are enhanced over the depleted abundances found in cold disk gas. Within a few parsecs of the Sun, inconclusive evidence for factors of 2--5 variation in Mg$^{+}$ and Fe$^{+}$ gas phase abundances is found, providing evidence for variable grain destruction. In principle, photoionization calculations for the surrounding cloud can be compared with elemental abundances found in the pickup ion and anomalous cosmic ray populations to model cloud properties, including ionization, reference abundances, and radiation field. Observations of the hydrogen pile-up at the nose of the heliosphere are consistent with a barely subsonic motion of the heliosphere with respect to the surrounding interstellar cloud. Uncertainties on the velocity vector of the cloud that surrounds the solar system indicate that it is uncertain as to whether the Sun and $\alpha$ Cen are or are not immersed in the same interstellar cloud. | The physical conditions of the surrounding interstellar cloud establish the boundary conditions of the solar system and heliosphere. The abundances and ionization states of elements in the surrounding interstellar cloud determine the properties of the parent population of the anomalous cosmic ray and pickup ion components. In addition, the history of the interstellar environment of the heliosphere appears to be partially recorded by radionucleotides such as $^{10}$Be and $^{14}$C in geologic ice core records (\cite{sonett,fr97}). Because the solar wind density decreases as $R^{-2}$ ($R$=distance to Sun), the solar wind and interstellar densities are equal at about 5 AU (the orbit of Jupiter), in the absence of substantial ``filtration'' \footnote{``Filtration'' refers to the deflection of interstellar H$^{\circ}$ around the heliopause due to the coupling between interstellar protons and H$^{\circ}$ resulting from charge exchange}. Approximately 98\% of the diffuse material in the heliosphere is interstellar gas (\cite{gruntman}). Thus, the physical properties of the outer heliosphere are dominated by interstellar matter (ISM). Were the Sun to encounter a high density interstellar cloud, it is anticipated that the physical properties of the inner heliosphere would also be ISM-dominated. Zank and Frisch (1998) have shown that if the space density of the interstellar cloud which surrounds the solar system were increased to $\sim$10 cm$^{-3}$, the properties of the inner heliosphere at the 1 AU position of the Earth would be dramatically altered. 
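As a rough illustration of the $\sim$5 AU figure quoted above: taking a typical solar wind density of $\sim$7 cm$^{-3}$ at 1 AU (a representative value, not a number derived in this paper) and a total interstellar density of $\sim$0.3 cm$^{-3}$ (the sum of the neutral and ionized densities given above), the $R^{-2}$ fall-off gives $$ R \simeq \left( {{n_{sw}(1~{\rm AU})} \over {n_{IS}}} \right)^{1/2} {\rm AU} \simeq \left( {7 \over 0.3} \right)^{1/2} {\rm AU} \approx 5~{\rm AU} $$ as the distance at which the solar wind density drops to the interstellar value.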
The accuracy with which the physical properties of the surrounding cloud can be derived from observations of stars within a few parsecs of the Sun (1 pc$\sim$200,000 AU) depends on the homogeneity and physical parameters of the nearby ISM. Observations of nearby stars give sightlines which probe the ensemble of nearby clouds constituting the ``Local Fluff'' cloud complex. Conclusions based on observations of nearby stars, however, must be qualified by the absence of detailed data pertaining to the small scale structure of the local ISM (LISM). More distant cold diffuse interstellar gas is highly structured, replete with dense ($\sim 10^{4}-10^{5}$ cm$^{-3}$), small (20--200 AU) inclusions occupying in some cases less than 1\% of the cloud volume (\cite{frail,falgarone,falpug,heiles}). Small scale structures are ubiquitous in interstellar gas, and individual velocity components exhibiting column densities as low as N(H$^{\circ}$)$\sim$3$\times 10 ^{18}$ cm$^{-2}$ are found in cold clouds (\cite{frail,heiles}). The presence of dense low column density wisps near the Sun is allowed by currently available data. The Sun has a peculiar motion with respect to the ``Local Standard of Rest'' (LSR\footnote{The LSR is the velocity frame of reference in which the vector motions of a group of nearby comparison stars are minimized. Stars in the LSR corotate around the galactic center with a velocity of $\sim$250 km s$^{-1}$}); the Sun moves through the LSR with a velocity V$\sim$16.5 km s$^{-1}$ towards the apex direction l=53$^{\circ}$, b=+25$^{\circ}$ (\cite{mihalas}). Uncertainties in the relative solar-LSR motion appear to be less than 3 km s$^{-1}$ and $\pm$5$^{\circ}$. This motion corresponds to $\sim$17 pc per million years. Note that the solar path is tilted by $\sim25^{\circ}$ with respect to the galactic plane. The Sun oscillates about the galactic plane, crossing the plane every 33 Myrs, reaching a maximum distance from the plane of $\sim$77 pc. The last galactic plane ``crossing'' was about 21 Myrs ago (\cite{bash}). This amplitude of oscillation can be compared to scale heights on the order of $\sim$50-80 pc for cold H$_{2}$ and CO, $\sim$100 pc for cold H$^{\circ}$ and infrared cirrus, $\sim$250 pc for warm H$^{\circ}$, and $\sim$1 kpc for warm H$^{+}$ (the ``Reynolds Layer''). There are three time scales of interest in understanding the environmental history of the solar galactic milieu -- $\sim 10^{6}$ years, $\sim 10^{5}$ years, and $\sim 10^{4}$ years. Prior to entering the Local Fluff complex of interstellar clouds, the Sun traveled through a region of the galaxy between the Orion spiral arm and the spiral arm spur known as the Local Arm. On the order of a million years ago, the Sun was displaced $\sim$17 pc in the anti-apex direction, which is towards the present day location of the junction of the borders of the constellations of Columba, Lepus and Canis Major. The motions of the Sun and surrounding interstellar cloud with respect to interstellar matter within 500 pc, projected onto the plane, are illustrated in Figure 1. Note that the velocity vectors of the Sun and interstellar cloud surrounding the solar system are nearly perpendicular in the LSR, implying that the surrounding cloud complex is sweeping past the Sun (see section \ref{velocity}).
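For reference, the conversion behind the $\sim$17 pc per million years quoted above is elementary: $$ 1~{\rm km~s^{-1}} = {{3.16\times 10^{13}~{\rm km~Myr^{-1}}} \over {3.09\times 10^{13}~{\rm km~pc^{-1}}}} \simeq 1.02~{\rm pc~Myr^{-1}} , $$ so that V$\sim$16.5 km s$^{-1}$ corresponds to $\sim$17 pc Myr$^{-1}$.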
When the morphology of the Local Fluff complex is considered, it is apparent that sometime during the past $\sim$200,000 years the Sun appears to have emerged from a region of space with virtually no interstellar matter (densities n(H$^{\circ})<0.0005$ cm$^{-3}$, n(e$^{-})<0.02$ cm$^{-3}$) and entered the Local Fluff complex of clouds (average densities n(H$^{\circ}$)$\sim$0.1 cm$^{-3}$) outflowing from the Scorpius-Centaurus Association of star-forming regions. One model for the morphology of the cloud surrounding the solar system predicts that sometime within the past 10,000 years, and possibly within the past 2,000 years, the Sun appears to have entered the interstellar cloud in which it is currently situated (\cite{fr94}, Frisch 1997). The cloud surrounding the solar system will be called here the ``Local Interstellar Cloud'' (LIC\footnote{This cloud surrounding the solar system is also referred to as the ``surrounding interstellar cloud'', or SIC, which unambiguously defines the cloud feeding interstellar matter into the solar system. For the sake of uniformity of notation, however, the term LIC is used here.}). \begin{figure} \begin{center} \plotone{fig1.eps} \end{center} \vspace{0.5in} \caption[]{{\small The distribution of interstellar molecular clouds (traced by the CO 1-$>$0 115 GHz rotational transition) and diffuse gas (traced by E(B-V) color excess due to the reddening of starlight by interstellar dust) within 500 pc of the Sun are shown. The round circles are molecular clouds, and the shaded material is diffuse gas. The horizontal bar (lower left) illustrates a distance of 100 pc. Interstellar matter is shown projected onto the galactic plane, and the plot is labeled with galactic longitudes. The distribution of nearby interstellar matter is associated with the local galactic feature known as ``Gould's Belt'', which is tilted by about 15--20$^{\circ}$ with respect to the galactic plane. ISM towards Orion is over 15$^{\circ}$ below the plane, while the Scorpius-Centaurus material (longitudes 300$^{\circ}$--0$^{\circ}$) is about 15--20$^{\circ}$ above the plane. Also illustrated are the space motions of the Sun and local interstellar gas, which are nearly perpendicular in the LSR velocity frame. The three asterisks are three subgroups of the Scorpius-Centaurus Association. The three-sided star is the Geminga Pulsar. The arc towards Orion represents the Orion's Cloak supernova remnant shell. The other arcs are illustrative of superbubble shells from star formation in the Scorpius-Centaurus Association subgroups. The smallest (i. e. greatest curvature) shell feature represents the Loop I supernova remnant.}} \label{fig1} \end{figure} | One new conclusion presented here is that in principle the {\it in situ} pickup ion data can help resolve the outstanding question of whether the correct reference abundances for the LIC are given by solar versus B-star abundances. A second new result is that the uncertainties on the LIC velocity vector indicate that it is not yet clear whether the Sun and $\alpha$ Cen are immersed in the same interstellar cloud. Based on the discussions in this paper, the best values for LIC properties are given by n(H$^{\circ}$)=0.22$\pm$0.06 cm$^{-3}$, n(e$^{-}$)=n(H$^{+}$)=0.1 cm$^{-3}$, T=6,900 K and a relative Sun-cloud velocity of 25.8$\pm$0.8 km s$^{-1}$. However, radiative transfer considerations in the LIC suggest that the quoted neutral density is a lower limit. Ulysses and EUVE observations of He$^{\circ}$ indicate a cloud temperature of T=6,400 K. 
The magnetic field strength is weakly constrained to be in the range of 2--3 $\mu$G. Models of the Ly$\alpha$ absorption line towards $\alpha$ Cen are consistent with an Alfven velocity of 20.9 km s$^{-1}$, which in turn is consistent with an interstellar magnetic field of 3 $\mu$G in the absence of additional unknown contributions to the interstellar pressure. Ulysses and EUVE observations of interstellar He$^{\circ}$ within the solar system give an upwind direction for the ``wind'' of interstellar gas through the solar system, in the rest frame of the Sun, of V=--25.9$\pm$0.6 km s$^{-1}$ arriving from the galactic direction l=4.0$^{\circ}$$\pm$0.2$^{\circ}$, b=15.4$^{\circ}$$\pm$0.6$^{\circ}$. Removing solar motion from this vector gives an upwind direction for the LIC cloud in the LSR of V=--18.7$\pm0.6$ km s$^{-1}$ arriving from the direction l=327.3$^{\circ}$$\pm$1.4$^{\circ}$, b=0.3$^{\circ}$$\pm$1.0$^{\circ}$. Through a combination of observations and theory, uncertainties in the LIC electron density are narrowing. Radiative transfer in the sightlines towards nearby stars require that cloud models must be combined with data in order to deduce properties at the cloud location. Radiative transfer models of ionization in the LISM show interesting results, but additional understanding of the input radiation fields is needed. The Local Fluff complex is structured and inhomogeneous. Striking progress would be made in understanding this structure if interstellar absorption lines could be observed at resolutions of $\sim$1 km s$^{-1}$ in the ultraviolet. The most glaring uncertainty is the absence of detailed knowledge about the interstellar magnetic field. Many of the most abundant elements in the LIC are ionized, and densities of neutral atoms with FIPs less than 13.6 eV are typically down by 1--3 orders of magnitude from the dominant ions. The current approach of trying to understand the interaction of the ISM with the heliopause, from both the outside in and the inside out, is finally bearing fruit. | 98 | 4 | astro-ph9804009_arXiv.txt |
9804 | astro-ph9804192_arXiv.txt | The nuclei of a wide class of active galaxies emit broad emission lines with full widths at half maximum (FWHM) in the range $10^{3}-10^{4}$ km s$^{-1}$. This spread of widths is not solely a consequence of the range of the luminosities of these sources since a plot of width versus luminosity shows a large scatter. We propose that the broad line emission region (BLR) is axially symmetric and that this scatter in line width arises from an additional dependence on the angle of the line of sight to the axis of the emission region. Such a relation is natural in unified models of active nuclei which link a variety of observed properties to viewing angle. Adopting a simple form for the line width as a function of luminosity and angle, and convolving this with the observed luminosity function, allows us to predict a line width distribution consistent with the available data. Furthermore, we use the relation between the equivalent width of a line and the luminosity in the continuum (the `Baldwin Effect') to predict an observed correlation between line width and equivalent width. The scatter on this correlation is again provided by angular dependence. The results have applications as diagnostics of models of the broad line emission region and in cosmology. | In unified models of active galactic nuclei with a spherically symmetric BLR the width distribution of the broad emission lines cannot be accounted for by luminosity dependence alone. Plots of line width versus continuum luminosity have a large scatter and show no significant correlation \cite{W93,P97}. There is, however, growing evidence that the broad line region (BLR) is not spherical, but axisymmetric. \begin{enumerate} \item Observed samples of AGN \cite{Wills86,W93,Brotherton94} suggest relations between line widths and R, the ratio of core to lobe dominance. Other samples \cite{P97} find relations between line widths and $\alpha_{\rm ox}$, the continuum slope parameter from the optical to X--ray bands. Both of these parameters have some viewing angle dependence. \item The continuum and line light curves of some active nuclei, e.g. 3C390.3 \cite{Wamsteker97}, are most naturally interpreted in terms of a disc-like line emission region. It has also been suggested that some double peaked line profiles arise from discs (e.g. Arp 102b) although the interpretation in these cases is not so clear when time variability is taken into account. \item Several axisymmetric disc-wind models, such as those of Cassidy \& Raine \shortcite{Cassidy96}, Chiang \& Murray \shortcite{Chaing96} and Emmering, Blandford \& Shlosman \shortcite{Emmering92}, have been proposed, and models of this type are gaining support from evidence for winds \cite{Pasadena}. These will naturally predict some viewing angle dependence of line width. \end{enumerate} It should be noted that Osterbrock \shortcite{Osterbrock77} showed that a deficit of systems with narrower lines ruled out pure disc models, but such objections do not necessarily apply to axisymmetric models in general. In this paper we shall adopt a simple dependence of line width on both viewing angle and luminosity. Then: \begin{enumerate} \item We obtain a reasonable fit to the distribution of line widths. \item Given the Baldwin relation between line and continuum luminosity we predict a relation between line width and equivalent width compatible with the observed trend. The scatter on this relation is attributed to angular dependence.
\item We discuss how the width distribution can be used to test models of the BLR. \item If the BLR is indeed axisymmetric, we show how the line width distribution can be used, in principle, to determine cosmological parameters. \end{enumerate} | We conclude that the simple picture we have presented here accounts for the scatter in FWHM versus luminosity, accounts for the distribution of FWHM, and relates the trend of EW with FWHM to the Baldwin relation. This may be useful as a diagnostic tool in discriminating between disc-wind models. The analysis has applications as a cosmological tool, particularly as the measurement of line widths is independent of any cosmological model. | 98 | 4 | astro-ph9804192_arXiv.txt
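A rough sense of the calculation this row describes — combining a line-width law FWHM(L, i) for an axisymmetric BLR with a luminosity function to predict the width distribution — can be had from the short Monte Carlo sketch below. Every functional form and parameter here (the power-law luminosity function, the $v_0 L^{\alpha}\sqrt{\sin^2 i + (H/r)^2}$ width law, and all numerical values) is an assumption chosen for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)


def sample_luminosities(n, l_min=1.0, l_max=1.0e4, slope=2.5):
    """Draw luminosities from an assumed power law, dN/dL ~ L**(-slope)."""
    u = rng.uniform(size=n)
    a = 1.0 - slope
    return (l_min**a + u * (l_max**a - l_min**a)) ** (1.0 / a)


def fwhm(lum, cos_i, v0=9000.0, alpha=-0.1, h_over_r=0.15):
    """Assumed width law: a luminosity scaling times an orientation factor.

    FWHM = v0 * L**alpha * sqrt(sin^2 i + (H/r)^2), i.e. a rotation-dominated
    component seen at inclination i plus an isotropic 'thickness' term.
    Parameter values are tuned only loosely, so that typical widths land in
    the ~10^3-10^4 km/s regime quoted in the abstract.
    """
    sin2_i = 1.0 - cos_i**2
    return v0 * lum**alpha * np.sqrt(sin2_i + h_over_r**2)


n_src = 100_000
lum = sample_luminosities(n_src)
cos_i = rng.uniform(0.0, 1.0, size=n_src)   # random orientations on the sky
widths = fwhm(lum, cos_i)

p16, p84 = np.percentile(widths, [16, 84])
print(f"median FWHM     ~ {np.median(widths):.0f} km/s")
print(f"16-84% interval ~ {p16:.0f} - {p84:.0f} km/s")
```

The paper describes a convolution over the luminosity function; the sketch instead samples sources at random, which is equivalent in the limit of many draws and keeps the code short, and the resulting histogram of `widths` is the predicted FWHM distribution under these assumed forms.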
9804 | astro-ph9804317_arXiv.txt | Collimated outflows from Young Stellar Objects (YSOs) can be seen as tracers of the accretion-powered systems which drive them. In this paper I review some theoretical and observational aspects of YSO outflows through the prism of questions relating to the protostellar source. The issue I address is: can collimated outflows be used as ``fossils'' allowing the history of protostellar evolution to be recovered? Answering this question relies on accurately identifying where theoretical tools and observational diagnostics converge to provide unique solutions of the protostellar physics. I discuss potential links between outflow and source, including the time and direction variability of jets, the jet/molecular outflow connection, and the effect of magnetic fields. I also discuss models of the jet/outflow collimation mechanism. | The issues cited in this paper are associated with outflows. How do these issues specifically relate to questions inherent to the physics of accretion? The time variability of jets relates to the time dependence of accretion, the FU Ori outbursts being a notable example. The direction variability of jets relates to the global dynamics and stability of accretion disks. Livio \& Pringle 1997, for example, have shown that radiation-induced warping of disks may lead to precession in magneto-centrifugal jets. The presence and structure of magnetic forces in jets relates to the existence and form of large-scale fields in the disks. If nose-cones do not occur in real YSO jets, then perhaps mechanisms which rely on strong toroidal fields are excluded. Thus YSO jets and outflows offer a unique opportunity for the study of accretion-powered systems. Protostellar outflows can be observed with exquisite detail in a variety of wavelengths, including diagnostic spectral lines. The quality of the data, combined with the long lookback time inherent to the outflows, offers the possibility that a large fraction of an individual protostar's history might be recovered if we learn where and how to look. We are a long way from this now, but the prospect of having such capabilities is very exciting. | 98 | 4 | astro-ph9804317_arXiv.txt
|
9804 | astro-ph9804121_arXiv.txt | A long way has been run from the first views developed to explain the formation of galaxies. In 1962, Eggen, Lynden-Bell \& Sandage proposed the collapse scenario, where all galaxies are created with their morphological type, according to their angular momentum. Their potentials remained axisymmetric, so that no angular momentum could be redistributed through gravity torques; the total mass and gas content was already there at first collapse. For elliptical galaxies, the violent/single collapse picture still remains in some modified form, although the most developed and adopted scenario is through agglomeration of a large number of clumps (e.g. van Albada 1982, Aguilar \& Merritt 1990), which produces de Vaucouleurs $r^{1/4}$ profiles. The merger picture (Toomre 1977, Schweizer 1990), where ellipticals are formed by progressive interaction and coalescence of many parent galaxies, is favored in hierarchical cosmogonies. For spiral galaxies, the scenario now involves much more internal dynamical evolution. Due to gas dissipation and cooling, gravitational instabilities are continuously maintained in spiral disks, and they drive evolution in much less than a Hubble time. Spiral galaxies are open systems that accrete mass regularly, and their morphological type evolves along the Hubble sequence. Non-axisymmetric perturbations, such as bars or spirals, produce gravity torques that drive efficient radial mass flows; vertical resonances thicken disks and form bulges, and the central mass concentration can destroy bars. Accretion of small companions can also disperse bars and enlarge the bulge. A major merger can destroy disks entirely and form an elliptical. The first role of galaxy interactions is to trigger internal evolution, which we consider in the next section. Specific aspects of galaxy interactions and mergers will then be detailed in section \ref{envir}. | 98 | 4 | astro-ph9804121_arXiv.txt