| subfolder | filename | abstract | introduction | conclusions | year | month | arxiv_id |
|---|---|---|---|---|---|---|---|
1206
|
1206.0777_arXiv.txt
|
The inner structure of AGNs is expected to change below a certain luminosity limit. The big blue bump, footprint of the accretion disk, is absent for the majority of low-luminosity AGNs (LLAGNs). Moreover, recent simulations suggest that the torus, a keystone in the Unified Model, vanishes for nuclei with $L_{bol} \lesssim 10^{42}\, \rm{erg\, s^{-1}}$. However, the study of LLAGNs is a complex task due to the contribution of the host galaxy, whose light swamps these faint nuclei. This is especially critical in the IR range, where the torus emission peaks, due to the contribution of the old stellar population and/or dust in the nuclear region. Adaptive optics imaging in the NIR (VLT/NaCo) together with diffraction-limited imaging in the mid-IR (VLT/VISIR) permits us to isolate the nuclear emission for some of the nearest LLAGNs in the Southern Hemisphere. These data were extended to the optical/UV range (\emph{HST}), radio (VLA, VLBI) and X-rays (\emph{Chandra}, \emph{XMM}-Newton, \emph{Integral}), in order to build a genuine spectral energy distribution (SED) for each AGN with a consistent spatial resolution ($< 0\farcs5$) across the whole spectral range. From the individual SEDs, we construct an average SED for LLAGNs sampled in all the wavebands mentioned above. Compared with previous multiwavelength studies of LLAGNs, this work covers the mid-IR and NIR ranges with high-spatial resolution data. The LLAGNs in the sample present a large diversity in terms of SED shapes. Some of them are very well described by self-absorbed synchrotron emission (e.g. NGC~1052), while others present a thermal-like bump at $\sim 1\, \rm{\mu m}$ (NGC~4594). All of them are significantly different from bright Seyferts and quasars, suggesting that the inner structure of AGNs (i.e. the torus and the accretion disk) undergoes intrinsic changes at low luminosities.
|
The majority of AGNs spend their lives in a low state, characterized by a low accretion rate and a modest luminosity. These are known as low-luminosity AGNs (LLAGNs), the faintest but also the most numerous members of this family, including $\sim 1/3$ of all galaxies in the Local Universe \cite{2008ARA&A..46..475H}. However, LLAGNs are not just scaled-down versions of their bright counterparts, Seyfert galaxies and quasars. Their main characteristics are: \textit{i)} absence of the \textbf{big blue bump} \cite{1996PASP..108..637H}, footprint of the accretion disk in the Spectral Energy Distribution (SED); \textit{ii)} LLAGNs are \textbf{``radio-loud''}, usually showing compact cores and parsec-scale radio jets \cite{2005A&A...435..521N}; and \textit{iii)} they are frequently associated with \textbf{low ionization} nuclear emission-line regions (LINERs, \cite{1980A&A....87..152H}). The differences between high-luminosity objects and LLAGNs might be associated with the structural changes predicted at low luminosities. Both the broad-line region and the ``torus'', keystones of the Unified Model \cite{1993ARA&A..31..473A}, are expected to vanish at $L_{bol} \lesssim 10^{42}\, \rm{erg/s}$ \cite{2003ApJ...590...86L,2007MNRAS.380.1172H}. Instead of the classical scheme, the nucleus of LLAGNs is described in terms of a \textbf{radiatively inefficient} structure in which most of the gravitational energy is not released in the form of electromagnetic radiation \cite{2005Ap&SS.300..177N,2008ARA&A..46..475H}. Its main components are: \textit{i)} a radiatively inefficient accretion flow (RIAF), \textit{ii)} a truncated accretion disk, and \textit{iii)} a jet or an outflow. These components contribute to the total energy output and compete at certain wavelength ranges, but the relative importance of each one of them is still not clear (e.g. \cite{2011ApJ...726...87Y}). Specifically, RIAF-dominated models predict that most of the energy is released at radio and X-ray wavelengths, while high-spatial resolution data suggest an important contribution in the IR (see Fig.~2). On the other hand, some LLAGNs still show broad lines in polarized light \cite{1999ApJ...525..673B} together with a high absorption column ($\gtrsim 10^{23}\, \rm{cm^{-2}}$, \cite{2009ApJ...704.1570G}), which argues for the presence of a torus in their nuclei. In brief, LLAGNs permit us to explore the boundaries of the Unified Model: the radiated luminosity is not high enough to sustain the torus, and the structure of the accretion disk also seems to be altered at low accretion rates.
|
The importance of LLAGNs lies in the fact that they permit us to explore the limits of the classical picture for active nuclei. At very low luminosities ($L_{bol} \lesssim 10^{42}\, \rm{erg\, s^{-1}}$) the broad-line region and the torus are expected to vanish, giving way to a radiatively inefficient structure. As a consequence of the changes in the inner structure, the SED of LLAGNs is expected to differ from that of the bright class. A high-spatial resolution study has been performed for a sample of 6 nearby LLAGNs. This includes sub-arcsec resolution data in the radio, IR, optical, UV and X-ray ranges and allows us to disentangle the nuclear light from that of the host galaxy. For the first time, the mid-IR to NIR range is sampled at high spatial resolution, a range in which LLAGNs radiate most of their luminosity. The shape of the SED in LLAGNs suggests that non-thermal processes can dominate the emission from radio to UV wavelengths. Self-absorbed synchrotron emission in a jet is a possible mechanism to explain the radio-to-UV continuum \cite{2008ApJ...681..905M}. Still, in the case of the Sombrero galaxy this synchrotron continuum is not present in the IR-to-UV range, and the thermal-like emission could be associated with a truncated accretion disk. Overall, the high-spatial resolution SEDs of the LLAGNs in the sample are intrinsically different from those of bright AGNs. The big blue bump is absent, even for unobscured nuclei. The strong thermal IR component associated with the torus in Seyfert nuclei is not detected for LLAGNs, in line with numerical simulations that predict this structure to recede at low luminosities. If it is still present, the torus does not seem to dominate the emission in faint nuclei. Nevertheless, there are still some similarities with the brighter counterparts. At radio wavelengths, LLAGNs show a spectral index similar to that of ``radio-loud'' quasars. Moreover, the behaviour of faint Seyferts (e.g. NGC~1386) is more similar to that of LLAGNs than to that of bright Seyferts, suggesting a smooth transition between both classes of AGNs.
| 12
| 6
|
1206.0777
|
1206
|
1206.2402_arXiv.txt
|
\label{sec:introduction} The tremendous scientific payoff from studies of the cosmic microwave background radiation (CMB) has driven researchers to develop new detectors and new detection techniques. For the most part, CMB measurements have been made with dedicated instruments in which the optical elements are designed specifically to mate with the detectors (rather than in facility-type telescopes). The instruments run the gamut of radio and mm-wave detection techniques: heterodyne receivers, direct power receivers, correlation receivers, interferometers, Fourier transform spectrometers, and single and multi-mode bolometric receivers. The quest for ever more sensitive measurements of the CMB, including its polarization, has led to the development of arrays of hundreds to thousands of detectors, some of which are polarization sensitive. These arrays are coupled to unique, large-throughput optical systems. In this article we focus primarily on optical systems for instruments that are used to measure the temperature anisotropy and polarization of the CMB; in other words, instruments designed to measure only the temperature difference or polarization as a function of angle on the sky. Before explicitly discussing the optical systems, we introduce in this section the celestial emission spectrum at CMB frequencies, discuss how the instrument resolution is determined, and present the angular power spectrum. We then introduce the concepts of throughput and modes and end with a discussion of the limits imposed by system noise, because noise is one of the driving considerations for any optical design. In Section~\ref{sec:groundballoon} we review the various choices available to a CMB optics designer, and the main optical systems that have been used to date. We also discuss more recent developments with the introduction of large focal plane arrays and the efforts to characterize the polarization of the CMB. The ACT and SPT instruments are the highest resolution telescopes dedicated to CMB measurements to date. They are also good examples of the state of the art in CMB optical design at the time they were designed, in the mid-2000s. They are described and compared in Section~\ref{sec:largeground}. CMB interferometers are briefly presented in Section~\ref{sec:interferometers}, and the optical systems of the four CMB satellites to date are reviewed in Section~\ref{sec:satellites}. \subsection{Celestial emission at CMB frequencies} \label{sec:celestialemission} Figure~\ref{fig:spectrum_tant} shows the antenna temperature of the sky from 1 to 1000 GHz for a region at a galactic latitude of roughly 20 degrees. Ignoring emission from the atmosphere, synchrotron emission dominates celestial emission at the low frequency end and dust emission dominates at high frequencies. These galactic emission components may differ by an order of magnitude depending on galactic longitude. The CMB radiation dominates emission between about 20 and 500~GHz. The experimental challenge, however, is to measure spatial fluctuations in the CMB at parts in $10^{6}$ or $10^{7}$ of the level, a couple of orders of magnitude below the bottom of the plot. The polarization signals are lower than the temperature anisotropy by a factor of ten and they too beckon to be measured to percent-level precision. The instrumental passbands, typically 20-30\%, are chosen to avoid atmospheric emission lines or to help identify and subtract the foreground emission.
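For orientation, the antenna-temperature scale used in Figure~\ref{fig:spectrum_tant} follows directly from the Planck law; the short Python sketch below evaluates it for the CMB monopole ($T_{CMB}=2.725$~K, quoted later in this section). This is a minimal illustration; the frequencies chosen are assumptions, not values from the text.

```python
import numpy as np

h = 6.626e-34   # Planck constant [J s]
k = 1.381e-23   # Boltzmann constant [J/K]
T_CMB = 2.725   # CMB monopole temperature [K]

def antenna_temperature(nu_hz, T=T_CMB):
    """Antenna (Rayleigh-Jeans equivalent) temperature of a blackbody at T.

    T_A = (h nu / k) / (exp(h nu / k T) - 1); T_A -> T for h nu << k T.
    """
    x = h * nu_hz / (k * T)
    return (h * nu_hz / k) / np.expm1(x)

# The CMB spectrum is flat (Rayleigh-Jeans) below ~30 GHz and rolls off above:
for nu_ghz in (10, 30, 100, 300, 1000):
    print(f"{nu_ghz:5d} GHz: T_A = {antenna_temperature(nu_ghz * 1e9):.3f} K")
```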
\begin{figure}[htb] \begin{center} \vspace{-0.18in} \includegraphics[width=4.5in]{spectrum_tant5_sh.pdf} \vspace{-0.1in} \caption{\small Sources of sky emission between 1 and 1000 GHz for a region of sky near a galactic latitude of roughly $20^\circ$. The flat part of the CMB spectrum (solid black), roughly below 30~GHz, is called the Rayleigh-Jeans portion. A Rayleigh-Jeans source with frequency-independent emissivity would be indicated by a horizontal line on this plot. The synchrotron emission (long dash, green) is from cosmic rays orbiting in galactic magnetic fields and is polarized. Free-free emission (short dash, blue) is due to ``braking radiation'' (bremsstrahlung) from galactic electrons and is not polarized. The amplitude of the spinning dust emission (dash dot, black) is not well known. This particular spinning dust model comes from \citet{Ali-Haimoud2009}. The standard spinning dust emission is not appreciably polarized. Emission from dust grains (brown, solid), which is more intense than the CMB above $\sim$700~GHz, is partially polarized. The atmospheric models are based on the ATM code \citep{pardo/etal:2001}; they use the US standard atmosphere and are for a zenith angle of $45^\circ$. The Atacama/South Pole spectrum (solid, red) is based on a precipitable water vapor of 0.5~mm. The difference between the two sites is inconsequential for this plot. The atmospheric spectra have been averaged over a 20\% bandwidth. The pair of lines at 60 and 120 GHz are the oxygen doublet and are also prominent at balloon altitudes (solid, purple). The lines at 19 and 180 GHz are water lines. The finer-scale features are from ozone. \label{fig:spectrum_tant} } \vspace{-0.3in} \end{center} \end{figure} The basic picture in Figure~\ref{fig:spectrum_tant} has remained the same for over thirty years \citep{weiss:1980}, though over the past decade there has been increasing evidence for a new component of celestial emission in the 30 GHz region (e.g., \citet{kogut/etal:1996, deOliveira1997, leitch/etal:1997}). This new component is spatially correlated with dust emission and has been identified with emission by tiny grains of dust that are spun up to GHz rotation rates, hence it has been dubbed ``spinning dust.'' A variety of mechanisms have been proposed for spinning up the grains~\citep{draine/lazarian:1998b,Draine1999}. Still, it is not clear that the source is predominantly spinning dust. Understanding this emission source is an active area of investigation. \subsection{Instrument Resolution} \label{sec:angularresolution} The resolution of a CMB telescope is easiest to think about in the time-reversed sense. We imagine that a detector element emits radiation. The optical elements in the receiver direct that beam to the sky or onto the primary reflector. The size of the beam at the primary optic determines the resolution of the instrument. Such a primary optic can be a feed horn that launches a beam to the sky, a lens, or a primary reflector. The connection between the spatial size of the beam at the primary optic and the resolution can be understood through Fraunhofer's diffraction relation (e.g., \citet{born/wolf,hecht:OPTICS}), \begin{equation} \psi(\theta) \propto \int_{apt} \psi_a(r) e^{ikr\sin\theta\cos\phi}\, r\,dr\,d\phi, \end{equation} where $\psi_a(r)$ is the scalar electric field (e.g., one component of the electric field) in the aperture or on the primary optic, $\psi(\theta)$ is the angular distribution of the scalar electric field in the far field ($d \gg 2D^2/\lambda$) and $k=2\pi/\lambda$.
The integral is over the primary reflector or, more generally, the aperture. For simplicity we have taken the case of cylindrically symmetric illumination with coordinates $r$ and $\phi$ for a circular aperture of diameter $D$, although a generalization is straightforward. The normalized beam profile is then given by $B(\theta)=|\psi(\theta)|^2/|\psi(0)|^2$. That is, if the telescope scanned over a point source very far away, the output of the detector as measured in power would have this profile as a function of scan angle $\theta$. Equation 1 gives an excellent and sometimes sufficient estimate of the far-field beam profile. To be more specific, let us assume that the aperture distribution has a Gaussian profile so that the integrals are simple. That is, $\psi_a(r)=\psi_0e^{-r^2/2\sigma_r^2}$. We also assume that $\psi_a$ is negligible at $r \geq D/2$ so that we can let the limit of integration go to infinity. This is the ``large edge taper'' limit. In reality, no aperture distribution can be Gaussian and some are quite far from it. The integral evaluates to $\psi_0\sigma_r^2e^{-\sigma_r^2k^2\sin^2(\theta )/2}$. For small angles, $\sin(\theta)\approx\theta$, and we find from the above that $B(\theta)=e^{-\theta^2/2\sigma_B^2}$ where $\sigma_B=\lambda/(\sqrt{8}\,\pi\sigma_r)$. In angular dimensions, the beam profile is most often characterized by a full width at half maximum, or twice the angle at which $B(\theta)= 1/2$. We denote this as $\theta_{1/2}$ and find $\theta_{1/2}=\sqrt{8\ln 2}\,\sigma_B=\sqrt{\ln 2}\,\lambda/(\pi\sigma_r)$. We see the familiar relation that the beam width is proportional to the wavelength and inversely proportional to the size of the illumination pattern on the primary reflector, with a pre-factor that depends on the geometry. For this far-field Gaussian profile the beam solid angle is $\Omega_B=\int B d\Omega = 2\pi\sigma_B^2$. The natural ``observable'' for anisotropy measurements is the angular power spectrum for the following reason. When the distribution of the amplitudes of the fluctuations is Gaussian, as it apparently is for the primary CMB, {\it all} information about the sky is contained in the power spectrum. If there are correlations in the signal, for example if the cooler areas had a larger spatial extent than the warmer areas or discrete sources of emission were clustered together, then higher-order statistics would be needed to fully describe the sky. Even in this case, the power spectrum is the best first-look analytic tool for assessing the sky. Searches for ``non-Gaussianity'' are an active area of research. While there are many possible sources of non-Gaussianity, the primary CMB anisotropy appears to be Gaussian to the limits of current measurements (e.g., \citet{komatsu/etal:2011}). A snapshot of the latest measurements of the power spectrum is shown in Figure~\ref{fig:pspec}. \begin{figure}[htb] \begin{center} \includegraphics[width=5.0in]{ps_mar_12_psss.pdf} \caption{\small Current best published measurements of the CMB temperature power spectrum (data points, \cite{komatsu/etal:2011,shirokoff/etal:2011,das/etal:2011, keisler/etal:2011}) and a $\Lambda$CDM cosmological model (solid red, up to $\ell=3000$). The model power spectrum for $\ell>3000$ is due to Poisson noise from confusion-limited dusty star forming galaxies (DSFGs) at 150~GHz. The x-axis is scaled as $\ell^{0.45}$ to emphasize the middle part of the anisotropy spectrum.
Gaussian approximations to the window functions are shown for COBE ($7^{\circ}$), WMAP ($12^\prime$), Planck ($5^\prime$), ACT ($1.4^\prime$, \cite{swetz/etal:2011}), and SPT ($1.1^\prime$, \cite{schaffer/etal:2011}). The large WMAP error bars near $\ell=2$ and $\ell=1000$ are due to ``cosmic variance'' and finite beam resolution, respectively. \label{fig:pspec} } \end{center} \end{figure} The instrument resolution as expressed in the power spectrum is obtained from the Legendre transform of $B^2(\theta)$. To appreciate this, we take a step back and describe the connection between the observable, that is the angular power spectrum, and the antenna pattern of the instrument. Because the CMB covers the full sky, it is most usefully expressed as an expansion in spherical harmonics. The monopole term ($\ell=0$) has been determined by COBE/FIRAS to be $T_{CMB}=2.725\pm0.001$~K (\citet{fixsen/mather:2002}, plotted in Figure~\ref{fig:spectrum_tant}). The dipole term ($\ell=1$) is dominated by the peculiar velocity of the solar system with respect to the cosmic reference frame. As we are primarily concerned with cosmological fluctuations, we omit these terms from the expansion and we write the fluctuations as \begin{equation} \delta T(\theta,\phi)=\sum_{\ell\geq 2, -\ell\leq m \leq\ell} a_{\ell m}Y_\ell^m(\theta,\phi). \label{eq:dt} \end{equation} To the limits of measurement, the CMB fluctuations appear to be statistically isotropic (e.g., \cite{basek/etal:2006}): they are the same in all directions and thus have no preferred $m$ dependence. The overall variance of the CMB fluctuations is then given by \begin{equation} <\delta T^2(\theta,\phi)>= \sum_{\ell\geq 2}\frac{2\ell+1}{4\pi} <|a_{\ell m}|^2> = \sum_{\ell\geq 2} \frac{2\ell+1}{2\ell(\ell+1)} \frac{\ell(\ell+1)}{2\pi}C_\ell \equiv \sum_{\ell\geq 2} \frac{2\ell+1}{2\ell(\ell+1)} \cal{B}_\ell, \label{eq:var} \end{equation} where the factor of $2\ell+1$ comes from the sum over the $m$ values, all of which have the same variance, and the $4\pi$ comes from averaging over the full sky. Generally $C_\ell$ is called the power spectrum, but in cosmology the term is just as frequently used for $\cal{B}_\ell$. These quantities are the primary point of contact between theory and measurements. Cosmological models provide predictions for $C_\ell$; experiments measure temperatures on a patch of the sky and provide an estimate of $C_\ell$. The quantity most often plotted is $\cal{B}_\ell$ \footnote{The factor of $\ell(\ell+1)/2\pi$ \citep{bond/efstathiou:1984}, as opposed to the possibly more natural $\ell(2\ell+1)/4\pi$ \citep{peebles:1994}, is derived from the observation that in the cold dark matter model, without a cosmological constant, $C_\ell$ approaches $1/\ell(\ell+1)$ at small $\ell$ for a scalar spectral index of unity. Needless to say, the model that gave rise to the now-standard convention does not describe Nature. Another choice would be $(\ell+1/2)^2$ because the wavevector $k\rightarrow\ell+1/2$ at high $\ell$. There is not a widely agreed upon letter for the plotted power spectrum. We use $\cal{B}$ for both ``bandpower'' and J.~R.~$\cal{B}$ond, who devised the convention. The term bandpower refers to averaging the $\cal{B}_\ell$ over a band in $\ell$.}. It is the fluctuation power per logarithmic interval in $\ell$. The x-axis of the power spectrum is the spherical harmonic index $\ell$. As a rough approximation, $\ell\approx 180/\theta$ with $\theta$ in degrees.
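As a quick numerical check of the beam relations above, the following Python sketch evaluates $\sigma_B$, $\theta_{1/2}$, and $\Omega_B$ for a Gaussian aperture distribution, together with the rough multipole $\ell\approx 180^\circ/\theta$ that such a beam probes. The 150~GHz wavelength and 1.5~m illumination width are illustrative assumptions, not values drawn from the text.

```python
import numpy as np

def gaussian_beam(wavelength_m, sigma_r_m):
    """Far-field parameters for a Gaussian aperture distribution
    psi_a(r) = psi_0 exp(-r^2 / 2 sigma_r^2) (large edge-taper limit)."""
    sigma_B = wavelength_m / (np.sqrt(8.0) * np.pi * sigma_r_m)  # beam width [rad]
    theta_fwhm = np.sqrt(8.0 * np.log(2.0)) * sigma_B            # FWHM [rad]
    omega_B = 2.0 * np.pi * sigma_B**2                           # solid angle [sr]
    return sigma_B, theta_fwhm, omega_B

# Example: 2 mm radiation (150 GHz) with a 1.5 m illumination width
sigma_B, fwhm, omega = gaussian_beam(2e-3, 1.5)
fwhm_deg = np.degrees(fwhm)
print(f"FWHM = {fwhm_deg * 60:.2f} arcmin, Omega_B = {omega:.2e} sr")
print(f"probes multipoles near ell ~ {180.0 / fwhm_deg:.0f}")
```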
The process of measuring the CMB with a beam of finite size acts as a convolution of the intrinsic signal (eq.~\ref{eq:dt}) with the beam function, $B(\theta)$. The finite resolution averages over some of the smaller angular scale fluctuations and thereby reduces the variance given in eq.~\ref{eq:var}. By the convolution theorem, a convolution in one space corresponds to a multiplication in the transform space. In our case, because we are working on a sphere with symmetric beams, Legendre transforms, as opposed to Fourier transforms, are applicable. We may think of the square of the Legendre transform of $B(\theta)$, $B_\ell^2$, as filtering the power spectrum. (There is one power of $B$ associated with one temperature map.) The transform of $B(\theta)$ is given by \begin{equation} B_\ell = 2\pi \int B(\theta) P_\ell(\cos\theta)\,d\cos\theta = B_0e^{-\ell(\ell+1)\sigma_B^2/2}, \label{eq:bl} \end{equation} where $P_\ell$ is a Legendre polynomial, $B_0$ is a normalization constant, and the right-hand equality holds for the Gaussian beam profile above. A Gaussian random field is fully described by the two-point correlation function, $C(\theta )$ (the Legendre transform of the power spectrum), which gives the average variance of two pixels separated by a given angle. The variance given in eq.~\ref{eq:var} is the angular correlation function evaluated for zero angular separation between pixels. The general relation, including the effects of measuring with a beam of finite resolution, is \begin{equation} C_{meas}(\theta) = <\delta T_{meas}(\theta_1,\phi_1)\delta T_{meas}(\theta_2,\phi_2) > = \sum_{\ell\geq 2} \frac{2\ell+1}{4\pi}C_\ell P_\ell(\cos\theta) W_\ell . \label{eq:ctheta} \end{equation} Here $W_\ell$ is the ``window function.'' In this expression, the angle $\theta$ is that between directions ``1'' and ``2''.~\footnote{We follow common notation but note that in eq.~\ref{eq:dt}, $\theta$ is a coordinate on the sky; in eq.~\ref{eq:bl}, $\theta$ is the angular measure of the beam profile with $\theta=0$ corresponding to the beam peak; and in eq.~\ref{eq:ctheta}, $\theta$ is the angular separation between two pixels on the sky. } The window function encodes the effects of the finite resolution. For a symmetric beam, $W_\ell=B_\ell^2$. Figure~\ref{fig:pspec} shows approximations to the window functions, assuming Gaussian shaped beams, for COBE ($\theta_{1/2}=7^\circ$), WMAP ($\theta_{1/2}=12^\prime$), Planck ($\theta_{1/2}=5^\prime$), ACT ($\theta_{1/2}=1.4^\prime$), and SPT ($\theta_{1/2}=1.1^\prime$). One immediately sees the relation between the resolution and how well one can determine the power spectrum. For example, COBE, which we discuss in more detail below, was limited to large angular scales (low $\ell$) because of its relatively low angular resolution. \subsection{Throughput and Modes} \label{sec:throughput} One of the key characteristics of any optical system is the ``throughput'' or \'etendue (or ``A-Omega''). It is a measure of the total amount of radiation that an optical system handles. Using Liouville's theorem, which roughly states that the volume of phase space is conserved for a freely evolving system, one can show that $A\Omega$ is conserved for photons as long as there is no loss in the system. This means that at each plane of the system the integral of the product of the areal ($dA$) and angular ($d\Omega$) distributions of the radiation is constant.
For example, let's assume that the effective angular distribution of a beam launched from a primary optic of radius $r$ is a top hat in angular extent with an apex angle defined as $\theta_{1/2} = 2 \theta_0$, analogous to the $\theta_{1/2}$ definition for Gaussian distributions above. Then one obtains a throughput of \begin{equation} \label{eqn:aomegatop} A \Omega = 2 \pi^{2}r^{2} (1 - \cos \theta_0) \simeq \frac{\pi^{2}D^{2}\theta_{1/2}^{2}}{16}, \end{equation} where the approximation holds for $\sin\theta_0 \simeq \theta_0$. If the angular distribution is a Gaussian with width $\sigma_{B}$ ($\theta_{1/2} = \sqrt{8\ln 2} \sigma_{B}$) then \begin{equation} \label{eqn:aomegagauss} A \Omega \simeq 2\pi^{2} r^{2} \sigma_B^2 \simeq \frac{\pi^{2}D^{2}\theta_{1/2}^{2} }{16 \ln 2} \approx \frac{\pi^{2}D^{2}\theta_{1/2}^{2} }{11}. \end{equation} Here we also assume that $\sigma_{B}$ is small, so that the integration over the angular pattern gives appreciable contributions only for $\sin \sigma_{B} \simeq \sigma_{B}$. If the primary reflector has an effective diameter of 1~m and the beam has $\theta_{1/2} = 0.15^\circ$ then the throughput is 0.04~cm$^{2}$sr (assuming Equation~\ref{eqn:aomegatop}). Let's say this radiation is focused down to a feed with an effective collecting area of 1~cm$^2$. Conservation of throughput implies that now $\theta_{1/2}=13^\circ $. In other words, as you squeeze down the area the radiation has to go through by focusing or concentrating, the solid angle increases. While $A\Omega$ is conserved for any lossless optical system, it has a specific value for a system that supports only a ``single mode'' of propagating radiation: \begin{equation} \label{eqn:lambda2} A\Omega=\lambda^2, \end{equation} where $\lambda$ is the wavelength of the radiation and $A$ is the effective area of the aperture. We discuss modes below. This relation, which may be derived from the Fraunhofer integral and conservation of energy (as in \citet{born/wolf}, Section 8.3.3.), is a generalization of familiar results from diffraction theory. For example, Airy's famous expression that the angular diameter of the spot size from a uniformly illuminated aperture is $2.44\lambda/D$, where $D$ is the aperture diameter, is equivalent to Equation~\ref{eqn:lambda2}\footnote{The Airy beam profile is given by $B_n(\theta )= [2J_1(x)/x]^2$ where $x=\pi D\sin (\theta )/\lambda$ and $J_1$ is a Bessel function. The value of $1.22\lambda/D$ is the angular separation between the maximum and the first null. For small angles, $\theta_{1/2}=1.03\lambda/D$. The total solid angle is $2\pi \int B_n(\theta )\sin(\theta )d\theta$. To make the integral simple and avoid considering the difference between projecting onto a plane versus a sphere, we consider the limit of small $\theta$. Then, $\Omega= 8\lambda^2/\pi D^2\int_0^\infty[J_1(\pi Dx/\lambda)]^2x^{-1}dx=\lambda^2/A$. }. The relations above give a handy conversion between the system's effective aperture, the angular extent of the beam and the frequency of interest for single-mode optical systems. Combining Equations~\ref{eqn:aomegagauss} and~\ref{eqn:lambda2} we obtain \begin{equation} \label{eqn:beamsize} \theta_{1/2, rad} = 1.06 \lambda/D, \end{equation} where $D$ is the effective illumination on the primary reflector. In the above treatment we brought in the concept of a single propagating mode of radiation. A mode is a particular spatial pattern of the electromagnetic field.
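The worked example above is easy to verify numerically. The sketch below assumes the top-hat relation of Equation~\ref{eqn:aomegatop} and the single-mode condition $A\Omega=\lambda^2$; it reproduces the $\sim$0.04~cm$^2$\,sr and $\sim$13$^\circ$ figures quoted in the text (the 2~mm wavelength in the last line is an illustrative choice).

```python
import numpy as np

def aomega_tophat(D_m, theta_half_rad):
    """Throughput for a top-hat angular distribution (Eq. aomegatop),
    valid for small apex angle theta_0 = theta_half / 2."""
    return np.pi**2 * D_m**2 * theta_half_rad**2 / 16.0

D = 1.0                               # 1 m primary
theta_half = np.radians(0.15)         # 0.15 deg beam
AO = aomega_tophat(D, theta_half)     # conserved along the optical path
print(f"A*Omega = {AO * 1e4:.3f} cm^2 sr")   # ~0.04 cm^2 sr, as in the text

# Conservation of throughput: focus onto a feed of effective area 1 cm^2
A_feed = 1e-4                         # [m^2]
Omega_feed = AO / A_feed              # [sr]
# invert the top-hat relation Omega = pi * theta_half^2 / 4 for the feed
theta_feed = np.degrees(np.sqrt(4.0 * Omega_feed / np.pi))
print(f"feed beam theta_1/2 ~ {theta_feed:.0f} deg")  # ~13 deg

# Single-mode condition A*Omega = lambda^2 gives theta_1/2 = 1.06 lambda / D
wavelength = 2e-3                     # illustrative wavelength [m]
print(f"single-mode FWHM = {np.degrees(1.06 * wavelength / D) * 60:.1f} arcmin")
```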
Radiation propagation in a rectangular waveguide of height $a/2$ and width $a$ gives a familiar example. For frequencies below the cutoff, $\nu_c=c/2a$, no electromagnetic radiation can propagate down a waveguide longer than a few $\lambda$. For $c/2a<\nu<c/a$ only the TE$_{10}$ mode of radiation propagates; at frequencies just above $c/a$ the TE$_{10}$ and TE$_{01}$ can propagate. Above $\sqrt{5/4}c/a$ the TE$_{10}$, TE$_{01}$, TE$_{11}$, and TM$_{11}$ modes are free to propagate. For a cylindrical waveguide of diameter $d$, the lowest frequency mode is the TE$_{11}$ (which supports two polarizations) with a cut-off frequency $\nu_c = c/1.7d$; the next modes are the TM$_{01}$ and TE$_{21}$, which turn on at frequencies 1.31 and 1.66 times higher, respectively, than the lowest. Experimentally, the selection for operating in a single mode is typically achieved by having a waveguide somewhere along the light path, usually at the entrance to the detecting element\footnote{In a close-packed array this may be approximated by making the pixel size smaller than $\lambda$. Such a spatial mode would support two polarizations.}. The waveguide is essentially a high-pass filter, selecting the lowest frequency that can pass through the system. An additional low-pass filter then rejects frequencies at which the second and higher modes are propagating. Experimenters have been using single modes because these systems have particularly well behaved and calculable beam patterns. If a second mode were added, say by operating at a higher frequency so that both the TE$_{11}$ and TM$_{01}$ propagated (for a cylindrical waveguide), one would receive more signal, an advantage, but the beam pattern of the combination of modes would be different, likely more complex than the single-mode illumination, and there would likely be increased spillover past the edge of the primary. Consider a radio receiver that observes a diffuse Planckian source of temperature $T$ through a telescope. The surface brightness is given by \begin{equation} S_\nu(T) = \frac{2h\nu^3}{c^2(e^{h\nu/kT}-1)}\rightarrow \frac{2\nu^2}{c^2} kT, \end{equation} where $h$ is Planck's constant, $k$ is Boltzmann's constant, and $S_\nu$ is measured in ${\rm W/m^2\,sr\,Hz}$. The expression on the right is the surface brightness in the Rayleigh-Jeans limit. The power that makes it through to the detector is given by \begin{equation} P=\frac{1}{2}\int_{\Omega}\int_{\nu} \epsilon (\nu , \theta , \phi) A_e(\nu)S_\nu(\theta ,\phi )B(\nu ,\theta ,\phi ) d\Omega d\nu, \end{equation} where the factor of 1/2 comes from coupling to a single polarization, $A_e$ is the effective area and $\epsilon$ is the transmission efficiency of the instrument. For clarity of discussion we will henceforth assume that the transmission efficiency is unity. If $S_\nu$ is uniform across the sky, we are in the Rayleigh-Jeans limit ($h\nu\ll kT$), and $A_e$ and $B_n$ are relatively independent of frequency over a small bandwidth (commonly achieved), then \begin{equation} P=\frac{1}{2}\int_{\nu} A_e(\nu) 2\frac{\nu^2}{c^2} k T \int_{\Omega}B_n(\nu, \theta, \phi ) d\Omega d\nu= kT\int_{\nu} \frac{A_e(\nu) \Omega }{\lambda^2} d\nu = kT\int_{\nu} d\nu = kT\Delta\nu. \end{equation} Thus, each mode of radiation delivers $kT\Delta\nu$ of power to the detector. If there is a second mode in the system that is supported in this bandwidth then it also contributes $kT\Delta\nu$ of power.
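A minimal sketch of this single-mode bookkeeping, assuming the cylindrical-waveguide cutoff ratios quoted above and the $kT\Delta\nu$ power per mode; the 1.6~mm guide diameter and the 30~GHz bandwidth are illustrative assumptions, not values from the text.

```python
import numpy as np

c = 2.998e8     # speed of light [m/s]
k = 1.381e-23   # Boltzmann constant [J/K]

def cylindrical_cutoffs_ghz(d_m):
    """Lowest cutoff frequencies for a cylindrical waveguide of diameter d.
    TE11 cuts on at ~c/(1.7 d); TM01 and TE21 at 1.31x and 1.66x that."""
    te11 = c / (1.7 * d_m)
    return {"TE11": te11 / 1e9, "TM01": 1.31 * te11 / 1e9, "TE21": 1.66 * te11 / 1e9}

print(cylindrical_cutoffs_ghz(1.6e-3))  # ~1.6 mm guide: TE11 near 110 GHz

# Power delivered by one mode viewing a Rayleigh-Jeans source of temperature T:
T, bandwidth = 2.725, 30e9              # K, Hz (illustrative 20% band at 150 GHz)
print(f"P = kT*dnu = {k * T * bandwidth * 1e12:.2f} pW")
```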
It is possible, even likely, that different modes are supported over different but overlapping bandwidths. Increasing the amount of celestial power on one's detector is an advantage when trying to detect a faint signal like the CMB. The trade-off is between control of the optical properties of the system and collecting power onto the detector. Note that using a larger telescope does not increase the detected power if one detects only a single mode. A larger telescope merely increases the resolution. In a bolometric system, one can to a certain extent control the number of modes that land on the detector. For example, one can place the absorbing area at the base of a ``light collector'' or Winston cone \citep{welford/winston:1978}. An approximation to the number of modes in the system is then found by beam mapping to determine $\Omega_B$, measuring the band pass to find the average wavelength $\lambda_a$, and combining these with the collecting area $A$ of the input optics. This gives the number of modes as $\alpha_m=A\Omega/\lambda_a^2$. This is only an approximation because it assumes knowledge of the aperture distribution (for the collecting area) and that all modes couple to the detector with the same efficiency. We use $\alpha$ because often this quantity is not an integer. Although formally modes come in integer sets, not all modes couple equally to the detector output. In the early days of CMB bolometry multi-moded systems were often used. As detectors became more sensitive, the field moved toward single-moded bolometric systems as pioneered in the White Dish experiment \citep{tucker/etal:1993}. This led to more precise knowledge of the beams. To a good approximation, the current generation of bolometric CMB instruments all operate single-moded (with the first mode of propagation). However, there are modern examples of multi-moded systems, though they are not used for the primary CMB bands. They include the 345~GHz band on Boomerang \citep{jones:2005} and Planck's 545 and 857 GHz bands \citep{ade/etal:2010, maffei/etal:2010} where there are just a few modes. In these cases, the coupling of the radiation in the bolometer's integrating cavity is in practice not possible to compute accurately. Interest in multi-moded systems has returned with at least one satellite proposal for an instrument called PIXIE for measuring the CMB polarization in a massively over-moded system \citep{kogut/etal:2011}. The PIXIE concept is based on the observation that the signal improves as the number of modes, $n_m$, but the noise degrades only as $\sqrt{n_m}$ in the photon-limited noise regime (see below). Thus, S/N improves as $\sqrt{n_m}$. \subsection{Noise} \label{sec:noise} \begin{figure}[tbh] \begin{center} \includegraphics[width=4.0in]{noise_sh.pdf} \caption{\small Photon noise from 1 to 1000 GHz for a single mode of radiation for a 20\% bandwidth in frequency. The CMB noise (solid, thick black) is for a region of sky without any other foreground emission. It sets a fundamental limit over most of this frequency range (the far infrared background, not shown, sets the limit near 1 THz). Noise from the atmosphere (Chile or South Pole, zenith angle of 45$^\circ$) for bolometers (dash, green) is lower than for coherent receivers (solid, purple) except below $\sim$20~GHz. Bolometers on a balloon (dot, blue) are limited by CMB noise between 80 and 200~GHz.
The atmospheric noise shown is due to thermal emission and does not include contributions from turbulence, changes in column density, or water vapor, which can increase the noise many-fold. Also shown are reported coherent receiver (square) and bolometer (star) noise for the Planck satellite, both adjusted as discussed in the text and both for total intensity. The Planck bolometers are close to the fundamental noise limit. \label{fig:noise} } \end{center} \end{figure} The choice of an optical system and its location is intimately connected with the desired noise performance. There are a number of contributing noise sources that depend on the type of detector, how it is biased, and on its environment (see e.g., \cite{mather:1982,pospieszalski:1992}). For this review we concern ourselves primarily with the photon noise from the sky because it sets the ultimate detection limit. We first consider bolometric or ``direct'' detectors, which detect the total power and destroy all phase information in the incident field. Equation \ref{eq:photonnoise} gives the photon noise on the detectors per mode (e.g., \citet{zmuidzinas:2003}) as: \begin{equation} N^2(\nu)\tau = \frac{\Delta\nu}{ \eta(\nu) (k\Delta\nu )^2} (h\nu )^2n(\nu )[1+\eta(\nu )n(\nu )], \label{eq:photonnoise} \end{equation} where $\tau$ is the integration time, $\Delta\nu$ is the bandwidth, $n(\nu )$ is the occupation number (power in a mode divided by $h\nu$), and $\eta(\nu )$ is the quantum efficiency, which we take to be unity. We have approximated integrals by multiplying by a bandwidth of $\Delta\nu$. Because each mode of radiation delivers a power of $kT\Delta\nu$, one may convert from ${\rm W\,s^{1/2}}$ to ${\rm K\,s^{1/2}}$ by dividing by $k\Delta\nu$. The first term in the square bracket gives the Poisson noise and the second term accounts for the correlations (bunching) between the arrival times of the photons. When there are multiple modes in the system, one cannot simply assume that the above holds for each mode. One must take into account the correlations between the photon noise in each mode (\citet{lamarre:1986, richards:1994, zmuidzinas:2003}). For coherent detector systems, one first amplifies the incident electric field while retaining phase information. After multiple stages of amplification, mixing, etc. one at last records the power in the signal. Because the amplitude and phase are measured simultaneously, and these quantities do not commute, quantum mechanics sets a fundamental noise limit of $N\sqrt{\tau}=(h\nu/k)/\sqrt{\Delta\nu}$. In practice, the best systems achieve three times the quantum limit over a limited bandwidth and in ideal conditions. A good estimate of the noise limit is: \begin{equation} N(\nu)\sqrt{\tau} = \frac{3(h\nu/k) + T_{sky}}{\sqrt{\Delta\nu}}, \end{equation} where $T_{sky}$ is the antenna temperature of the incident radiation. In Figure~\ref{fig:noise} we show the noise limit for a single-moded detector with 20\% bandwidth at a high-altitude ground-based site (e.g., the South Pole or the Atacama Desert) and at a typical balloon altitude of 36~km. We also show the noise level for the current generation of detectors on the Planck satellite. The quoted sensitivities for the bolometric detectors \citep{planck-HFI-inst:2011} (two polarizations combined and all noise terms in the high frequency limit) are adjusted up to a 20\% reference bandwidth.
The quoted sensitivities for the coherent detectors \citep{planck-LFI-inst:2011} (all noise terms in the high frequency limit) have been adjusted down to a 20\% reference bandwidth. Advances in bolometric detectors have reached the point where the intrinsic noise is near the noise limit set by the photon noise. Thus, to improve sensitivity, one wins more quickly by adding detectors as opposed to improving the detector noise. This is one of the motivations behind large arrays of detectors, and their associated large fields of view. In both cases one can also win by increasing the number of modes. \subsection{Polarization Terminology} \label{sec:polarizationterminology} There is a well-developed terminology for describing polarization. Imagine a telescope beam that points at a single position on the sky and feeds a detector that can measure the amplitude and phase of a partially coherent electric field. Because the electric field is a vector in a plane, it can be completely specified by measuring its horizontal, $E_x(t)$, and vertical, $E_y(t)$, components at each instant. To measure the intensity of the field, one averages the detector outputs over time. The polarization properties of the field, assuming they are relatively constant, are completely specified with the coherency matrix: \begin{equation} \twobytwo{< E_xE_x^*>} {< E_xE_y^*>}{< E_yE_x^*>} {< E_yE_y^*>} \propto \frac{1}{2} \twobytwo{I}{0}{0}{I} + \frac{1}{2} \twobytwo{Q}{U}{U}{-Q} + \frac{i}{2} \twobytwo{0}{V}{-V}{0} \label{eq:stokes} \end{equation} where the ``*'' denotes a complex conjugate and the average is taken over time. The coherency matrix can also be represented by means of the Stokes parameters $I,\, Q,\, U$ and $V$, as shown on the right-hand side of Equation~\ref{eq:stokes}. The polarization has the symmetries of a spin-two field. The proportionality sign indicates that the Stokes parameters, which represent intensities, are reported in Kelvins. The total intensity in the radiation is the trace of the matrix. Stokes $Q$ is the intensity of the horizontal polarization minus the vertical. Stokes $U$ is the in-phase correlation between the two components of the field minus the $180^\circ$ out-of-phase correlation. If the incident radiation were pure Stokes $Q$, and one rotated the field by $45^\circ$ in the x-y plane, then the output would be pure $U$. Stokes $V$ measures circular polarization. However, the CMB is expected to be only linearly polarized. (See \cite{zal+seljak:1997} and \cite{kamionkowski/etal:1997} for a discussion and formalism.) Although deviations from this prediction are of great interest, they are beyond the scope of this article. As the beam is scanned across the sky, one makes maps of $I$, $Q$, and $U$. Of course, the values of $Q$ and $U$ depend on the specification of a coordinate system. However, the $Q$ and $U$ maps may be transformed into ``E modes'' and ``B modes''. The advantage of these modes is that they are independent of the coordinate system and, for the CMB, are directly related to different physical processes in the early universe \citep{kamionkowski/etal:1997,zal+seljak:1997}. The E-modes correspond to a spin-two field with no curl and originate primarily from density perturbations in the early Universe. This E-mode signal has been detected by a number of instruments.
The B-modes correspond to a spin-two field with no divergence and can originate from tensor-type physical processes such as gravity waves that are predicted to have been generated by an inflationary epoch as early as $10^{-35}$ seconds after the big bang. To date, the B-mode signal has not been detected. If the B-modes are of sufficient amplitude to be detected, their impact on cosmology and physics would be enormous. Not only would the discovery significantly limit the number of models that could describe the early universe, but it would mark the first observational evidence of gravity operating on a quantum scale. At large angular scales, $\ell \ltsim 100$, B-modes may result from inflationary gravitational waves and from galactic foreground emission. At higher $\ell$ multipoles the primary contribution to the B-mode spectrum is from E-modes being gravitationally lensed so that they produce a B-mode component. The level of primordial (or inflationary) B-modes is quantified in terms of a parameter $r$, the ratio of the variance of tensor perturbations to that of density perturbations. Predictions for $r$ vary over many orders of magnitude. Currently observations give $r<0.21$ (95\%)~\citep{keisler/etal:2011}, a limit coming from {\it temperature} anisotropy and other cosmological probes (rather than polarization)~\footnote{The inflation-generated gravity waves also contribute to the temperature anisotropy and thus can be constrained by such measurements.}. When translated into temperature units in Figure~\ref{fig:pspec}, this becomes a faint ${\cal B} \ltsim 150 \times 10^{-9}$~K for $\ell \sim 90$. The experimental challenge is to make accurate and precise polarization measurements at the level of a few tens of nano-K.
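To make the coherency-matrix bookkeeping of Equation~\ref{eq:stokes} concrete, here is a minimal Python sketch that recovers $I$, $Q$, $U$, and $V$ from simulated time streams of the two field components. The field values and the $45^\circ$ test case are invented for illustration, and sign conventions for $U$ and $V$ vary between references.

```python
import numpy as np

def stokes_from_fields(Ex, Ey):
    """Stokes parameters from time streams of the two complex field
    components, following the coherency-matrix decomposition above:
    I = <ExEx*> + <EyEy*>, Q = <ExEx*> - <EyEy*>,
    U = 2 Re<ExEy*>,       V = 2 Im<ExEy*>."""
    I = np.mean(Ex * Ex.conj() + Ey * Ey.conj()).real
    Q = np.mean(Ex * Ex.conj() - Ey * Ey.conj()).real
    U = 2.0 * np.mean(Ex * Ey.conj()).real
    V = 2.0 * np.mean(Ex * Ey.conj()).imag
    return I, Q, U, V

# A field linearly polarized at 45 deg should be pure Stokes U:
rng = np.random.default_rng(0)
amp = rng.normal(size=10000) + 1j * rng.normal(size=10000)  # common amplitude
I, Q, U, V = stokes_from_fields(amp / np.sqrt(2), amp / np.sqrt(2))
print(f"I={I:.2f} Q={Q:.2f} U={U:.2f} V={V:.2f}")  # Q~0, U~I, V~0
```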
| 12
| 6
|
1206.2402
|
||
1206
|
1206.2919_arXiv.txt
|
In this paper we present the Clustering-Labels-Score Patterns Spotter (CLaSPS), a new methodology for the determination of correlations among astronomical observables in complex datasets, based on the application of distinct unsupervised clustering techniques. The novelty in CLaSPS is the criterion used for the selection of the optimal clusterings, based on a quantitative measure of the degree of correlation between the cluster memberships and the distribution of a set of observables, the \emph{labels}, not employed for the clustering. CLaSPS has been primarily developed as a tool to tackle the complexity of the massive multi-wavelength astronomical datasets produced by the federation of data from modern automated astronomical facilities. In this paper we discuss the applications of CLaSPS to two simple astronomical datasets, both composed of extragalactic sources with photometric observations at different wavelengths from large-area surveys. The first dataset, CSC+, is composed of optical quasars spectroscopically selected in the SDSS data, observed in the X-rays by Chandra and with multi-wavelength observations in the near-infrared, optical and ultraviolet spectral intervals. One of the results of the application of CLaSPS to the CSC+ is the re-identification of a well-known correlation between the $\alpha_{\mathrm{OX}}$ parameter and the near-ultraviolet color, in a subset of CSC+ sources with relatively small values of the near-ultraviolet colors. The other dataset consists of a sample of blazars for which photometric observations in the optical, mid- and near-infrared are available, complemented, for a subset of the sources, by Fermi $\gamma$-ray data. The main results of the application of CLaSPS to these datasets have been the discovery of a strong correlation between the multi-wavelength color distribution of blazars and their optical spectral classification into BL Lacs and Flat Spectrum Radio Quasars (FSRQs), and a peculiar pattern followed by blazars in the WISE mid-infrared color space. This pattern and its physical interpretation have been discussed in detail in other papers by one of the authors.
|
\label{sec:introduction} The advancement of discovery in astronomy, from the statistical point of view, can be described as the successful application of several distinct Knowledge Discovery (KD) techniques to increasingly larger data samples. These techniques include: the classification of sources according to one or more observational quantities; pattern recognition for the discovery of correlations among observable quantities; outlier selection for highlighting rare and/or unknown sources; and regression, for the estimation of derived empirical properties from observed quantities. The discovery of new or unexpected correlations between observable quantities at different wavelengths, for example, has propelled the understanding of the nature of astronomical sources and their physical modeling (see, for example, the discovery of the fundamental plane of elliptical galaxies~\citep{djorgovski1987} and the discovery of the link between galaxy X-ray emission and different stellar populations~\citep{fabbiano1985,fabbiano2002}). The effectiveness of pattern recognition techniques for the determination of correlations in low dimensional spaces (two or three dimensions) has usually relied on the ability of astronomers to visualize the distribution of data and make informed guesses about the nature of these patterns, based on theoretical models, reasonableness and intuition. However, this approach becomes more and more ineffectual with the increase in complexity and size of the explored datasets. This difficulty has led to the introduction of KD techniques in the astronomical context. Such techniques are based on statistical and computational methodologies capable of automatically identifying useful correlations among parameters in an N-dimensional dataset without any \emph{a priori} assumption on the nature of both the data and the sought-after patterns. Using these techniques, the focus of the astronomer can shift to the definition of the general problem to be investigated, the selection of the interesting patterns and their physical interpretation. In this paper, we present CLaSPS, a new methodology based on KD techniques for the exploration of complex and massive astronomical datasets and the detection of correlations among observational parameters. While CLaSPS is designed for datasets containing very large numbers of sources, it is also well suited to handle small datasets, as will be shown in this paper. The adoption of KD methodologies in astronomy has only recently surged, due to the increasing availability of massive and complex datasets that would be almost intractable if tackled with the knowledge extraction techniques classically employed in astronomical research. A review of the advantages and most interesting applications of KD to astronomical problems can be found in \citep{ball2010}. The main reasons for the delay in the adoption of such methods in astronomy are: a) datasets for which KD has an edge over classical methods (because of their size and complexity) have become frequent only in the last $\!\sim$15 years; b) the slow transition from model-driven to data-driven research; c) the lack of interdisciplinary expertise required for the application of KD techniques. Other disciplines for which the problem of dealing with massive datasets arose earlier have instead seen a steadier and faster growth of the number and importance of the KD tools employed on a regular basis.
For example, the study of financial markets and of complex networks and systems (applied to the WWW, advertisement placement, epidemiology, genetics, proteomics and security) have been on the forefront of application and development of KD techniques. Thorough reviews of the applications of KD methodologies to specific financial topics, i.e. customer management and financial fraud detection, can be found in~\citep{ngai2009} and~\citep{ngai2011} respectively, while a general review of the role of KD in bio-informatics is provided in~\citep{natarajan2005}. Even if a certain degree of interdisciplinary expertise is desirable, domain-specific know-how is crucial to narrow down the types and number of techniques that can be used to address the specific problems encountered in each field, and to interpret correctly the results of the application of such techniques to the data. Furthermore, KD is only one of the skills necessary to tackle the new problems arising with the onset of data-driven astronomy, the others being astrostatistics~\citep[e.g.][]{babu2007}, visualization techniques~\citep{comparato2007,way2011,hassan2011} and advanced signal processing~\citep{scargle2003,protopapas2006}. All these fields are currently the subject of a new discipline: Astroinformatics~\citep{borne2011}. In this paper, we have focused our attention on the broad question of how efficiently the physical nature of astronomical sources can be characterized by multi-wavelength photometric data. We have applied CLaSPS to two datasets representing specific cases where this assumption can be tested and verified. CLaSPS assumes that low dimensional patterns in data are associated with aggregations (clusters) in the structure of the data in the high-dimensional ``\emph{feature} space'' generated by all the observables of the source\footnote{In general, any source with N measured observables can be represented as a point in an N-dimensional \emph{feature} space, where the coordinates are the numerical values of the observables (or derived quantities).}. These clusters are evaluated through the degree of correlation between the distribution of \emph{features} (i.e., the observables used to build the \emph{feature} space where clusters have been selected) and a set of external quantities, usually observables, metadata or \emph{a priori} constraints, that have not been used for the clustering. The CLaSPS method, based on KD techniques for unsupervised clustering and the use of external information to label the cluster members, has been designed to tackle the problem of the extraction of information from two distinct classes of datasets: a) Inhomogeneous large-area datasets. The advancements in Virtual Observatory (VO) technology are facilitating access to datasets obtained by the combination of multiple observations from different surveys with different observational features (e.g., depth, spatial coverage and resolution, spectral resolution). Such datasets are, by construction, inherently incomplete and are affected by the inconsistency of the observational features of each set of observations used to create them. We expect these datasets to grow in complexity as new data become available. KD techniques can facilitate the extraction of the available knowledge contained in these ``federated'' inhomogeneous samples. b) Large homogeneous datasets from multi-wavelength surveys of well-defined areas of the sky observed with similar depths at different wavelengths.
These surveys typically yield large samples of sources, complete to a given flux. These datasets span limited but well characterized regions of the N-dimensional observable \emph{feature} space. The exploration of the structure of the multi-dimensional distribution of sources in the \emph{feature} space may lead to the discovery of high dimensional correlations and patterns in the data that have been overlooked (or, simply, could not be established) in lower dimensional studies. This paper is organized as follows: in Sec.~\ref{sec:method} we describe the CLaSPS method, in Sec.~\ref{sec:experiment1} its application to the CSC+ dataset, and in Sec.~\ref{sec:experiment2} its application to a sample of blazars with multi-wavelength photometry available. We discuss the future developments of CLaSPS in Sec.~\ref{sec:results}.
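As an illustration of the clustering-plus-\emph{labels} idea sketched above, the following Python snippet clusters a toy \emph{feature} table over a range of cluster numbers and keeps the clustering whose memberships best track a binned external \emph{label}. This is a hedged sketch: the data are synthetic, and scikit-learn's normalized mutual information is used here as a stand-in for the \emph{score} defined in Sec.~\ref{sec:method}.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Toy data: photometric "features" and one external observable used as label
rng = np.random.default_rng(42)
features = rng.normal(size=(500, 4))                  # e.g., multi-wavelength colors
label = features[:, 0] + 0.5 * rng.normal(size=500)   # correlated external observable

# Bin the continuous label into three classes (tertiles)
label_classes = np.digitize(label, np.quantile(label, [0.33, 0.66]))

# Try several cluster numbers; keep the clustering most correlated with the label
best = max(
    (KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
     for k in range(2, 9)),
    key=lambda memb: normalized_mutual_info_score(memb, label_classes),
)
print("NMI of best clustering:",
      normalized_mutual_info_score(best, label_classes))
```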
|
\label{sec:results} In this paper we have presented CLaSPS, a new method for the determination of correlations in complex astronomical datasets, based on KD techniques for unsupervised clustering supplemented by the use of external information to label and characterize the content of the clusters (Sec.~\ref{sec:method}). We have introduced the \emph{score} (Sec.~\ref{subsec:scores}) and shown the reliability of the \emph{score} as a measure of the degree of correlation between the membership distribution of sources in a clustering and the distribution of a quantitative or categorical \emph{label} in distinct classes, using simulated clusterings (Sec.~\ref{subsec:choice}). We have also discussed the applications of CLaSPS to two different samples composed of extragalactic sources with multi-wavelength photometry used as \emph{features}: the first dataset, CSC+ (Sec.~\ref{sec:experiment1}), is composed of spectroscopically confirmed quasars from the SDSS DR8 with multi-wavelength observations in the near-infrared, optical and ultraviolet, and detected (or with reliable upper limits) in the Chandra X-ray CSC catalog; the second dataset (Sec.~\ref{sec:experiment2}) is composed of optically confirmed blazars with mid-infrared, near-infrared and optical observations, complemented, for a subset of the sources, by $\gamma$-ray data from the 2FGL. The main result of the application of CLaSPS to the CSC+ dataset has been the confirmation of a well-known correlation~\citep[see, for example][]{lusso2010} between the near-ultraviolet/blue optical luminosity of optically selected radio-quiet quasars and the spectral index $\alpha_{\mathrm{OX}}$ (Sec.~\ref{subsec:application1}) in a subset of the highly inhomogeneous CSC+ sample. CLaSPS has narrowed the CSC+ sample down to three specific clusters that show a significant correlation between the $L_{\mathrm{opt}}(2500\AA)$ mono-chromatic luminosity and the $\alpha_{\mathrm{OX}}$ spectral index, based on the clustering of the CSC+ sample in the \emph{feature} space generated by the near-infrared, optical and ultraviolet photometric data. Further analysis of the results has shown that the correlation for the subset of sources contained in the correlated clusters is driven by the values of the $nuv-u$ color, an indicator of the presence of the ``big-blue-bump'' component in the SEDs of the sources. In the case of the experiments performed on the blazar sample, CLaSPS has revealed a previously unknown correlation between the spectral classification of the blazars into BZQs, BZBs and BZUs (Sec.~\ref{subsec:dataset2}) and their distribution in the \emph{feature} space generated by mid-infrared, near-infrared and optical colors. Further investigation has shown that the correlation is almost entirely attributable to the peculiar pattern followed by BZCat and $\gamma$-ray detected blazars in the WISE mid-infrared color space (Sec.~\ref{subsec:application2}). The implications of this pattern for the modeling of blazar emission mechanisms, and a novel method for the selection of candidate blazars from mid-infrared survey photometric data based on such a pattern, have been investigated in other works by some of the authors~\citep{massaro2011,dabrusco2012,massaro2012}. While in this paper we have described applications of CLaSPS to inhomogeneous samples obtained by federating data from general purpose large area surveys, we plan to apply the method to large homogeneous samples of extragalactic sources, like the Chandra-{\it COSMOS} dataset~\citep{elvis2009,civano2012}.
CLaSPS selects the optimal clustering based on the \emph{scores}, a measure of the correlation between the clustering membership and a given partition of one external observable used as \emph{label}. For this reason, as discussed in Sec.~\ref{subsec:comparison}, CLaSPS differentiates itself from ``cluster ensembles'' techniques. Nonetheless, three different aspects of the current CLaSPS method could be improved by the application of cluster ensembles techniques: 1) the limited number of clustering techniques used may bias the exploration of the clusterings towards particular aspects of the \emph{feature} distribution of the dataset considered. Moreover, CLaSPS does not take into account the properties and, potentially, the weaknesses of each distinct clustering technique; 2) the choice of the optimal clustering is based on a single \emph{label} at a time. Correlations between a given set of clusterings and multiple \emph{labels} cannot be captured by CLaSPS, but are left to the interpretation of multiple distinct \emph{label} experiments; 3) the choice of the optimal clusterings in CLaSPS is based on a single ``view'' of the dataset, i.e. on clusterings obtained using a single set of sources and/or \emph{features}. The first point could be easily addressed by widening the portfolio of clustering methods used by CLaSPS. Then, cluster ensembles methods could be applied to subsets of clusterings (grouped by total number of clusters or by type of clustering method) to determine the ``consensus clustering'' of each subset of clusterings. The \emph{scores} would then be evaluated on the set of consensus clusterings determined in this way. The second point could be similarly addressed by searching for the ``consensus clusterings'' of the set of optimal clusterings selected through the \emph{scores} values for different \emph{labels}. The third point is particularly important for astronomy, because most astronomical datasets present different numbers of \emph{features} available for different members of the dataset. In its current implementation, CLaSPS can be applied only to clusterings obtained with a fixed given subset of sources and \emph{features}. CLaSPS, in this scenario, can be applied separately to distinct groups of sources in the dataset with a set of common \emph{features}. In order to overcome this limitation, distinct sets of clusterings could be obtained for different ``views'' of the dataset, i.e. different subsets of the dataset with the same set of \emph{features} available. Then, the multiple clusterings obtained on the different views of the dataset with different clustering techniques could be consolidated into a single set of clusterings through the application of cluster ensembles techniques on the groups of clusterings obtained with the same clustering technique on distinct views of the dataset. This approach is similar to the ``\emph{features} distributed clustering'' and ``object distributed clustering'' scenarios typical of practical applications of cluster ensembles \citep{strehl2003}. A further improvement to the CLaSPS method is related to the choice of the classes of the \emph{labels}. In the frequent case of quantitative continuous \emph{labels}, the choice of the binning is crucial for the evaluation of the \emph{scores} and, in turn, for the determination of the correlations among \emph{features} and \emph{labels}, if any.
While letting the astronomer decide the binning of the \emph{labels} on the basis of \emph{a priori} knowledge of the specific topic is a viable option in most cases, for instance when an already known correlation is being generalized or when a generic problem is investigated (e.g. the characterization of astronomical sources based on their photometric parameters, as in this paper), it can limit the generality of the method when the aim of the experiments is a ``blind'' exploration of multi-dimensional astronomical datasets. In order to improve this aspect of the CLaSPS method, we are exploring the possibility of complementing the astronomer's definition of classes of \emph{labels} with spontaneous classes determined from the intrinsic distribution of the \emph{labels} themselves by the application of non-parametric KD techniques.
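One concrete realization of such spontaneous classes, sketched below under the assumption that a kernel density estimate (KDE) of the \emph{label} distribution is adequate, is to place class boundaries at the local minima of the estimated density. This is an illustration of the idea, not the final method adopted for CLaSPS; the function name and the bandwidth choice are ours.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmin

def spontaneous_bins(label_values, grid_size=512):
    """Place class boundaries at the local minima of a Gaussian KDE
    of a continuous label (bandwidth: Scott's rule, gaussian_kde's
    default). Returns bin edges; use np.digitize(values, edges[1:-1])
    to assign each source to a spontaneous class."""
    values = np.asarray(label_values, dtype=float)
    grid = np.linspace(values.min(), values.max(), grid_size)
    density = gaussian_kde(values)(grid)
    minima = argrelmin(density)[0]   # indices of local density minima
    return np.concatenate(([grid[0]], grid[minima], [grid[-1]]))
\end{verbatim}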
| 12
| 6
|
1206.2919
|
1206
|
1206.5222_arXiv.txt
|
We present high--resolution, long--slit spectroscopic observations of five compact ($\leq$ 10 arcsec) planetary nebulae located close to the Galactic bulge region and for which no high spatial resolution images are available. The data have been drawn from the San Pedro M\'artir kinematic catalogue of Galactic planetary nebulae (L\'opez et al. 2012). The central star in four of these objects (M 1--32, M 2--20, M 2--31 and M 3--15) is of WR--type and the fifth object (M 2--42) has a wels--type nucleus. These observations reveal the presence in all of them of a dense, thick, torus-like equatorial component and of high--speed, collimated, bipolar outflows. The code SHAPE is used to investigate the main morpho--kinematic characteristics and reproduce the 3--D structure of these objects, assuming a cylindrical velocity field for the bipolar outflows and a homologous expansion law for the torus/ring component. The deprojected expansion velocities of the bipolar outflows are found to be in the range of 65 to 200 km $\rm{s^{-1}}$, whereas the torus/ring component shows much slower expansion velocities, in the range of 15 to 25 km $\rm{s^{-1}}$. It is found that these planetary nebulae have very similar structural components and that the differences in their emission line spectra derive mostly from their different projections on the sky. The relation of their morpho--kinematic characteristics to the WR--type nuclei deserves further investigation.
|
Low to intermediate mass stars (0.8 to 8 $\rm{M_\odot}$) undergo spectacular structural changes during the last phases of their evolution. According to the interacting stellar wind model (ISW; Kwok, Purton \& Fitzgerald 1978), spherically symmetric planetary nebulae (PNe) are formed by the interaction of two isotropic stellar winds: a slow and dense one from the Asymptotic Giant Branch (AGB) phase and a fast and tenuous one during the PN phase. The generalized ISW model considers in addition the contribution of an equatorial density enhancement at the exit of the AGB phase that produces a density contrast leading to the formation of axisymmetric shapes (e.g. Balick 1987), which may range from mildly elliptical to bipolar. In fact, the majority of PNe and proto--PNe (PPNe) show axisymmetric morphologies. In some cases highly collimated, high--speed bipolar outflows are also found. The causes of the equatorial density enhancement and the jet-like outflows are still under debate (e.g. Balick \& Frank 2002), the two most likely candidates being the presence of magnetic fields (e.g. Garcia--Segura \& L\'opez 2000, Frank \& Blackman 2004) and post-common envelope, close binary nuclei (e.g. Soker \& Livio 1994, De Marco 2009). Sahai and Trauger (1998) proposed the presence of highly collimated outflows, developed during the post--AGB or PPNe phase, as a shaping mechanism for bipolar and multi--polar PNe. All these elements represent the main considerations in recent morphological classification studies of PNe (e.g. Parker et al. 2006, Miszalski et al. 2008, Sahai et al. 2011, Lagadec et al. 2011). However, imaging alone can in some cases be deceiving in describing the real shape of a PN, owing to the inherent uncertainty introduced by the projection of a three-dimensional nebula onto the plane of the sky. The simplest example is that of an axisymmetric nebula, such as a bipolar nebula with a thick waist observed pole-on, in which case the nebula appears as a round doughnut projected on the sky. In these cases spatially resolved, high spectral resolution spectroscopy becomes an ideal tool to explore the three-dimensional structure of the nebula by examining the Doppler shifts in the emission line profile and assuming, to a first approximation, a homologous expansion for the nebula. Most of these morpho-kinematic studies have been performed on relatively large, spatially resolved PNe (e.g. L\'opez et al. 2012, Clark et al. 2010, Garc\'{\i}a--D\'{\i}az et al. 2009), but they can also be very revealing when studying spatially unresolved, compact PNe, as we show here.
\begin{table*} \centering \caption[]{Source Properties} \label{table5} \begin{tabular}{lllllll} \hline Name & PNG & RA & Dec & distance (kpc)$\rm ^a$ & V$_{\rm sys}$ (km $\rm{s^{-1}}$) & central star$\rm ^b$ \\ \hline M 1--32 & 011.9+04.2 & 17 56 20.1 & -16 29 04.6 & 4.8$\pm$1.0 & $-105$ & [WO4]pec \\ M 2--20 & 000.4--01.9 & 17 54 25.4 & -29 36 08.2 & 9.5$\pm$1.9 & $+60$& [WC5-6] \\ M 2--31 & 006.0--03.6 & 18 13 16.1 & -25 30 05.3 & 6.3$\pm$1.3 & $+155$& [WC4] \\ M 2--42 & 008.2--04.8 & 17 01 06.2 & -34 49 38.6 & 9.5$\pm$1.9 & $+105$& wels \\ M 3--15 & 006.8+04.1 & 17 45 31.7 & -20 58 01.8 & 6.8$\pm$1.4 & $+100$& [WC4] \\ \hline \end{tabular} \medskip{} \begin{flushleft} ${\rm ^a}$ Stanghellini \& Haywood 2010\\ ${\rm ^b}$ Acker \& Neiner 2003 \end{flushleft} \end{table*} In this work, we perform a morpho--kinematic study of five relatively bright, compact PNe with no discernible structure and with seeing-limited angular sizes ranging from 5 to 10 arcsec. No high spatial resolution images of these objects were found in the literature or in the usual repositories of images of PNe. These objects were chosen from the San Pedro M\'artir kinematic catalogue of Galactic Planetary Nebulae (L\'{o}pez et al. 2012) on the basis of their line emission spectra, which show the presence of fast, collimated bipolar outflows. The objects selected are M 1--32, M 2--20, M 2--31, M 2--42 and M 3--15. Based on their galactic coordinates, distances and systemic velocities, they appear to be located in or close to the Galactic bulge (see Table 1). The central stars of four of them have been classified as Wolf-Rayet type (Acker \& Neiner 2003) and the fifth one as a weak emission--line star or wels (Tylenda, Acker \& Stenholm 1993). As mentioned above, the long-slit spectroscopic observations reveal the presence of highly collimated, fast, bipolar outflows surrounded by a thick equatorial enhancement, in the form of a torus or a ring. We combine these data with the 3--D morpho--kinematic code SHAPE (Steffen \& L\'opez 2006, Steffen et al. 2011) to analyze the 3--D structure of these outflows and the dependence of their appearance on the projection on the sky. In Section 2, the observations and data reduction are presented. In Section 3, we describe the parameters used in the morpho--kinematic code SHAPE as well as the modelling results. We finish by summing up the results of this work in Section 4.
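For reference, the two kinematic laws adopted in the SHAPE models take a simple analytic form. The sketch below is our illustrative reduction rather than the exact SHAPE parametrization: $v_0$ and $r_0$ are normalization constants for the homologous (torus/ring) component, and $i$ is the inclination of the outflow symmetry axis to the line of sight, so that a cylindrical velocity field (material moving parallel to the axis at speed $v_{\rm jet}$) projects as
\begin{equation}
v_{\rm torus}(r) \simeq v_0\,\frac{r}{r_0}, \qquad v_{\rm obs} = v_{\rm jet}\cos i \;\;\Longrightarrow\;\; v_{\rm jet} = \frac{v_{\rm obs}}{\cos i}.
\end{equation}
The deprojected outflow velocities quoted in the abstract follow from the second relation once $i$ is constrained by the model fits.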
|
In this work, we carried out a morpho--kinematic study of five compact PNe. The data were drawn from the San Pedro M\'artir Kinematic Catalogue of Galactic Planetary Nebulae (L\'opez et al. 2012). The sample was selected on the basis of compact PNe with no discernible or apparent structure in their ground--based images, no high spatial resolution imaging available (e.g. {\it HST}), and the presence of collimated, high-speed, bipolar outflows in their emission line spectra. Fortuitously, the five PNe selected under these criteria turned out to have WR--type central stars. It is worth pointing out that the SPM catalogue at present lists 84 PNe with WR--type nuclei, 92 spatially resolved bipolar PNe (of which 43 are labeled as showing fast projected outflows) and 152 PNe from the Galactic bulge region. We intend to conduct next a thorough search of the catalogue to select from the lists mentioned above additional PNe that match criteria similar to those adopted here. In this work, we demonstrate that with practically no morphological information and with limited spectral information, consisting of long--slit, spatially resolved echelle spectra combined with the reconstruction code SHAPE, a fair morpho-kinematic representation of the nebulae can be achieved. The process yields a first-order description of the main morphological elements of the nebula and the inclination of the main outflow symmetry axis with respect to the line of sight, and with it an estimate of the true expansion velocity of the bipolar outflow. We find that these planetary nebulae have very similar structural components and that the differences in their emission line spectra derive mostly from their different projections on the sky and the degree of collimation of the bipolar outflows. All cases studied here are characterized by the presence of an equatorial density enhancement or toroid, a collimated bipolar outflow and an outer shell. The bipolar expansion velocities range from 70 to 200 km $\rm{s^{-1}}$, whereas the tori are found to expand rather slowly, of the order of 15 to 25 km $\rm{s^{-1}}$. The results of this work strongly indicate the importance of evaluating the true structure of compact PNe for statistical studies that consider the different morphological classes in evolution and population synthesis studies. Given the distance to the Galactic bulge, there is a large population of unresolved PNe there that needs to be investigated in detail. Whether the fact that all the compact PNe studied here turned out to have WR--type nuclei has statistical significance for the bulge PNe with collimated bipolar outflows deserves further investigation. Likewise, a detailed reinvestigation of the kinematics of PNe with WR--type nuclei, such as the work of Medina et al. (2006), with spatially resolved, long-slit spectra of higher spectral resolution should yield fruitful results and possibly establish a stronger link between PNe with WR--type nuclei and fast, collimated, bipolar outflows. It is also interesting to note that L\'opez et al. (2011) associate toroidal structures with close binary central stars and Miszalski (2011) also associates bipolar outflows with close binary nuclei in PNe. Given the characteristics of the present sample, it would be interesting to search for the presence of binary nuclei in these objects. \\ \\ \\ We acknowledge the financial support of DGAPA-UNAM through grants IN110011 and IN100410. S. A. gratefully acknowledges a postdoctoral scholarship from DGAPA-UNAM.
The authors are grateful to the anonymous referee for their valuable comments and suggestions.
| 12
| 6
|
1206.5222
|
1206
|
1206.5708_arXiv.txt
|
Jets and outflows are an integral part of the star formation process. While there are many detailed studies of molecular outflows towards individual star-forming sites, few studies have surveyed an entire star-forming molecular cloud for this phenomenon. The 100 square degree FCRAO CO survey of the Taurus molecular cloud provides an excellent opportunity to undertake an unbiased survey of a large, nearby molecular cloud complex for molecular outflow activity. Our study provides information on the extent, energetics and frequency of outflows in this region, which are then used to assess the impact of outflows on the parent molecular cloud. The search identified 20 outflows in the Taurus region, 8 of which were previously unknown. Both \co\ and \coa\ data cubes from the Taurus molecular cloud map were used, and the dynamical properties of the outflows are derived. Even for previously known outflows, our large-scale maps indicate that many of the outflows are much larger than previously suspected, with eight of the flows (40\%) being more than a parsec long. The mass, momentum and kinetic energy of the 20 outflows are compared to the repository of turbulent energy in Taurus. Comparing the energy deposition rate from outflows to the dissipation rate of turbulence, we conclude that outflows by themselves cannot sustain the observed turbulence seen in the entire cloud. However, when the impact of outflows is studied in selected regions of Taurus, it is seen that, locally, outflows can provide a significant source of turbulence and feedback. The L1551 dark cloud, which is just south of the main Taurus complex, was not covered by this survey, but the outflows in L1551 have much higher energies than the outflows in the main Taurus cloud. In the L1551 cloud, outflows can not only account for the turbulent energy present, but are probably also disrupting their parent cloud. We conclude that for a molecular cloud like Taurus, an L1551-like episode occurring once every $10^5$ years is sufficient to sustain the turbulence observed. Five of the eight newly discovered outflows have no known associated stellar source, indicating that they may be driven by embedded Class 0 sources. In Taurus, 30\% of Class I sources and 12\% of Flat Spectrum sources from the Spitzer YSO catalogue have outflows, while 75\% of known Class 0 objects have outflows. Overall, the paucity of outflows in Taurus compared to the embedded population of Class I and Flat Spectrum YSOs indicates that molecular outflows are a short-lived stage marking the youngest phase of protostellar life. The current generation of outflows in Taurus highlights an ongoing period of active star formation, while a large fraction of YSOs in Taurus has evolved well past the Class I stage.
|
From the emergence of a hydrostatic core, the collapse of a proto-stellar core is accompanied by winds and mass loss \citep{lada1985}, probably driven by magnetospheric accretion \citep[e.g.][]{koenigl1991,edwards1994,hartmann1994}. Integrated over time, the effect of a wind from a young star is to blow away the placental material left over from its birth, which shrouds it during its earliest evolution. The discovery of bipolar molecular outflows has been a key to the understanding of this process \citep[e.g.][]{snell1980}. These molecular outflows have dimensions of up to several parsecs, masses comparable to or greater than those of their driving sources, and tremendous kinetic energies, typically 10$^{45}$ ergs \citep{bally1983,snell1987}. Such massive flows must represent swept-up material, as the winds emerging from the star and/or its circumstellar disk interact with the ambient medium. While jets and outflows are an integral part of the star formation process, only a few of the many detailed studies of molecular outflows towards individual star-forming sites have a molecular cloud-wide view of this phenomenon. One of these is the recent study of outflows in Perseus \citep{arce2010}. The Taurus molecular cloud, with its proximity (140 pc) and displacement from the Galactic Plane (b$\sim -19^\circ$), affords high spatial resolution views of an entire star forming region with little or no confusion from background stars and gas. The most complete inventory of the molecular gas content within the Taurus cloud is provided by \citet{ungerechts1987}, who observed \co\ J=1-0 emission from 750 deg$^2$ of the Taurus-Auriga-Perseus regions. They estimate the molecular mass resident within the Taurus-Auriga cloud to be $3.5\times 10^4$~\Msun. However, the $30^\prime$ angular resolution of this survey precludes an examination of the small scale structure of the cloud. The recently completed 100 square degree FCRAO CO survey, with an angular resolution of 45$^{\prime\prime}$, sampled on a 20$^{\prime\prime}$ grid, and covering \co\ and \coa\ simultaneously, reveals a very complex, highly structured cloud morphology with an overall mass of $2.4\times 10^4$~\Msun\ \citep{narayanan2008,goldsmith2008}. This survey provides an excellent opportunity to perform an unbiased survey of a large, nearby molecular cloud complex for molecular outflow activity. \citet{goldsmith2008} divide the Taurus cloud into eight regions of high column density, comprising L1495, B213, L1521, Heiles Cloud 2, L1498, L1506, B18 and L1536, and tabulate the masses and areas of these well-known regions. Of these, the L1495 and B213 clouds have recently been studied using JCMT HARP \co\ J=3-2 observations, searching for molecular outflows from young stars \citep{davis2010}, and as many as 16 outflows were detected. Targeted studies with higher angular resolution of \coa\ and C$^{18}$O emission from individual sub-clouds of Taurus reveal some of the relationships between the molecular gas, magnetic fields, and star formation, but offer little insight into the coupling of these structures to larger scales and features, nor do they provide an unbiased search for molecular outflows \citep{schloerb1984,heyer1987,mizuno1995,onishi1996}. A list of $\sim 300$ Young Stellar Objects (YSOs) derived from multi-wavelength observations has been compiled by \citet{kenyon1995}, and this list was compared to the column density distribution of molecular gas by \citet{goldsmith2008}.
This list of young stars has recently been complemented by observations from the Spitzer Space Telescope \citep{rebull2010,luhman2010}. A comprehensive review of the entire Taurus region is provided by \citet{kenyon2008}. In this paper, we use the up-to-date list of young stars from the Spitzer observations together with the list from \citet{kenyon2008}. A list of Molecular Hydrogen emission-line Objects (MHOs) that includes the Taurus region has also been recently compiled \citep{davis2010b}, and this list is also compared against our data in this paper. It should be noted that the FCRAO Taurus Molecular Cloud survey has an overall {\it rms} uncertainty (in T$_A^*$~K units) of $\sim 0.58$~K and $\sim 0.26$~K in the \co\ and \coa\ transitions, respectively \citep{narayanan2008}. Detecting all outflow sources in Taurus was never a goal of this survey; nevertheless, the {\it rms} sensitivity levels reached are lower than the original goals of the project, and this allows us to use the survey to probe for outflows. In this paper, we provide an unbiased search for outflows in the FCRAO Taurus survey, identify their driving sources, evaluate outflow properties, and compare these properties with those of the driving sources and the associated molecular cloud environments. The structure of this paper is as follows. In \S\ref{observations}, we summarise the observations and describe our data processing and analysis with respect to the search criteria used to detect outflows, and the subsequent analysis of their properties. In \S\ref{results}, we summarise the main results and tabulate the properties of the detected outflows. In \S\ref{discussion}, we compare the outflows against the energetics of the driving sources and the cloud environment. In \S\ref{conclusions}, we summarise our results.
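Before proceeding, it is useful to fix the bookkeeping behind the outflow properties evaluated in \S\ref{results}. The sketch below assumes optically thin \co\ emission with a constant CO-to-H$_2$ conversion factor and ignores opacity and inclination corrections; the constants, function and variable names are illustrative choices made here, not the survey pipeline.
\begin{verbatim}
import numpy as np

# Illustrative constants (assumptions, not the adopted values)
X_CO = 2.0e20   # cm^-2 (K km/s)^-1, CO-to-H2 conversion factor
M_H2 = 3.35e-24 # g, mass of an H2 molecule (no He correction)

def outflow_properties(T_mb, v_axis, pixel_area_cm2, v_sys):
    """Mass, momentum and kinetic energy (cgs) of an outflow from a
    (n_chan, n_pix) brightness-temperature cutout T_mb [K] with
    channel velocities v_axis [km/s] and cloud systemic velocity
    v_sys [km/s]. Optically thin CO, constant X_CO assumed."""
    dv = abs(v_axis[1] - v_axis[0])          # channel width [km/s]
    col = X_CO * T_mb * dv                   # N(H2) per channel [cm^-2]
    m_chan = (col * M_H2 * pixel_area_cm2).sum(axis=1)  # g / channel
    v_rel = np.abs(v_axis - v_sys) * 1e5     # cm/s, relative to cloud
    mass = m_chan.sum()
    momentum = (m_chan * v_rel).sum()
    energy = 0.5 * (m_chan * v_rel**2).sum()
    return mass, momentum, energy
\end{verbatim}
A dynamical timescale then follows from the lobe length divided by a characteristic flow velocity, and the mechanical luminosity is the kinetic energy divided by that timescale.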
|
\label{conclusions} An unbiased study of the entire 100 square degrees of the \co\ and \coa\ data in the FCRAO Taurus Molecular Cloud Survey was performed in order to identify high-velocity features that could be associated with molecular outflows from YSOs. The FCRAO survey of the Taurus Molecular Cloud was not designed to exhaustively detect all the outflows in Taurus, but the sensitivity reached in \co\ and \coa\ was better than the original goals of the project, and allows an unbiased search for outflows in this nearby star-forming region. Our procedure for identifying outflows in an unbiased way utilises a combination of integrated intensity maps, position-velocity images, and average spectra inside polygonal areas representing presumed outflow regions. Using our search strategy, we identify 20 outflows in Taurus, of which 8 are new detections. Our survey fails to detect three other outflows in the region that have been previously reported and confirmed in the literature. The weak nature of these three outflows, as reported in previous studies, is consistent with them not being detected at the sensitivity limits reached in our survey. Eight of the 20 outflows (40\%) are parsec-scale in length, and four of them are new detections. Even amongst the known outflows that are seen to be parsec-scale in our survey, the outflow lengths are much larger than previously suspected. We detect outflows in 30\% of Class I sources and 12\% of Flat Spectrum sources from the subset of embedded YSOs in the Taurus Spitzer catalogue. Five of the outflows reported in this study have driving sources with no known counterparts in the Spitzer catalogue, indicating that they are likely Class 0 objects. Our study detects outflows in 75\% of known Class 0 objects in Taurus. Based on the dynamical timescales derived for our outflows and the non-detection of outflows towards a large number of Spitzer Class I and Flat Spectrum sources, we conclude that outflows are a very short-lived phase in protostellar evolution, and that most embedded YSOs in Taurus are past this stage of their lives. We compare the combined energetics of the detected outflows to the observed cloud-wide turbulence in Taurus, and conclude that in the main 100 square degree region of Taurus covered by the survey, outflows lack the energy needed to feed the observed turbulence. But if we include the very active L1551 star-forming region just south of the main Taurus complex, which features some very powerful outflows, we determine that the energy from outflows is only a factor of 20 lower than the energy present in turbulence. Moreover, when comparing the net luminosity of outflows from the Taurus region studied, including the L1551 dark cloud, the luminosity in the outflows is able to sustain the turbulent dissipation rate seen in Taurus. The energetics of smaller sub-regions of Taurus and of the L1551 dark cloud region are compared to the repository of turbulent and gravitational energies in these sites. Our comparison region, the L1551 dark cloud, is anomalous in that its outflows are powerful enough not only to easily account for the turbulence in that cloud but also to unbind it. The regions with active outflows in the 100 square degrees of Taurus that we studied do not have enough energy to disrupt the cloud, but do have enough luminosity to be a major player in sustaining the turbulence in their parent clouds.
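The region-by-region comparison above amounts to the standard bookkeeping sketched below, where $M$ and $\sigma$ are the mass and one-dimensional velocity dispersion of the region, $t_{\rm diss}$ is the turbulent dissipation time, and $E_{{\rm flow},i}$ and $t_{{\rm dyn},i}$ are the kinetic energy and dynamical time of outflow $i$ (the symbols here are generic placeholders; the adopted values are discussed in the text):
\begin{equation}
L_{\rm turb} \simeq \frac{E_{\rm turb}}{t_{\rm diss}} \simeq \frac{\tfrac{3}{2}\,M\sigma^{2}}{t_{\rm diss}}, \qquad L_{\rm flow} = \sum_{i}\frac{E_{{\rm flow},i}}{t_{{\rm dyn},i}},
\end{equation}
so outflows can sustain the turbulence in a given region only where $L_{\rm flow}\gtrsim L_{\rm turb}$.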
We also conclude that for molecular clouds like Taurus, an L1551-like outflow episode occurring once every $10^5$ years is sufficient to sustain the observed turbulence in the cloud.
| 12
| 6
|
1206.5708
|
1206
|
1206.3154_arXiv.txt
|
{Debris disks are commonly considered to be a by-product of planet formation. Structures in debris disks induced by planet-disk interaction are a promising means of constraining the existence and properties of embedded planets.} {We investigate the observability of structures in debris disks induced by planet-disk interaction with future facilities in a systematic way. High-sensitivity, high angular resolution observations with large \mbox{(sub-)mm} interferometers and large space-based telescopes operating in the near- to mid-infrared wavelength range are considered.} {The observability of debris disks with the Atacama Large Millimeter/submillimeter Array (ALMA) is studied on the basis of a simple analytical disk model. Furthermore, $N$-body simulations are used to model the spatial dust distribution in debris disks under the influence of planet-disk interaction. From these simulations, images ranging from optical scattered light to millimeter thermal re-emission are computed. Available information about the expected capabilities of ALMA and the {\it James Webb} Space Telescope (JWST) is used to investigate the observability of characteristic disk structures with these facilities through spatially resolved imaging.} {Our simulations show that planet-disk interaction can result in prominent structures over the whole wavelength range considered. The exact result depends on the configuration of the planet-disk system and on the observing wavelength, which provides the opportunity to detect and characterize extrasolar planets in a range of masses and radial distances from the star that is not accessible to other techniques. Facilities that will be available in the near future in both wavelength ranges considered are shown to provide the capabilities to spatially resolve and characterize structures in debris disks that arise from planet-disk interaction. Limitations are revealed, and suggestions for possible instrument setups and observing strategies are given. In particular, ALMA is limited by its sensitivity to surface brightness, which requires a trade-off between sensitivity and spatial resolution. Space-based mid-infrared observations will be able to detect and spatially resolve regions in debris disks even at a distance of several tens of AU from the star, where the emission from debris disks in this wavelength range is expected to be low.} {Both ALMA and the planned space-based near- to mid-infrared telescopes will provide unprecedented capabilities to study planet-disk interaction in debris disks. In particular, a combination of observations at both wavelength ranges will provide very strong constraints on the planetary/planetesimal systems.}
|
\label{intro} The dust detected in debris disks is thought to be removed from those systems by the stellar radiation on time scales that are short compared to their ages. This means that the dust must be transient or, more likely, continuously replenished by ongoing collisions of bigger objects such as planetesimals left over from the planet formation process \citep[for a recent review see, e.g.,][]{kri10}. The presence of planets and debris disks is hence thought to be correlated. In a system with a debris disk and one or more planets, one would expect gravitational interaction between the dust grains and the planet, trapping grains into resonances \citep{wya06, wol07, sta08, sta09b}. This results in structures in the disk that may be observable, which in turn provides a method to infer and characterize planets in a regime of masses, brightnesses, and radial distances from the star that is not accessible through other techniques such as radial velocity measurements or direct imaging \citep[e.g.,][]{udr08,mar08,kal08}. Clumpy structures in debris disks have been observed in several cases (e.g., $\epsilon$\,Eri, \citealt{gre98}; AU\,Mic, \citealt{liu04}; HD\,107146, \citealt{cor09, hug11}). In this work, we investigate the observability of structures in debris disks in a systematic way. We set up several initial conditions for the planetary mass and orbit as well as for the dust distribution in the disk, from which we simulate the spatial dust distribution using $N$-body simulations. Our results from the dynamical simulations are consistent with earlier works \citep[e.g.,][]{hol03,wya06,sta08}. In contrast to these works, which focused mostly on the phenomenological and theoretical description of the structures as well as on the processes of planet-disk interaction, the goals of the present work are the evaluation of the observability and the development of strategies for observations of planet-disk interaction. Thus, we use the $N$-body simulations as a tool to produce realistic structures in debris disks, but discuss these structures only briefly, emphasizing the consequences of different planet-disk configurations for the observability. To make predictions about the feasibility of detecting and spatially resolving characteristic structures, available information on the capabilities of the Atacama Large Millimeter/submillimeter Array (ALMA) and the {\it James Webb} Space Telescope (JWST) is used as representative of near-future facilities. We describe the approach used in our $N$-body simulations and for the image generation in Sect.~\ref{modust}. The results from the dynamical simulations are presented and briefly discussed in Sect.~\ref{dyn_res}. In Sects.~\ref{obs_ALMA} to~\ref{obs_JWST}, we evaluate how large-scale disk structures may be observed with different facilities. Conclusions are drawn in Sect.~\ref{conc}.
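The resonant structures referred to above occur at radii fixed by Kepler's third law. For a planet with semi-major axis $a_{\rm p}$, an exterior $(p+q){:}p$ mean-motion resonance traps grains near
\begin{equation}
r_{\rm res} \simeq a_{\rm p}\left(\frac{p+q}{p}\right)^{2/3},
\end{equation}
so that, for example, the 3:2 resonance lies at $r \simeq 1.31\,a_{\rm p}$. This is standard celestial mechanics, quoted here only to fix the characteristic scales of the structures discussed below.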
|
\label{conc} We demonstrated that planet-disk interaction may produce detectable structures in the dust distribution of debris disks, depending on the configuration of the planet-disk system and on the observing wavelength. The detected structures enable one to infer and characterize extrasolar planets in a range of masses and radial distances from the star unattainable by other techniques. In particular, detailed modeling of a combination of high-sensitivity, high spatial resolution observations at mid-infrared and \mbox{(sub-)mm} wavelengths is able to put strong constraints on the configuration of the planet-disk system. We demonstrated that HST scattered-light observations are in most cases unable to unambiguously detect such structures, in particular in debris disks seen face-on. In contrast, both ALMA and the JWST will provide the sensitivity and resolution needed to detect and spatially resolve the dust distribution in debris disks at a level that allows one to distinguish between different planet-disk configurations. However, we also demonstrated limitations of the instruments. ALMA is unable to reach both high sensitivity to surface brightness and high spatial resolution simultaneously, requiring a sophisticated observing strategy to optimize the outcome of planned observations. For a debris disk with a typical SED shape, intermediate observing wavelengths around 1\,mm and small to intermediate array extents are ideally suited to reach a high S/N and reasonable spatial resolution with ALMA. The highest sensitivity at reasonable spatial resolution is reached if the resolution of the observations is similar to the scale of the expected structures, as long as these structures are bright enough. Mid-infrared observations of debris disks with the JWST will be able to detect and spatially resolve dust in debris disks even at a distance of several tens of AU, where the emission from debris disks in this wavelength range is expected to be low. For such observations, stellar PSF subtraction with an accuracy of a few percent is necessary to unequivocally detect structures in the spatial distribution of the dust.
| 12
| 6
|
1206.3154
|
1206
|
1206.6547_arXiv.txt
|
\label{intro} Source finding reduces a dataset to a manageable abstract representation: a collection of objects with physically meaningful properties. When a dataset becomes too large, it is virtually impossible to work with directly, and the catalogue becomes the only method of data exploration. This is the case for the two Australian Square Kilometre Array Pathfinder (ASKAP) HI surveys, the Wide-field ASKAP Legacy L-band All-sky Blind surveY (WALLABY) (\citet{Koribalski_2009}; Koribalski, B., Staveley-Smith, L. et~al., in preparation) and the Deep Investigation of Neutral Gas Origins (DINGO). Individual ASKAP spectral line observations will be at least 2,048 by 2,048 by 16,384 voxels, which is 256GB (512GB) in float (double) precision and only directly accessible using supercomputing facilities. The sheer size of the ASKAP spectral line observations, combined with the number of observations required to carry out the WALLABY survey ($\sim$ 1200), necessitates automated source finding. An additional benefit is the reproducibility of automated source finders, which allows their performance to be incorporated into existing and future simulations of the WALLABY survey. The WALLABY team has been investigating existing source finding techniques as well as developing novel methods. The essential metrics for assessing automated source finding are reliability and completeness. Completeness is the fraction of real sources that a source finder recovers, and reliability is the fraction of detections that are actual sources. All automated source finders can be characterised by a `performance curve', which describes the combination of reliability and completeness that a source finder achieves on a given dataset. It is common practice to pre-process a dataset before applying a source finding method. The goal of pre-processing is to improve both the completeness and reliability of the source finder. This is achieved by `correcting' the dataset: in a `corrected' dataset the noise behaves as the source finder assumes, the dataset is free from background structure, and sources have maximised signal-to-noise ratios. It should be noted that the term `signal-to-noise ratio' in this context does not account for the Jy/beam units of radio observations. Technically, a radio observation should be re-scaled for the new beam size when it is smoothed. This involves reversing the initial beam scaling, which in some circumstances increases the noise level more than it is reduced by smoothing. For the purposes of pre-processing and source finding, though, units are irrelevant. The signal-to-noise ratios discussed here therefore refer to the unscaled signal-to-noise ratios of radio observations, which are always enhanced by smoothing. In this paper we compare four pre-processing methods for ASKAP HI datacubes: iterative median smoothing, mathematical morphology subtraction, wavelet de-noising and linear smoothing. We compare these pre-processing methods by examining the effect they have on the performance curve of a simple intensity threshold source finder. We analyse the effect on an intensity threshold performance curve because intensity thresholding is at the core of most source finders, e.g. {\sc SExtractor} \citep{1996A&AS..117..393B} and {\sc SFind} \citep{2002AJ....123.1086H}. We include linear smoothing and wavelet de-noising in our comparison because they are among the most commonly used pre-processing methods. These two methods take contrasting approaches.
Linear smoothing uses averaging or convolution to re-distribute the flux within the datacube so that noise fluctuations are reduced more than source signal, which results in increased source signal-to-noise ratios. Wavelet de-noising, however, tries to subtract noise directly from the datacube. A wavelet transform decomposes a datacube into signal on different scales at all positions within the datacube. Signal on scales smaller and larger than the expected size of sources can then be removed. Alternatively, the noise level on different scales can be measured from the wavelet transform, and only `significant' signal (as defined in some way by a user) on each scale is retained. We use the {\sc Duchamp} source finder \citep{2008glv..book..343W,Duchamp2} to implement both, because {\sc Duchamp} is not only a commonly used state-of-the-art source finder, but also the default ASKAP source finder. We include iterative median smoothing in our comparison because \citet{2009TAS.37.3.1172} has shown that iterative median smoothing produces a larger gain in source signal-to-noise ratio than linear smoothing methods. The key is that calculating a median is a non-linear process that preserves source `edges'. Source edges are preserved because median calculations are insensitive to sample outliers. Crucially, \citet{2009TAS.37.3.1172} found that only two iterations are required, so long as the first iteration uses the smallest smoothing kernel possible. This minimal number of iterations results in a reasonable computational load even when large smoothing kernels are used for the second iteration. We chose to test mathematical morphology subtraction because it is a proven technique for size-filtering images. We can use mathematical morphology to filter out the small-scale information in a dataset and thereby identify large-scale structure in the image \citep{2002PASP..114..427R}. Subtracting this large-scale structure can potentially improve reliability by re-normalising the dataset noise properties, so that the mean of the noise distribution is constant throughout the dataset. There are distinct disadvantages common to all of these pre-processing methods. First, a poor choice of smoothing kernel can actually decrease source signal-to-noise ratios when using linear smoothing, iterative median smoothing or mathematical morphology subtraction. This is dealt with by using multiple smoothing kernels, although this increases the computational load, and the results of the multiple smoothing kernels need to be combined intelligently. Second, all of these pre-processing methods need to account for datasets having different types of dimensions. \citet{2011arXiv1112.3807F} is a good example of a wavelet transform for HI datacubes that accounts for the difference between the RA, Dec angular dimensions and the frequency dimension. There are two additional disadvantages of wavelet de-noising. Computing a wavelet transform can be much more computationally expensive than the other pre-processing methods. Additionally, a requirement of all wavelet transform kernels is that the integral of any kernel is zero. This prevents any individual wavelet transform kernel from matching an astronomical source, which is either a positive feature (emission) or a negative feature (absorption). Consequently, any astronomical source will necessarily exist on multiple scales (and probably at multiple locations).
Depending on how the wavelet transform information is filtered to de-noise a datacube, this can have a negative effect on source finder performance. We carry out our comparison of these source finder pre-processing methods using the Westerbork Synthesis Radio Telescope (WSRT) test datacube of \citet{2011arXiv1112.3162S}. The WSRT test datacube was created by injecting WHISP sources \citep{2002ASPC..276...84V} into a datacube of real WSRT spectral-line noise. This test datacube not only contains real sources embedded in real noise, but its resolution (both angular and spectral) and noise level also closely match those expected of the APERTIF and ASKAP telescopes. In particular, the 30$^{\prime\prime}$ Gaussian beam, 10$^{\prime\prime}$ pixels, 3.86 km\,s$^{-1}$ channels and 1.86 mJy/beam noise level of this test datacube are designed to match WALLABY observations. This allows us to test the `real world' performance of the various pre-processing methods. We illustrate the scale of the WHISP sources and the test datacube using a channel map in Figure \ref{cmap}. \begin{figure}[h] \hspace*{-5mm} \includegraphics[angle=270,width=0.55\textwidth]{Fig1.eps} \caption{A channel map of the WSRT test datacube used in this paper. The source in the centre of this channel map is one of the most spatially resolved sources.} \label{cmap} \end{figure} The rest of this paper is organised in the following way. We begin by presenting the implementations of iterative median smoothing and mathematical morphology used for our comparison in Section \ref{ppimps}. Next, we compare and analyse the performance impact of the various source finder pre-processing methods in Section \ref{analysis}. We finish in Section \ref{conclusion} with our conclusions and recommendations.
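As a concrete illustration of the two-pass scheme of \citet{2009TAS.37.3.1172} described above, a minimal implementation might look as follows; the kernel sizes and the use of {\sc SciPy} are illustrative choices made here, not a prescription of the original paper.
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def iterative_median_smooth(cube, first_kernel=3, second_kernel=7):
    """Two-pass iterative median smoothing of a (vel, dec, ra)
    datacube. The first pass uses the smallest sensible kernel to
    suppress single-voxel noise spikes while preserving source
    edges; the second pass applies the larger science kernel to the
    partially smoothed cube. Kernel sizes are illustrative."""
    pass1 = median_filter(cube, size=first_kernel)
    return median_filter(pass1, size=second_kernel)

# Example usage on pure Gaussian noise at the WALLABY-like level
cube = np.random.normal(0.0, 1.86, size=(64, 64, 64))
smoothed = iterative_median_smooth(cube)
\end{verbatim}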
|
\label{conclusion} We have used the WSRT test datacube of \citet{2011arXiv1112.3162S} to test the real-world performance improvement of linear smoothing, {\sc Duchamp}'s 3-D wavelet de-noising, iterative median smoothing and mathematical morphology subtraction when using intensity thresholding to find sources in ASKAP HI spectral line observations. We generated completeness-reliability performance curves for each pre-processing method and for an unprocessed datacube, which we used as a reference, to investigate the effect of each pre-processing method over a range of input parameters. We found that iterative median smoothing and linear smoothing produce the greatest improvement in source finder performance. We recommend iterative median smoothing over linear smoothing, though, because iterative median smoothing is less affected by merging and fragmentation. In our tests, however, the effect of iterative median smoothing on source finder performance proved to be more highly dependent upon the pre-processing parameters than that of the other methods. The performance improvement offered by {\sc Duchamp}'s 3-D wavelet de-noising was the least sensitive to the choice of input parameters; it is the safest pre-processing method. It should be noted, however, that using a smoothing kernel with a spatial extent smaller than or equal to the datacube's beam size will in general improve source finder performance. We think the Gaussian nature of the WSRT noise is the reason that linear smoothing and iterative median smoothing produce the greatest source finder performance improvement, because smoothing in this case approximates matched filtering. For this reason, we do not expect our results to be applicable to images or datacubes with non-Gaussian noise. We do, however, think that the edge-preserving nature of the iterative median transform is the reason that it does not suffer from fragmentation and merging as badly as linear smoothing, and that this result is applicable to all images and datacubes.
| 12
| 6
|
1206.6547
|
|
1206
|
1206.0011_arXiv.txt
|
We use hydrodynamic simulations with detailed, explicit models for stellar feedback to study galaxy mergers. These high resolution ($\sim 1\,{\rm pc}$) simulations follow the formation and destruction of individual giant molecular clouds (GMCs) and star clusters. We find that the final starburst is dominated by in situ star formation, fueled by gas which flows inwards due to global torques. The resulting high gas densities lead to rapid star formation. The gas is self-gravitating and forms massive ($\lesssim10^{10}\,M_\odot$) GMCs and, subsequently, super-starclusters (with masses up to $10^8\,M_\odot$). However, in contrast to some recent simulations, the bulk of the new stars which eventually form the central bulge are not born in superclusters which then sink to the center of the galaxy. This is because feedback efficiently disperses GMCs after they turn several percent of their mass into stars. In other words, most of the mass that reaches the nucleus does so in the form of gas. The Kennicutt-Schmidt law emerges naturally as a consequence of feedback balancing gravitational collapse, independent of the small-scale star formation microphysics. The same mechanisms that drive this relation in isolated galaxies, in particular radiation pressure from IR photons, extend, with no fine-tuning, over seven decades in star formation rate (SFR) to regulate star formation in the most extreme starburst systems with surface densities $\gtrsim 10^{4}\,\msun\,{\rm pc^{-2}}$. This feedback also drives super-winds with large mass loss rates; however, a significant fraction of the wind material falls back onto the disks at later times, leading to higher post-starburst SFRs in the presence of stellar feedback. This suggests that strong AGN feedback may be required to explain the sharp cutoffs in star formation rate that are observed in post-merger galaxies. We compare the results to those from simulations with no explicit resolution of GMCs or feedback (``effective equation of state'' [EOS] models). We find that global galaxy properties are similar between the EOS and resolved-feedback models. The relic structure and mass profile, and the total mass of stars formed in the nuclear starburst, are quite similar, as is the morphological structure during and after the merger (tails, bridges, etc.). Disk survival in sufficiently gas-rich mergers is similar in the two cases, and the new models follow the same scalings derived for the efficiency of disk re-formation after a merger as derived from previous work with the simplified EOS models. While the global galaxy properties are similar between the EOS and feedback models, sub-galaxy scale properties and the star formation rates can be quite different: the more detailed models exhibit significantly higher star formation in tails and bridges (especially in shocks), and allow us to resolve the formation of super star-clusters. In the new models, the star formation is more strongly time variable and drops more sharply between close passages. The instantaneous burst enhancement can be higher or lower, depending on the details of the orbit and the initial structural properties of the galaxies; first-passage bursts are more sensitive to these details than those at final coalescence.
|
\label{sec:intro} A wide range of observed phenomena indicate that gas-rich mergers are important to galaxy evolution and star formation. In the local Universe, the population of star-forming galaxies appears to transition from ``quiescent'' (non-disturbed) disks (which dominate the {\em total} star formation rate/IR luminosity density) at the luminous infrared galaxy (LIRG) threshold $10^{11}\,\lsun$ ($\dot{M}_{\ast}\sim 10-20\,\msun\,{\rm yr^{-1}}$) to violently disturbed systems at a few times this luminosity. The most intense starbursts at $z=0$, ultraluminous infrared galaxies (ULIRGs; $L_{\rm IR}>10^{12}\,\lsun$), are invariably associated with mergers \citep[e.g.][]{joseph85,sanders96:ulirgs.mergers, evans:ulirgs.are.mergers}, and are fueled by compact, central concentrations of gas \citep{scoville86,sargent87} which provide material to feed black hole (BH) growth \citep{sanders88:quasars}, and to boost the concentration and central phase space density of merging spirals to match those of ellipticals \citep{hernquist:phasespace,robertson:fp}. With central densities as large as $\sim1000$ times those in Milky Way giant molecular clouds (GMCs), these systems provide a laboratory for studying star formation under the most extreme conditions. In addition, various studies have shown that the mass involved in these starburst events is critical for explaining the relations between spirals, mergers, and ellipticals, and has a dramatic impact on the properties of merger remnants \citep[e.g.,][]{LakeDressler86,Doyon94,ShierFischer98,James99, Genzel01,tacconi:ulirgs.sb.profiles,dasyra:mass.ratio.conditions,dasyra:pg.qso.dynamics, rj:profiles,rothberg.joseph:kinematics,hopkins:cusps.ell,hopkins:cores}. Even a small mass fraction of a few percent formed in these nuclear starbursts can have dramatic implications for the mass profile structure \citep{mihos:cusps}, phase space densities \citep{hernquist:phasespace}, rotation and higher-order kinematics \citep{cox:kinematics}, kinematically decoupled components \citep{hoffman:dissipation.and.gal.kinematics,hoffman:mgr.orbit.structure.vs.fgas}, stellar population gradients \citep{soto:ssp.grad.in.ulirgs, kewley:2010.gal.pair.metal.grad.evol,torrey:2011.metallicity.evol.merger}, and growth of the central BH \citep{dimatteo:msigma,hopkins:qso.all,hopkins:seyfert.limits}. At higher redshifts ($z\sim1-3$), galaxies with the luminosities of LIRGs and ULIRGs may be more ``normal'' galaxies, in the sense that they are relatively undisturbed disks rather than mergers \citep{yan:z2.sf.seds,sajina:pah.qso.vs.sf, dey:2008.dog.population,melbourne:2008.dog.morph.smooth, dasyra:highz.ulirg.imaging.not.major}. However, at those same redshifts, yet more luminous systems appear, including large populations of Hyper-LIRGs (HyLIRGs; $L_{\rm IR}>10^{13}\,\lsun$) and bright sub-millimeter galaxies \citep[e.g.][]{chapman:submm.lfs,younger:highz.smgs, younger:sma.hylirg.obs,casey:highz.ulirg.pops}. These hyperluminous objects exhibit many of the traits associated with merger-driven starbursts, including morphological disturbances, and may be linked to the emergence of massive, quenched (non star-forming), compact ellipticals at times as early as $z\sim2-4$ \citep{papovich:highz.sb.gal.timescales, younger:smg.sizes,tacconi:smg.maximal.sb.sizes, schinnerer:submm.merger.w.compact.mol.gas, chapman:submm.halo.clustering,tacconi:smg.mgr.lifetime.to.quiescent}. 
Reproducing their abundance and luminosities remains a challenge for current models of galaxy formation \citep{baugh:sam, swinbank:smg.counts.vs.durham, narayanan:smg.modeling, younger:warm.ulirg.evol,hayward:2010.smg.counts,hayward:2011.smg.merger.rt}. Modeling the consequences of these mergers for star formation and the nuclear structure of galaxies requires following the highly non-linear, resonant, and chaotic interplay between gas (with shocks, cooling, star formation, and feedback), stars, and dark matter over a very large dynamic range. High-resolution numerical hydrodynamic simulations are the method of choice. However, until recently, computational limitations have made it impossible to resolve the $\sim$pc (and $<1000\,$yr) scales of structure in the ISM relevant for star formation and stellar feedback while following the four or five orders-of-magnitude larger global evolution of a galaxy or merger. Perforce, most previous models have adopted a variety of sub-grid approaches to describe ``effective'' ISM properties below the resolution scales of the simulations. All numerical galaxy simulations require some sort of ``sub-grid'' model; for example, even with box sizes of order one parsec, current star formation simulations treat radiative feedback in a very approximate manner, while it is clear that such feedback is crucial to determining, e.g., the initial mass function. Unfortunately, without a physically well-founded sub-grid model, the predictive power of numerical simulations is limited. If the average star formation properties are put in by hand (and they are in most galaxy-scale simulations to date), the global star formation rate clearly cannot be predicted. Similarly, if the effects of stellar feedback are put in by hand, e.g., by invoking an ``effective pressure'' or turbulent dispersion in order to suppress runaway cooling and clumping, the detailed physics and implications for star formation cannot be determined. Moreover, it is by no means clear whether commonly used prescriptions capture the key physics, nor whether they can be extrapolated from ``quiescent'' systems (which are typically used to calibrate the models) to the very different ISM conditions in mergers. Typically, subgrid models also have several adjustable parameters, which further limit their predictive power; it is often unclear whether observational constraints imply differences in the merger dynamics or more subtle tweaks in the sub-grid ISM assumptions. This has occasionally led to contradictory conclusions in the literature. Numerical simulations of isolated galaxies and galaxy mergers can now achieve the dynamic range required to resolve the formation of GMCs and ISM structure, $\sim 1-10$\,pc \citep[see e.g.][]{saitoh:2008.highres.disks.high.sf.thold,tasker:2009.gmc.form.evol.gravalone,bournaud:2010.grav.turbulence.lmc,dobbs:2011.why.gmcs.unbound}. However, improving resolution alone -- beyond the implicit ``averaging'' scales for the ISM model -- has no clear meaning if the physics that govern star formation and ISM structure on these scales are not included. Generally, models have not attempted to include these physics, or have included only a very limited subset of the relevant processes.
In fact, a large number of feedback mechanisms may drive turbulence in the ISM and help disrupt GMCs, including: photo-ionization, stellar winds, radiation pressure from UV and IR photons, proto-stellar jets, cosmic rays, supernovae, and gravitational cascades from large scales \citep[e.g.][and references therein]{mac-low:2004.turb.sf.review}. In \citet{hopkins:rad.pressure.sf.fb} (\paperone) and \citet{hopkins:fb.ism.prop} (\papertwo) we developed a new set of numerical models to incorporate feedback on small scales in GMCs and star-forming regions, into simulations with pc-scale resolution.\footnote{\label{foot:url}Movies of these simulations are available at \movieurl} These simulations include the momentum imparted locally (on sub-GMC scales) from stellar radiation pressure, radiation pressure on larger scales via the light that escapes star-forming regions, HII photoionization heating, as well as the heating, momentum deposition, and mass loss by SNe (Type-I and Type-II) and stellar winds (from O and AGB stars). The feedback is tied to the young stars, with the energetics and time-dependence taken directly from stellar evolution models. Our method also includes cooling to temperatures $<100\,$K, and a treatment of the molecular/atomic transition in gas and its effect on star formation. We showed in Papers I \& II that in isolated disk galaxies these feedback mechanisms produce a quasi-steady ISM in which giant molecular clouds form and disperse rapidly, after turning just a few percent of their mass into stars. This leads to an ISM with phase structure, turbulent velocity dispersions, scale heights, and GMC properties (mass functions, sizes, scaling laws) in reasonable agreement with observations. In \citet{hopkins:stellar.fb.winds} (\paperthree), we showed that these same models of stellar feedback {\em predict} the elusive winds invoked in almost all galaxy formation models; the {\em combination} of multiple feedback mechanisms is critical to give rise to massive, multi-phase winds having a broad distribution of velocities, with material both stirred in local fountains and unbound from the disk. Papers I \& II showed that the global star formation rate found in the simulations did not depend on the sub-grid model for star formation, over a broad range of sub-grid model parameters and even model types. The conclusion drawn was that the rate of star formation in the simulations was controlled by the amount of feedback required to maintain the gas disk in hydrostatic equilibrium, and with a Toomre Q parameter of order unity. If this is true in real galaxies, it will simplify the task of understanding galaxy formation greatly. In this paper, we extend these models to idealized major mergers between galaxies. In this first exploration, we compare some of the global properties of the remnants -- particularly those tied to star formation and ISM physics -- to the results of simulations using more simplified sub-grid ISM/feedback models. Specifically, we investigate the consequences of a more detailed and explicit treatment of the ISM for the star formation histories, spatial distribution of stellar mass and starburst stars, survival and re-formation of gas disks, and origins of the global Kennicutt-Schmidt law. All these properties can be compared to the results from previous sub-grid models in a straightforward manner. 
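The feedback-regulation argument summarized above can be phrased as a simple vertical momentum balance. The sketch below is an illustrative reduction of the reasoning in Papers I \& II rather than a new derivation, with $(P_{\ast}/m_{\ast})$ the momentum injected per unit mass of stars formed, written for a gas-dominated disk:
\begin{equation}
\dot{\Sigma}_{\ast}\left(\frac{P_{\ast}}{m_{\ast}}\right) \sim \pi G\,\Sigma_{\rm gas}\,\Sigma_{\rm tot} \quad\Longrightarrow\quad \dot{\Sigma}_{\ast} \propto \Sigma_{\rm gas}\,\Sigma_{\rm tot} \sim \Sigma_{\rm gas}^{2} \;\;(\Sigma_{\rm tot}\sim\Sigma_{\rm gas}),
\end{equation}
i.e. the star formation rate surface density adjusts until the momentum injection rate offsets the vertical weight of the gas, independent of the small-scale star formation prescription.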
We also test the proposition that the star formation rate is controlled by feedback, in mergers as well as in quiescent galaxies, since we do not adjust our star formation prescription. In companion papers, we examine in more detail some of the phenomena that could not be predicted with previous models: the phase structure of the ISM and properties of starburst ``super-winds'' driven by stellar feedback, as well as the physics of star cluster formation in tidally shocked and starburst regions. \begin{figure} \centering \scaleup \plotonesize{hiz_demo_15g.pdf}{1} \caption{Morphology of the gas in a standard simulation (high-resolution with all explicit feedback mechanisms included) of a merger of the HiZ disk model: a massive, $z\sim2-4$ starburst disk merger. The time is near apocenter after first passage. This color projection emphasizes the cold, star-forming gas. Brightness encodes projected gas density (light-to-dark with increasing density; logarithmically scaled with a $\approx4\,$dex stretch); color encodes gas temperature with the blue/violet material being $T\lesssim1000\,$K molecular and atomic gas, pink/red $\sim10^{4}-10^{5}$\,K warm ionized gas, and yellow/white $\gtrsim10^{6}\,$K hot gas. Gravitational collapse forms giant molecular clouds and proto-star cluster complexes throughout the gas. The outflows present include a component in dense clumps. Images of the other simulations at various times are in Appendix~\ref{sec:appendix:images}. \label{fig:morph.1}} \end{figure} \begin{figure} \centering \plotonesize{hiz_demo_15s.pdf}{1} \caption{The stars at the same time as Fig.~\ref{fig:morph.1}. The image is a mock $ugr$ (SDSS-band) composite, with the spectrum of all stars calculated from their known age and metallicity, and dust extinction/reddening accounted for from the line-of-sight dust mass. The brightness follows a logarithmic scale with a stretch of $\approx2\,$dex. Young star clusters are visible throughout the system as bright white pixels. The nuclei contain most of the star formation, evident in their saturated brightness. Fine structure in the dust gives rise to complicated filaments, dust lanes, and patchy obscuration of star-forming regions. A few super-starclusters are apparent as the brightest young stellar concentrations. Images of the other simulations at various times are in Appendix~\ref{sec:appendix:images}. \label{fig:morph.3}} \end{figure} \begin{figure*} \centering \scaleup \begin{tabular}{cc} \includegraphics[width={1.01\columnwidth}]{hiz_demo_18g.pdf} & \includegraphics[width={0.95\columnwidth}]{highz_extdsk_hr_e_s018_t000_gas.pdf} \end{tabular} \caption{{\em Left:} Gas, as Fig.~\ref{fig:morph.1}, at a slightly later time, with a color projection here chosen to emphasize the ionized and hot gas. Brightness now increases with surface density; color encodes temperature in a similar manner (blue/pink/yellow representing cold/warm/hot phases). Violent outflows are present, emerging both from the complexes and the system as a whole, driven by the massive starburst. The volume-filling ``hot'' component is now visible, with multiple ``bubbles'' driven by separate local events. {\em Right:} Same, but with the (projected in-plane) velocity vectors plotted. The vectors interpolate the gas velocities evenly over the image; their length is linearly proportional to the magnitude of the local velocity with the longest plotted corresponding to $\approx500\,{\rm km\,s^{-1}}$. 
\label{fig:morph.2}} \end{figure*} \begin{footnotesize} \ctable[ caption={{\normalsize Galaxy Models}\label{tbl:sim.ics}},center,star ]{lcccccccccccccc}{ \tnote[ ]{Parameters describing our (isolated) galaxy models, used as the initial conditions for all of the mergers: \\ {\bf (1)} Model name: shorthand for models of an isolated SMC-mass dwarf ({\bf SMC}); local gas-rich galaxy ({\bf Sbc}); MW-analogue ({\bf MW}); and high-redshift massive starburst ({\bf HiZ}). {\bf (2)} $\epsilon_{g}$: gravitational force softening. Higher-resolution tests of the isolated galaxies use half this value (\paperone). {\bf (3)} $M_{\rm halo}$: halo mass. {\bf (4)} $c$: halo concentration. Values lie on the halo mass-concentration relation at each redshift ($z=0$ for SMC, Sbc, and MW; $z=2$ for HiZ). {\bf (5)} $V_{\rm max}$: halo maximum circular velocity. {\bf (6)} $M_{\rm bary}$: total baryonic mass. {\bf (7)} $M_{\rm b}$: bulge mass. {\bf (8)} $a$: \citet{hernquist:profile} profile scale-length for bulge. {\bf (9)} $M_{d}$: stellar disk mass. {\bf (10)} $r_{d}$: stellar disk scale length. {\bf (11)} $h$: stellar disk scale-height. {\bf (12)} $M_{g}$: gas disk mass. {\bf (13)} $r_{g}$: gas disk scale length (gas scale-height determined so that $Q=1$). {\bf (14)} $f_{\rm gas}$: average gas fraction of the disk inside the stellar $R_{e}$ ($M_{\rm g}[<R_{e}]/(M_{\rm g}[<R_{e}]+M_{\rm d}[<R_{e}])$). The total gas fraction, including the extended disk, is $\sim50\%$ larger. {\bf (15)} $Z$: initial metallicity (in solar units) of the gas and stars. } }{ \hline\hline \multicolumn{1}{c}{Model} & \multicolumn{1}{c}{$\epsilon_{\rm g}$} & \multicolumn{1}{c}{$M_{\rm halo}$} & \multicolumn{1}{c}{$c$} & \multicolumn{1}{c}{$V_{\rm max}$} & \multicolumn{1}{c}{$M_{\rm bary}$} & \multicolumn{1}{c}{$M_{\rm b}$} & \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$M_{\rm d}$} & \multicolumn{1}{c}{$r_{d}$} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{$M_{\rm g}$} & \multicolumn{1}{c}{$r_{g}$} & \multicolumn{1}{c}{$f_{\rm gas}$} & \multicolumn{1}{c}{$Z$} \\ \multicolumn{1}{c}{\,} & \multicolumn{1}{c}{[pc]} & \multicolumn{1}{c}{[$\msun$]} & \multicolumn{1}{c}{\,} & \multicolumn{1}{c}{[${\rm km\,s^{-1}}$]} & \multicolumn{1}{c}{[$\msun$]} & \multicolumn{1}{c}{[$\msun$]} & \multicolumn{1}{c}{[kpc]} & \multicolumn{1}{c}{[$\msun$]} & \multicolumn{1}{c}{[kpc]} & \multicolumn{1}{c}{[pc]} & \multicolumn{1}{c}{[$\msun$]} & \multicolumn{1}{c}{[kpc]} & \multicolumn{1}{c}{\,} & \multicolumn{1}{c}{[$Z_{\sun}$]} \\ \hline {\bf SMC} & 1.0 & 2.0e10 & 15 & 46 & 8.9e8 & 1e7 & 0.25 & 1.3e8 & 0.7 & 140 & 7.5e8 & 2.1 & 0.56 & 0.1 \\ % {\bf Sbc} & 3.1 & 1.5e11 & 11 & 86 & 1.05e10 & 1e9 & 0.35 & 4e9 & 1.3 & 320 & 5.5e9 & 2.6 & 0.36 & 0.3 \\ % {\bf MW} & 4.0 & 1.6e12 & 12 & 190 & 7.13e10 & 1.5e10 & 1.0 & 4.73e10 & 3.0 & 300 & 0.9e10 & 6.0 & 0.09 & 1.0 \\ % {\bf HiZ} & 7.0 & 1.4e12 & 3.5 & 230 & 1.07e11 & 7e9 & 1.2 & 3e10 & 1.6 & 130 & 7e10 & 3.2 & 0.49 & 0.5 \\ % \hline\hline\\ } \end{footnotesize} \begin{figure*} \centering \scaleup \plotsidesize{merger_stile_sbcn_hr_f.pdf}{1.0} \caption{Morphology of the {\bf f} (retrograde) merger of the Sbc galaxy (a dwarf starburst); each panel shows the optical morphology, as in Fig.~\ref{fig:morph.3}, at a different time during the merger.
\label{fig:stile.sbc.e}} \end{figure*} \begin{figure*} \centering \scaleup \plotsidesize{mergertile_veos.pdf}{1.0} \caption{Comparison of the morphology in gas and stars in the {\bf e} (prograde) merger of the Sbc (dwarf starburst) galaxy; each row shows the optical light and the gas, as in Figs.~\ref{fig:morph.1}-\ref{fig:stile.sbc.e}, at a different time during the merger (top to bottom: before first passage, after first passage, before final coalescence, and after coalescence, respectively). {\em Left:} Simulations with explicit stellar feedback models (stars and gas). {\em Right:} Simulations with simplified sub-grid treatment of the ISM and stellar feedback; the lack of small-scale structure (molecular clouds, star clusters, and dust lanes) and galactic winds is apparent. A complete set of images for all the simulations (with additional panels) is given in Appendix~\ref{sec:appendix:images}. \label{fig:morph.explicit.vs.subgrid}} \end{figure*} \vspace{-0.5cm}
|
\label{sec:discussion} We have studied major galaxy mergers with explicit models for stellar feedback that can follow the formation and destruction of individual GMCs and star clusters. Our models include star formation only at extremely high densities inside GMCs, and the mass, momentum, and energy flux from SNe (Types I \&\ II), stellar winds (both ``fast'' O-star winds and ``slow'' AGB winds), and radiation pressure (from UV/optical and multiply-scattered IR photons), as well as HII photoionization heating and molecular cooling. As a first study, we focus on simple, global properties, and compare them to those obtained from previous generations of simulations which did not follow these processes explicitly but instead adopted a simplified ``effective equation of state'' sub-grid model of the ISM. We find that most global consequences of mergers are similar in models with explicit feedback and resolved ISM phase structure and in simplified EOS models. Some details, however, can be rather different. \vspace{-0.5cm} \subsection{Galaxy Morphologies} The relic structure and mass profile, and the presence of tidal tails in the merger, are predominantly determined by global gravitational torques, and so depend little on feedback details. This includes the integral properties (mass and effective radius) of the stellar relic of the kpc-scale starburst. \citet{mihos:cusps} argued that merger remnants are really two-component systems, and \citet{hopkins:cusps.mergers,hopkins:cusps.fp,hopkins:cusps.evol,hopkins:cusps.ell} later demonstrated this in comparisons of observations (of ellipticals and recent merger remnants) with simulations using sub-grid feedback models. The outer profile is dominated by violently relaxed stars formed before the final galaxy merger, which are dissipationlessly scattered into the ``envelope'' at large radii. The inner portion is dominated by ``starburst'' stars which form via gas that is gravitationally torqued and dissipates, falls to the center, and turns into stars in a central starburst; this dominates the central $\sim$kpc and extends inwards to form the central ``cusps'' in gas-rich merger remnants \citep[see][]{kormendysanders92,hibbard.yun:excess.light,rj:profiles,hopkins:cusp.slopes}; it is often responsible for nuclear kinematically decoupled components \citep{hernquist:kinematic.subsystems,hoffman:mgr.orbit.structure.vs.fgas}. We find that this decomposition, and the behavior of the mass profile of each sub-component separately (as well as of the total), are robust to the choice of explicit or effective EOS models. This is reassuring, since \citet{hopkins:cusps.mergers,hopkins:cusps.ell} showed that the simulated mass distributions agree very well with observed mergers and spheroids, and \citet{cox:feedback} showed these results are robust to substantial changes in sub-grid parameterizations of feedback and star formation. The dissipationless profile should obviously be insensitive to the gas physics. The dissipative profiles are similar because, as shown in \citet{hopkins:cusps.mergers}, their characteristic radii are set not by feedback ``pressurizing'' the material, but by the radius at which the inflowing gas becomes self-gravitating, at which point it is no longer strongly torqued by the external stars and dark matter and can turn into stars relatively rapidly. The slope of the central mass profile is set by equilibrium between gravitationally driven inflow and star formation \citep{hopkins:cusp.slopes}.
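As an illustration of this two-component structure, the toy sketch below (Python; all parameter values are invented for illustration and this is not the fitting procedure of the cited papers) superposes an inner dissipational ``extra light'' component and an outer violently relaxed envelope:
\begin{verbatim}
import numpy as np

def sersic(R, Sigma_e, R_e, n):
    """Sersic surface-density profile Sigma(R); b_n uses the
    Ciotti & Bertin (1999) approximation, adequate for n > 0.36."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return Sigma_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

R = np.logspace(-2, 1.5, 200)  # projected radius [kpc]

# Illustrative (invented) parameters for a merger remnant:
envelope  = sersic(R, Sigma_e=1e2, R_e=5.0, n=4.0)  # violently relaxed pre-merger stars
starburst = sersic(R, Sigma_e=1e4, R_e=0.5, n=1.0)  # dissipational central component
total = envelope + starburst
# With these numbers the starburst component dominates inside ~1 kpc
# (the central "cusp"), while the envelope dominates at large radii.
\end{verbatim}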
We may however expect differences in very low-mass, gas-rich mergers. In the EOS models, these are ``pressurized'' to the point where the effective sound speed is comparable to the circular velocity, and so tend to result in merger remnants that are larger compared to true ellipticals at $\lesssim10^{9}\,\msun$ \citep{hopkins:cusps.ell}. Allowing the gas to be more compressible (instantaneously), while retaining feedback, may resolve this discrepancy. \vspace{-0.5cm} \subsection{Disk Survival} Disk survival -- the efficiency with which gas avoids consumption in the merger and can rapidly re-form a disk afterwards -- appears to be just as, if not slightly more, efficient in models with explicit feedback (appropriately accounting for the properties of the progenitors at the time of merger). They follow the same scalings for the likelihood of disk survival (and gas angular momentum loss) as derived from simplified models in \citet{hopkins:disk.survival}. This is expected: \citet{hopkins:disk.survival} found disk survival did not depend on the sub-grid parameters of the EOS model (unless feedback is very weak), and discuss the reasons for this. Gas is dissipative, so gas that survives the merger without turning into stars or losing most of its angular momentum will eventually re-form a rotating disk. The dominant torques driving angular momentum loss in mergers arise from resonant interactions exciting collisionless perturbations in the stars, which in turn force the gas into strong shocks \citep{barnes.hernquist.91,barneshernquist92,barnes02:gasdisks.after.mergers,hopkins:zoom.sims,hopkins:inflow.analytics}. Angular momentum ``cancellation,'' viscosity, other hydrodynamic torques, and the direct torque from the presence of the secondary galaxy are typically much smaller effects. As a result, one can analytically derive a critical radius and resonant conditions that depend only on merger orbit, mass ratio, and galaxy structural properties (not the gas microphysics), outside of which gas will not be strongly torqued and will thus survive to re-form disks. Because they contain few stars, gas-rich disks experience weak collisionless torques and so yield larger surviving disks. The same has also been seen in cosmological simulations with sub-grid feedback models \citep{governato:disk.rebuilding,governato:2010.dwarf.gal.form,guedes:2011.cosmo.disk.sim.merger.survival}. Disks survive multiple major mergers by virtue of conserving gas which immediately re-forms disks; this is enhanced by the presence of feedback-driven outflows which further suppress the mass turned into stars in the center. This is important for the existence of ``realistic'' disks today \citep[][]{robertson.bullock:disk.merger.rem.vs.obs,hopkins:disk.survival.cosmo,stewart:disk.survival.vs.mergerrates}, and there appears to be a growing number of observed disks which have undergone recent mergers \citep{hammer:obs.disk.rebuilding,hammer:hubble.sequence.vs.mergers,yang:post.merger.disk.obs,puech:z06.disk.postmerger,bundy:merger.fraction.new,bundy:2010.passive.disks}. \vspace{-0.5cm} \subsection{Starbursts and Star Formation} The final starbursts are dominated by {\em in situ} star formation from gas which is torqued, falls into the nucleus and turns into stars at high densities.
This is the ``standard'' scenario of merger-driven nuclear starbursts and star formation predicted in early simulations \citep{barnes.hernquist.91,mihos:starbursts.94,mihos:starbursts.96}, and observed at the centers of nearby ULIRGs \citep[strong, rapid inflows evident in their profiles of molecular gas, star formation, and metallicity; see e.g.][]{tacconi:ulirgs.sb.profiles,sanders96:ulirgs.mergers,kormendysanders92,hibbard.yun:excess.light,rj:profiles,titus:ssp.decomp,reichardt:ssp.decomp,michard:ssp.decomp,foster:metallicity.gradients,sanchezblazquez:ssp.gradients,kewley:2010.gal.pair.metal.grad.evol,rupke:2010.metallicity.grad.merger.vs.obs,soto:ssp.grad.in.ulirgs}. \citet{cox:feedback} showed that this is robust to large variations in the implementation of feedback in sub-grid models. As emphasized in many previous studies, we are not saying that most of the stars form in the starburst: this is typically $\sim10\%$ of the total stellar mass \citep{jogee:merger.density.08,robaina:2009.sf.fraction.from.mergers,hopkins:ir.lfs,hopkins:sb.ir.lfs}, with the rest formed in the ``isolated disk'' mode (including times during the merger outside the ``burst''). Although gas clumps (into GMCs) as it flows in, feedback efficiently disperses these GMCs after they turn just a few percent of their mass into stars. But the material is not expelled completely (which would shut down the burst entirely); most of the recycled gas is unbound {\em locally} from the GMC, but not globally from the galaxy. The typical velocities are the escape velocities from the GMC or local star cluster ($\sim10\,{\rm km\,s^{-1}}$) and so the gas is ``stirred.'' This allows it to reach the nuclei in a coherent flow, so that gravitational torques will still tend to dominate. The more explicit feedback models, however, show significantly higher star formation in tails and bridges (especially in shocks). This may resolve a long-standing discrepancy between merger models and observed star formation distributions in local mergers \citep{barnes:2004.shock.sf.mice}; a more quantitative comparison with models intended to mimic these specific systems will be the subject of future work. In the explicit feedback models, the star formation is more strongly time variable (less ``smoothed'' by the effective EOS). This owes to the more inhomogeneous nature of the ISM -- in extreme cases, a large fraction of the SFR can come from a few super star clusters. The burst ``enhancement'' (peak SFR) can be higher or lower, depending on the orbit and structural properties; as with sub-grid models, first-passage bursts are much more sensitive to these details than those at final coalescence. In low-mass systems, it tends to be higher, because (as noted above) the EOS models may over-pressurize small disks by enforcing a minimum sound speed of $\sim10\,{\rm km\,s^{-1}}$; with molecular cooling the gas can more efficiently collapse to high densities in the nucleus. In all the explicit feedback cases, the ``tails'' of the starburst tend to be broader; this owes to the effect of stellar winds ejecting some material at modest velocities, so that it falls back into the disk in a fountain and makes the starburst more extended (discussed below). \vspace{-0.5cm} \subsection{The Kennicutt-Schmidt Relation} The explicit feedback models predict SFRs that agree extremely well with observed merging galaxies on the Kennicutt-Schmidt relation.
This is a major success of the models: unlike the EOS models, where the kpc-scale SF law is imposed to match the Kennicutt law, the Kennicutt relation in the explicit feedback cases is a true prediction (in normalization, scatter, and power-law slope). Recall that star formation in these models occurs only in very dense, locally self-gravitating gas (scales $\lesssim$pc), with an instantaneous local efficiency of unity -- not the global $\sim1\%$ value of the Kennicutt relation. And without feedback (see \paperone\ \&\ \papertwo), the models predict a SFR far larger than observed, $\dot{M}_{\ast}\sim M_{\rm gas}/t_{\rm dyn}$. When feedback is included (with parameters taken directly from stellar evolution models, not adjusted in any way to reproduce the observations), the observed relation naturally emerges. This is a consequence of feedback-regulated star formation: turbulent support in the gas dissipates locally on a crossing time, leading to collapse, which proceeds until a sufficient number of young stars form (regardless of {\em how} they form, in detail) to provide enough momentum flux in feedback to balance the dissipation. As such, we showed in \paperone\ that the SFR and Kennicutt relation are direct consequences of feedback, and are {\em independent} of the microphysical star formation law once explicit feedback is included. We find the same here; we have re-run some of our simulations replacing our local self-gravity criterion for star formation with a simple density threshold with a very different efficiency (or changing the small-scale density dependence), as discussed in \S~\ref{sec:sims}, and the predicted Kennicutt law is nearly identical. The same feedback mechanisms regulating star formation in isolated galaxies extend to the regime of extreme merger-induced starbursts. This is no mean feat: the typical surface densities and star formation rates are orders of magnitude larger than in the isolated disk counterparts of these galaxies. The gas surface densities in the bursts here extend to $\sim100-1000$ times larger than the mean surface density of local GMCs! We are therefore well into the regime where the medium is entirely molecular and high-density (although it may still exhibit phase structure). In \papertwo, we showed that SNe and other ``thermal'' feedback mechanisms become unimportant in gas above densities $\sim1-10\,{\rm cm^{-3}}$ (typical of the diffuse gas in GMCs), because the cooling times are extremely short compared to the dynamical times. We therefore expect that radiation pressure is the most important feedback mechanism in these starburst regions, as we found there for the most dense regions within ``normal'' GMCs. It is a remarkable success of the models, though, that they interpolate naturally into even this extreme regime, without requiring any adjustment. In contrast, most analytic models for the Kennicutt-Schmidt relation have been forced to assume some ad hoc break or transition in the scaling at high densities. We see no strong bimodality in the predicted relation. But this is not necessarily inconsistent with recent observational claims along these lines \citep{genzel:2010.ks.law,daddi:2010.ks.law.highz}; our merger and isolated galaxy simulations overlap closely with the observations from those papers. What we see is a continuum between these systems, which is completely consistent with the observations \citep[see e.g.][]{narayanan:2011.xco}.
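The absence of any bimodality is what the feedback-regulation argument leads one to expect. A back-of-the-envelope version of the momentum balance (a sketch only, not the detailed calculation of \paperone) equates the rate of momentum injection by young stars with the rate at which turbulent momentum dissipates in a marginally stable ($Q\sim1$) disk,
\[
\left(\frac{P_{\ast}}{m_{\ast}}\right)\dot{\Sigma}_{\ast} \sim \Sigma_{\rm gas}\,\frac{\sigma}{t_{\rm diss}} \sim \pi G\,\Sigma_{\rm gas}^{2}
\quad\Longrightarrow\quad
\dot{\Sigma}_{\ast} \sim \frac{\pi G\,\Sigma_{\rm gas}^{2}}{P_{\ast}/m_{\ast}},
\]
where $P_{\ast}/m_{\ast}\sim$ a few $\times10^{3}\,{\rm km\,s^{-1}}$ is the momentum returned per unit stellar mass formed, and the second similarity takes $t_{\rm diss}$ of order the vertical crossing time of a self-gravitating disk. This is a single, continuous, super-linear scaling, within roughly an order of magnitude of the observed relation over the full surface-density range and independent of the microphysical star formation law.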
The apparent bimodality reported in those observational studies stems largely from the strict choice of two discrete CO conversion factors; in future work, we will incorporate detailed models for molecular emission to make a quantitative comparison with these observations. \citet{narayanan:2011.xco.model} find that when a functional form for the CO-H2 conversion factor which depends on the physical conditions in the ISM is applied to the observed galaxies in Figure~\ref{fig:ks.law}, a slope of $\sim1.8$ emerges. Our results are quite consistent with those, and we find a slope of $\sim1.7$; this suggests that stars do form somewhat more ``efficiently'' (in terms of consumption time) in systems like mergers which occupy the high-surface-density region of the Kennicutt-Schmidt plot, though mergers do not lie on a different track of the relation than disks. The increased efficiency likely owes to a larger fraction of gas in the dense phase, a result which has been noted in observations of local ULIRGs \citep[e.g.][]{juneau:2009.enhanced.dense.gas.ulirgs}. \vspace{-0.5cm} \subsection{Comparison to Models with Weak Feedback} Recently, different conclusions have been reached in the literature from the study of high-resolution simulations with very different implementations of stellar feedback. \citet{teyssier:2010.clumpy.sb.in.mergers} argued that merger-induced starbursts might not occur ``in situ'' from inflows, but instead from catastrophic fragmentation of tidal arms, bars, and other instabilities. The fragments would turn a large fraction of their mass into stars very rapidly, and then sink as supermassive stellar clusters to the galaxy center via dynamical friction, building the bulge. They also suggest a sharp bimodality in the Kennicutt-Schmidt relation, with the mergers and isolated disks being well-separated, as a consequence of this induced fragmentation in the mergers (though they do discuss how this might be smoothed out). \citet{bournaud10} argued that disk survival is very inefficient in their simulations, for similar reasons -- the gas undergoes dramatic fragmentation and ``clumps'' scatter off one another, providing new torques. We believe that these differences stem from the fact that these simulations include relatively weak stellar feedback, which is unable to disperse dense gas clouds (GMCs and especially clouds in the dense tidal or starburst regions of mergers) after they form. In fact, we reproduce these results if we re-run our mergers with an effective EOS model with much weaker ``feedback strength,'' $\qeos=0$. Like the $\qeos=0$ models, the \citet{teyssier:2010.clumpy.sb.in.mergers} models adopt an effective EOS, but one in which the ``median'' ISM temperature corresponds to sound speeds much lower than the sub-grid sound speeds used in the EOS models here, which makes catastrophic fragmentation possible. Without strong feedback, the clumps cannot then disperse and recycle their mass. Specifically, the models either do not include explicit feedback mechanisms or include only SNe; however, we found in \paperone\ and \papertwo\ that SNe have little effect on the dispersal of dense gas, because the cooling time in that gas is much shorter than the dynamical time. As a result, the SNe can stir up the ``diffuse'' medium but cannot recycle dense gas. In our simulations, the most dense clouds are destroyed by a combination of radiation pressure in both UV and IR, photo-ionization, and momentum from O-star winds.
In models that allow cooling but do not include these explicit feedback mechanisms, therefore, gas cools and collapses into dense GMC-like objects, which steadily contract and form stars, but are not efficiently ``re-mixed'' into the ISM on a short timescale. Collapse (hence star formation) in the dense regions runs away, giving rise to the dramatically enhanced ``fragmentation mode'' of star formation and apparent bimodality in the Kennicutt relation. In \papertwo\ we show that even if slow star formation is forced in the dense regions, the lack of mixing allows the dense regions to move ballistically, with relatively small interaction cross-sections and long lifetimes. In this limit, gas is effectively no longer collisional. All the dissipation occurs within individual clumps, while the relative motions between clumps cannot be dissipated, just like relative motions of stars; the gas behaves like a collection of ``super-massive'' star or dark matter particles, and loses orbital angular momentum only via processes much slower than shocks, such as scattering and dynamical friction. In contrast, observations suggest that GMCs are short-lived, with lifetimes of a few free-fall times (few Myr), and turn just a few percent of their mass into stars before dispersing \citep{zuckerman:1974.gmc.constraints,williams:1997.gmc.prop,evans:1999.sf.gmc.review,evans:2009.sf.efficiencies.lifetimes}. Although the relevant physics in galaxy-scale simulations remains quite uncertain (and it is unclear how robustly this can be generalized to all the cases studied in this paper), short lifetimes do appear to be typical even for the most massive ($10^{8}-10^{9}\,\msun$) GMC complexes observed in local mergers \citep{rand:1990.m51.superclumps,planesas:1991.1068.superclumps,wilson:2003.supergiant.clumps,wilson:2006.arp220.superclumps}. So even if most of the gas at a given instant is in dense sub-units (GMCs), the gas can be recycled through the diffuse ISM on a short timescale, and therefore experience normal hydrodynamic forces. So long as the GMC lifetime is short compared to the total duration of the strong torques in the merger ($\sim500\,$Myr), this will be true. Likewise, since inflows in the final coalescence of a major merger require only a couple of dynamical times to reach the center, for fragmentation to ``beat'' inflow as a starburst driver the fragmenting clumps in the inflow would have to have a very high instantaneous star formation efficiency, $\dot{M}_{\ast}\sim M_{\rm gas}/t_{\rm dyn}$. This happens in models with weaker feedback; however, the observed Kennicutt relation in ULIRGs and other mergers (even high-redshift systems) implies an efficiency substantially smaller (by as much as a factor of $\sim50$). And direct observations of ULIRGs do suggest that the final coalescence star formation rate is indeed dominated by a nuclear starburst (see references above). A key conclusion is that, in any model which resolves the formation of dense clouds, predictions critically depend on the physics that may (or may not) {\em destroy} those clouds. \vspace{-0.5cm} \subsection{Caveats \&\ Future Directions} The results here are a preliminary study of how more detailed and explicit stellar feedback and ISM structure influence the results of merger simulations. We have found that the most basic global integral properties of star formation and galaxy morphology tend to be similar to those inferred in previous studies with a simplified treatment of the gas physics. 
However, we also find that more detailed structural and kinematic properties are sensitive to the gas physics. We do not expect this to be the case outside of the central $\sim$kpc, where the relic galaxy is dominated by stars formed before the merger and violently relaxed. But kinematic substructures in the central region may be sensitive to how the gas collapses and turns into stars. It will also be particularly interesting to examine the effects of more detailed star formation and enrichment models that can separately follow stellar winds and SNe Types I \&\ II on the age, metallicity, and abundance ($\alpha/{\rm Fe}$) gradients in the galaxies; these can place strong constraints on the merger history and role of dissipation in galaxy formation \citep[see e.g.][and references therein]{torrey:2011.metallicity.evol.merger}. In a companion paper, we examine the star clusters formed in these simulations. The mass/luminosity distribution, spatial locations, formation time distribution, and physical properties of these clusters represent a powerful constraint on small-scale models of the ISM and star formation. We have also restricted our focus to major mergers. Studies of mergers with varying mass ratios suggest that the qualitative behaviors discussed here should not depend on mass ratio for ratios up to about 3:1 or 4:1, and even at lower mass ratios they can be considered similar but with an ``efficiency'' of inducing starbursts and violent relaxation that scales approximately linearly with mass ratio \citep{hernquist.mihos:minor.mergers,cox:massratio.starbursts,naab:minor.mergers,younger:minor.mergers}. But at small mass ratios the dominant role of mergers may be more subtle: inducing resonant disk perturbations \citep{donghia:resonant.stripping.dwarfs,donghia:2010.tidal.resonances} and disk heating \citep{purcell:minor.merger.thindisk.destruction,stewart:mw.minor.accretion,walker:disk.fragility.minor.merger,hopkins:disk.heating,moster:2010.thin.disk.heating.vs.gas}, for which the gas response may depend significantly on its phase structure. We note that recent studies comparing cosmological simulations done with {\small GADGET} and the new moving mesh code {\small AREPO} \citep{springel:arepo} have called into question the reliability of smoothed particle hydrodynamics (SPH) for some problems related to galaxy formation in a cosmological context \citep{vogelsberger:2011.arepo.vs.gadget.cosmo, sijacki:2011.gadget.arepo.hydro.tests,keres:2011.arepo.gadget.disk.angmom,bauer:2011.sph.vs.arepo.shocks, torrey:2011.arepo.disks}. However, we have also performed idealized simulations of mergers between individual galaxies and found excellent agreement between {\small GADGET} and {\small AREPO} for e.g. gas-inflow rates, star formation histories, and the mass in the ensuing starbursts, for reasons that will be discussed in \citet{hayward:arepo.gadget.mergers}. The discrepancies above are also minimized when the flows of interest are supersonic (as opposed to sub-sonic), which is very much the case here \citep{kitsionas:2009.grid.sph.compare.turbulence,price:2010.grid.sph.compare.turbulence,bauer:2011.sph.vs.arepo.shocks}. We have also compared a subset of our merger simulations run with an alternative formulation of SPH from \citet{hopkins:lagrangian.pressure.sph}, which produces results much more similar to those of grid codes; in these tests we confirm all of the qualitative conclusions here.
Our new models allow us to follow the structure of the gas in the central regions of starburst systems at high resolution. This makes them an ideal laboratory for studying feedback physics under extreme conditions such as those in the center of Arp 220 and other very dense galaxies. In another companion paper, we examine the properties of the large starburst winds driven by stellar feedback in the mergers, which can reach outflow rates of $\sim10-500\,\msun\,{\rm yr^{-1}}$ and so have potentially major implications for metal enrichment and self-regulation of galaxy growth. For clarity, we have also neglected AGN feedback in these models, but we expect it may have a significant effect on the systems and their outflows after the final coalescence. For example, it may strongly suppress the otherwise quite high post-merger SFRs we see in these simulations, which -- without something to rapidly ``quench'' them -- may make it difficult or impossible to explain the observed abundance of high-redshift, low-SFR galaxies. With high-resolution models that include the phase structure of the ISM, it becomes meaningful to include much more explicit physical models for AGN feedback. Finally, we stress that these models are still approximations, and the treatment of ISM and star formation physics is necessarily still ``sub-grid'' at some level (just at the GMC-level, instead of the kpc-level). We can follow galaxy-wide phenomena such as mergers, and large-scale processes such as the formation of GMCs and massive star clusters. However, we still require assumptions regarding the behavior of gas at densities above $\sim10^{4}-10^{6}\,{\rm cm^{-3}}$, and make simple approximations to the chemistry of the gas (especially at low temperatures); our feedback models also average over the IMF rather than discriminating between individual low- and high-mass stars, since a single stellar particle still represents many individual stars. The treatment of radiative feedback, in particular, is not a full radiative transfer calculation (which, for the infrared, is extremely demanding and may be sensitive to very small-scale unresolved ISM structure). It remains, unfortunately, prohibitively expensive to treat these physics much more explicitly in galaxy-wide simulations. As such, the behavior of any individual molecular cloud or star cluster is at best marginally resolved in our simulations and should only be taken as a first approximation to the behavior that might be obtained if the evolution of that system could be followed in detail. However, simulations of the ISM and star formation within small ``patches'' of the ISM commonly do treat these physics explicitly, at sub-stellar mass resolution and spatial resolution $<0.1\,$pc. There is considerable and rapidly growing simulation work in these areas, much of it ``building up'' to larger scales such as GMCs, overlapping with our smallest resolved scales. This suggests that considerable progress might be made by combining these approaches and using smaller-scale (but more accurate and explicit) star formation and ISM simulations to calibrate and test the approximations in galaxy-scale simulations such as those here, which can themselves, in turn, be used to calibrate the kpc-scale sub-grid approaches still required for fully cosmological simulations. \vspace{-0.25in}
| 12
| 6
|
1206.0011
|
1206
|
1206.2014_arXiv.txt
|
CCOs are X-ray sources lying close to the centers of supernova remnants, with inferred values of the surface magnetic fields significantly lower ($\lesssim 10^{11}$~G) than those of standard pulsars. In this paper, we revisit the hidden magnetic field scenario, presenting the first 2D simulations of the submergence and reemergence of the magnetic field in the crust of a neutron star. A post-supernova accretion stage of about $10^{-4}$-$10^{-3} M_\odot$ over a vast region of the surface is required to bury the magnetic field into the inner crust. When accretion stops, the field reemerges on a typical timescale of 1-100 kyr, depending on the submergence conditions. After this stage, the surface magnetic field is restored close to its birth values. A possible observable consequence of the hidden magnetic field is the anisotropy of the surface temperature distribution, in agreement with observations of several of these sources. We conclude that the hidden magnetic field model is viable as an alternative to the anti-magnetar scenario, and it could provide the missing link between CCOs and the other classes of isolated neutron stars.
|
The handful of reported Central Compact Objects (CCOs) forms a class of X-ray sources (see \cite{deluca08,halpern10} for recent reviews), located close to the centers of $\sim$kyr old supernova remnants. CCOs are thought to be young, isolated, radio-quiet neutron stars (NSs). They show very stable, thermal-like spectra, with hints of temperature anisotropies in the form of large pulsed fractions or small emitting regions for the hot ($0.2$-$0.4$ keV) blackbody components. The period is known for only three cases: $P=424$ ms for 1E 1207.4-5209 in G296.5+10.0 \citep{zavlin00}, $P=112$ ms for RX J0822.0-4300 in Puppis A \citep{gotthelf09}, and $P=105$ ms for CXOU J185238.6+004020 in Kes 79 \citep{gotthelf05} (hereafter 1E 1207, Puppis A and Kes 79, respectively). For Kes 79, \cite{halpern10} reported a period derivative of $\dot{P}\simeq 8.7\times 10^{-18}$ s\,s$^{-1}$; for 1E 1207, \cite{halpern11} give two equally good timing solutions, with $\dot{P}=2.23\times 10^{-17}$ s\,s$^{-1}$ and $\dot{P}=1.27\times 10^{-16}$ s\,s$^{-1}$. Only an upper limit $\dot{P}<3.5\times 10^{-16}$ s\,s$^{-1}$ is available for the CCO in Puppis A \citep{gotthelf10,deluca12}. Applying the classical dipole-braking formula gives an estimate for the dipolar component of the \textit{external} magnetic field (MF) of $B_p\sim 10^{10}-10^{11}$~G (at the pole). This low value has led to the interpretation of CCOs as ``anti-magnetars'', i.e., NSs born with very low MFs, which have not been amplified by dynamo action due to their slow rotation at birth \citep{bonanno05}. An alternative scenario to the ``anti-magnetar'' model that explains the low values of $B_p$ is to consider the fallback of the debris of the supernova explosion onto the newborn NS. During a time interval of a few months after the explosion, it accretes material from the reversed shock, at a rate far exceeding the Eddington limit \citep{colgate71,blondin86,chevalier89,houck91,colpi96}. This episode of hypercritical accretion could bury the MF into the NS crust, resulting in an external MF (responsible for the spindown of the star) much lower than the internal ``hidden'' MF. When accretion stops, the screening currents localized in the outer crust are dissipated on Ohmic timescales and the MF eventually reemerges. The process of reemergence has been explored in pioneering works \citep{young95,muslimov95,geppert99} with simplified 1D models and always restricted to dipolar fields. It was found that, depending on the depth at which the MF is submerged (the submergence depth), it diffuses back to the surface on radically different timescales of $10^3-10^{10}$ yr. Thus, the key issue is to understand the submergence process and how deep one can expect to push the field during the accretion stage. This latter issue has only been studied (also in 1D and for dipolar fields) by \cite{geppert99}, in the context of SN 1987A. They conclude that the submergence depth depends essentially on the total accreted mass. More recently, \cite{ho11} has revisited the same scenario in the context of CCOs, using a 1D cooling code and studying the reemergence phase of a buried, purely dipolar field, with similar conclusions to previous works. The {\it hidden magnetic field scenario} has also been proposed by \cite{shabaltas12} for the CCO in Kes 79. They can explain the observed high pulsed fraction ($f_p\approx 60\%$) of the X-ray emission with a sub-surface MF of $\approx 10^{14}$~G, which causes the required temperature anisotropy.
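For orientation, the field estimates quoted above follow from the vacuum dipole-braking formula. A minimal numerical check (Python), using the polar-field convention $B_p \simeq 6.4\times 10^{19}\sqrt{P\dot{P}}$~G (the equatorial value is half this) and the timing values quoted in the text:
\begin{verbatim}
import math

def b_pole(P, Pdot):
    """Polar surface dipole field, in G, from vacuum magnetic-dipole
    braking: B_p ~ 6.4e19 * sqrt(P * Pdot)."""
    return 6.4e19 * math.sqrt(P * Pdot)

# P [s], Pdot [s/s] as quoted in the text
ccos = {"Kes 79": (0.105, 8.7e-18),
        "1E 1207": (0.424, 2.23e-17)}  # first of the two timing solutions
for name, (P, Pdot) in ccos.items():
    print(f"{name}: B_p ~ {b_pole(P, Pdot):.1e} G")
# -> ~6e10 G and ~2e11 G respectively: the low fields that motivate
#    the anti-magnetar versus hidden-field debate
\end{verbatim}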
In this paper, we further explore the viability of this scenario with improved calculations that can account for temperature anisotropies. We present the first results from 2D simulations of the submergence and rediffusion of the MF, using the recent extension \citep{vigano12} of the magneto-thermal evolution code of \cite{pons09}. The new code is able to follow the coupled long-term evolution of MF and temperature including the non-linear Hall term in the induction equation. This allows us to follow the evolution for any MF geometry (not restricted to dipoles), with state-of-the-art microphysical inputs. \begin{figure} \centering \includegraphics[width=.2\textwidth]{images/initial.eps} \includegraphics[width=.2\textwidth]{images/initial_crust.eps}\\ \includegraphics[width=.22\textwidth]{images/initial_planar.eps} \includegraphics[width=.22\textwidth]{images/initial_planar_crust.eps} \caption{Initial configuration of model A (left) and model B (right). Poloidal field lines (solid) smoothly match a vacuum solution outside the star. Upper panels: global configuration; the dashed line indicates the core-crust interface (denoted by $R_c$); the color scale represents the toroidal MF intensity. Lower panels: planar representation of the MF in the crust ($r\in[R_c,R]$), as a function of the polar angle $\theta$; the grey scale is normalized to the maximum value of $B_t$, and the white/black regions indicate positive/negative values.} \label{fig_initial} \end{figure}
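The reemergence timescales quoted above are set by Ohmic dissipation of the screening currents. As a rough orientation (the conductivity and depth scale used here are representative assumed values, not outputs of the simulations), the Ohmic time over a submergence depth $L$ is
\[
\tau_{\rm Ohm} \sim \frac{4\pi\sigma L^{2}}{c^{2}} \approx 4\times10^{4}\ {\rm yr}\,
\left(\frac{\sigma}{10^{24}\,{\rm s^{-1}}}\right)
\left(\frac{L}{100\ {\rm m}}\right)^{2},
\]
which makes explicit why the reemergence time is so sensitive to how deep the field is buried.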
| 12
| 6
|
1206.2014
|
|
1206
|
1206.5634_arXiv.txt
|
{With the large sample of young $\gamma$-ray pulsars discovered by the \emph{Fermi} Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four $\gamma$-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying $\gamma$-ray and radio visibility criteria, we normalise the simulation to the number of radio pulsars detected by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of \emph{Fermi} detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of $\sim 10$ of the predicted luminosity to produce a reasonable number of $\gamma$-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power ($\dot{E}$), we skew the initial magnetic field and period distributions in an attempt to account for the high $\dot E$ \emph{Fermi} pulsars. Even though this compromises the agreement between the simulated and detected distributions of radio pulsars, the simulations still fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high $\dot{E}$, and they cannot explain the high probability of detecting both the radio and $\gamma$-ray beams at high $\dot E$. The beaming factor remains close to 1.0 over 4 decades in $\dot{E}$ evolution for the slot gap whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with $\dot{E}$ is compatible with the large dispersion of $\gamma$-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to the polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of $\gamma$-ray pulsars therefore provides a fresh perspective on the early evolution of the luminosity and beam width of the $\gamma$-ray emission from young pulsars, calling for thin and more luminous gaps.} \authorrunning{Pierbattista et al. 2012} \titlerunning{Gamma-ray pulsar population}
|
After the radio detection of the first pulsar signal in 1967 \citep{hbp+68}, a pulsar magnetosphere model was formulated by \cite{gj69}. A direct consequence of the Goldreich \& Julian model is the establishment of a magnetospheric charge density that creates a force-free pulsar magnetosphere. However, such a magnetosphere has no electric field along the magnetic field to accelerate charges and produce $\gamma$-rays. The detection, a few years later, of pulsed emission at $\gamma$-ray energies from the Crab \citep{mbc+73} and Vela \citep{tfko75} pulsars, and the detection of four more $\gamma$-ray pulsars by \cite{tab+94}, established that pulsars accelerate particles to energies of at least a few TeV, suggesting that there are magnetospheric regions where the charge density departs from that of Goldreich \& Julian, locally violating the force-free condition and allowing particle acceleration. These regions have been identified in two magnetospheric zones. In the inner magnetosphere, acceleration can take place both above the polar cap and in the \emph{slot gap}, which extends to high altitudes along the last open magnetic field lines. In the outer magnetosphere, the \emph{outer gap} extends from the null charge surface to the light cylinder. These gap regions correspond to three models: the low-altitude slot-gap model, hereafter Polar Cap (PC, \cite{mh03}), the Slot Gap model (SG, \cite{mh04a}), and the Outer Gap model (OG, \cite{crz00}). In the \emph{polar-cap model} the emission comes from a region close to the neutron star (NS) surface and well confined above the magnetic polar cap. Charged particles from the neutron star are initially accelerated in the strong electrostatic field generated by a departure from the Goldreich-Julian charge density \citep{as79}. This acceleration, aided by inertial frame dragging \citep{mt92}, enables pulsars to emit high-energy photons by curvature radiation (CR) and inverse Compton scattering (ICS). The most energetic of these photons reach the threshold for electron-positron pair production in the strong magnetic field at a Pair Formation Front (PFF), above which the secondary pairs can screen the electric field in a short distance. The pairs, produced in excited Landau states, emit synchrotron photons which trigger a pair cascade with high multiplicity. A small fraction of the pairs is actually accelerated. The pair plasma likely establishes force-free conditions along the magnetic field lines above the PFF, as well as radiating $\gamma$-rays. Over most of the polar cap, the PFF and $\gamma$-ray emission occur well within a few stellar radii of the NS surface. The main contribution to the $\gamma$-ray emission comes from CR from the pairs moving upward. Since the CR intensity scales with the curvature of the magnetic field lines, it decreases from the polar cap edge toward the magnetic axis, giving the emission beam the structure of a hollow cone. The \emph{slot-gap} emission is generated from the same polar cap electromagnetic pair cascade near the boundary of the closed-field-line region, where the parallel electric field $E_\mathrm{\parallel} \rightarrow 0$ and the PFF rises to higher altitude. Here electrons are accelerated over longer distances to produce the pair cascade. A narrow gap, \emph{the slot gap}, is formed along the closed magnetic field surface where the PFF is never established, and electrons continue to be accelerated, radiating $\gamma$-rays by self-limited curvature radiation into the outer magnetosphere.
The resulting hollow beam is much broader and less collimated near the magnetic axis than the lower-altitude PC emission (see Section \ref{Phase-plot calculation and normalisation}). The outer gaps are vacuum regions characterised by a strong electric field along the magnetic field lines \citep{hol73,crs76} above the null charge surface. Two outer gap regions \citep{crs76,ry95,crz00,h06} can exist in the \emph{angular velocity-magnetic moment} plane, one for each pole. In the physical OG model, in the case of a non-aligned rotator, the gap region closer to the pulsar surface is more active than the other gap further away from the surface, because pair-production screening operates more efficiently at lower altitude. In the OG model a charge-deficient region forms in the outer magnetosphere above the null charge surface, where a charge-separated flow develops. The induced electric field accelerates pairs, which radiate $\gamma$-rays in a direction tangent to the ${\bf B}$ lines. The $\gamma$-ray photons interact with thermal X-rays from the NS surface to produce pairs on field lines interior to the last open field line. The pair formation surface screening the electric field defines the interior surface of the gap. More than 2000 pulsars are listed in the ATNF database \citep{mhth05}, most of which were first observed at radio wavelengths. We employ the following ten selected pulsar radio surveys in this study: Molonglo2 \citep{mlt+78}, Green Bank 2 \& 3 \citep{dtws85,stwd85}, Parkes 2 (70 cm) \citep{lml+98}, Arecibo 2 \& 3 \citep{sstd86,nft95}, Parkes 1 \citep{jlm+92}, Jodrell Bank 2 \citep{cl86}, Parkes Multi-beam \citep{mlc+01} and the extended Swinburne surveys \citep{ebsb01,jbo+09}. For these, the survey parameters are known with high accuracy, and together they cover the largest possible sky area while minimising overlap between surveys. The advent of the LAT telescope on the \emph{Fermi} satellite \citep{aaa+09a} led to a drastic increase in the number of $\gamma$-ray pulsars. After three years of observations the LAT had detected about 106 pulsars, more than doubling the number of detections listed in the first pulsar catalog \citep{aaa+10} and revealing two well-defined $\gamma$-ray pulsar populations: 31 millisecond pulsars, and 75 young or middle-aged, isolated, normal pulsars. To study and compare the collective properties of the LAT normal isolated pulsars and investigate the emission mechanisms that best explain the observed emission, we synthesised a pulsar population incorporating four important high-energy radiation gap models. The simulation takes into account the axisymmetric structure of our Galaxy and is designed to match the known characteristics of the older radio pulsar population rather than those of the younger group of pulsars sampled in $\gamma$-rays. Four $\gamma$-ray emission gap models have been assumed: the previously described Polar Cap (PC), Slot Gap (SG), and Outer Gap (OG), and a variation of the OG, hereafter the One Pole Caustic (OPC) \citep{rw10,wrwj09}, which differs from the OG in its energetics. We model the radio emission at two different frequencies, 1400 MHz and 400 MHz \citep{gvh04,hgg04}, comparing simulated radio fluxes with the flux thresholds of existing surveys. The outline of this paper is as follows. In Sections \ref{NSchoice} and \ref{birthAndEvolution}, we describe the neutron star characteristics and evolution.
In Sections \ref{RadioM}, \ref{SG}, and \ref{OG}, we give a brief overview of the computation of the radio luminosities, $\gamma$-ray gap widths, and $\gamma$-ray luminosities. Sections \ref{Phase-plot calculation and normalisation} and \ref{Flux calculations} describe the pulsar light-curve and flux computation. Section \ref{Radio pulsar visibility} reviews the radio and $\gamma$-ray pulsar visibility calculations. We present the results in the final Section \ref{PopResults}.
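Since the population comparisons throughout the paper are organised by spin-down power, a minimal helper is sketched below (Python; the moment of inertia $I=10^{38}$ kg m$^2$, i.e. $10^{45}$ g cm$^2$, is the conventional assumed value) for converting the timing quantities into the $\dot{E}$ values, in watts, used later:
\begin{verbatim}
import math

I_NS = 1.0e38  # moment of inertia [kg m^2]; conventional assumed value

def edot(P, Pdot, I=I_NS):
    """Dipole spin-down power Edot = 4 pi^2 I Pdot / P^3, in watts."""
    return 4.0 * math.pi ** 2 * I * Pdot / P ** 3

# Example: a Vela-like pulsar, P = 89 ms, Pdot = 1.25e-13 s/s
print(f"{edot(0.089, 1.25e-13):.1e} W")  # ~7e29 W (= 7e36 erg/s)
\end{verbatim}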
|
\label{summary} The exceptional results obtained with the \emph{Fermi} LAT telescope in the last few years offer the unique and exciting opportunity to constrain the physics of the pulsed $\gamma$-ray emission by studying the early evolution of the pulsar population and its collective properties. We compared simulation predictions with \emph{Fermi} LAT observations for this young ordinary pulsar population. We synthesised a radio and $\gamma$-ray pulsar sample, assuming a core and cone model for the radio emission and $\gamma$-ray emission according to four gap models, the Polar Cap (PC), Slot Gap (SG), Outer Gap (OG), and an alternative outer gap, the One Pole Caustic Model (OPC), which uses the OG beam geometry and a simple luminosity evolution with $\dot{E}$ consistent with the LAT data \citep{wrwj09}. We compared model expectations and LAT data by applying $\gamma$-ray and radio visibility criteria to our sample and by scaling it to the number of radio pulsars observed in the Milky Way. We found that the narrow beam of the low-altitude polar cap emission contributes at most a handful of pulsars to the LAT sample. The modelled luminosity is also too faint by an order of magnitude to account for the LAT data if one applies the average PC beaming factors we found for the given spin-down powers of the LAT pulsars. The large dispersion found in PC beaming factors, however, can largely resolve the luminosity discrepancy. We find that all the LAT pulsars are much more luminous (by 1 order of magnitude) than the PC expectations. Yet, there is a huge dispersion (1 or 2 orders of magnitude) in $f_{\Omega}$ for the PC beams, so applying the average $f_{\Omega}(\dot{E})$ trend to the LAT pulsars could be off by more than one order of magnitude for any individual object, and all the LAT points in Figure \ref{NvisA_large_effi1p012p01p00p5_bldhor_trueLumvsEdot} could plausibly shift up or down by more than a decade. The wide beams from the outer gaps and slot gap models can easily account for the \emph{Fermi} LAT detection number in 2 years, provided the standard slot-gap luminosity is increased by a factor of $\sim 10$. The required increase may result from an enhanced accelerating electric field in the context of offset polar caps \citep{hm11}. The evolution of the enhanced SG luminosity with spin-down power is compatible with the large dispersion seen in the LAT data. We took into account the difference in the LAT flux sensitivity for detecting pulsed emission from radio-selected pulsars and for blind periodicity searches. The use of the two different sensitivity maps explained the almost equal numbers of radio-loud and radio-quiet pulsars found by the LAT. For all models, we found that the $\gamma$-ray visibility horizon extends to comparable distances for radio-loud and radio-quiet pulsars as a function of $\dot{E}$, from 6 to 8 kpc at the highest powers down to 2 kpc for the least energetic LAT pulsars. The radio visibility horizon compares well with the $\gamma$-ray horizon at high $\dot{E}$, but it extends to much larger distances for less energetic pulsars, except for the rapidly evolving OG case, for which the pulsars with $\dot{E} \lesssim 3 \times 10^{27}$ W put 58\% of their spin-down power into $\gamma$-rays and remain visible to 5 kpc. All the $\gamma$-ray models fail to reproduce the high probability of detecting both the radio and $\gamma$-ray beams at high $\dot{E}$.
The OPC prediction for the fraction of $\gamma$-loud pulsars among the radio pulsars is consistent with the radio and LAT data, but the model significantly under-predicts the fraction of $\gamma$-ray pulsars that are radio-loud. The SG model also over-predicts the number of radio-quiet $\gamma$-ray pulsars. These discrepancies may indicate that pulsar radio beams are larger than those we have modeled, either because they are intrinsically wider or because the emission occurs at higher altitude \citep{man05}, or both. The same conclusion has been argued by \cite{kj07}, who postulate emission over a wide range of emission heights rather than over a wide range of beam longitudes, and more recently by \cite{rmh10} and \cite{wr11} in the light of the \emph{Fermi} observations. The beaming factor $f_\mathrm{\Omega}$ hardly evolves with $\dot{E}$ in the SG case. It is well constrained around 1 for both radio-loud and radio-quiet pulsars. In the OPC case, $f_\mathrm{\Omega}\sim0.8$ for radio-loud objects with $\dot{E}> 10^{28}$ W, and it decreases by a factor of 2 for the less energetic objects detected by the LAT. In the OG case, $f_\mathrm{\Omega}$ decreases from 1 to 0.3 with $\dot{E}$ decreasing down to $10^{28}$ W and the evolution flattens around $\sim 0.2$ for lower powers. In all the models, the beaming factor of radio-quiet pulsars follows the average trend found for radio-loud pulsars, but with a large dispersion that spans 1 or 2 orders of magnitude. The classical outer-gap model (OG) fails to explain many of the most important pulsar population characteristics, such as the spin-down power distribution and luminosity evolution, whereas the outer-gap alternative (OPC), which is based on a simple scaling of the gap width with $\dot{E}^{-1/2}$, provides the best agreement between model predictions and data, as concluded by \cite{wr11}. This agreement relies on the very narrow gaps assumed in the OPC case. They are 10 to 100 times thinner than the values obtained for the SG for the same spin-down power, so the $\gamma$-ray luminosity is concentrated in thin and wide beams along the edge of the open magnetosphere. The OG model predicts a stronger luminosity evolution because it uses the polar cap heating by the returning particles to close the gap. The stronger evolution driven by this feedback is apparently not supported by the LAT data. \cite{twc11} studied the evolution of the two-layer OG luminosity as a function of $\dot{E}$. Their result is consistent with the one we plot in Figure \ref{NvisA_large_effi1p012p01p00p5_bldhor_trueLumvsEdot} for the OG model. The less pronounced dispersion observed in \cite{twc11} is due to the choice of $f_{\Omega}=1$ for all the pulsars. All models studied here significantly under-predict the number of visible $\gamma$-ray pulsars seen at high $\dot{E}$. This inconsistency does not depend on the modelling of the $\gamma$-ray and radio visibility thresholds. The discrepancy with the observations is significant despite our choice of birth distributions skewed to young energetic pulsars, at slight variance with the constraints imposed by the total radio and $\gamma$-ray pulsar sample observed. The fact that the four models share this discrepancy despite having different $\gamma$-ray luminosity evolutions and different beam patterns suggests that its cause lies elsewhere. Concentrating the birth location in the inner Galaxy lessened but did not resolve the discrepancy. Further increasing the number of energetic pulsars near the Sun would conflict with the observed pulsar distances.
The estimate of the visibility threshold in radio or $\gamma$-ray flux is not the cause, since all models over-predict the number of older, fainter, visible objects. Taken together, the present results suggest that the observations require rather luminous albeit thin gaps in the magnetospheres of young pulsars. It will be a challenge for models to match this behaviour. The impact of magnetic alignment with age \citep{ycbb10}, of azimuthal variations of the accelerating field, or of different braking indices for the pulsar spin-down may be important; these effects will be included in future population studies to explore the origin of the scarcity of young, energetic $\gamma$-ray pulsars in the model predictions.
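For reference, the beaming factor used throughout converts an observed phase-averaged energy flux into a luminosity; in the convention of \cite{wrwj09} it is defined from the simulated emission pattern $F(\alpha;\zeta,\phi)$ of a pulsar with magnetic inclination $\alpha$ viewed at Earth angle $\zeta_E$,
\[
f_{\Omega}(\alpha,\zeta_{E}) = \frac{\int F(\alpha;\zeta,\phi)\,\sin\zeta\,\mathrm{d}\zeta\,\mathrm{d}\phi}{2\int F(\alpha;\zeta_{E},\phi)\,\mathrm{d}\phi},
\qquad
L_{\gamma} = 4\pi f_{\Omega}\,F_{\rm obs}\,D^{2},
\]
so that $f_{\Omega}=1$ corresponds to emission distributed isotropically over the sky.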
| 12
| 6
|
1206.5634
|
1206
|
1206.5620_arXiv.txt
|
It is believed that turbulence may have a significant impact on star formation and the dynamics and evolution of the molecular clouds in which this occurs. It is also known that non-ideal magnetohydrodynamic effects influence the nature of this turbulence. We present the results of a numerical study of 4-fluid MHD turbulence in which the dynamics of electrons, ions, charged dust grains, and neutrals, together with their interactions, are followed. The parameters describing the fluid being simulated are based directly on observations of molecular clouds. We find that the velocity and magnetic field power spectra are strongly influenced by multifluid effects on length-scales at least as large as 0.05\,pc. The density PDFs of the various species in the system are all found to be close to log-normal, with charged species having a slightly less platykurtic (flattened) distribution than the neutrals. We find that the introduction of multifluid effects does not significantly alter the structure functions of the centroid velocity increment.
|
It is generally believed that molecular clouds are turbulent (see the reviews of \citealt{mac04,elm04}). Observationally \citep[e.g.][]{larson81, brunt10} this turbulence appears to be highly supersonic, with RMS Mach numbers of up to 20 or more. It is also thought that the Alfv\'enic Mach number is not significantly less than one, and may be much greater \citep[e.g.][]{heyer12}. The amplitude of this turbulence makes it likely to be an important ingredient in both the dynamics of molecular clouds and the process of star formation \citep{elm93,kle03}, and hence developing an understanding of this phenomenon is of considerable interest. Many authors have addressed the issue of MHD turbulence in the context of molecular clouds using both the ideal MHD approximation \citep{maclow98, maclow99, ost01, ves03, gus06, glover07, lem08, lem09, brunt10a, brunt10b, price11} and, more recently, various flavours of non-ideal MHD \citep{ois06, li08, kud08, dos09, dos11}. Three-dimensional MHD turbulence involves the transfer of energy from an energy injection scale to ever smaller scales until the dissipation length scale of the system is reached. Given that in molecular clouds the lengthscales at which multifluid effects become important are much larger than the viscous lengthscale, it is inevitable that these effects will have an impact on the energy cascade. When ambipolar diffusion is included in MHD turbulence simulations, it has been found to cause greater temporal variability in the turbulence statistics \citep{ois06, li08, kud08}. Although clearly of lesser significance \citep{wardle99, dos09, dos11}, the Hall effect is capable of inducing topological changes in the magnetic field which are quite distinct from any influence caused by ambipolar diffusion. In particular, it introduces a handedness into the flow which is of particular interest when considering the conversion of kinetic energy into magnetic energy. Researchers working on reconnection and the solar wind have studied the Hall effect in the context of turbulence and found that, although the overall decay rate appears not to be affected, the usual coincidence of the magnetic and velocity fields seen in ideal MHD does not occur at small scales \citep{mat03, min06, ser07}. \citet[hereafter Paper I]{dos09} performed simulations of decaying, non-ideal MHD, molecular cloud turbulence incorporating parallel resistivity, the Hall effect and ambipolar diffusion. They found that the Hall effect has surprisingly little impact on the behaviour of the turbulence: it does not affect the energy decay at all and has very limited impact on the power spectra of any of the dynamical variables, with the exception of the magnetic field at very short length-scales. \citet[hereafter Paper II]{dos11} extended the results of Paper I to properly multifluid MHD turbulent decay. A system of three fluids, one neutral and two charged species, was simulated and, intriguingly, it appears that for the chosen system a full multifluid simulation is unnecessary and simple non-ideal MHD with temporally and spatially constant resistivities can give quite reliable results. In this paper we use the {\sevensize HYDRA} code \citep{osd06, osd07} to investigate {\em driven}, isothermal, multifluid MHD turbulence in a system consisting of four fluids (one neutral and three charged). The aim of this work is to determine the influence of multifluid effects on the behaviour of driven turbulence in molecular clouds.
An added benefit of our properly multifluid approach is that we can self-consistently deduce the behaviour of the charged species and, in principle, make links with observations such as those of \citet{lh08}. The structure of this paper is as follows: in section \ref{sec:model} we outline the equations and numerical method; in section \ref{sec:initial-conditions} we specify our parameters, initial conditions and how we drive the turbulence in our simulations; section \ref{sec:resolution-study} contains the results of a resolution study demonstrating that the results we present are reasonably well converged; section \ref{sec:results} contains the results and analysis of our simulations and finally we draw conclusions from our work in section \ref{sec:conclusions}. We note here that we omit a discussion of the differences in line widths between the charged and the neutral species. The results associated with these differences require detailed discussion in their own right and are the subject of a forthcoming paper.
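For readers who wish to reproduce the kind of diagnostics discussed in this paper, the following is a minimal sketch of how an isotropic (shell-averaged) power spectrum is typically computed from a gridded simulation snapshot. This is our own illustration in Python, not the analysis pipeline used with {\sevensize HYDRA}; the random field below merely stands in for a real velocity component.
\begin{verbatim}
import numpy as np

def shell_averaged_power_spectrum(field, n_bins=32):
    """Total power in spherical shells of wavenumber for a 3D field
    on a cubic, periodic grid."""
    n = field.shape[0]
    power = np.abs(np.fft.fftn(field) / field.size) ** 2
    k = np.fft.fftfreq(n) * n                     # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bins = np.linspace(0.5, n // 2, n_bins + 1)   # out to the Nyquist scale
    pk, _ = np.histogram(kmag, bins=bins, weights=power)
    return 0.5 * (bins[1:] + bins[:-1]), pk

# Illustration on Gaussian noise; a real snapshot (e.g. the neutral
# velocity component) would be loaded in place of this array.
rng = np.random.default_rng(0)
k_centres, pk = shell_averaged_power_spectrum(rng.standard_normal((64,) * 3))
\end{verbatim}
Comparing such spectra for the neutral and charged species, snapshot by snapshot, is the basic operation behind the multifluid-versus-ideal comparisons reported below.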
|
\label{sec:conclusions} We have presented the results of simulations of driven, 4-fluid MHD turbulence in molecular clouds. The parameters chosen were matched to the observations of molecular clouds reported in \citet{cru99}. We performed a resolution study to ensure that the results presented, in terms of both the power spectra and the RMS Mach number, were not strongly influenced by resolution effects. We found that the RMS mass-weighted Mach number was not significantly decreased by the inclusion of multifluid effects, in spite of multifluid effects providing an effective pathway for the removal of magnetic energy. The overall energy dissipation, then, appears not to be significantly affected by multifluid effects. This arises because the majority of the energy in a driven, turbulent system with the properties of a molecular cloud resides in kinetic energy, and so an enhanced removal of magnetic energy does not significantly affect the global energetics of the system. We found that both the velocity power spectra of the neutrals and the magnetic field power spectra were strongly influenced by multifluid effects at all length-scales up to the driving scale (0.05\,pc). In multifluid MHD, the velocity power spectrum of the neutrals is found to have considerably less structure at all length-scales than those of the charged species. The electron and ion fluids behave almost identically to each other, while the dust displays some slight differences from the other charged species. This latter point arises largely from the requirement of local charge neutrality and the fact that the electrons and ions are the dominant charge carriers in the system. We also find, at very short length-scales, some signs of the Hall effect in the power spectra of the charged species. Thus we find, in common with the results presented in Papers I and II, that multifluid effects are important up to quite large length-scales. This result appears to be in conflict with the observations of \cite{lh08}, and we deal with this apparent contradiction in a forthcoming paper. The differences between the power spectra of the bulk densities in the multifluid and ideal MHD systems are not as dramatic as those in the other variables, although there is somewhat less power in the distribution of the neutral density in the multifluid simulation than in the bulk density of the ideal MHD one. Interestingly, there is {\em less} power in the spectrum of the charged species' density than in that of the neutral species in the multifluid system; this is the opposite of what is found for the velocity power spectra. For all species, and in all cases, the density PDFs are found to be very close to log-normal. The neutral species in the multifluid simulation has a slightly less platykurtic (flattened) distribution than that in the ideal MHD case. The charged densities in the multifluid simulation have, in turn, less platykurtic distributions than the neutrals in that simulation, with the dust actually having a leptokurtic distribution. Each of these results is explained in terms of the ambipolar diffusion experienced by the multifluid system, and the fact that the charged species are more directly influenced by the magnetic field than the neutrals. Finally, we find that multifluid effects do not significantly influence the centroid velocity increment structure functions.
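The platykurtic/leptokurtic classification used above reduces to the sign of the excess kurtosis of the logarithmic density field. A minimal sketch of that measurement (our illustration; the array \texttt{rho} is a stand-in for a simulation density cube):
\begin{verbatim}
import numpy as np
from scipy.stats import kurtosis, skew

def log_density_shape(rho):
    """Skewness and excess (Fisher) kurtosis of s = ln(rho/<rho>).
    A perfect log-normal PDF gives zero for both; negative kurtosis
    is platykurtic (flattened), positive is leptokurtic (peaked)."""
    s = np.log(rho / rho.mean())
    return skew(s, axis=None), kurtosis(s, fisher=True, axis=None)

# Sanity check: an exactly log-normal field returns values near zero.
rng = np.random.default_rng(1)
rho = np.exp(rng.normal(0.0, 1.0, size=(64, 64, 64)))
print(log_density_shape(rho))
\end{verbatim}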
| 12
| 6
|
1206.5620
|
1206
|
1206.0719_arXiv.txt
|
{} {We report first results of an investigation of the tidally disturbed galaxy system AM\,546-324, whose two principal galaxies 2MFGC 04711 and AM\,0546-324 (NED02) were previously classified as interacting doubles. This system was selected to study the interaction of ellipticals in a moderately dense environment. We provide spectral characteristics of the system and present an observational study of the effects of the interaction on the morphology, kinematics, and stellar populations of these galaxies. } {The study is based on long-slit spectrophotometric data in the range $\sim$ 4500-8000 \AA\ obtained with the Gemini Multi-Object Spectrograph at Gemini South (GMOS-S). We have used the stellar population synthesis code STARLIGHT to investigate the star formation history of these galaxies. The Gemini/GMOS-S direct r-G0303 broad-band pointing image was used to enhance and study fine morphological structures. The main absorption lines in the spectra were used to determine the radial velocities. } {Along the whole slit, the spectra of the Shadowy galaxy (discovered by us), 2MFGC 04711, and AM\,0546-324 (NED02) resemble those of early-type galaxies. We estimated redshifts of z = 0.0696, z = 0.0693 and z = 0.0718, corresponding to heliocentric velocities of 20\,141 km s$^{-1}$, 20\,057 km s$^{-1}$, and 20\,754 km s$^{-1}$ for the Shadowy galaxy, 2MFGC 04711 and AM\,0546-324 (NED02), respectively. The central regions of 2MFGC 04711 and AM\,0546-324 (NED02) are completely dominated by an old stellar population of \mbox{$2\times10^{9} <\rm t \leq 13\times10^{9}$ yr} and do not show any spatial variation in the contribution of the stellar-population components.} {The observed rotation profiles of 2MFGC 04711 and AM\,0546-324 (NED02) can be adequately interpreted as an ongoing stage of interaction with the Shadowy galaxy, which sits at the center of the local gravitational potential well of the system. The three galaxies are all early-type. The extended and smooth distribution of material in the Shadowy galaxy makes this system a good laboratory for studying direct observational signatures of tidal friction in action. }
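As a consistency note, the quoted heliocentric velocities follow from the measured redshifts via the special-relativistic Doppler formula rather than the low-redshift approximation $v = cz$; a quick check (our own arithmetic, not code from the paper):
\begin{verbatim}
C_KMS = 299792.458  # speed of light, km/s

def helio_velocity(z):
    """v = c [ (1+z)^2 - 1 ] / [ (1+z)^2 + 1 ]  (relativistic Doppler)."""
    q = (1.0 + z) ** 2
    return C_KMS * (q - 1.0) / (q + 1.0)

for name, z in [("Shadowy galaxy", 0.0696),
                ("2MFGC 04711", 0.0693),
                ("AM 0546-324 (NED02)", 0.0718)]:
    print(name, round(helio_velocity(z)))  # 20141, 20057, 20754 km/s
\end{verbatim}
This reproduces the quoted 20\,141, 20\,057 and 20\,754 km s$^{-1}$ to within $\sim$1 km s$^{-1}$.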
|
Galaxy interactions and mergers are fundamentally important in the formation and evolution of galaxies. Hierarchical models of galaxy formation and various lines of observational evidence suggest that elliptical galaxies are, like disk galaxies, embedded in massive dark-matter halos. Lenticular and elliptical galaxies, together called early-type galaxies, have been thought to be the end point of galaxy evolution. These systems show uniformly red optical colors and display a tight red sequence in optical color-magnitude diagrams (e.g. Baldry et al. \cite{b2004}). Their color separation from star-forming galaxies is thought to be due to a lack of fuel for star formation, which must have been consumed, destroyed or removed on a reasonably short timescale (e.g. Faber et al. \cite{f2007}). In addition, numerical simulations have shown that the global characteristics of the remnants of binary mergers of two equal-mass spiral galaxies, called major mergers, resemble those of early-type galaxies (Toomre \& Toomre \cite{tt1972}; Hernquist \& Barnes \cite{hb1991}; Barnes \cite{b1992}; Mihos et al. \cite{m1995}; Springel \cite{s2000}; Naab \& Burkert \cite{nb2003}; Bournaud, Jog \& Combes \cite{bjc2005}). Remnants with properties similar to early-type objects can also be produced through a multiple minor-merger process in which the total accreted mass is at least half of the initial mass of the main progenitor (Weil \& Hernquist \cite{wh1994}, \cite{wh1996}; Bournaud, Jog \& Combes \cite{bjc2007}). This scenario of early-type formation through accretion and merging fits well within the framework of the hierarchical assembly of galaxies provided by cold dark matter cosmology. Interactions between early-type galaxies are less spectacular than those observed in spiral galaxies. While impressive tidal tails, plumes, bridges, and shells are observed in tidally disturbed spirals, the effects of interaction are less easily recognized in elliptical galaxies, since they have little gas and dust and are dominated essentially by old stellar populations. Nevertheless, evidence for recent merger-driven star formation (Rogers et al. \cite{r09}) and morphological disturbances such as shells, ripples, and rings have been observed in early-type galaxies (Kaviraj et al. \cite{kv10}; Wenderoth et al. \cite{wen2011}). The peculiar Ring Galaxies (pRGs) show a wide variety of ring and bulge morphologies and were classified by Fa\'{u}ndez-Abans \& de Oliveira-Abans (\cite{foa98a}) into five families, following the general behavior of galaxy-ring structures. Within these families, eight morphological subdivisions are highlighted; one of them is a basic structure called Solitaire. The pRG Solitaire is described as an object with the bulge on the ring, or very close to it, resembling a one-diamond finger ring (single knotted ring). In these objects the ring generally looks smooth and thinner on the side opposite the bulge (as in the archetypes \object{FM\,188-15/NED02}, \object{AM\,0436-472/NED01}, \object{ESO 202-IG45/NED01} and \object{ESO 303-IG11/NED01}). Although the statistics are as yet poor, the Solitaire type is probably produced by the interaction of elliptical-like galaxies and/or gas-poor S0 galaxies with an elliptical companion. In a forthcoming paper, a list of Solitaire-type pRGs together with a preliminary study and statistics will be presented. 
There are no reports in the literature yet of Solitaires in early stages of formation, so a few pairs of galaxies were selected as early-stage candidates (see one of them in Wenderoth et al. \cite{wen2011}). Even though one of the selected candidates, \mbox{AM\,546-324}, originally extracted from Arp \& Madore's catalog (Arp \& Madore \cite{am1977}, \cite{am1986}; category 2, interacting doubles), seems to be morphologically different from an expected early-stage Solitaire, it is remarkable enough to be studied as an almost isolated ``spherical/elliptical and S0'' interacting system in a centrally sparse cluster of galaxies. In this paper, we report new results for the tidally disturbed galaxy system \object{AM\,546-324} based on long-slit spectrophotometric observations obtained at the Gemini Observatory in Chile. Values of \mbox{$H_{\rm o}$ = 70 km s$^{-1}{\rm Mpc}^{-1}$}, $\Omega_{matter} = 0.27$ and $\Omega_{vacuum} = 0.73$ have been adopted throughout this work (Freedman et al. \cite{f2001}; Astier et al. \cite{a2006}; Spergel et al. \cite{s2003}).
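With the adopted cosmology, the distances involved follow directly; as an illustrative sketch (the paper does not specify the code it used for distances), using the \texttt{astropy} package:
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in the text: H0 = 70 km/s/Mpc, Omega_m = 0.27
# (flatness then fixes Omega_Lambda = 0.73).
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

z = 0.0696  # redshift of the Shadowy galaxy
print(cosmo.luminosity_distance(z))    # ~310 Mpc
print(cosmo.kpc_proper_per_arcmin(z))  # physical scale per arcmin
\end{verbatim}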
|
We reported optical spectroscopic observations of the AM\,0546-324 system, which forms the core of the Abell S0546 cluster of galaxies. Morphological substructures were found in an enhanced r-band image of this system, suggesting that the members are presently undergoing early stages of tidal interaction. Below is a summary of our main results: \begin{itemize} \item The AM\,0546-324 system is composed of four main galaxies: 2MFGC 04711, AM\,0546-324 (NED02), the K galaxy, and the one we have named the S galaxy. Adopting the S galaxy as the center of this gravitationally bound system, the radial velocity differences between the quoted members vary from 43 to 646 km\,s$^{-1}$. \item Within 1.2 arcmin of AM\,0546-324 there are a few relevant field companions, such as the C galaxy to the SE and a new Polar Ring galaxy candidate to the SW. Several dwarf objects in and surrounding this system are close enough to be candidate members, but no published redshifts for these objects were found in the literature. \item The S galaxy seems to be large enough to envelop all the principal companions with its smooth distribution of material. \item The spectra of 2MFGC 04711, NED02, the S and the C galaxies resemble those of early-type galaxies, and no emission lines were detected. No star-forming regions and no nuclear ionization sources were detected in the observed regions of the four main galaxies. \item The calculated heliocentric radial velocity for the S galaxy is 20\,141 $\pm$ 10 km\,s$^{-1}$ ({\it z} = 0.0696), which agrees with the radial velocity of the Abell S0546 cluster \mbox{(cz = 20\,893 km\,s$^{-1}$)}; for 2MFGC 04711 it is 20\,057 $\pm$ 10 km\,s$^{-1}$ ({\it z} = 0.0693), and for NED02 it is 20\,754 $\pm$ 10 km\,s$^{-1}$ \mbox{({\it z} = 0.0718)}, both in agreement with values quoted in NED. \item The C galaxy, with cz = 19\,834 $\pm$ 40 km\,s$^{-1}$ ({\it z} = 0.0685), and the K galaxy, with cz = 20\,197 km\,s$^{-1}$ ({\it z} = 0.0698), are both bound members of the AM\,0546-324 system. \item From the calculated lower limit on the mass, both 2MFGC 04711 and NED02 have $\sim$31\% of the mass of the S galaxy, and the C galaxy almost 7\%. \item The rotation profiles of 2MFGC 04711 and NED02 are typical of tidal coupling in ellipticals with no net rotation, which results in a U-shaped rotation profile with the galaxy core at the base of the U. Both galaxies are gravitationally coupled directly to the proposed central object of the cluster, the S galaxy. The U-shaped structure is a direct observational signature of tidal friction with the extended material of the S galaxy. \item Internally, the no-net-rotation core in the U-shaped rotation profile of both 2MFGC 04711 and NED02 seems to be slightly perturbed by the tidal interaction with the S galaxy, which lies at the center of the local gravitational potential well of this system. \item 2MFGC 04711 and AM\,0546-324 (NED02) are completely dominated by an old stellar population with ages in the range $2\times10^{9} <\rm t \leq 13\times10^{9}$ yr. \end{itemize} In summary, AM\,0546-324 is a system in which signatures of tidal perturbation and friction are clearly visible. The deformation detected in the 2MFGC 04711, NED02, and K galaxies is due to the large tidal forces exerted principally by the S galaxy (cf. the deformation and dynamical friction between two elliptical galaxies; Prugniel \& Combes \cite{pc1992}). Simultaneously, the S galaxy is perturbed by its interaction with all the principal objects of the system. 
Two questions still remain to be answered: (1) is the S galaxy environment the starting point for the birth of a future cD galaxy? and (2) what is the origin of the S galaxy?
| 12
| 6
|
1206.0719
|
1206
|
1206.0005_arXiv.txt
|
The afterglows of gamma-ray bursts (GRBs) show more soft X-ray absorption than expected from the foreground gas column in the Galaxy. While the redshift of the absorption cannot in general be constrained from current X-ray observations, it has been assumed that the absorption is due to metals in the host galaxy of the GRB. The large sample of X-ray afterglows and redshifts now available allows the construction of statistically meaningful distributions of the metal column densities. We construct such a sample and show, as found in previous studies, that the typical absorbing column density ($N_{\rm H_X}$) increases substantially with redshift, with few high column density objects found at low to moderate redshifts. We show, however, that when highly extinguished bursts are included in the sample, using redshifts from their host galaxies, high column density sources are also found at low to moderate redshift. We infer from individual objects in the sample, and from observations of blazars, that the increase in column density with redshift is unlikely to be related to metals in the intergalactic medium or in intervening absorbers. Instead we show that the apparent increase with redshift is primarily due to a dust extinction bias: GRBs with high X-ray absorption column densities found at $z\lesssim4$ typically have very high dust extinction column densities, while those found at the highest redshifts do not. It is unclear how such a strongly evolving $N_{\rm H_X}/A_V$ ratio would arise, and, based on current data, it remains a puzzle.
|
} While it is now generally accepted that long-duration gamma-ray bursts (GRBs) originate primarily in the explosions of massive stars, owing to their association with type Ic supernovae, the precise nature of the progenitors and of the environment in which the burst occurs is not known. Most progress to date has been made through afterglow observations, which provide redshifts, emission mechanisms and information on the host galaxies. However, the X-ray afterglows are still poorly understood. Indeed, one of the outstanding puzzles in understanding long GRBs is the nature and origin of the soft X-ray absorption observed in the majority of afterglows. Most GRB afterglows show evidence of absorption at the soft end of the X-ray spectrum significantly in excess of what is expected from the Galactic gas column. This has been known statistically from samples since the \emph{BeppoSAX} era \citep{2001ApJ...549L.209G}, though it was first observed at high confidence in a single spectrum with XMM-\emph{Newton} \citep{2002A&A...395L..41W}. It has generally been assumed from the beginning that the soft X-ray opacity is due to photoelectric absorption by the inner shells of atoms in a column of absorbing material -- primarily O, Si, S, Fe and He -- in the host galaxy of the GRB, in a fashion directly comparable to the X-ray absorption observed due to gas in the Galaxy. However, it was quickly realised that the absorption is not directly analogous to Galactic soft X-ray absorption. The X-ray absorption in the Galaxy is strongly correlated with the dust and \ion{H}{1} column densities \citep[see][and references therein]{2011A&A...533A..16W}. For GRBs, however, the correlation with dust extinction was not clear, and if it existed, the implied dust-to-metals ratio was certainly at least an order of magnitude lower than in the Local Group \citep{2011A&A...532A.143Z,2010MNRAS.401.2773S}. There was also no obvious correlation with the \ion{H}{1} column densities \citep{2007ApJ...660L.101W,2010MNRAS.402.2429C,2011A&A...525A.113S}. More recently a further puzzle was added: \citet{2010MNRAS.402.2429C} showed that the observed X-ray absorption rises with redshift, with the highest column density objects ($\log{N_{\rm H_X}}\sim23$) appearing at the highest redshifts and no comparably high column densities occurring at low redshifts ($\log{N_{\rm H_X}}\lesssim22$ at $z<1.5$). This result was particularly puzzling since the X-ray absorption measures the total metal column density, and the gas metallicity is expected to decrease, rather than increase, towards high redshift. It was noted by \citet{2011ApJ...734...26B} that the \emph{observed} opacity at low energies, while high at low redshift, tends toward an asymptotic value at $z\gtrsim2$. This was interpreted as possible evidence for the detection of absorption by a diffuse, highly-ionised intergalactic medium \citep{2011ApJ...734...26B}. Such an interpretation has the virtue that it would solve both the lack of correlation observed between the Ly$\alpha$-determined \ion{H}{1} column densities and the X-ray column densities in GRB afterglows, and the very low apparent dust-to-metals ratios. Finally, a recent investigation of a largely redshift-complete sample of bright GRBs \citep{2012MNRAS.421.1697C} found a mild, statistically insignificant increase of X-ray absorption with redshift and interpreted it as due to increasing absorption by intervening systems in higher-redshift GRBs. 
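To see why a fixed observed soft X-ray opacity maps onto a rapidly increasing intrinsic column with redshift, note that the absorbing material sees photons at rest-frame energy $(1+z)E_{\rm obs}$, and the photoelectric cross-section falls steeply with energy, roughly as $\sigma \propto E^{-2.6}$ below $\sim$10 keV. A minimal sketch under that power-law approximation (our illustration; the analyses discussed above use full absorption models, not this shortcut):
\begin{verbatim}
def column_boost(z, gamma=2.6):
    """Factor by which the intrinsic N_H at redshift z must exceed
    the equivalent z = 0 column to give the same observed soft X-ray
    opacity, assuming sigma(E) ~ E**(-gamma): the cross-section drops
    by (1+z)**(-gamma), so the column must rise by (1+z)**gamma."""
    return (1.0 + z) ** gamma

for z in (0.5, 1.0, 2.0, 4.0):
    print(z, round(column_boost(z), 1))
# e.g. at z = 4 the same observed absorption implies an intrinsic
# column ~66x larger than at z = 0.
\end{verbatim}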
In this paper we address the nature of the X-ray absorption in GRB afterglows: we investigate the apparent increase of absorption with redshift, the claimed possible detection of the warm-hot intergalactic medium, and the role of dust extinction. In section~\ref{observations} we describe the data used and our analysis method. In section~\ref{results} we present the results of our analysis. Section~\ref{discussion} contains a discussion of the interpretation of these results.
| 12
| 6
|
1206.0005
|
|
1206
|
1206.2693_arXiv.txt
|
We attempt to estimate the uncertainty in the constraints on the spin-independent dark matter-nucleon cross section due to our lack of knowledge of the dark matter phase space in the Galaxy. We fit the density profile of dark matter before investigating the possible solutions of the Jeans equation compatible with those fits, in order to understand what velocity dispersions we might expect at the solar radius. We take into account the possibility of non-Maxwellian velocity distributions and the possible presence of a dark disk. Combining all these effects, we still find that the uncertainty in the interpretation of direct detection experiments for high-mass ($>$100 GeV) dark matter candidates is less than an order of magnitude in cross section.
|
Many separate pieces of evidence point towards the existence of approximately 7 times as much dark matter as baryonic matter in the Universe, for example galactic rotation curves \cite{rubin}, clusters of galaxies \cite{zwicky}, lensing studies \cite{lensing} and the growth of structure \cite{wmap}. At the time of writing, though, we still do not know the precise particle nature of the dark matter. A leading class of candidates is weakly interacting massive particles (WIMPs), which are charged under the standard model weak force. These models are favoured because they lead to a thermal relic abundance roughly compatible with the density of dark matter required to explain astrophysical observations, without fine tuning of the dark matter mass. There are three ways of looking for WIMP dark matter particles. The first is to create them directly in colliders, and researchers at the LHC are working hard to do this \cite{lhcdm}. The second is to look for the standard model particles created when WIMP dark matter annihilates with itself in space, producing high energy gamma rays and cosmic rays, research that is being spearheaded by the Fermi gamma-ray telescope. The third is to look for the elastic collisions of dark matter particles with nuclei in purpose-built underground detectors. In order for different dark matter direct detection experiments to compare constraints with each other, they have to assume a particular halo model, and the one generally used is known as the Standard Halo Model (SHM). This model is one of the simplest possible solutions of the Jeans equation, but, as we shall discuss in more detail, we do not expect the density profile or the velocity distribution of the dark matter in a real galaxy like our own to follow it; in fact there is an uncertainty in the local density and a large amount of uncertainty in the velocity distribution of dark matter. There are many different analyses which have taken this into account: large combined-analysis data packages contain various assumptions which take {\it some aspects} of this uncertainty into account. Those uncertainties are usually only presented in large combined analyses that fold in the errors on all other quantities, including those from particle physics, and that constrain particular particle physics models such as various versions of the constrained minimal supersymmetric standard model. They do not necessarily present how the constraint on the cross section changes as a function of mass. We would like to estimate the error on the cross section in an independent way, such that it can be understood and used by an independent researcher who comes up with a theory that makes a particular prediction for the dark matter-nucleon scattering cross section. The dark matter direct detection experiment with the leading constraint on the dark matter-nucleon cross section at the time of writing is the XENON-100 experiment, the constraint for dark matter particles of mass $M_{dm}\sim 200$ GeV being around $X\times 10^{-44}$cm$^2$ \cite{XENON100}. This actually places constraints on a class of SUSY models known as the focus point. Many collider phenomenology theorists are interested in a realistic estimate of the errors on this result, independent of any particular theoretical framework. The experimental errors have already been quantified by the XENON team, so what remains is the uncertainty due to astrophysical unknowns. 
First we make the simplifying assumption that the bulk of the dark matter halo is spherical. This neglects several expected features of the distribution of dark matter, such as the oblate/prolate nature of the halo \cite{jesper} and axisymmetry \cite{GFBaxi}, although we do not expect these effects to change the conclusions a great deal, since the constraints on the local dark matter density are based mainly on observations in the inner part of the Galaxy, where baryons will have increased the symmetry of the halo. We will consider the possible presence and effect of a dark disk \cite{read}. Unlike many previous analyses, we also focus on the whole mass regime rather than concentrating on the low-mass one. There is a great deal of uncertainty in the event rate in detectors at very low mass ($M_{dm}<15$ GeV) because the uncertainties in the velocity distribution of dark matter are most pronounced in the high-velocity tail, which is the only region of phase space that allows a low-mass dark matter particle to give rise to a signal in direct detection experiments. There are many papers attempting to explain the discrepancy between the DAMA annual modulation signal \cite{dama} and the XENON100 constraint, so we will not attempt to add to this debate on this occasion. We are simply interested in understanding what the latest constraints are really telling us, in terms of numbers, about the dark matter-nucleon cross section. In section \ref{sectiondir} we introduce the standard halo model and the equations for dark matter direct detection, and reconstruct our version of the XENON100 constraint. After that, in section \ref{density}, we discuss the constraints on, and reconstruction of, the density profile of the Milky Way Galaxy. This section is a poor man's version of the careful analysis of Catena and Ullio \cite{ulliocatena}, with some small differences: we are interested not only in the density of dark matter in the solar neighborhood, but also in the density profile as a function of radius, as this affects the solution of the Jeans equation. Section \ref{velocity} addresses these solutions and explains why this adds another level of uncertainty to the problem, as it changes the velocity dispersion in the solar system. We also discuss the effects of non-Gaussianity of the velocity distribution. In section \ref{results} we explain our procedure for combining the various astrophysical uncertainties and present results. In section \ref{darkdisk} we investigate the effect a dark disk has on the uncertainty in the signal, before finally, in section \ref{conclusions}, we reflect upon our analysis and results and suggest possible future studies. Any such estimate of the error associated with dark matter direct detection is somewhat subjective, since it involves estimating something that we cannot observe directly. The particular uncertainties that any researcher chooses to focus on might be slightly different (see e.g. \cite{mccabe}); this is our best attempt to estimate them.
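To make concrete how the velocity distribution enters, the differential rate in a detector is proportional to the mean inverse speed $\eta(v_{\rm min})$ of the local dark matter population; all of the astrophysical uncertainty discussed here propagates through this one function. Below is a minimal numerical sketch for the Standard Halo Model; the parameter values ($v_0 = 220$ km/s, $v_{\rm esc} = 544$ km/s, Earth speed 230 km/s) are conventional illustrative choices, not the fits derived in this paper.
\begin{verbatim}
import numpy as np

V0, VESC, VE = 220.0, 544.0, 230.0   # km/s; illustrative SHM values

def eta(v_min, n=4000):
    """Mean inverse speed <1/v> over v > v_min for a Maxwellian halo
    truncated at VESC and boosted into the Earth frame; the event
    rate dR/dE_R is proportional to this quantity."""
    v = np.linspace(1.0, VESC + VE, n)
    # Earth-frame speed distribution: the angular integral of the
    # truncated galactic-frame Maxwellian (up to normalisation).
    lo = np.minimum(np.abs(v - VE), VESC)
    hi = np.minimum(v + VE, VESC)
    f = v * (np.exp(-lo**2 / V0**2) - np.exp(-hi**2 / V0**2))
    f[np.abs(v - VE) > VESC] = 0.0
    f /= np.trapz(f, v)
    m = v >= v_min
    return np.trapz(f[m] / v[m], v[m])

for vmin in (100.0, 300.0, 500.0):   # km/s; vmin grows as M_dm falls
    print(vmin, eta(vmin))           # units of (km/s)^-1
\end{verbatim}
The steep fall of $\eta$ at large $v_{\rm min}$ is exactly why the high-velocity tail, and hence the low-mass regime, carries the largest uncertainty.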
|
} There are many different collaborations currently designing, building and running dark matter direct detection experiments. Unlike, for example, neutrinos from the Sun, dark matter is very far from thermal equilibrium, is non-relativistic, and owes its motion to the feeble gravitational force of all the matter in the local Universe as it makes its journey across the Galaxy. Following others, in this work we have tried to estimate the density of dark matter in the Galaxy and then used that density to make estimates of the possible velocities dark matter could have in the solar system. We have shown that uncertainties in both the density and the velocity distribution affect the rate of events in direct detection experiments, and that these uncertainties are largest at lower velocities. The purpose of this work is not to focus on the low-mass region but to investigate the errors due to astrophysical uncertainties on searches for high-mass dark matter particles. We find that for a smooth spherical halo, taking into account all the possible uncertainties in the density distribution and the Jeans equation, one can only increase the uncertainty in the bounds from direct detection experiments by a factor of a few. In order to try to maximise the uncertainty, we have included the presence of a dark disk, but have seen that, given that a very pronounced dark disk is incompatible with tracers of the local density of the Galaxy, this again can only increase the uncertainty by another factor of a few, leading to an overall uncertainty which is still less than an order of magnitude. We believe that these estimates are fairly robust and that the only way they could be very wrong is if the current understanding of the smoothness of the local distribution of dark matter is wildly incorrect. We have not included uncertainties in nuclear form factors, which would be an additional source of error, but our conclusion is that the astrophysical uncertainty in the direct detection rate for high-mass candidates is less than an order of magnitude.
| 12
| 6
|
1206.2693
|
1206
|
1206.2370_arXiv.txt
|
This paper describes Herschel observations of the nearby (8.5 pc) G5V multi-exoplanet host star 61 Vir at 70, 100, 160, 250, 350 and 500 $\mu$m, carried out as part of the DEBRIS survey. These observations reveal emission that is significantly extended out to a distance of $>15$ arcsec, with a morphology that can be fitted by a nearly edge-on ($77^\circ$ inclination), radially broad (from 30 AU out to at least 100 AU) debris disk of fractional luminosity $2.7 \times 10^{-5}$, with two additional (presumably unrelated) sources nearby that become more prominent at longer wavelengths. Chance alignment with a background object seen at 1.4 GHz provides potential for confusion; however, the star's 1.4 arcsec/year proper motion allows archival Spitzer 70 $\mu$m images to confirm that what we are interpreting as \textit{disk} emission really is circumstellar. Although the exact shape of the disk's inner edge is not well constrained, the region inside 30 AU must be significantly depleted in planetesimals. This is readily explained if there are additional planets outside those already known (i.e., in the 0.5-30 AU region), but is also consistent with collisional erosion. We also find tentative evidence that the presence of detectable debris around nearby stars correlates with the presence of the lowest-mass planets that are detectable in current radial velocity surveys. Out of an unbiased sample of the nearest 60 G stars, 11 are known to have planets, of which 6 (including 61 Vir) have planets that are all less massive than Saturn, and 4 of these have evidence for debris. The debris toward one of these planet hosts (HD 20794) is reported here for the first time. This fraction (4/6) is higher than that expected for nearby field stars (15\%), and implies that systems that form low-mass planets are also able to retain bright debris disks. We suggest that this correlation could arise because such planetary systems are dynamically stable and include regions that are populated with planetesimals during the formation process, where the planetesimals can remain unperturbed over Gyr timescales.
|
\label{s:intro} Main sequence stars, like the Sun, are often found to be orbited by circumstellar material that can be categorised into two groups, planets and debris, the latter of which comprises asteroids, comets and the dust derived from them (e.g., Wyatt 2008; Krivov 2010). Although there are 11 examples of nearby stars that are known to have both planets and debris (e.g., Moro-Mart\'{i}n et al. 2010), there is as yet no evidence for any correlation between the two types of material (Greaves et al. 2004; K\'{o}sp\'{a}l et al. 2009), and the properties of the debris disks around stars that have planets are not found to be significantly different from those of stars without known planets (Bryden et al. 2009). This is usually explained as a consequence of the spatial separation between the planets (which are typically found within a few AU) and the debris (which is typically found at tens of AU). Despite this spatial separation, it is still expected that outer debris can be affected by the gravitational perturbations of close-in planets (Mustill \& Wyatt 2009), and that such debris can have a significant dynamical effect on the evolution of interior planets, e.g., by promoting migration (Kirsh et al. 2009) or by triggering instabilities (Tsiganis et al. 2005; Levison et al. 2011). The delivery of debris to the inner planets may also be an important source of volatiles for the planets (e.g., Horner \& Jones 2010). Furthermore, it is reasonable to expect that the initial conditions in protoplanetary disks that favour the formation of certain types of inner planetary system might also result in (or exclude) specific types of debris disk (e.g., Wyatt, Clarke \& Greaves 2007; Raymond et al. 2012). Thus the search for any correlation between the two phenomena continues (e.g., Maldonado et al. 2012), so that light may be shed on the interaction between planets and debris, as well as on the formation mechanism of the two components. This paper describes observations carried out as part of the key programme DEBRIS (Disc Emission via a Bias-free Reconnaissance in the Infrared/Submillimetre) on the \textit{Herschel Space Observatory}\footnote{{\it Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.} (Pilbratt et al. 2010). DEBRIS is an unbiased survey searching for evidence of circumstellar dust at 100 and 160 $\mu$m toward the nearest $\sim 80$ stars of each spectral type A, F, G, K and M (see Phillips et al. 2010 for a description of the sample). The first results have already shown that Herschel observations have the potential to detect disks down to much fainter levels than previous observations, and moreover have the resolution to resolve the disks at far-IR wavelengths (Matthews et al. 2010; Churcher et al. 2011; Kennedy et al. 2012). Several of the stars in the sample are known planet hosts; others may be found to host planets in the future. As such, the survey is ideally suited to determining whether any correlation exists between planets and debris. Here we focus on Herschel observations of the star 61 Vir (HD 115617), a main sequence G5 star. At $8.55 \pm 0.02$ pc (van Leeuwen 2007), 61 Vir is the 8th nearest main sequence G star to the Sun. 
This star exemplifies the merits of an unbiased survey: it was relatively unremarkable at the time the survey was conceived, but has since been found by radial velocity surveys to host 3 planets at 0.05, 0.218 and 0.478 AU with minimum masses (i.e., $M_{\rm{pl}}\sin{i}$) of 5.1, 18.2 and 22.9 $M_\oplus$ respectively (Vogt et al. 2010), and has subsequently been the subject of several studies of the secular dynamics of its planetary system (Batygin \& Laughlin 2011; Greenberg \& van Laerhoven 2012) and of its formation mechanism (Hansen \& Murray 2012). Thus 61 Vir is one of the first of a growing number of systems around which only low-mass planets, which we define here as sub-Saturn mass, are known. Such low-mass planets have only been discovered recently, often in multiple-planet systems, either by high-precision radial velocity measurements (Lovis et al. 2006; Mayor et al. 2011) or by transit studies with Kepler (Borucki et al. 2011; Lissauer et al. 2011). Previous Spitzer observations showed that 61 Vir hosts a debris disk with a fractional luminosity of around $2 \times 10^{-5}$ (Bryden et al. 2006), but the location of the emission was uncertain, with estimates ranging from 8.3 AU (Trilling et al. 2008) and 4-25 AU (Lawler et al. 2009) to 96-195 AU (Tanner et al. 2009). Our observations confirm the existence of a bright debris disk, and moreover the resolution of the images permits the disk structure to be ascertained. This allows us to consider the relationship between the disk and the planetary system, and the implications for the system's formation and evolution. Given that there are other systems with debris in low-mass planetary systems (e.g., Beichman et al. 2011), we also consider whether this discovery heralds an emerging correlation between the presence of debris and low-mass planets (e.g., as predicted by Raymond et al. 2011). The layout of the paper is as follows. The observations are described in \S \ref{s:obs}, including both the Herschel observations and the archival Spitzer and radio observations that are needed to disentangle circumstellar from extragalactic emission. Modelling of the Herschel observations is presented in \S \ref{s:mod} to derive the radial distribution of the dust. We find that the known planets probably do not play a primary role in stirring the debris, and discuss the implications of the debris for the evolution of the planetary system in \S \ref{s:pl}. Finally, a statistical analysis of an unbiased sample of nearby stars is given in \S \ref{s:stat} to consider the possibility of a disk-planet correlation. Conclusions are presented in \S \ref{s:conc}.
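For context on the radial scales discussed in this paper, a blackbody grain in equilibrium at distance $r$ from a star of luminosity $L$ has $T \approx 278\,{\rm K}\,(L/L_\odot)^{1/4}(r/{\rm AU})^{-1/2}$. A minimal sketch, assuming $L \approx 0.8\,L_\odot$ for 61 Vir (a typical literature value for a nearby G5V dwarf, adopted here for illustration rather than taken from this paper):
\begin{verbatim}
def blackbody_temperature(r_au, l_star=0.8):
    """Equilibrium temperature (K) of a blackbody grain at r_au [AU]
    from a star of luminosity l_star [L_sun]."""
    return 278.3 * l_star ** 0.25 / r_au ** 0.5

for r in (30.0, 100.0):   # inner edge and outer extent of the disk model
    print(r, round(blackbody_temperature(r)))
# ~48 K at 30 AU and ~26 K at 100 AU: thermal emission peaking in the
# far-IR, consistent with a disk detected between 70 and 500 micron.
\end{verbatim}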
|
\label{s:conc} Observations from the DEBRIS Key Programme are presented that resolve the structure of the debris disk around the exoplanet host star 61 Vir (\S \ref{s:obs}). Modelling shows that the dust extends from 30 AU out to at least 100 AU in a nearly edge-on configuration (\S \ref{s:mod}). There is likely little interaction between the disk and the known planets, which are at $<0.5$ AU. The lack of planetesimals in the $<30$ AU region could be explained by the existence of planets in this region, but the depletion can also be explained by collisional erosion (\S \ref{s:pl}). Considering a sample of the nearest 60 G stars, there is an emerging trend that stars which, like 61 Vir, have only low-mass planets are more likely to have detectable debris (\S \ref{s:stat}). We attribute this trend to the fact that the formation processes that make low-mass planets are likely also to result in large quantities of distant debris.
| 12
| 6
|
1206.2370
|
1206
|
1206.6079_arXiv.txt
|
We use the photometric redshift method of Chakrabarti \& McKee (2008) to infer photometric redshifts of submillimeter galaxies with far-IR (FIR) $\it{Herschel}$ data \footnote{\textit{Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA} obtained as part of the PACS Evolutionary Probe (PEP) program. For the sample with spectroscopic redshifts, we demonstrate the validity of this method over a large range of redshifts ($4 \ga z \ga 0.3$) and luminosities, finding an average accuracy in $(1+z_{\rm phot})/(1+z_{\rm spec})$ of 10\%. This method is thus more accurate than other FIR photometric redshift methods. It differs from typical FIR photometric methods in that it derives redshifts from the light-to-gas mass ($L/M$) ratio of infrared-bright galaxies inferred from the FIR spectral energy distribution (SED), rather than from dust temperatures. Once the redshift is derived, we can determine physical properties of infrared-bright galaxies, including the temperature variation within the dust envelope, luminosity, mass, and surface density. We use data from the GOODS-S field to calculate the star formation rate density (SFRD) of sub-mm bright sources detected by AzTEC and PACS. The AzTEC-PACS sources, which have a threshold $850~\micron$ flux $\ga 5~\rm mJy$, contribute 15\% of the SFRD from all ULIRGs ($L_{\rm IR} \ga 10^{12} L_{\odot}$), and 3\% of the total SFRD at $z \sim 2$.
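For concreteness, the accuracy statistic quoted above is computed as follows (our illustration with placeholder redshift arrays, not the PEP measurements):
\begin{verbatim}
import numpy as np

def photoz_accuracy(z_phot, z_spec):
    """Mean and median of |(1 + z_phot)/(1 + z_spec) - 1|, the
    fractional accuracy in (1+z) quoted in the abstract."""
    frac = np.abs((1.0 + np.asarray(z_phot)) /
                  (1.0 + np.asarray(z_spec)) - 1.0)
    return frac.mean(), np.median(frac)

# Placeholder values purely to show the call; a mean of ~0.10 would
# correspond to the ~10% accuracy reported for the PEP sample.
z_spec = [0.5, 1.2, 2.3, 3.1, 4.0]
z_phot = [0.6, 1.1, 2.6, 2.8, 4.4]
print(photoz_accuracy(z_phot, z_spec))
\end{verbatim}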
|
Much of the energy emitted by the universe in its infancy came from dust-enshrouded luminous galaxies. The $\it{Herschel}$ Space Observatory has opened a new window onto this epoch, yielding unprecedented sensitivity in the far-infrared (Pilbratt et al. 2010), where dusty galaxies emit most of their radiation. The FIR SED allows us to probe the physical conditions of dusty sources. Chakrabarti \& McKee (2005; henceforth CM05) developed an analytic means of self-consistently solving the radiative transfer equation for spherically symmetric, centrally heated dusty sources. They derived a simple and intuitive form for the emergent SED that can be applied to infer the physical parameters of dust envelopes from the observed FIR SED. Chakrabarti \& McKee (2008; henceforth by CM08 we refer to the paper and by ``CM'' to CM08's photometric redshift method) applied the method of CM05 to fit observed FIR SEDs of ULIRGs and SMGs, and showed that accurate photometric redshifts could be inferred from the derived light-to-gas mass ($L/M$) ratio. CM08 demonstrated the accuracy of their method with a sample of SMGs with FIR data from Kovacs et al. (2006). This was the only FIR sample of SMGs available at the time, and constituted a sub-sample of the bright SMGs studied by Chapman et al. (2005). The new PACS (Poglitsch et al. 2010) observations of SMGs (Magnelli et al. 2012) provide an ideal sample for testing the CM method on a larger and more diverse sample. SMGs are high-redshift galaxies ($z \ga 0.3$) classified on the basis of their submillimeter flux. ``Classical'' SMGs have $F_{850~\micron} \ga 5~\rm mJy$, which corresponds to a luminosity of $\sim3\times10^{12}L_{\odot}$ at $z \sim 2$. ULIRGs are generally taken to be galaxies emitting $\ga 10^{12}L_{\odot}$ in the infrared (Soifer et al. 1984). CM08 noted that local ULIRGs have higher $L/M$ values than their high-redshift cousins, a point that we discuss further in \S 3. The galaxies in the Magnelli sample cover four blank fields (GOODS-N, GOODS-S, COSMOS, and Lockman Hole) as well as a number of lensing clusters. They span a range in luminosity of $7 \times 10^{13} L_{\odot} \ga L \ga 5\times 10^{11}L_{\odot}$ and redshifts between $\sim 0.3$ and 5. Many of these sources have spectroscopic redshifts. Thus, this sample is diverse enough (Magnelli et al. 2011b) to yield a very robust measure of the accuracy of the CM photometric redshift method. This is an important test, as photometric redshifts will be crucial for analyzing the vast bulk of the observations expected from \textit{Herschel}. Having derived photometric redshifts, we then calculate the star formation rate density of sources detected by AzTEC and PACS as a function of redshift in the GOODS-S field, which is homogeneously covered in the sub-mm down to the classical SMG threshold ($F_{850~\micron} > 5~\rm mJy$). On the basis of IRAS observations, Sanders et al. (1988) suggested that local ULIRGs may be produced by the merger of gas-rich spirals. The development of hydrodynamical codes that adequately model the collision of gas-rich spirals and the subsequent starburst that results from a violent merger (Springel et al. 2005) made it possible to test this suggestion, which had been anticipated early on by Toomre \& Toomre (1972). 
The infrared emission of ULIRGs and SMGs over their life cycles was subsequently calculated using three-dimensional self-consistent radiative transfer calculations through the time outputs of SPH simulations of merging spirals with central AGN (Chakrabarti et al. 2007; Chakrabarti et al. 2008; Chakrabarti \& Whitney 2009). These calculations reproduced empirically derived correlations, such as the correlation between the ratio of the $25~\micron$ to $60~\micron$ fluxes and energetically active AGN (de Grijp et al. 1984; Chakrabarti et al. 2007). Chakrabarti et al. (2008) reproduced the clustering of sources in $\it{Spitzer}$ IRAC color-color plots (Lacy et al. 2004), and explained that it was due to the prevalence of the starburst phase in the time evolution of these sources. SMGs formed during these simulations have diverse properties (Hayward et al. 2012) and constitute a heterogeneous group. However, the simulations analyzed in these papers were not cosmological simulations, but rather simulated binary mergers of gas-rich systems with central black holes that yield high star formation rates and high accretion rates onto the central AGN (Springel et al. 2005). Thus, the number density of the SMG population as a function of redshift could not be derived (see, however, Hayward et al. 2011 for a phenomenological derivation of the number density of SMGs in this context). Recently, Dav\'e et al. (2010) performed hydrodynamical cosmological simulations and identified SMGs as the most rapidly star-forming systems that match the observationally determined stellar mass function of SMGs. In these simulations, SMGs sit at the centers of large potential wells and accrete gas-rich satellites, but are not typically undergoing major mergers. However, Magnelli et al. (2010, 2012) noted that the high star formation rates of the bright SMGs (${\rm SFR} \sim 1000~M_{\odot}~{\rm yr}^{-1}$) are difficult to reconcile with Dav\'e et al.'s (2010) simulations, which produce SMGs with star formation rates lower by a factor of $\sim 3$ than what is inferred observationally. Thus, models of SMGs cannot yet fully account for their contribution to the cosmic stellar mass assembly. Using our method to derive photometric redshifts from the FIR SED can potentially yield a robust measure of the number density of SMGs and of their contribution to the star formation rate density of the universe (and thereby their contribution to the cosmic stellar mass assembly), without requiring model-dependent assumptions. Deriving fundamental parameters of the brightest infrared galaxies, such as their luminosity, star formation rate, and dust temperature, cannot be accomplished without first deriving their redshifts. Unlike optical studies (Madau et al. 1998; Hopkins 2004), the derivation of star formation rates from the infrared luminosity is not affected by extinction, thereby yielding the most robust measure of the contribution of dusty galaxies to the star formation rate density of the early universe. The paper is organized as follows. In \S 2.1 and \S 2.2, we review very briefly the formalism of CM05, with attention to the intuitive form of the emergent SED, and the redshift inference method presented in CM08. In \S 3, we present the observational sample. In \S 4, we demonstrate the accuracy of the method on the \her sample with spectroscopic redshifts. In \S 4.1, we apply our method to sources detected by AzTEC and PACS in the GOODS-S field and present their star formation rate density as a function of redshift. 
We discuss future work and present caveats in \S 5, and conclude in \S 6. \vspace{0.1in}
|
$\bullet$ We have applied the photometric redshift method of CM08 to \her data to demonstrate the accuracy of the method for a large and diverse sample of SMGs. We find accuracies of $\sim 10\%$ relative to spectroscopic redshifts, i.e., in $(1+z_{\rm inf})/(1+z)$, out to $z \sim 4$. $\bullet$ We also give the average values of the $L/M$ ratios of the SMGs in these samples, which are lower than that of local ULIRGs by a factor of 2. Our average star formation rate is $700~M_{\odot}/\rm yr$ for galaxies with luminosities in excess of $3 \times 10^{12} L_{\odot}$. $\bullet$ We estimate the star formation rate density of sub-mm bright PACS sources in the GOODS-S field. These sources have an extrapolated $850~\micron$ flux typical of classical SMGs, $F_{850~\micron} \ga 5~\rm mJy$ (we extrapolated the sub-mm flux from AzTEC observations as discussed in \S \ref{sec:observations}). Our derivation of the SFRD is a lower limit, particularly at low redshifts ($z < 1$), where normal spirals (and secular processes or minor tidal interactions) drive the star formation history of the universe. These PACS sources contribute 15\% of the SFRD from ULIRGs at $z \sim 2$, and 3\% of the total SFRD produced by all galaxies at that epoch. We find no decline in the shape of the SFRD of the sub-mm bright PACS sources in the GOODS-S field out to $z \sim 4$.
| 12
| 6
|
1206.6079
|
1206
|
1206.1235_arXiv.txt
|
{One of the biggest challenges facing large transit surveys is the elimination of false positives from the vast number of transit candidates. A large amount of expensive follow-up time is spent on verifying the nature of these systems.} {We investigate to what extent information from the lightcurves themselves can identify blend scenarios and eliminate them as planet candidates, so as to significantly decrease the amount of follow-up observing time required to identify the true exoplanet systems.} {If a lightcurve has a sufficiently high signal-to-noise ratio, a distinction can be made between the lightcurve of a stellar binary blended with a third star and the lightcurve of a transiting exoplanet system. We first simulate lightcurves of stellar blends and transiting planet systems to determine what signal-to-noise level is required to distinguish blended from non-blended systems as a function of transit depth and impact parameter. Subsequently we test our method on real data from the first field observed by the CoRoT satellite, IRa01, concentrating on the 51 candidates already identified by the CoRoT team.} {Our simulations show that blend scenarios can be constrained for transiting systems at low impact parameters. At high impact parameter, blended and non-blended systems are indistinguishable from each other because both produce V-shaped transits. About 70\% of the planet candidates in the CoRoT IRa01 field are best fit with an impact parameter of $b>0.85$, while less than 15\% are expected in this range for random orbital inclinations. By applying a cut at $b<0.85$, meaning that $\sim$15\% of the potential planet population would be missed, the candidate sample decreases from 41 to 11. The lightcurves of 6 of those are best fit with such low host star densities that the planet-to-star size ratios imply unrealistic planet radii of $R>2R_{Jup}$. Two of the five remaining systems, CoRoT-1b and CoRoT-4b, have been identified as planets by the CoRoT team; for these the lightcurves alone rule out blended light at the 14\% (2$\sigma$) and 31\% (2$\sigma$) levels. One system possesses an M-dwarf secondary, and one a candidate Neptune.} {We show that in the first CoRoT field, IRa01, 85\% of the planet candidates can be rejected from the lightcurves alone if a cut in impact parameter of $b<0.85$ is applied, at the cost of a $<15\%$ loss in planet yield. We propose to use this method on the Kepler database to study the fraction of real planets and to potentially increase the efficiency of follow-up. }
|
With the CoRoT and Kepler space observatories in full swing (Baglin et al. 2006; Borucki et al. 2003), both delivering thousands of lightcurves with unprecedented photometric precision and cadence, we have moved into an exciting new era of exoplanet research. The characterisation of small, possibly rocky planets has finally become a realistic prospect (e.g. CoRoT-7b, Leger et al. 2009; Kepler-10b, Batalha et al. 2011). One of the biggest challenges is to separate real planets from the significant fraction of (astrophysical) false positives that can mimic a genuine transit signal (e.g. Batalha et al. 2010). Ground-based transit surveys have revealed that stellar eclipsing binaries (EBs) blended with light from a third star are the main source of contamination (e.g. Udalski et al. 2002). For Super-Earth planet candidates, blends with a background transiting Jupiter-sized planet system can also be important. In these systems the eclipse depth, shape and ellipsoidal light variations of an EB are diluted by a chance alignment with a foreground or background star, or an associated companion, inside a photometric aperture set by either the pixel scale or the point spread function. In addition, light from a third star in the photometric aperture can bias the fitted parameters of a planet transit system. High-resolution, high signal-to-noise spectra are normally required to exclude binary scenarios through their large radial velocity or bisector variations, a process that can be very time-consuming. Stellar blends are common in space-based transit surveys, as the apertures are relatively large (e.g. 19$''\times$21$''$ for CoRoT) and the target fields are crowded, since this maximizes the number of target stars. To weed out false positives, the CoRoT team relies on an extensive ground-based follow-up campaign: on-off photometry to identify the transited star in the CoRoT aperture (Deeg et al. 2009), and high-resolution imaging observations to identify possible stars that dilute the lightcurve of a planet candidate. Even so, many candidates remain unresolved and defy easy characterisation after such a campaign. Kepler uses its unique astrometric precision to minimise the number of blends, which can be identified by a shift in the position of the flux centroid during transit, but will still require enormous ground-based efforts on the remaining $\sim$1200 candidates (e.g. Borucki et al. 2011). Together with the new influx of planet candidates from current surveys, possible future missions (such as PLATO; e.g. Catala et al. 2011) and ground-based efforts to hunt for planets around low-mass stars, the telescope demand for full follow-up may grow enormously. Therefore, any new technique or strategy that can eliminate even a moderate fraction of all candidates from the discovery lightcurves, prior to follow-up, is extremely valuable. In this paper we investigate to what extent information from the lightcurves themselves can identify blend scenarios and eliminate them as planet candidates, and, conversely, rule out blend scenarios in the case of true planet systems. Our key motivation is that \textit{the lightcurves of blended systems cannot be perfectly fit by pure transit models, and neither can genuine transits be fit by blended light models.} In section 2 we introduce our lightcurve fitting procedure and in section 3 we apply it to simulated data of a transiting hot Jupiter and Super-Earth. 
While such a procedure provides a natural tool for distinguishing blends from genuine planetary systems by lightcurve fitting, it breaks down for transits with high impact parameters. We therefore only consider transiting systems with impact parameter $b<0.85$, potentially losing $\sim$15\% of the planet catch but significantly decreasing (by an order of magnitude) the required amount of follow-up observations. In section 4 we apply our method to the candidates of the CoRoT IRa01 field, which have been almost completely characterised through an extensive follow-up campaign, and we discuss the results in section 5.
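The dilution at the heart of the blend problem is simple aperture photometry; a minimal sketch of the standard relation (our illustration, not the fitting code of section 2):
\begin{verbatim}
def diluted_depth(true_depth, f_eclipsed, f_blend):
    """Observed eclipse depth when a source of flux f_eclipsed,
    eclipsed by fractional depth true_depth, shares the photometric
    aperture with an extra (constant) flux f_blend."""
    return true_depth * f_eclipsed / (f_eclipsed + f_blend)

# A 50%-deep stellar eclipse blended with a star 49x brighter
# mimics a 1% planetary transit:
print(diluted_depth(0.50, 1.0, 49.0))   # -> 0.01

# For transiting systems with random orbital orientations the impact
# parameter b is roughly uniform on [0, 1), so a cut at b < 0.85
# sacrifices ~15% of genuine planets:
print(1.0 - 0.85)
\end{verbatim}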
|
In this paper we have investigated to what extent information from the lightcurves of a space-based exoplanet transit survey can identify blended-light scenarios and eliminate them as planet candidates, so as to significantly decrease the required amount of follow-up time. If a lightcurve has sufficiently high signal-to-noise, a distinction can be made between a blended eclipsing binary and a transiting exoplanet. We first simulated lightcurves of stellar blends and transiting planet systems to determine the required signal-to-noise as a function of impact parameter and transit depth. Our simulations show that blend scenarios can be distinguished from transiting systems at low impact parameter. At high impact parameter, blended and non-blended systems both produce V-shaped transits and are indistinguishable from each other. We subsequently tested our method on real data from the first CoRoT field, IRa01, concentrating on the 51 candidates already identified by the CoRoT team (Carpano et al. 2009). We show that 70\% of the planet candidates in the CoRoT IRa01 field are best fit with an impact parameter of $b>0.85$, whereas $\sim$15\% are expected assuming random orbital orientations. By applying a cut at $b<0.85$, meaning that $\sim$15\% of the potential planet population would be missed, the candidate sample decreases from 41 to 11. The lightcurves of 6 of those are best fit with such a low host star density that the planet-to-star size ratio implies an unrealistic planet radius of $R_2>2\rm{R}_{\rm{Jup}}$. Of the remaining five, two systems, CoRoT-1b and CoRoT-4b, have been identified by the CoRoT team as planets; for these the lightcurves alone rule out blended light at the 14\% (2$\sigma$) and 31\% (2$\sigma$) levels. One other candidate is also consistent with a non-blended system, but the transiting object is a late M-dwarf; such systems will always require radial velocity follow-up for confirmation, since M-dwarfs can have radii similar to those of Jupiter-mass planets. One other system consists of a candidate Neptune around an M-dwarf, according to Moutou et al. (2009). We have therefore shown that 85\% of the planet candidates in the IRa01 field can be rejected from the lightcurves alone. We propose to use this method on the Kepler database to study the fraction of real planets and to potentially increase the efficiency of follow-up. For long-period candidates, a possible non-zero eccentricity will affect the cut in planet-to-star ratio versus host star density, effectively increasing the sample size. However, a single high-resolution spectrum would be sufficient to determine the real host star density and estimate the size of the transiting object.
| 12
| 6
|
1206.1235
|
1206
|
1206.0625_arXiv.txt
|
Observational evidence from Auger and earlier experiments shows a deficit of signal in the surface detector compared to predictions, which increases as a function of zenith angle when the energy of the event is fixed by fluorescence measurements. We explore three potential explanations for this: the ``Cronin effect" (growth of high-transverse-momentum cross sections with nuclear size), the need for more particles at high transverse momentum in $p-p$ collisions than currently predicted by the high-energy hadronic models used for air shower simulations, and the possibility that secondary interactions in the target air nucleus produce additional, relatively soft pions not included in simulations. We report here on the differences between Pythia and QGSJet II, especially for high $p_t$ particles. The possible impact of these effects on the predicted surface array signal is also reported.
|
A discrepancy between observations and simulations of ultra-high energy air showers has been recognized for many years, initially evident as an energy-scale discrepancy between air-fluorescence and surface array experiments. A discrepancy is also seen between the predicted and observed ``attenuation curve", plotting some measure of the surface array signal as a function of angle for a constant intensity cut (e.g., for the N highest energy events at the given angle). The discrepancy can be parameterized by a ``muon rescaling" factor $N_\mu$ and an energy rescaling factor $E_{Resc}$; results obtained with a number of different techniques are given in the presentation of A. Castellina for the Auger collaboration at ICRC 2009. So far, efforts to remove this discrepancy by modifying the p-p event generators (QGSJet, EPOS, Sybill, ...) have not succeeded in fitting both the Auger observations and other data. Here, we investigate three different examples of ``ordinary physics" that are not included in present event generators and that can increase the signal in a surface array: 1) the ``Cronin effect" \cite{cronin76}; 2) a systematic deficiency in the production of high $p_\perp$ particles in ``minimum-bias" event generators like QGSJet compared to event generators like Pythia which have been tuned for high $p_\perp$ \cite{qgsjet, pythia}; and 3) the possible production of an excess of low-energy pions in the target-fragmentation region due to secondary scattering in the target nucleus. \begin{figure}[!t] \centering \includegraphics[width=2.0in]{icrc1556_fig01} \caption{The ``Cronin effect" is the growth of $\alpha$, defined in the text, with $p_\perp$, such that it significantly exceeds $\alpha = 2/3$ \cite{cronin76}.} \label{alpha} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{icrc1556_fig02} \caption{The predicted normalized distributions of $E_\perp$, for pions and kaons, from Pythia and QGSJet II, where $E_\perp$ is defined as $\sqrt{m^2 + p_\perp^2}$. The distributions are averaged over one million collisions of a $100\ \mathrm{GeV}$ proton upon a stationary proton. The top figure shows $E_\perp$ for secondary particles with rapidity less than one, the middle figure the $E_\perp$ distribution for secondary particles with rapidity between 1 and 2, and the bottom figure the distribution for secondary particles with rapidity greater than 2.} \label{eperp} \end{figure}
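To gauge the size of a Cronin-type enhancement without rerunning a full event generator, the p--p secondaries can simply be reweighted as a function of transverse momentum; a minimal sketch (the functional form of $\alpha(p_\perp)$ below is a hypothetical placeholder, not a fit to the Cronin data):
\begin{verbatim}
import numpy as np

A_AIR = 14.5   # mean mass number of an "air" nucleus (N/O mix); assumption

def alpha(pt):
    """Toy Cronin exponent: alpha -> 2/3 at low p_t and rises above 1
    for p_t of a few GeV (placeholder shape, not a fit to data)."""
    return 2.0 / 3.0 + 0.5 * pt**2 / (pt**2 + 1.0)

def cronin_weight(pt):
    """Per-particle weight so that a p-p sample emulates a p-A target with
    sigma_pA ~ A^alpha(pt), normalized to the soft limit A^(2/3)."""
    return A_AIR ** (alpha(pt) - 2.0 / 3.0)

for pt in (0.3, 1.0, 2.0, 5.0):
    print(f"p_t = {pt:4.1f} GeV -> weight {cronin_weight(pt):5.2f}")
\end{verbatim}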
|
\newpage The impact on the Lateral Distribution Function (LDF) of the Cronin effect and of more accurately describing high-$p_t$ particle production in the basic p-p collision is shown in Figs. \ref{LDFcronin} and \ref{LDFpythia}. Also shown, in Fig. \ref{ldfmueng}, is the ratio of the energy density of muons in the showers with and without reweighting. Evidently, the effect of extra high-$p_t$ pions, and of the muons from their decay, on the energy at large core radius can be strong, but the number of high-$p_t$ secondaries is probably simply too small to explain the observed signal in the surface detector. Thus, it seems that for the purposes of simulating extensive air showers, high-$p_t$ physics can safely be modeled crudely, without significant impact on the predicted surface array signal. \begin{figure}[!t] \centering \includegraphics[width=2.0in, angle=270]{icrc1556_fig07} \caption{The ratio of the LDF energy density of muons for both the Pythia reweightings and the implementation of the Cronin effect, for a shower inclined at $45^\circ$.} \label{ldfmueng} \end{figure} Possibly more significant for the LDF is the production of relatively low-energy pions by secondary particles from the primary collision which interact within the same nucleus before they escape. The results of a study of this effect will be presented at the ICRC.
| 12
| 6
|
1206.0625
|
1206
|
1206.0139_arXiv.txt
|
Light clusters are included in the equation of state of nuclear matter within relativistic mean field theory. The effect of the cluster-meson coupling constants on the dissolution density is discussed. Theoretical and experimental constraints are used to fix the cluster-meson couplings. The relative light cluster fractions are calculated for asymmetric matter in chemical equilibrium at finite temperature. It is found that above $T=5$ MeV deuterons and tritons are the most abundant clusters. The results do not depend strongly on the relativistic mean field interaction chosen.
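For orientation, the relative abundances in the dilute limit can be sketched with nonrelativistic Boltzmann statistics and the chemical equilibrium condition $\mu_i = N_i\mu_n + Z_i\mu_p$; this toy estimate ignores the meson mean fields and in-medium binding-energy shifts that are the actual subject of the paper, and the chemical potentials below are arbitrary illustrative inputs:
\begin{verbatim}
import numpy as np

HBARC = 197.327            # MeV fm
T = 5.0                    # temperature (MeV)

# (name, N, Z, mass in MeV, spin degeneracy) for d, t, h and alpha
CLUSTERS = [("d", 1, 1, 1875.61, 3), ("t", 2, 1, 2808.92, 2),
            ("h", 1, 2, 2808.39, 2), ("alpha", 2, 2, 3727.38, 1)]

def boltzmann_density(mass, g, mu):
    """Nonrelativistic Boltzmann number density in fm^-3 (hbar = c = 1)."""
    return g * (mass * T / (2.0 * np.pi))**1.5 \
             * np.exp((mu - mass) / T) / HBARC**3

mu_n, mu_p = 910.0, 900.0  # illustrative nucleon chemical potentials (MeV)
for name, N, Z, m, g in CLUSTERS:
    mu_i = N * mu_n + Z * mu_p   # chemical equilibrium
    print(f"n_{name:5s} = {boltzmann_density(m, g, mu_i):.2e} fm^-3")
\end{verbatim}
With such inputs the deuteron and triton densities indeed come out largest, in line with the conclusion stated above.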
|
To gain a deeper understanding of the physics involved in stellar core collapse, supernova explosions and protoneutron star evolution, an equation of state capable of describing matter ranging from very low densities to a few times the saturation density, and from zero temperature to a few $\unit{MeV}$, is needed. The crust of the star is essentially determined by the low density region of the equation of state (EOS), while the high density part will be important to define properties such as the star mass and radius. Understanding the crust constitution of a neutron star is an important issue because it influences the cooling process of the star and plays a decisive role in quantities such as the neutrino emissivity and gravitational wave emission. In the outer crust, where the density is lower, the formation of light clusters in nuclear matter will be energetically favorable at finite temperature, as in core-collapse supernovae or neutron star mergers, whereas in catalyzed cold neutron stars only heavy nuclei appear. At very low densities and moderate temperatures, few-body correlations are expected to become important, and the system minimizes its free energy by forming light nuclei like deuterons ($d\equiv\, ^{2}\text{H}$), tritons ($t\equiv\, ^3\text{H}$), helions ($h\equiv\, ^3\text{He}$) and $\alpha$-particles ($^4\text{He}$) due to the increased entropy \cite{ropke2009,typel2010}. Eventually, these clusters will dissolve at higher densities due to Pauli blocking, resulting in homogeneous matter \cite{ropke2009}. In particular, the cooling process of a protoneutron star is affected by the appearance of light clusters \cite{light3}. The inclusion of light clusters ($d$, $h$, $t$ and $\alpha$ particles) in the nuclear matter EOS is discussed in \cite{typel2010}, where the most important thermodynamical quantities are calculated within a density-dependent relativistic model. The conditions for the liquid-gas phase transition are obtained, and it is seen how the binodal section is affected by the inclusion of these clusters. Moreover, an EOS is obtained starting at low densities with clusterized matter up to high density cluster-free homogeneous matter. In that work, the density and temperature dependence of the in-medium binding energy of the clusters was determined within a quantum statistical approach \cite{ropke} and included in a phenomenological way in the relativistic mean field (RMF) model with density dependent couplings. The $\alpha$ particle is the most strongly bound system among all light nuclei and it certainly plays a role in nuclear matter, as has been pointed out in \cite{ls91,shen,ropke,hor06,typel2010}. Lattimer and Swesty \cite{ls91} worked out the EOS in the compressible extended liquid drop model, based on a non-relativistic framework appropriate for supernova simulations, for a wide range of densities, proton fractions and temperatures, including the contribution of $\alpha$ particle clusters. An excluded volume prescription is used to model the dissolution of $\alpha$ particles at high densities. The same is done by H. Shen et al. in \cite{shen}, where non-uniform matter composed of protons, neutrons, $\alpha$ particles and a single species of heavy nuclei is described with the Thomas-Fermi approximation and the TM1 parametrization of the non-linear Walecka model (NLWM). At low densities, these particles are described by classical gases.
In \cite{hor06} the virial expansion of low-density nuclear matter composed of neutrons, protons, and $\alpha$ particles is presented, and it is shown that the predicted $\alpha$ particle concentrations differ from the predictions of the EOS proposed in \cite{ls91} and \cite{shen}. The virial expansion was extended to include other light clusters with $A\le 4$ \cite{oconnor2007,light4}. All possible light clusters were also included in recent EOS appropriate for simulations of core-collapse supernovae based on a nuclear statistical equilibrium model, including excluded volume effects for nuclei \cite{hempel2010}. The aim of the present work is to study the dissolution density of light clusters, i.e. the density above which light clusters do not exist anymore, in uniform nuclear matter within the framework of the NLWM \cite{bb}. The dependence of the dissolution density on the cluster-meson couplings, as well as on the proton fraction, will be studied. In many models which include light clusters, the dissolution density is determined by excluded volume effects \cite{ls91,shen,hempel2010}. In Ref. \cite{light4}, a statistical excluded volume model was compared with two quantum many-body models, a generalized relativistic mean-field model \cite{typel2010} and a quantum statistical model \cite{ropke2009,ropke2011}. It was shown that the excluded volume description works reasonably well at high temperatures, partly due to a reduced Pauli blocking. However, at low temperatures the excluded volume approach shows crucial deviations from the statistical approach. In \cite{alphas}, using an RMF approach, the coupling of the $\alpha$-clusters to the $\omega$-meson was employed to describe the dissolution. In this reference it was assumed that the intensity of the $\omega$ meson-cluster coupling was proportional to the mass number of the light cluster, and no coupling to the $\sigma$-meson was included. The light clusters ($d$, $h$, $t$ and $\alpha$ particles) are treated as point-like particles (without internal structure) within the RMF approximation. Just like the nucleons in nuclear matter, the clusters interact through the exchange of $\sigma$, $\omega$ and $\rho$ meson fields. In order to understand how dependent the results are on the NLWM parametrization, we consider three different parametrizations: NL3 \cite{nl3}, with a quite large symmetry energy and incompressibility at saturation, which was fitted in order to reproduce the ground state properties of both stable and unstable nuclei; and FSU \cite{fsu} and IU-FSU \cite{iufsu}, which were accurately calibrated to simultaneously describe the GMR in $^{90}$Zr and $^{208}$Pb and the IVGDR in $^{208}$Pb, and still reproduce ground-state observables of stable and unstable nuclei. Furthermore, the IU-FSU was also refined to describe neutron stars with high masses ($\sim2M_\odot$). In Ref. \cite{typel2010}, where a generalized RMF model with density dependent couplings was developed including clusters as explicit degrees of freedom, the medium effects on the binding energy of the clusters were introduced explicitly through density and temperature dependent terms. We aim at a description of the light clusters in nuclear matter that is simpler to implement, yet more realistic than the excluded volume mechanism. A brief review of the formalism is presented in section \ref{formalism}, the results and discussion of the meson-cluster coupling constants are given in section \ref{results}, and conclusions are drawn in section \ref{conclusions}.
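In schematic form, the point-like clusters enter the mean-field equations exactly as the nucleons do, with rescaled couplings; a sketch of the single-particle energies in standard RMF notation (the coupling symbols $g_{\sigma i}$, $g_{\omega i}$, $g_{\rho i}$ are ours):
\[
E_i(k) \;=\; \sqrt{k^{2} + M_i^{*\,2}} + g_{\omega i}\,\omega_0 + g_{\rho i}\,t_{3 i}\,b_0 ,
\qquad
M_i^{*} \;=\; M_i - g_{\sigma i}\,\sigma_0 ,
\]
where $\sigma_0$, $\omega_0$ and $b_0$ are the mean $\sigma$, $\omega$ and $\rho$ fields and $t_{3i}$ the cluster isospin projection. Chemical equilibrium then imposes $\mu_i = N_i\,\mu_n + Z_i\,\mu_p$ for a cluster with $N_i$ neutrons and $Z_i$ protons, leaving only the nucleon chemical potentials as independent variables.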
|
In the present work we have included light clusters in the EOS of nuclear matter within the framework of relativistic mean field models with constant coupling constants. Clusters are considered point-like particles that interact with the nucleons through the exchange of mesons. The formation of clusters is favorable at quite low densities, below 0.002 fm$^{-3}$, where we expect it to be a reasonable approximation to consider them point-like. It was shown that the dissolution of clusters is mainly determined by the isoscalar part of the EOS. Therefore, the $\rho$-cluster coupling constant was simply determined by the isospin of the cluster. To fix the $\sigma$- and $\omega$-cluster couplings, two constraints are required for each cluster. In the present work we have considered the dissolution density obtained in a quantum statistical approach \cite{ropke} as a constraint to fix the $\omega$-cluster coupling. Recent experimental results for the in-medium binding energy of light clusters \cite{hagel2012} allow the determination of the $\sigma$-cluster coupling strength at $T\sim 5$ MeV. The virial expansion of the EOS at low densities and finite temperature is another constraint that could be used to fix the $\sigma$-meson coupling and that will be investigated. We have applied the couplings proposed in the present work to study symmetric and asymmetric nuclear matter with light clusters in chemical equilibrium at finite temperature, and we have determined the relative light cluster fractions at $T=5\unit{MeV}$ and $T=10\unit{MeV}$. It was shown that a larger $\sigma$-cluster coupling gives rise to larger dissolution densities and larger particle fractions. The experimental determination of Mott points at more temperatures than the ones obtained in \cite{hagel2012} would allow the determination of the temperature dependence of the couplings. A comparison of the in-medium binding energies obtained within the present model with the ones proposed in \cite{typel2010} indicates that the latter may be reproduced if temperature-dependent meson-cluster couplings are used. It was shown that deuterons and tritons are the clusters with the largest abundances in asymmetric matter above $T=5$ MeV. The results do not depend much on the RMF interaction chosen. {\bf Acknowledgments}: This work was partially supported by QREN/FEDER, the Programme COMPETE, under the project PTDC/FIS/113292/2009 and by Compstar, an ESF Research Networking Programme.
| 12
| 6
|
1206.0139
|
1206
|
1206.5000_arXiv.txt
|
We combine high-resolution {\it HST}/WFC3 images with multi-wavelength photometry to track the evolution of structure and activity of massive ($M_{\star}>10^{10}M_{\odot}$) galaxies at redshifts $z=1.4-3$ in two fields of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). We detect compact, star-forming galaxies (cSFGs) whose number densities, masses, sizes, and star formation rates qualify them as likely progenitors of compact, quiescent, massive galaxies (cQGs) at $z=1.5-3$. At $z\gtrsim2$, most cSFGs have specific star-formation rates (sSFR$\sim10^{-9}$yr$^{-1}$) half that of typical, massive SFGs at the same epoch, and host X-ray luminous AGNs 30 times ($\sim$30\%) more frequently. These properties suggest that cSFGs are formed by gas-rich processes (mergers or disk-instabilities) that induce a compact starburst and feed an AGN, which, in turn, quench the star formation on dynamical timescales (few 10$^{8}$yr). The cSFGs are continuously being formed at $z=2-3$ and fade to cQGs down to $z\sim1.5$. After this epoch, cSFGs are rare, thereby truncating the formation of new cQGs. Meanwhile, down to $z=1$, existing cQGs continue to enlarge to match local QGs in size, while less-gas-rich mergers and other secular mechanisms shepherd (larger) SFGs as later arrivals to the red sequence. In summary, we propose two evolutionary tracks of QG formation: an early ($z\gtrsim2$), fast-formation path of rapidly-quenched cSFGs fading into cQGs that later enlarge within the quiescent phase, and a slow, late-arrival ($z\lesssim2$) path in which larger SFGs form extended QGs without passing through a compact state.
|
Nearby galaxies come in two flavors \citep{kauffman03}: red quiescent galaxies (QGs) with old stellar populations, and blue young star-forming galaxies (SFGs). This color bimodality seems to be already in place at $z\sim2-3$ (\citealt{2010ApJ...709..644I}; \citealt{brammer11}), and presents strong correlations with mass, size and morphology: SFGs are typically larger than QGs of the same mass (\citealt{williams10}; \citealt{wuyts11b}) and disk-like, whereas QGs are typically spheroids characterized by concentrated light profiles \citep{bell11}. Since SFGs are the progenitors of QGs, their very different mass-size relations restrict viable formation mechanisms. \begin{figure*} \centering \includegraphics[width=8.7cm,angle=0.]{fig1a.eps} \includegraphics[width=8.4cm,angle=0.]{fig1b.eps} \caption{\label{ssfr_size} {\it Left panel:} Specific SFR as a function of the stellar mass for galaxies at $1.4<z<3.0$. The solid black line defines our threshold, $\log(\mathrm{sSFR})=-0.5$, to select QGs (red in both panels) and SFGs (blue) above $M_{\star}>10^{10}M_{\odot}$. Grey dots show galaxies with stellar masses below the mass selection limit. {\it Right panel:} Stellar mass-size relation at $1.4<z<3.0$. The solid black line defines our selection criterion for compact galaxies, $M/r_{e}^{1.5}\equiv\Sigma_{1.5}=10^{10.3}\,M_{\odot}\,$kpc$^{-1.5}$. The green line shows the local mass-size relation for elliptical galaxies \citep{shen03}. The thin brown lines are the mass-size relations for QGs found at $z=1.75$ and $z=2.25$ by \citet{newman12}.} \end{figure*} A major surprise has been the discovery of smaller sizes for massive QGs at higher redshifts -- these compact QGs (cQGs), also colloquially known as ``red nuggets", are $\sim5$ times smaller than local, equal-mass analogs (\citealt{2007MNRAS.382..109T}; \citealt{cassata11}; \citealt{szo11}). In contrast, most of the massive SFGs at these redshifts are still relatively large disks (\citealt{kriek09}). We adopt the view that galaxy mass growth is accompanied by size growth, as suggested by the mass-size relation. In this case, to form compact QGs from SFGs, three changes are required: a significant shrinkage in radius, an increase in mass concentration, and a rapid truncation of the star formation. Proposed mechanisms to create compact spheroids from star-forming progenitors generally involve violent, dynamical processes (\citealt{naab07}), such as gas-rich mergers \citep{hopkins06} or dynamical instabilities fed by cold streams \citep{dekel09}. Recent hydrodynamical simulations of mergers have reproduced some of the observed properties of cQGs \citep{wuyts10}, if high amounts of cold gas, as observed by \citet{tacconi10}, are adopted. If cQGs are so formed, we expect to see a co-existing population of compact SFGs and recently-quenched galaxies at $z\gtrsim2$. Recent works demonstrate the existence of such populations (\citealt{cava03}; \citealt{wuyts11b}; \citealt{whitaker12}), but a direct evolutionary link has not yet been clearly established. This letter shows a quantitative connection between cSFGs and QGs at high-z.
We combine the deepest photometric data from the optical to the far IR from the Great Observatories Origins Deep Survey (GOODS; \citealt{goods}), the UKIDSS Ultra Deep Survey (UDS), the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; \citealt{candelsgro}; \citealt{candelskoe}), FIDEL\footnote{\anchor{http://irsa.ipac.caltech.edu/data/SPITZER/FIDEL/}{http://irsa.ipac.caltech.edu/data/SPITZER/FIDEL/}}, and SpUDS\footnote{\anchor{http://irsa.ipac.caltech.edu/data/SPITZER/SpUDS/}{http://irsa.ipac.caltech.edu/data/SPITZER/SpUDS/}} to estimate stellar masses, SFRs, and sizes for massive, high-z galaxies. By analyzing the global evolution in the space defined by these parameters, we suggest two paths (fast and slow) for QG formation from $z\sim3$ to $z\sim1$. We adopt a flat cosmology with $\Omega_{M}$=0.3, $\Omega_{\Lambda}$=0.7 and H$_{0}=70$~km~s$^{-1}$~Mpc$^{-1}$.
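The two selection cuts of Figure~\ref{ssfr_size} translate directly into code; a minimal sketch, assuming stellar masses in $M_{\odot}$, effective radii in kpc, and sSFR in Gyr$^{-1}$ (the thresholds are the ones quoted in the figure caption):
\begin{verbatim}
import numpy as np

def classify(mass, r_e, ssfr):
    """Fig. 1 cuts: log(sSFR) = -0.5 separates QGs from SFGs;
    Sigma_1.5 = M / r_e^1.5 >= 10^10.3 Msun kpc^-1.5 flags compactness.
    Inputs: mass [Msun], r_e [kpc], ssfr [Gyr^-1]."""
    quiescent = np.log10(ssfr) < -0.5
    compact = np.log10(mass / r_e**1.5) >= 10.3
    return np.where(quiescent,
                    np.where(compact, "cQG", "QG"),
                    np.where(compact, "cSFG", "SFG"))

# Hypothetical galaxy: M = 10^10.5 Msun, r_e = 1 kpc, sSFR = 1 Gyr^-1
print(classify(np.array([10**10.5]), np.array([1.0]), np.array([1.0])))
# -> ['cSFG']: compact and still star-forming
\end{verbatim}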
|
\begin{figure} \centering \includegraphics[width=8.5cm,angle=0.]{fig4.eps} \caption{\label{cartoon} Schematic view of a two-path (fast/slow track) formation scenario for QGs. On the fast track, a small fraction of the massive SFGs at $z=2-3$ evolve (e.g., through gas-rich dissipational processes) to a compact star-bursting remnant. Then, the star formation is quenched in $\sim$800~Myr (perhaps by AGN and/or supernovae feedback), and the galaxies fade into cQGs. Once on the red sequence, cQGs grow envelopes over longer time scales, de-populating the compact region by z$\sim$1. Simultaneously, at $z\lesssim2$, other (slower) mechanisms have already started to populate the red sequence with normal-sized, non-compact QGs (formed by, e.g., secular processes, halo quenching, or gas-poor mergers).} \end{figure} Using the deepest data spanning from the X-ray to the MIR, along with high resolution imaging from CANDELS in GOODS-S and UDS, we analyze stellar masses, SFRs and sizes of a sample of massive ($M_{\star}>10^{10}M_{\odot}$) galaxies at $z=1.4-3.0$ to identify a population of cSFGs with structural properties similar to those of cQGs at $z\gtrsim2$. The cSFG population is already in place at $z\sim3$, but it completely disappears by $z<1.4$. A corresponding increase in the number of cQGs during the same time period suggests an evolutionary link between them. A simple duty-cycle argument, involving quenching of the star formation activity on time scales of $\Delta t=0.3-1$~Gyr, is able to broadly reproduce the evolution of the density of new QGs formed since $z=3$ in terms of fading cSFGs. Under this assumption, we also need to invoke a replenishment mechanism that forms new cSFGs via gas-rich dissipational processes (major mergers or dynamical instabilities) and that quickly becomes inefficient at $z\lesssim1.5$, as the amount of available gas in the halo decreases with time (e.g., \citealt{croton09}). During the transformation process, the compact phase is probably associated with enhanced (probably nuclear and dusty) star formation, the presence of an AGN, and sometimes a short-lived quasar, followed by a decline of the star formation in $\sim$1~Gyr \citep{hopkins06}. All these phenomena fit with the observed properties of the cSFGs presented in this letter. cSFGs present no visible traces of mergers, but they do show lower sSFRs than the bulk of massive SFGs, presumably reflecting different stages of the evolution from starburst to passive. Simultaneously, $\sim$30\% of them host luminous ($L_{X}>10^{43}$erg~s$^{-1}$) X-ray detected AGNs at $z>2$, suggesting that these might be playing a role in the quenching of star formation. Our observations connect two recent results at $z\sim2$: a population of compact ($n\gtrsim2$) galaxies with enhanced star formation activity \citep{wuyts11b} and an increasing fraction of small, post-starburst galaxies recently arrived on the red sequence \citep{whitaker12}. The emerging picture suggests that the formation of QGs follows two evolutionary tracks, each one dominating at a different epoch (as illustrated in Figure~\ref{cartoon}). At $z\gtrsim2$ the formation of QGs proceeds on a fast track (right region): from $z=3.0-2.0$, the number of cQGs builds up rapidly upon quenching of cSFGs at roughly constant $\Sigma_{1.5}$.
Merely 2~Gyr later ($z\sim1$), cQGs have almost completely disappeared due to: 1) size growth, as a result of minor mergers and satellite accretion (\citealt{naab09a}; \citealt{newman12}) or the re-growth of a remnant disk (\citealt{hopkins09b}), which causes them to leave the compact region; and 2) a decline in the efficiency of the formation mechanisms for new cSFGs, and therefore new cQGs. By $z\lesssim2$, other (probably slower) mechanisms start to populate the red sequence with larger, non-compact QGs that never pass through a compact state. This slow track (left region) is likely associated with the fading of normal disk galaxies to become S0s, due to halo quenching \citep{haloquench} or morphological quenching \citep{martig09}. Alternatively, some of these QGs could also be the result of late, less-gas-rich mergers.
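The duty-cycle argument invoked above amounts to a one-line rate equation, $\dot{n}_{\rm cQG}(t) \approx n_{\rm cSFG}(t)/\Delta t$; a hedged numerical sketch (the density and quenching timescale below are placeholders, not the measured values):
\begin{verbatim}
import numpy as np

dt_quench = 0.8                    # assumed quenching timescale (Gyr)
t = np.linspace(0.0, 3.0, 301)     # ~cosmic time from z~3 toward z~1.5 (Gyr)
n_csfg = 1e-4 * np.ones_like(t)    # placeholder cSFG density (Mpc^-3),
                                   # roughly constant while replenishment acts

# cumulative density of new cQGs formed by fading cSFGs
n_cqg = np.cumsum(n_csfg / dt_quench) * (t[1] - t[0])
print(f"n_cQG after 2 Gyr ~ {n_cqg[200]:.1e} Mpc^-3")
\end{verbatim}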
| 12
| 6
|
1206.5000
|
1206
|
1206.1378.txt
|
The complete 10-year survey from the Large Synoptic Survey Telescope (LSST) will image $\sim$ 20,000 square degrees of sky in six filter bands every few nights, bringing the final survey depth to $r\sim27.5$, with over 4 billion well measured galaxies. To take full advantage of this unprecedented statistical power, the systematic errors associated with weak lensing measurements need to be controlled to a level similar to the statistical errors. This work is the first attempt to quantitatively estimate the absolute level and statistical properties of the systematic errors on weak lensing shear measurements due to the most important physical effects in the LSST system via high fidelity ray-tracing simulations. We identify and isolate the different sources of algorithm-independent, \textit{additive} systematic errors on shear measurements for LSST and predict their impact on the final cosmic shear measurements using conventional weak lensing analysis techniques. We find that the main source of these errors is an inability to adequately characterise the atmospheric point spread function (PSF) due to its high-frequency spatial variation on angular scales smaller than $\sim10'$ in single short exposures, which propagates into a spurious shear correlation function at the $10^{-4}$--$10^{-3}$ level on these scales. With the large multi-epoch dataset that will be acquired by LSST, the stochastic errors average out, bringing the final spurious shear correlation function to a level very close to the statistical errors. Our results imply that the cosmological constraints from LSST will not be severely limited by these algorithm-independent, additive systematic effects.
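The statement that the stochastic (atmospheric) errors average out over the stack can be illustrated with a toy calculation; a sketch assuming the per-exposure spurious shear is independent between epochs, which is the essential assumption behind the $1/N$ scaling (all amplitudes are placeholders):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_pairs, xi_single = 200_000, 1e-3   # pairs; single-epoch spurious amplitude

def stacked_sys_correlation(n_exp):
    """Each epoch adds an independent additive shear error of variance
    xi_single, fully correlated across a close galaxy pair; averaging
    n_exp epochs leaves a residual correlation ~ xi_single / n_exp."""
    epoch_sys = rng.normal(0.0, np.sqrt(xi_single), (n_exp, n_pairs))
    stacked = epoch_sys.mean(axis=0)
    return np.mean(stacked * stacked)

for n in (1, 10, 100):
    print(f"N_exp = {n:3d}: xi_sys ~ {stacked_sys_correlation(n):.2e}")
\end{verbatim}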
| 12
| 6
|
1206.1378
|
||
1206
|
1206.4695_arXiv.txt
|
We report on the long-term dynamical evolution of the two-planet Kepler-36 system, which we studied through numerical integrations of initial conditions that are consistent with observations of the system. The orbits are chaotic with a Lyapunov time of only $\sim$10 years. The chaos is a consequence of a particular set of orbital resonances, with the inner planet orbiting 34 times for every 29 orbits of the outer planet. The rapidity of the chaos is due to the interaction of the 29:34 resonance with the nearby first order 6:7 resonance, in contrast to the usual case in which secular terms in the Hamiltonian play a dominant role. Only one contiguous region of phase space, accounting for $\sim 4.5\%$ of the sample of initial conditions studied, corresponds to planetary orbits that do not show large scale orbital instabilities on the timescale of our integrations ($\sim$ 200 million years). The long-lived subset of the allowed initial conditions are those that satisfy the Hill stability criterion by the largest margin. Any successful theory for the formation of this system will need to account for why its current state is so close to unstable regions of phase space.
|
Despite the seeming regularity of the Solar system, the planetary orbits are known to be chaotic with a Lyapunov time of $\sim$5 million years \citep{LaskarZ,Wisdom1992}. The hallmark of a chaotic system is sensitive dependence on initial conditions: two trajectories that start arbitrarily close to each other will diverge exponentially on a time scale known as the Lyapunov time. Chaos is seen not only in the orbits of the planets, but also among various satellites and minor bodies of the Solar System \citep{WisdomAsteroid,LecarRev, Gold1}. However, among the hundreds of multiplanet systems known to exist around other stars, few have orbits measured precisely enough to determine definitively if chaos is present. There is evidence that both the Kepler-11 and 55 Cnc systems are chaotic \citep{paper,paper2}, but in the case of 55 Cnc, where the masses and inclinations are not well constrained, this conclusion is less certain. The Kepler-36 system consists of a subgiant star of solar mass and the two transiting planets Kepler-36b and c, with orbital periods of 13.8 and 16.2~d and masses of 4.1 and 7.5 $M_\oplus$, respectively \citep{CarterandAgol}. All of the necessary parameters for integration---the bodies' positions and velocities at a reference epoch, as well as their masses---have been measured precisely. This allows study of the true dynamical evolution of the system at a level of detail that has only been achieved for the Solar System and a handful of exoplanetary systems \citep{PulsarPlanets,paper2}. This Letter is organized as follows. Section~\ref{sec:Num} describes our integration methods and the set of initial conditions we use. Our results on the chaotic behavior of the planets, including an explanation of its salient features, are given in Section~\ref{sec:Chaos}. A stability analysis of the system is presented in Section~\ref{sec:Stability}. We discuss briefly in Section~\ref{sec:Disc} how our results may constrain models of formation of the system.
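The Lyapunov time itself is measured by following the divergence of two nearly identical initial conditions; a self-contained sketch of the standard two-trajectory (Benettin-type) renormalization estimator, applied here to a toy chaotic map in place of the full N-body integration (all numbers are illustrative):
\begin{verbatim}
import numpy as np

def standard_map(state, K=7.0):
    """Chirikov standard map: a stand-in chaotic system; one map
    iteration plays the role of one N-body integration interval."""
    theta, p = state
    p = (p + K * np.sin(theta)) % (2.0 * np.pi)
    theta = (theta + p) % (2.0 * np.pi)
    return np.array([theta, p])

def lyapunov_exponent(step, x0, d0=1e-8, n_steps=20_000):
    """Benettin renormalization: evolve a trajectory and a shadow
    separated by d0, accumulate the log-stretching, and rescale the
    shadow back to distance d0 after every step."""
    x = np.asarray(x0, dtype=float)
    y = x + np.array([d0, 0.0])
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)   # renormalize the separation
    return log_sum / n_steps          # per step; Lyapunov time = 1/lambda

lam = lyapunov_exponent(standard_map, [2.0, 1.5])
print(f"lambda ~ {lam:.2f}/step -> Lyapunov time ~ {1.0/lam:.2f} steps")
\end{verbatim}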
|
\label{sec:Disc} We have presented a dynamical analysis of the Kepler-36 system which has yielded several surprising results. The orbits are chaotic with an extremely short Lyapunov time, yet some still manage to be long-lived. The closeness of this system to instability is an intriguing feature of Kepler-36. In particular, did the planets form with orbits contained in the long-lived core? Alternatively, did some dissipative process drive them into this long-lived configuration? It would seem that tidal dissipation is negligible for this system, despite the proximity of the planets to the star. If tidal dissipation were important then the planets would have been on more eccentric orbits in the past, and, assuming the orbital separation was not very different at that time, it is unlikely such a configuration would have been stable. Alternatively, the planets could have been closer together if they were protected by a resonance. A commonly discussed mechanism for forming compact multiplanet systems is convergent migration in the gaseous protoplanetary disk or the disk of remnant planetesimals \citep{Papaloizou}. Perhaps the Kepler-36 planets formed at large orbital distances, and migrated until being locked into a 6:7 resonance. After migration ended, subsequent tidal evolution could have driven the planets apart to their current configuration. However, we recognize that it may be challenging to create closely packed resonant systems through conventional migration mechanisms \citep{Rein}, though other studies find that it is possible to produce systems like Kepler-36 \citep{IdaLin,Ogihara,Pierens}. Therefore this system may contain important clues about the relevance of convergent migration to the formation and dynamical evolution of planetary systems.
| 12
| 6
|
1206.4695
|
1206
|
1206.4837_arXiv.txt
|
{ Interstellar filaments are an important part of the star formation process. In order to understand the structure and formation of filaments, the filament cross-section profiles are often fitted with the so-called Plummer profile function. Currently, this profiling is usually based on submillimetre studies, especially with \emph{Herschel}. When these data are not available, it would be convenient if filament properties could be studied using ground-based near-infrared (NIR) observations. } { We compare the filament profiles obtained by NIR extinction and submillimetre observations to find out if reliable profiles can be derived using NIR observations. } { We use J-, H-, and K-band data of a filament north of TMC-1 to derive an extinction map from colour excesses of background stars. We also use 2MASS data of this and another filament in TMC-1. We compare the Plummer profiles obtained from these extinction maps with \emph{Herschel} dust emission maps. We present two new methods to estimate profiles from NIR data: Plummer profile fits to the median $A_{\rm V}$ of stars within a given offset bin, or directly to the $A_{\rm V}$ of individual stars. We compare these methods using simulations. } { In the simulations, the extinction maps and the new methods give correct results to within $\sim$10-20\% for modest densities ($\rho_{\rm c}$ = $10^4$--$10^5$ cm$^{-3}$). The direct fit to data on individual stars usually gives more accurate results than the extinction map, and can work at higher densities. In the profile fits to real observations, the values of the Plummer parameters are generally similar to within a factor of $\sim$2 (up to a factor of $\sim$5). Although the parameter values can vary significantly, the estimates of filament mass usually remain accurate to within some tens of per cent. Our results for TMC-1 are in good agreement with earlier results obtained with SCUBA and ISO. High-resolution NIR data give more details, but 2MASS data can be used to estimate approximate profiles. } { NIR extinction maps can be used as an alternative to submm observations to profile filaments. Direct fits to individual stars can also be a valuable tool in profiling. However, the Plummer profile parameters are not always well constrained, and caution should be taken when making the fits and interpreting the results. In the evaluation of the Plummer parameters, one can also make use of the independence of the dust emission and NIR data and the difference in the shapes of the associated confidence regions. }
|
\label{sect:intro} Interstellar filaments have long been recognised as an important part of the star formation process. Recent studies have confirmed that filamentary structures are a common feature of dense interstellar clouds and that objects at different stages of star formation are located in these filaments \citep[e.g.][]{Andre2010,Arzoumanian2011,Juvela2012}. Therefore, it is important to study the formation and structure of the filaments in order to understand the details of star formation. The mass structure of molecular clouds can be studied via a number of methods. These include molecular line mapping, observations of dust emission at far-infrared/submillimetre wavelengths, star counts at optical and near-infrared wavelengths, and measurements of colour excesses of background stars to produce an extinction map. Surface brightness measurements of scattered NIR light have also proved to be a potential method for studying mass distributions~\citep[see e.g.][]{Padoan2006,Juvela2006,Juvela2008}. However, all techniques have their own drawbacks~\citep[see, e.g.,][for a comparison of several methods]{Goodman2009}. For example, line and continuum emission maps are subject to abundance variations (gas and dust, respectively) and variations in the physical conditions. Mass estimates based on dust emission and on the estimation of colour temperature can also be biased because of line-of-sight temperature variations, especially in potentially star-forming high density clouds~\citep{Malinen2011}. The details of filaments have been studied for a long time using various methods, including extinction maps~\citep[see e.g.][and references therein]{Schmalzl2010}. However, the current interest in studying filament profiles originates from submillimetre observations~\citep[e.g.][]{Nutter2008,Arzoumanian2011,Hill2011,Juvela2012}, especially the high resolution data of the \emph{Herschel} Space Observatory~\citep{Pilbratt2010}. Many recent studies have fitted the filament cross-section profiles with the so-called Plummer profile function (see details in Section~\ref{sect:profiles}), because the parameters of the function can give information about the formation and structure of the filament \citep[see more details e.g. in][submitted]{Arzoumanian2011,Juvela2012a}. Large magneto-hydrodynamic (MHD) simulations have shown that filaments can form as a natural result of interstellar turbulence~\citep[e.g.][]{Padoan2001}. \citet[][submitted]{Juvela2012a} have studied the observational effects in determining filament profiles from synthetic submillimetre observations. The use of far-infrared/submillimetre wavelengths is more expensive, as spaceborne observatories are needed. However, in the future, ground-based large-area surveys at submillimetre wavelengths (for example with SCUBA-II) will also become available~\citep[see e.g.][]{Ward-Thompson2007}. If these data are not available, it would be more convenient if ground-based NIR observations could be used to study the detailed properties of cloud filaments. The use of dust extinction could also be more reliable, as it does not depend on the easily biased estimates of colour temperature. It would also be useful to compare mass structures obtained using different methods. We therefore aim to compare the filament Plummer profiles obtained from NIR extinction data to those obtained from submillimetre dust emission observations.
The Taurus molecular cloud is one of the closest (distance $\sim$ 140 pc) and most studied star-forming regions~\citep[see e.g.][and references therein]{Nutter2008,Schmalzl2010,Lombardi2010}. The cloud has also been mapped with \emph{Herschel} as part of the Gould Belt Survey~\citep{Andre2010}, see~\citet{Kirk2012} and Palmeirim et al.~(in prep.). It is therefore a good candidate for studying the structure of filaments. In this paper, we will examine in detail two cloud filaments in Taurus, comparing the results obtained from NIR and submillimetre data. Additionally, we examine two more distant clouds mapped by \emph{Herschel}. The contents of this article are as follows: Observations and data reduction are described in Sect.~\ref{sect:observations} and the methods to derive column densities in Sect.~\ref{sect:columndensity}. We present our methods to derive filament profiles in Sect.~\ref{sect:profiles}. The results of profiling observed filaments are shown in Sect.~\ref{sect:results}. We examine the performance of the analysis methods in Sect.~\ref{sect:simulations}, with the help of simulations. We discuss the methods and results in Sect.~\ref{sect:discussion} and present our conclusions in Sect.~\ref{sect:conclusions}.
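Fitting the Plummer-type profile directly to the $A_{\rm V}$ values of individual stars reduces to a standard least-squares problem; a minimal sketch with synthetic data (one common column-density form of the Plummer function is assumed below, and the 'stars' are placeholders for real colour-excess measurements):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def plummer_profile(r, av_c, r_flat, p):
    """Plummer-like cross-section profile of the extinction:
    A_V(r) = A_V,c / [1 + (r/R_flat)^2]^((p - 1)/2)."""
    return av_c / (1.0 + (r / r_flat)**2) ** ((p - 1.0) / 2.0)

rng = np.random.default_rng(1)
# synthetic background stars: offsets from the filament axis (pc)
# and noisy extinctions, standing in for real colour-excess data
r_stars = rng.uniform(0.0, 0.5, 300)
av_stars = (plummer_profile(r_stars, 10.0, 0.05, 2.5)
            + rng.normal(0.0, 0.5, 300))

popt, pcov = curve_fit(plummer_profile, r_stars, av_stars,
                       p0=[5.0, 0.1, 2.0],
                       bounds=([0.1, 1e-3, 1.1], [50.0, 1.0, 6.0]))
print("A_V,c = {:.2f}, R_flat = {:.3f} pc, p = {:.2f}".format(*popt))
\end{verbatim}
The wide, correlated confidence regions discussed later correspond to a nearly degenerate direction of the covariance matrix pcov in the ($R_{\rm flat}$, $p$) plane.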
|
\label{sect:conclusions} We have compared the filament profiles obtained by submillimetre observations (method $A$) and NIR extinction (methods $B$, $C$, and $D$). \begin{itemize} \item We conclude that NIR extinction maps (method $B$) can be used as an alternative to submillimetre observations to profile molecular cloud filaments. \item We show that if higher resolution NIR data are not available, 2MASS data can also be used to estimate approximate profiles of nearby filaments. \item We also present two new methods for the derivation of filament profiles using NIR extinction data: \begin{itemize} \item[$\bullet$] Plummer profile fits to the median $A_{\rm V}$ values of stars within a given offset bin from the filament centre (method $C$) \item[$\bullet$] Plummer profile fits directly to the $A_V$ values of individual stars (method $D$) \end{itemize} \item We show that in simulations, all the methods based on NIR extinction work best at modest densities, $\rho_{\rm c}$ = $10^4$--$10^5$ cm$^{-3}$. Method $D$ gives correct results for the fitted and derived parameters to within $\sim$10\%, and methods $B$ and $C$ to within $\sim$20\%, in most cases. At high densities ($\sim10^6$ cm$^{-3}$), only method $D$ continues to work reliably. \item For the profile fits to real observations, the values of individual Plummer parameters are in general similar to within a factor of $\sim$2 (in some cases up to a factor of $\sim$5). Although the Plummer parameter values can show significant variation, the derived estimates of filament mass (per unit length) usually remain accurate to within some tens of per cent. \item We confirm the earlier results of \citet{Nutter2008} that the examined TMC-1 structure, or Bull's tail, is a filament and not part of a continuous ring. Our parameter values are also consistent with the earlier results. \item The confidence regions of the Plummer parameters are quite wide. As the parameters are not always well constrained, even small changes in the data can lead to different parameter values. The correlations of the parameters should therefore be taken into account. \item The size and orientation of the confidence regions are not identical when \emph{Herschel} maps or NIR data are used. The combination of the methods can significantly improve the accuracy of the filament parameter estimates. \end{itemize}
| 12
| 6
|
1206.4837
|
1206
|
1206.6715_arXiv.txt
|
At TeV energies, the gamma-ray horizon of the universe is limited to redshifts {$z \ll 1$}, and, therefore, any observation of TeV radiation from a source located beyond {$z=1$} would call for a revision of the standard paradigm. While robust observational evidence for TeV sources at redshifts $z \geq 1$ is lacking at present, the growing number of TeV blazars with redshifts as large as $z \simeq 0.5$ suggests the possibility that the standard blazar models may have to be reconsidered. We show that TeV gamma rays can be observed even from a source at $z \geq 1$, if the observed gamma rays are secondary photons produced in interactions of high-energy protons originating from the blazar jet and propagating over cosmological distances almost rectilinearly. This mechanism was initially proposed as a possible explanation for the TeV gamma rays observed from blazars with redshifts $z \sim 0.2$, for which some other explanations were possible. For TeV gamma-ray radiation detected from a blazar with $z\geq 1$, this model would provide the only viable interpretation consistent with conventional physics. It would also have far-reaching astronomical and cosmological ramifications. In particular, this interpretation would imply that extragalactic magnetic fields along the line of sight are very weak, in the range {$ 10^{-17}\, {\rm G} <B < 10^{-14}\, {\rm G}$}, assuming random fields with a correlation length of 1~Mpc, and that acceleration of $E \geq 10^{17} \ \rm eV$ protons in the jets of active galactic nuclei can be very effective.
|
Recent observations of active galactic nuclei with ground-based gamma-ray detectors show growing evidence of very high energy (VHE) gamma-ray emission from blazars with redshifts well beyond $z=0.1$. In this paper we examine the question of whether TeV blazars can be observed from even larger redshifts, $z \geq 1$. Although primary TeV gamma rays produced at the source are absorbed by the extragalactic background light (EBL), we will show that it is possible to observe such distant blazars as point sources due to secondary photons generated along the line of sight by cosmic rays accelerated in the source. To a large extent, the observations of blazars with $z> 0.1$ came as a surprise, in view of the severe absorption of such energetic gamma rays in the EBL. One of the obvious implications of these observations is the unusually hard (for gamma-ray sources) intrinsic gamma-ray spectra. Remarkably, the {\it observed} energy spectra of these objects in the very high energy band are, in fact, very steep, with photon indices $\Gamma \geq 3.5$. However, after the correction for the expected intergalactic absorption (i.e. multiplying the {\it observed} spectra by the factor $\exp{[\tau(z,E)]}$, where $\tau(z,E)$ is the optical depth of gamma rays of energy $E$ emitted by a source of redshift $z$), the intrinsic (source) spectra appear to be very hard, with a photon index $\Gamma_{\rm s} \leq 1.5$. Postulating that in standard scenarios the gamma-ray production spectra cannot be harder than $E^{-1.5}$, it was claimed that the EBL must be quite low, based on the observations of the blazars H~2356--309 ($z=0.165$) and 1ES~1101--232 ($z = 0.186$) by the HESS collaboration \cite{Aharonian:2005gh}. The derived upper limits appeared to be rather close to the lower limits on the EBL set by the integrated light of resolved galaxies. Recent phenomenological and theoretical studies (e.g., Refs.~\cite{Franceschini,Gilmore2012}) also favor models of the EBL which are close to the limit derived from the galaxy counts (for a recent review see Ref.~\cite{Costamante2012}). This implies that a further decrease in the level of the EBL is practically impossible; thus a detection of TeV gamma rays from more distant objects would call for new approaches to explain or avoid the extremely hard intrinsic gamma-ray spectra. The proposed nonstandard astrophysical scenarios include models with very hard gamma-ray production spectra due to specific shapes of the energy distributions of the parent relativistic electrons -- either a power law with a high low-energy cutoff or a narrow, e.g., Maxwellian-type, distribution. While the synchrotron-self-Compton (SSC) models allow the hardest possible gamma-ray spectrum, with photon index $\Gamma=2/3$ \cite{Tavecchio2009,Lefa2011}, the external Compton (EC) models can provide a gamma-ray spectrum with $\Gamma=1$ \cite{Lefa2011}. Within these models one can explain the gamma-ray emission of the blazar 1ES~0229+200 at $z=0.139$, with a spectrum extending up to several TeV \cite{HESS0229}, and the sub-TeV gamma-ray emission from 3C~279 at $z=0.536$ \cite{Magic:3C279} ($\Gamma_s \sim 1$). Formally, much harder spectra can be expected in the case of Comptonization of an ultrarelativistic outflow \cite{ATPcascade}, in analogy with the cold electron-positron winds in pulsars~\cite{BAh2000}.
Although it is not clear how ultrarelativistic MHD outflows with a bulk motion Lorentz factor $\gamma \sim 10^6$ could form in active galactic nuclei (AGN), such a scenario, leading to Klein-Nishina gamma-ray line-type emission~\cite{AKM2012}, cannot be excluded {\em ab initio}. Further hardening of the initial (production) gamma-ray spectra can be realized through internal $\gamma-\gamma$ absorption inside the source \cite{AKC2008,Zachar2011}. Under certain conditions, this process may lead to an arbitrary hardening of the original production spectrum of gamma rays. Thus, the failure of ``standard" models to reproduce the extremely hard intrinsic gamma-ray spectra is likely to be due to the lack of proper treatment of the complexity of nonthermal processes in blazars, rather than a need for new physics. However, the situation is dramatically different in the case of blazars with redshift $z \geq 1$. In this case the drastic increase in the optical depth for gamma rays with energy above several hundred GeV implies severe absorption (optical depth $\tau \gg 1$), which translates into unrealistic energy budget requirements (even after reduction of the intrinsic gamma-ray luminosity by many orders of magnitude due to the Doppler boosting). In this case, more dramatic proposals, including violation of Lorentz invariance~\cite{Kifune99,SteckerGlashow,JacobPiran} or ``exotic'' interactions involving hypothetical axion-like particles~\cite{De_Angelis:2007dy,Simet:2007sa}, are justified. Despite the very different nature of these approaches, their main objective is the same -- to avoid severe intergalactic absorption of gamma rays due to photon-photon pair production in interactions with the EBL. This feat was accomplished either by means of large modifications of the cross-sections, or by assuming gamma-ray oscillations into some weakly interacting particles during their propagation through the intergalactic magnetic fields (IGMFs), e.g., via photon mixing with an axion-like particle. Alternatively, the apparent transparency of the intergalactic medium to VHE gamma rays can be increased if the observed TeV radiation from blazars is secondary, i.e., if it is formed in the development of electron-photon cascades in the intergalactic medium initiated by primary gamma rays \cite{ATPcascade}. This assumption can, indeed, help us to increase the {\it effective} mean free path of VHE gamma rays, and thus weaken the absorption of gamma rays from nearby blazars, such as Mkn~501 \cite{ATPcascade,ATaylor}. However, for cosmologically distant objects the effect is almost negligible, because the ``enhanced'' mean free path of gamma rays is still much smaller than the distance to the source. A modification of this scenario can explain TeV signals from objects beyond $z=1$ if one assumes that the primary particles initiating the intergalactic cascades are not gamma rays, but protons with energies $10^{17}-10^{19}$~eV~\cite{Essey:2009zg,Essey:2009ju,Essey:2010er,Murase:2011cy,Razzaque:2011jc,Essey:2010nd,Essey:2011wv,Prosekin:2012ne}. AGN are a likely source of very high energy cosmic rays~\cite{Biermann:1987ep,Aharonian:2001dm}. High-energy protons can travel cosmological distances and can effectively generate secondary gamma rays along their trajectories. Secondary gamma rays are produced in interactions of protons with the 2.7~K cosmic microwave background radiation (CMBR) and with the EBL.
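The requirement of nearly rectilinear propagation is, at root, a statement about the proton Larmor radius versus the field coherence length; a back-of-the-envelope sketch of the standard small-angle random-walk deflection, with illustrative numbers for the field range quoted in the abstract:
\begin{verbatim}
import numpy as np

def rms_deflection_deg(E_eV, B_G, D_Mpc, lam_Mpc=1.0):
    """theta_rms ~ sqrt(D * lam) / r_L for a random field of coherence
    length lam; Larmor radius r_L ~ 1.1 kpc (E/1e18 eV)/(B/1e-6 G)."""
    r_L_Mpc = 1.1e-3 * (E_eV / 1e18) / (B_G / 1e-6)
    return np.degrees(np.sqrt(D_Mpc * lam_Mpc) / r_L_Mpc)

# a 10^18 eV proton over ~1 Gpc with 1 Mpc coherence length:
for B in (1e-17, 1e-15, 1e-14):
    print(f"B = {B:.0e} G: theta_rms ~ "
          f"{rms_deflection_deg(1e18, B, 1000.0):.1e} deg")
\end{verbatim}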
|
One can see from Fig.~\ref{fig:spectrum} that the energy spectrum of gamma rays is quite stable from several hundred GeV to 10 TeV and beyond. Although the current statistics of the results reported by HESS do not allow robust conclusions regarding the energy spectrum above 1~TeV, the detection of multi-TeV gamma rays from PKS~0447-439, as well as from other cosmologically distant blazars, would not be a surprise, but rather a natural consequence of the proposed scenario. However, we note that, if the magnetic field is enhanced in the {\em transparency zone}, i.e. in the vicinity of the observer, it could cause a strong suppression of the gamma-ray flux above some energy, which can be found from the condition $\lambda_{\gamma \gamma}(E)=D$. The impact of this effect on the gamma-ray spectrum detected by an observer strongly depends on the linear scale of the enhanced magnetic field, $D$, but not much on the magnetic field itself (as long as the latter is significantly larger than $10^{-15}$G). For example, for $D \sim 300$~Mpc, the steepening of the gamma-ray spectrum starts effectively around 1~TeV. This effect is illustrated qualitatively in Fig.~\ref{fig:spectrum}. The isotropic luminosity of the source in protons required to explain the data~\cite{Zech:2011ym} is in the range $(1-3)\times10^{50}$~erg/s, depending on the spectrum of protons. This is an enormous, but not unreasonable, power, given that the actual (intrinsic) luminosity can be smaller by several orders of magnitude if the protons are emitted within a small angle. In particular, for $\Theta = 3^\circ$, the intrinsic luminosity is comparable to the Eddington luminosity of a black hole with a mass $M \sim 10^{9} M_\odot$. Assuming that only a fraction of the blazar jet energy is transferred to high-energy particles, the jet must operate at a super-Eddington luminosity. While it may seem extreme, this suggestion does not contradict the basic principles of accretion, provided that most of the accretion energy is converted to the kinetic energy of an outflow/jet, rather than to thermal radiation of the accretion flow. Moreover, there is growing evidence of super-Eddington luminosities characterizing relativistic outflows in GRBs and in very powerful blazars~\cite{Ghisellini}. Finally, we note that the protons emitted by cosmologically distant objects are potential contributors to the diffuse gamma-ray background. The total energy deposited into the cascades through secondary Bethe-Heitler pair production does not depend on the orientation of the jet or the beaming angle, but only on the injection power of $\geq 10^{18}$eV protons and on the number of such objects in the universe. Generally, the total energy flux of gamma rays is fairly independent of the strength of the intergalactic magnetic fields, except for the highest energy part of the gamma-ray spectrum. If the contribution of these sources to the diffuse gamma-ray background is dominated by cosmologically distant objects, then the development of the proton-induced electron-photon cascades is saturated at large redshifts. One should, therefore, expect a rather steep (strongly attenuated) spectrum of diffuse gamma rays above 100~GeV. However, in the case of very small intergalactic magnetic fields, the $10^{18}$eV protons can bring a significant amount of nonthermal energy to the nearby universe, and thus enhance the diffuse background with TeV photons.
Perhaps this can explain the unexpected excess of VHE photons in the spectrum of the diffuse gamma-ray background recently revealed by the \emph{Fermi}-LAT data~\cite{Murase:2012}.
| 12
| 6
|
1206.6715
|
1206
|
1206.1629_arXiv.txt
|
The James Clerk Maxwell Telescope Nearby Galaxies Legacy Survey (NGLS) comprises an H\,{\sc i}-selected sample of 155 galaxies spanning all morphological types with distances less than 25 Mpc. We describe the scientific goals of the survey, the sample selection, and the observing strategy. We also present an atlas and analysis of the CO J=3-2 maps for the 47 galaxies in the NGLS which are also part of the Spitzer Infrared Nearby Galaxies Survey. We find a wide range of molecular gas mass fractions in the galaxies in this sample and explore the correlation of the far-infrared luminosity, which traces star formation, with the CO luminosity, which traces the molecular gas mass. By comparing the NGLS data with merging galaxies at low and high redshift which have also been observed in the CO J=3-2 line, we show that the correlation of far-infrared and CO luminosity shows a significant trend with luminosity. This trend is consistent with a molecular gas depletion time which is more than an order of magnitude shorter in the merger galaxies than in nearby normal galaxies. We also find a strong correlation of the $L_{FIR}/L_{CO(3-2)}$ ratio with the atomic-to-molecular gas mass ratio. This correlation suggests that some of the far-infrared emission originates from dust associated with atomic gas and that its contribution is particularly important in galaxies where most of the gas is in the atomic phase.
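The depletion-time comparison can be reproduced schematically from the two luminosities; a hedged sketch in which the SFR and gas-mass calibrations are common literature scalings (with an assumed CO J=3-2/J=1-0 ratio $r_{31}$ and conversion factor $\alpha_{\rm CO}$), not the values fitted in this paper:
\begin{verbatim}
def depletion_time_gyr(L_fir, L_co32, r31=0.6, alpha_co=4.3):
    """t_dep = M_H2 / SFR.  SFR from a common FIR calibration
    (SFR [Msun/yr] ~ 4.5e-44 L_FIR [erg/s]); M_H2 from CO(1-0) via
    alpha_co [Msun (K km/s pc^2)^-1], with CO(3-2)/CO(1-0) = r31 assumed."""
    sfr = 4.5e-44 * L_fir              # Msun / yr
    m_h2 = alpha_co * L_co32 / r31     # Msun
    return m_h2 / sfr / 1e9            # Gyr

# hypothetical normal spiral vs. luminous merger (placeholder numbers):
print(depletion_time_gyr(4e43, 5e8))   # ~2 Gyr
print(depletion_time_gyr(4e45, 5e9))   # ~0.2 Gyr: an order of magnitude shorter
\end{verbatim}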
|
Star formation is one of the most important processes driving the evolution of galaxies. The presence or absence of significant star formation is one of the key characteristics which distinguishes spiral and elliptical galaxies. The intense bursts of star formation triggered by galaxy interactions and mergers produce some of the most luminous galaxies in the local universe \citep{sm96}. At high redshift, many galaxies are seen to be forming stars at rates which far exceed those of all but the most extreme local mergers \citep{t08,tacc10}. Since stars form from gas, specifically from molecular gas, understanding the properties of the interstellar medium (ISM) is critical to understanding the rate and regulation of star formation in galaxies \citep{l08,b08}. At the most basic level, the amount of gas in a galaxy is an important constraint on the amount of star formation the galaxy can sustain \citep{k89,kenn07,l08,b08}. Recent studies have shown that it is the amount of molecular gas that is most important, rather than the total gas content \citep{b08,l08,b11}. This picture is consistent with Galactic studies which show that stars form exclusively in molecular clouds, and most commonly in the densest regions of those clouds \citep{l91a,l91b,a11}. This suggests that the properties of the molecular gas, in particular its average density and perhaps the fraction of gas above a critical density on sub-parsec scales, are likely to affect the resulting star formation. Star formation in turn can affect the properties of the dense gas, by increasing its temperature \citep{w97,meier01,t07} and perhaps by triggering a subsequent generation of stars \citep{z10}. Unusual environments are also likely to affect both the gas properties and the star formation process. High shear in galactic bars, harassment in a dense galaxy cluster, and galaxy mergers and interactions have the potential to either dampen or enhance the star formation process. The wide range of environmental processes at work, both on galactic and extragalactic scales, implies that large samples of galaxies are required to tease out the most important effects, while high resolution is required to isolate individual star forming regions, separate arm from interarm regions, and resolve galactic bars. There have been a number of large surveys of the atomic gas content of nearby galaxies, of which some of the most recent include The H\,{\sc i} Nearby Galaxy Survey, THINGS \citep{w08}, the Arecibo Legacy Fast ALFA survey, ALFALFA \citep{giov05}, and the VLA Imaging of Virgo Spirals in Atomic Gas survey, VIVA \citep{c09}. However, only the 34 galaxies in the THINGS sample and the 53 galaxies in the VIVA sample have sufficient spatial resolution to probe scales of one kiloparsec and below. Compared to the H\,{\sc i} 21 cm line, the CO lines used to trace molecular gas are relatively more difficult to observe due to their shorter wavelengths and the smaller field of view of millimeter-wave radio telescopes equipped with single pixel detectors. As a result, most extragalactic CO surveys have sampled a relatively small region at the center of each galaxy \citep{y95,h03,b93,d01}. Two recent surveys \citep{k07,l09} have used array receivers to observe large unbiased regions, although still in relatively small samples (18 and 40 galaxies).
Finally, dust continuum observations in the submillimeter to far-infrared present an alternative method of tracing the molecular gas that requires a mostly independent set of physical parameters, such as dust emissivity, gas-to-dust mass ratio, and the atomic gas mass \citep{t88,i97,e10,l11}. Two large surveys have been made at 850 $\mu$m, one of an Infrared Astronomical Satellite (IRAS) selected sample \citep{d00} and one of an optically selected sample \citep{v05}. The galaxies selected for these samples were relatively distant ($> 25$ Mpc) and thus primarily global measurements of the dust luminosity were obtained. The Herschel Reference Survey \citep{boselli10} is observing 323 galaxies with distances between 15 and 25 Mpc at 250, 350, and 500 $\mu$m and has significant overlap with the sample presented here. Other Herschel surveys which overlap with the NGLS sample include the Key Insights on Nearby Galaxies: A Far-Infrared Survey with Herschel \citep{k11} and the Herschel Virgo Cluster Survey \citep{d10}. Using the dust continuum to measure the star forming gas is a promising avenue to explore, especially since this method is one that can in principle be used for galaxies at higher redshifts. Taking advantage of new instrumentation for both spectral line and continuum data on the James Clerk Maxwell Telescope (JCMT), we are carrying out the Nearby Galaxies Legacy Survey (NGLS)\footnote{http://www.jach.hawaii.edu/JCMT/surveys/}, a large survey of 155 nearby galaxies using the CO J=3-2 line and continuum observations at 850 and 450 $\mu$m. The survey is designed to address four broad scientific goals. \begin{enumerate} \item {\it Physical properties of dust in galaxies.} The continuum data from this survey will probe a range of the dust spectral energy distribution (SED) that is critical to determining the total mass of dust as well as the relative proportion and physical properties of the different dust components (polycyclic aromatic hydrocarbons, very small grains, large grains). Most importantly, these long wavelength data will trace any excess submillimetre emission that causes an upturn in the dust SED. The submillimetre excess could originate from: dust with temperatures $<$10 K \citep{g03,g05,g11}; very small grains with shallow dust emissivities that are not prominent in the dust SED between 60 and 500 $\mu$m \citep{lis02,z09}; large dust grains with enhanced dust emissivities at $>$850 $\mu$m \citep{b06,o10}; or spinning dust grains \citep{bot10,ade11}. \item {\it Molecular gas properties and the gas to dust ratio.} Our high-resolution data will allow us to compare the radial profiles of the dust, H\,{\sc i}, and CO emission within our well-selected sample. The CO J=3-2 line will effectively trace the warmer, denser molecular gas that is more directly involved in star formation \citep{i09}. In comparison, the CO J=1-0 line also traces more diffuse and low density gas \citep{ww94,r07}. With observations of all three components of the ISM (molecular gas, atomic gas, and dust), we will be able to determine accurate gas-to-dust mass ratios and provide constraints on the variation of the CO-to-H$_2$ conversion factor $X_{CO}$ \citep{s88} by fitting the data \citep{t88,i97,b97}. \item {\it The effect of galaxy morphology.} The larger-scale galaxy environment can play a significant role in the properties and structure of the dense ISM. For example, an increase in the density of the ISM in the centers of spiral galaxies has been attributed to an increased pressure \citep{h93}.
Elliptical galaxies have relatively low column densities of gas and dust \citep{knapp89,b98} combined with an intense radiation field dominated by older, low-mass stars and, in some cases, substantial X-ray halos. Early-type galaxies also show more compact and symmetric distributions of dust at $< 70$ $\mu$m compared to late-type galaxies \citep{bendo07,mm09}. In spiral galaxies, dust and gas properties may differ between arm and inter-arm regions \citep{a02,f10}, while the role of spiral arms in the star formation process is the subject of considerable debate \citep{e86,v88}. With observations across the full range of galaxy morphologies and with sub-kiloparsec resolution, we will be able to study the effect of morphology on the molecular gas properties. \item {\it The impact of unusual environments.} Metallicity has been shown to affect the structural properties of the ISM \citep{l11}. Lower self-shielding is expected to produce smaller regions of cold, dense gas that can be traced by CO emission, along with relatively larger and warmer photon-dominated regions \citep{m97,m06}. Galaxies residing in rich clusters can be affected by ram pressure stripping \citep{l03} and gravitational harassment \citep{m98}, which reduce their ISM content relative to field galaxies. The halos of spiral galaxies can be populated with gas via superwinds or tidal interactions \citep{r10} or even by relatively normal rates of star formation \citep{l97}. With its galaxies spanning the full range of environment, from isolated galaxies to small groups to the dense environment of the Virgo cluster, the NGLS will be able to quantify the effect of environment on the molecular ISM in galaxies. \end{enumerate} Most surveys of molecular gas in our own or other nearby galaxies have used the ground state CO J=1-0 line as a tracer. The choice of the CO J=3-2 line for the NGLS was driven by the available instrumentation at the JCMT. Compared to the CO J=1-0 line (5.5 K above ground with a critical density of $1.1\times 10^3$ cm$^{-3}$), the CO J=3-2 line (33 K and $2.1\times 10^4$ cm$^{-3}$) traces relatively warmer and denser gas. (Since both lines are usually optically thick, the effective critical density is likely reduced by a factor of 10 or more.) There is growing evidence that the CO J=3-2 emission correlates more tightly with the star formation rate or star formation efficiency than does the CO J=1-0 line \citep{w09,muraoka07}. \citet{komugi07} showed that the CO J=3-2 emission correlates linearly with the star formation rate derived from extinction-corrected H$\alpha$ emission and that the correlation was tighter with the J=3-2 line than with the J=1-0 line. Similarly, \citet{i09} showed that the CO J=3-2 emission correlates nearly linearly with the far-infrared luminosity for a sample of local luminous infrared galaxies and high redshift submillimeter galaxies. Thus, it appears that the CO J=3-2 emission is preferentially tracing the molecular gas associated directly with star formation, such as high-density gas that is forming stars or warm gas heated by star formation, rather than the total molecular gas content of a galaxy. In this paper, we describe the NGLS sample selection, the CO J=3-2 observations and data reduction, and the planned observing strategy for continuum 450 and 850 $\mu$m observations (\S\ref{sec-obs}). In \S~\ref{sec-disc}, we present the CO J=3-2 integrated intensity images, as well as maps of the velocity field and velocity dispersion for those galaxies with sufficiently strong signal.
In \S~\ref{sec-comp}, we examine the molecular and atomic gas masses for the Spitzer Infrared Nearby Galaxies Survey (SINGS) sample \citep{k03} and compare the CO J=3-2 luminosity with the far-infrared luminosity for the galaxies in the SINGS sample, for local luminous and ultraluminous infrared galaxies, and for high redshift quasars and submillimeter galaxies which have also been observed in the CO J=3-2 transition \citep{i09}. We give our conclusions in \S\ref{sec-concl}. Previous papers in this series have examined individual or small samples of galaxies \citep{w09,warren10,b10,w11,i11,s-g11}. Future papers will exploit the spatially resolved nature of these data by examining the CO line ratios and excitation (Rosolowsky et al., in prep.) and the gas depletion times (Sinukoff et al., in prep.).
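As a rough guide to the quantities examined in \S~\ref{sec-comp}, the following minimal sketch (in Python) converts a CO J=3-2 line luminosity into a molecular gas mass. The conversion factor $\alpha_{CO} = 4.3$ M$_\odot$ (K km s$^{-1}$ pc$^2$)$^{-1}$ is an assumed, commonly adopted Galactic value including helium, and $r_{31} = 0.18$ is the mean global CO J=3-2/J=1-0 line ratio found by this survey; both enter only as illustrative inputs, not as the survey's definitive calibration.

# Minimal sketch: molecular gas mass from a CO J=3-2 luminosity.
# Assumed inputs: alpha_CO = 4.3 Msun/(K km/s pc^2), a commonly adopted
# Galactic conversion factor including helium; r31 = 0.18, the mean global
# CO J=3-2/J=1-0 line ratio found by this survey.
def h2_mass(L_co32, r31=0.18, alpha_co=4.3):
    """Molecular gas mass in Msun from L_CO(3-2) in K km/s pc^2."""
    L_co10 = L_co32 / r31      # scale the J=3-2 luminosity up to J=1-0
    return alpha_co * L_co10   # apply the CO-to-H2 conversion factor

print("M_H2 ~ %.1e Msun" % h2_mass(10**7.5))   # ~7.6e8 Msun for log L = 7.5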
|
The James Clerk Maxwell Telescope Nearby Galaxies Legacy Survey (NGLS) comprises an H\,{\sc i}-selected sample of 155 galaxies spanning all morphological types with distances less than 25 Mpc. We have used new, large-area CO J=3-2 maps from the NGLS to examine the molecular gas mass fraction and the correlation between far-infrared and CO luminosity using 47 galaxies drawn from the SINGS sample \citep{k03}. We find a good correlation of the CO J=3-2 luminosity with the CO J=2-1 luminosity \citep{l09} and the CO J=1-0 luminosity \citep{k07}, with average global line ratios of $0.18 \pm 0.02$ for the CO J=3-2/J=1-0 line ratio and $0.36\pm 0.04$ for the CO J=3-2/J=2-1 line ratio. The galaxies in our sample span a wide range of ISM mass and molecular gas mass fraction, with 21\% of the galaxies in the sample having more molecular than atomic gas and 60\% having molecular to atomic mass ratios less than 0.5. We explore the correlation of the far-infrared luminosity, which traces star formation, with the CO luminosity, which traces the molecular gas mass. We find that the more luminous galaxies in our sample (with $\log L_{FIR} > 9.5$ or $\log L_{CO(3-2)} > 7.5$) have a uniform $L_{FIR}/L_{CO(3-2)}$ ratio of $62\pm 5$. The lower luminosity galaxies show a wider range of $L_{FIR}/L_{CO(3-2)}$ ratios and many of these galaxies are not detected in the CO J=3-2 line. We can convert this luminosity ratio to a molecular gas depletion time and find a value of $3.0\pm 0.3$ Gyr, in good agreement with recent estimates from \citet{b11}. We also find a correlation of the $L_{FIR}/L_{CO(3-2)}$ ratio with the mass ratio of atomic to molecular gas, which suggests that some of the far-infrared emission originates from dust associated with the atomic gas. This effect is particularly important in galaxies where the gas is predominantly atomic. By comparing the NGLS data with merging galaxies at low and high redshift from the sample of \citet{i09}, which have also been observed in the CO J=3-2 line, we show that the $L_{FIR}/L_{CO(3-2)}$ ratio exhibits a significant trend with luminosity and thus that the correlation of $L_{FIR}$ with $L_{CO(3-2)}$ is not as linear as found by \citet{i09}. Taking into account differences in the CO J=3-2/J=1-0 line ratio and the CO-to-H$_2$ conversion factor between the mergers and the normal disk galaxies, this trend in the $L_{FIR}/L_{CO(3-2)}$ ratio is consistent with a molecular gas depletion time of only 50 Myr in the merger sample, roughly 60 times shorter than in the nearby normal galaxies.
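To make the conversion from the $L_{FIR}/L_{CO(3-2)}$ ratio to a depletion time concrete, here is a back-of-the-envelope sketch. The star formation rate calibration (SFR $\approx 1.7\times10^{-10} L_{FIR}$ in solar units, a Kennicutt-type scaling) and $\alpha_{CO} = 4.3$ are illustrative assumptions rather than the exact calibrations adopted in this work; with them the quoted ratio of 62 gives a depletion time of the same order as the $3.0$ Gyr reported above.

# Back-of-the-envelope depletion time t_dep = M_H2 / SFR from the global
# L_FIR / L_CO(3-2) ratio. Calibrations below are illustrative assumptions.
ratio = 62.0            # L_FIR / L_CO(3-2) in Lsun per (K km/s pc^2)
alpha_co, r31 = 4.3, 0.18
sfr_per_lfir = 1.7e-10  # Msun/yr per Lsun (assumed Kennicutt-type scaling)

t_dep = alpha_co / (r31 * sfr_per_lfir * ratio)  # years
print("t_dep ~ %.1f Gyr" % (t_dep / 1e9))        # ~2.3 Gyr, same order as 3.0 Gyr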
| 12
| 6
|
1206.1629
|
1206
|
1206.3306_arXiv.txt
|
We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recover the familiar result that the bias scales as $k^{-2}$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.
|
The standard cosmological model that fits a wide variety of observations is based on inflation, in particular on the production of a nearly scale-invariant spectrum of adiabatic perturbations at very early times. Inflation also predicts, and this too is verified by the data, that the perturbations should be nearly Gaussian. While inflation is successful in explaining the current suite of observations, it has not been successfully connected to the rest of physics. Equivalently, the mechanism that drove inflation has not been identified. For these purposes, the small deviations from scale-invariance or Gaussianity may prove crucial~\cite{Komatsu:2009kd}. The simplest single field slow-roll models of inflation predict negligible non-Gaussianity, so a detection has the potential to rule out a large class of models. Multiple-field models, on the other hand, often predict levels of non-Gaussianity within the range of upcoming surveys. The cosmic microwave background is the most obvious place to search for non-Gaussianity, as the perturbations are observed when they are still small, and therefore relatively unprocessed. Large scale structure, on the other hand, consists of highly evolved perturbations that are nonlinear, and hence different Fourier modes have mixed with one another. Even if the primordial perturbations were perfectly Gaussian, the observed large scale structure would be highly non-Gaussian. The trick to using large scale structure is to identify a feature in the spectrum that can be caused by primordial non-Gaussianity but not by standard gravitational instability. Recently, such a feature has been identified~\cite{Dalal:2007cu} in the form of scale-dependent bias.\footnote{Generically, effects that scale as $k^{-2}$, mimicking the scale dependent bias on large scales resulting from primordial non-Gaussianity, can arise from relativistic effects in general relativity with Gaussian density fluctuations. However, the effects due to primordial non-Gaussianity are much stronger than those arising dynamically from relativistic effects (see e.g.\ \cite{Bartolo:2010ec, Jeong:2011as, Baldauf:2011bh, Bruni:2011ta}).} While it has been known for some time that primordial non-Gaussianity affects the abundances of rare objects \cite{Matarrese:2000iz,Verde:2000vr, LoVerde:2007ri}, the signal is expected to be diminished because gravitational evolution generically drives the distribution away from Gaussianity. The initial argument for scale-dependent bias was based on both simulations and the statistics of high peak regions~\cite{Press:1973iz,Kaiser:1984sw,Dalal:2007cu,Matarrese:2008nc,Slosar:2008hx,Afshordi:2008ru,McDonald:2008sc,Giannantonio:2009ak}. Subsequently, several groups~\cite{Desjacques:2010gz, Desjacques:2011mq,Scoccimarro:2011pz} have applied the peak-background split in the context of excursion set theory~\cite{Bond:1990iw}. Here we apply a recent generalization of the excursion set approach~\cite{Maggiore:2009rv,Maggiore:2009rx} to the problem of the clustering of halos in order to extract the scale-dependent bias term and to understand under what conditions it holds. We extract the large-scale $k^{-2}$ behavior of the bias for the case of spherical collapse and also generalize to ellipsoidal collapse. Here too the bias contains a scale-dependent $k^{-2}$ piece, but the coefficient differs from that obtained assuming spherical collapse. This paper is organized as follows.
In \S\ref{review}, we briefly review the path integral approach to the excursion set before we derive the unconditional and conditional crossing rates in \S\ref{sec:uncondcross} and \S\ref{sec:condcross} respectively, and finally the halo bias in \S\ref{sec:halobias}. In \S\ref{sec:scaledepbias} we extract the bias parameters. We conclude in \S \ref{sec:conclusions}. In Appendix \ref{app:linearbarrier}, we compare our results to a case where the probability density can be evaluated exactly -- where the barrier depends only linearly on time.
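For orientation, the large-scale scale-dependent bias for local non-Gaussianity takes the Dalal et al.\ form $\Delta b(k) = 3 f_{\rm NL} (b_G - 1)\, \delta_c\, \Omega_m H_0^2 / [c^2 k^2 T(k) D(z)]$. The short sketch below evaluates this expression for illustrative parameter values; normalization conventions for the growth factor and transfer function vary in the literature, so the numbers are indicative only.

# Illustrative evaluation of the k^-2 scale-dependent bias from local f_NL.
# Conventions (growth normalization, f_NL sign) vary; values here are assumed.
def delta_b(k, fnl=10.0, b_gauss=2.0, z=0.5, omega_m=0.27, h=0.7, delta_c=1.686):
    H0_over_c = 100.0 * h / 2.998e5   # H0/c in 1/Mpc
    T_k = 1.0                          # transfer function ~ 1 on very large scales
    D_z = 1.0 / (1.0 + z)              # growth factor, matter-domination scaling
    return (3.0 * fnl * (b_gauss - 1.0) * delta_c * omega_m
            * H0_over_c**2 / (k**2 * T_k * D_z))

for k in (0.001, 0.01, 0.1):           # k in 1/Mpc: the boost grows as k^-2
    print(k, delta_b(k))               # ~1.1, ~0.011, ~1.1e-4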
|
\label{sec:conclusions} Making use of the path integral approach to excursion set theory, we have derived the bias of collapsed objects. We have kept all terms linear in the long wavelength density fluctuation and we have allowed for a completely general moving barrier. The bias contains both the scale-independent contributions, analogous to those first reported by \cite{DeSimone:2011dn} for a general barrier, and contributions that depend on the long wavelength smoothing scale. We have demonstrated how this dependence can be interpreted as a scale-dependent bias and shown that our result reduces to the famous result of Dalal et al.\ \cite{Dalal:2007cu} in the limit of spherical collapse and the local ansatz. In the spherical collapse limit for a general bispectrum shape, we reproduce the results of Desjacques, Jeong and Schmidt \cite{Desjacques:2011mq, Desjacques:2011jb}, including the previously overlooked additional term. In the case where the barrier is not constant and depends on the smoothing scale, we find that the coefficient of the scale-dependent bias is no longer simply related to the Gaussian bias parameter, but rather contains additional terms (\ec{coeff}) that might affect the extraction of $f_{\rm NL}$ from upcoming surveys. While the simple relation is recovered on sufficiently large mass scales, on smaller mass scales we find a significant departure from the expected result. In arriving at this result, we have made several approximations. Following standard techniques in evaluating the effect of a moving barrier (which characterizes ellipsoidal collapse), we truncated the probability distribution in \ec{Piuncond}. This approximation causes $\Pi$ to no longer satisfy the Fokker-Planck equation. However, we have shown in Appendix \ref{app:linearbarrier} that when used in combination with the Fokker-Planck equation to calculate the halo bias, this approximation leads to results consistent with an exact treatment (at least to cubic order in derivatives) when compared to the case of a linear barrier where the probability distribution can be computed exactly. For this reason, we do not think this approximation leads to the upturn in $c_n$ at large $S_n$ depicted in Fig.\ \ref{fig:bias_sn_dependence}. A second approximation, treating non-Gaussianity in the excursion set, is likely to have more effect on our final answer. We evaluated the 3-point function at the end point of the trajectories, as in \ec{correlatorapprx}. As noted there, one may think of this as keeping the zeroth order term in a Taylor expansion about the final time $S_n$. It is straightforward to calculate the contributions from additional terms in this series by making use of the results and methods of \cite{Maggiore:2009rv,Maggiore:2009rw,Maggiore:2009rx}. This approximation can be shown to correspond to an expansion in $S_n/B_{n}^2$. Furthermore, using the saddle point techniques of \cite{D'Amico:2010ta} it is possible to resum a number of higher order corrections and obtain a more accurate formula for the bias. However, to reproduce the existing results in the literature, the zeroth order approximation used here appears sufficient. We leave the calculation of higher order corrections to future work. While this paper was in preparation, we became aware of the work \cite{D'Aloisio:2012hr}, which also considers the halo bias in the path integral formulation of the excursion set.
While the authors of \cite{D'Aloisio:2012hr} consider only a constant barrier, they include next-to-leading order corrections from relaxing the approximation at \ec{correlatorapprx}. The results of this work are consistent with those found in \cite{D'Aloisio:2012hr} in the limit where the barrier is constant and one restricts to the leading order result.
| 12
| 6
|
1206.3306
|
1206
|
1206.0015.txt
|
We apply a new method for determining the local disc matter and dark halo matter densities to kinematic and position data for $\sim 2000$ K dwarf stars taken from the literature. Our method assumes only that the disc is locally in dynamical equilibrium, and that the `tilt' term in the Jeans equations is small up to $\sim 1$\,kpc above the plane. We present a new calculation of the photometric distances to the K dwarf stars, and use a Monte Carlo Markov Chain to marginalise over uncertainties in both the baryonic mass distribution and the velocity and distance errors for each individual star. We perform a series of tests to demonstrate that our results are insensitive to plausible systematic errors in our distance calibration, and we show that our method recovers the correct answer from a dynamically evolved N-body simulation of the Milky Way. We find a local dark matter density of $\rhodm=0.025^{+0.014}_{-0.013}$\,\msun\,pc$^{-3}$ ($0.95^{+0.53}_{-0.49}$\,GeV\,cm$^{-3}$) at 90\% confidence assuming no correction for the non-flatness of the local rotation curve, and $\rhodm=0.022^{+0.015}_{-0.013}$\,\msun\,pc$^{-3}$ ($0.85^{+0.57}_{-0.50}$\,GeV\,cm$^{-3}$) if the correction is included. Our 90\% lower bound on $\rhodm$ is larger than the canonical value typically assumed in the literature, and is in mild tension with extrapolations from the rotation curve that assume a spherical halo. Our result can be explained by a larger normalisation for the local Milky Way rotation curve, an oblate dark matter halo, a local disc of dark matter, or some combination of these.
|
\label{sec:intro} The local dark matter density is an average over a small volume, typically a few hundred parsecs, around the Sun. It provides constraints on the local halo shape and allows us to predict the flux of dark matter particles in laboratory detectors. The latter is required to extract information about the nature of a dark matter particle from such experiments, at least in the limit of a few tens to hundreds of detections \citep{Peter_2011}. The Galactic halo shape can be constrained by combining two methods of determining the local dark matter density. Firstly, one can infer it from the Galactic rotation curve ($\rho^\mathrm{ext}_\mathrm{dm}$). This requires an assumption about the shape of the Galactic halo \citep[typically spherical; e.g.][]{sofue_unified_2008,weber_determination_2010,catena_2010}. Secondly, one can calculate the dark matter density locally from the vertical kinematics of stars near the Sun ($\rho_\mathrm{dm}$) \citep[e.g.][]{bahcall_self-consistent_1984,holmberg_local_2000}. If $\rho_\mathrm{dm}<\rho^\mathrm{ext}_\mathrm{dm}$, this suggests a prolate dark matter halo for the Milky Way; while $\rho_\mathrm{dm}>\rho^\mathrm{ext}_\mathrm{dm}$ could imply either an oblate halo or a dark disc \citep{lake_must_1989,read_thin_2008,read_dark_2009}. Determining the local matter density from the kinematics of stars in the Solar Neighbourhood has a long history dating back to \cite{oort_force_1932,oort_note_1960} in the 1930s. Oort used the classical method of solving the combined Poisson-Boltzmann equations for a sample of stars, assumed to be stationary in the total matter distribution of the disc. He found 50\% more mass than the sum of known components. A more modern study by \cite{bahcall_self-consistent_1984} introduced a new method that described the visible matter as a sum of isothermal components. He also found dynamically significant dark matter in the disc\footnote{We should be careful about what we mean by `dark matter in the disc'. Early studies like \cite{oort_force_1932} were typically interested in missing disc-like matter (a `thin dark disc'); more modern studies try to constrain a significantly more extended dark matter halo that has a near-constant dark matter density up to $\sim 1$\,kpc. Even the `dark disc' predicted by recent cosmological simulations \citep{read_thin_2008,read_dark_2009} is sufficiently hot that its dark matter distribution is approximately constant up to $\sim 1$\,kpc. Throughout this paper when we talk about `dark matter in the disc' we refer to a constant density dark matter component within the disc volume.} \citep{bahcall_k_1984}. Using faint K dwarfs at the South Galactic Pole, \cite{bahcall_local_1992} confirmed his earlier result that more than $50\%$ of the mass was dark, although with a lower statistical significance. However, the early studies by \cite{oort_force_1932,oort_note_1960} and \cite{bahcall_self-consistent_1984,bahcall_k_1984} assumed that different tracers could be simply averaged to form a single tracer population. \cite{kg_1989c} demonstrated that the two samples of F stars analysed by \cite{bahcall_self-consistent_1984} were not compatible (i.e. they had different spatial density distributions, but no evidence for a difference in their kinematics) and therefore should not be averaged.
They re-analysed the K giant sample used by \cite{bahcall_k_1984}, assigning more realistic errors to the density profile and using a more detailed fit to the velocity data, finding a value of the total matter density compatible with the observed one. They concluded that the determination of the local volume density remained limited by systematic and random errors with the available data. With the launch of the ESA satellite Hipparcos (1997), the kinematics and positions of tracer stars were measured with much higher accuracy. The improved distance measures give a much more accurate measurement of the local luminosity function, so that the total amount of visible matter can be better estimated as well. The latest dynamical measurements of the local density of matter -- $\rhotot$ -- from Hipparcos data show no compelling evidence for a significant amount of dark matter in the disc \citep{creze_distribution_1998,holmberg_local_2000}. \cite{holmberg_local_2000} found $\rhotot = 0.102\pm0.01$M$_{\odot}$pc$^{-3}$, with a contribution of about $0.095$M$_{\odot}$pc$^{-3}$ in visible matter, consistent with the value of \cite{kg_1989b}. In addition to the local volume density, several authors have calculated the local {\it surface density} of gravitating matter, probing to larger heights above the disc plane \citep[typically $\sim 1$\,kpc; e.g.][]{kg_1989a, kg_1989b,kg_1991, holmberg_local_2004}. Using faint K dwarfs at the South Galactic Pole and a prior from the rotation curve, \cite{kg_1989b, kg_1991} find $\rhodm^\mathrm{KG} = 0.010\pm 0.005$\,\msun\,pc$^{-3}$, consistent with that expected from the rotation curve assuming a spherical Galactic dark matter halo\footnote{Note that this consistency with the rotation curve is somewhat circular since this is input as a prior in their analysis.} \citep[e.g.][]{sofue_unified_2008,weber_determination_2010,catena_2010}. A similar result was found in the post-Hipparcos era by \cite{holmberg_local_2004}. Recently, \cite{monibidin_2012} have estimated the surface density using tracers at heights $1.5 < z < 4$\,kpc above the disc, making the rather stronger claim (incompatible with the earlier results of \cite{kg_1991} and \cite{holmberg_local_2004}) that there is no dark matter near the Sun. However, \cite{bovytrem_2012} demonstrate that this result is erroneous and stems from one of the ten assumptions used by \cite{monibidin_2012} being false. Furthermore, \cite{sanders_2012} estimate that the velocity dispersion gradients derived by \cite{monibidin_2012} could be biased by up to a factor of two, which would also significantly alter their determination of $\rhodm$. With next generation surveys round the corner \citep[e.g. Gaia;][]{jordan_gaia_2008}, a significant improvement in the number of precision astrometric, photometric and spectroscopic measurements is expected. For this reason, \cite{paper1} (hereafter Paper I) revisited the systematic errors in determining $\rhodm$ from Solar Neighbourhood stars; these will likely soon become the dominant source of error, if they are not already. We were the first to use a high resolution N-body simulation of an isolated Milky Way-like galaxy to generate mock data. We used these mock data to study a popular class of mass modelling methods in the literature that fit an assumed distribution function to a set of stellar tracers \citep{holmberg_local_2000,binney_galactic_2008}.
We found that realistic mixing of stars due to the formation of a bar and spiral arms (similar to those observed in the Milky Way) breaks the usual assumption that the distribution function is separable, leading to systematic bias in the recovery of $\rhodm$. We then introduced a new method that avoids this assumption by fitting instead moments of the distribution function (i.e. that solves the Jeans-Poisson equations). Our Minimal Assumption method (or MA method) uses a Monte Carlo Markov Chain technique (hereafter MCMC) to marginalise over remaining model and measurement uncertainties. Given sufficiently good data, we showed that our method recovers the correct local dark matter density even in the face of disc inhomogeneities, non-isothermal tracers and a non-separable distribution function. In this article, we apply our MA method to real data from the literature. The key advantages of our new method over previous works are that: (i) we use a `minimal' set of assumptions; (ii) we use a MCMC to marginalise over both model and measurement uncertainties; and (iii) we require no prior from the Milky Way rotation curve as has been commonly used in previous works \citep{kg_1989b,kg_1991,holmberg_local_2000,holmberg_local_2004}. The latter means that we can compare our determination to that derived from the rotation curve to constrain the Milky Way halo shape. Our method requires at least one equilibrium stellar tracer population with known density fall-off $\nu(z)$ and vertical velocity dispersion $\vztwo(z)$, both as a function of height $z$. The requirements for a suitable sample of stellar tracers are that: (i) they are in dynamical equilibrium with the Galactic potential (i.e. they must be sufficiently dynamically old to have completed many vertical oscillations through the Galactic plane); (ii) they are available in sufficient numbers to give good statistical precision; (iii) they have reliable distances and vertical velocities $v_z$; (iv) the sample completeness needs to be sufficiently well understood in order to measure the density fall-off as a function of the distance $z$ from the disc plane; and (v) they extend up to 2-3 times the disc scale height (in order to break a degeneracy between the disc and dark matter densities; see Paper I). While full six dimensional phase space information is now available for a large number of stars (e.g. RAVE \citealt{steinmetz_rave:_2003,steinmetz_radial_2006,zwitter_radial_2008}; and SEGUE \citealt{yanny_segue:_2009}), these surveys are magnitude rather than volume complete, with additional survey selection effects based on colour. This makes it difficult to reliably estimate $\nu(z)$ for a given tracer population. For this reason, we return to the volume complete K-dwarf data from \cite{kg_1989b} for our disc tracers -- the `KG' data. These data consist of a photometric sample of 2016 K dwarf stars, complete in the $z$-range $\sim 0.2-1.5$\,kpc, with a spectroscopic sample of 580 K dwarfs (most of which are included in the photometric catalogue). We use data from Hipparcos and SEGUE \citep{kotoneva_2002,zhang} to perform a new photometric distance measurement for each K dwarf star. We model the local gravitational potential using the baryonic mass distribution of the Galactic disc by \cite{flynn_2006}. This article is organised as follows. In Section \ref{sec:data}, we present the K dwarf data from \cite{kg_1989b} (hereafter KG89II) and describe our new distance determinations (\ref{sec:newdist}).
In Section \ref{sec:Met}, we summarise our MA mass modelling method (\ref{sec:MA}) and, for comparison, the method adopted by KG89II (\ref{sec:KG}). In Section \ref{sec:Nbody}, we test both methods on a mock data set derived from a dynamically evolved N-body simulation. In Section \ref{sec:Res}, we apply our MA method to the KG data and present our results. Finally in Section \ref{sec:discuss+concl}, we summarise and present our conclusions.
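To illustrate the core of the MA approach, the sketch below integrates the tilt-free vertical Jeans equation, $\sigma_z^2(z) = \nu(z)^{-1}\int_z^\infty \nu\, K_z\, dz'$, for a toy model: an exponential tracer population in the potential of a razor-thin disc plus a constant dark matter density. The disc surface density, tracer scale height and $\rho_\mathrm{dm}$ values are illustrative placeholders, not the paper's fitted model.

import numpy as np

G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def sigma_z2(z, z0=300.0, sigma_s=46.0, rho_dm=0.025):
    """Predicted sigma_z^2 in (km/s)^2 at height z [pc] for nu ~ exp(-z/z0).
    Toy vertical force: K_z ~ 2 pi G Sigma_s + 4 pi G rho_dm z above the disc."""
    zs = np.linspace(z, 10.0 * z0, 2000)
    nu = np.exp(-zs / z0)
    Kz = 2.0 * np.pi * G * sigma_s + 4.0 * np.pi * G * rho_dm * zs
    return np.trapz(nu * Kz, zs) / np.exp(-z / z0)

print(np.sqrt(sigma_z2(500.0)))  # ~26 km/s at z = 500 pc for these inputs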
|
\label{sec:discuss+concl} We have presented a new measurement of the local matter and dark matter densities from the kinematics of K dwarf stars near the Sun. We presented a new photometric distance calibration for the K dwarf data of KG89II (the KG data), derived using modern survey catalogues and the Hipparcos satellite data. We then used these data as tracers of the local gravitational potential to calculate the visible ($\rhos$) and dark matter ($\rhodm$) densities at the solar position $R_\sun$ and the surface density of the Milky Way disc up to 1.1\,kpc above the plane ($\Sigma_s$). To determine $\rhodm$ and $\rhos$, we applied our new mass modelling method (presented already in Paper I) that relies on a minimum set of assumptions (the MA method) to the rejuvenated KG data. The key advantages of our new method are that: (i) we do not require any hypothesis about the shape of the tracers' velocity distribution function; (ii) we use a MCMC to marginalise over uncertainties in the distances and velocities of the tracer stars, and the underlying baryonic mass model for the visible disc; and (iii) we require no prior from the Milky Way rotation curve as has been commonly used in previous works. The latter means that we can compare our determination to that derived from the rotation curve to constrain the Milky Way halo shape. We used a dynamically evolved high resolution N-body simulation of a Milky Way-like galaxy as a mock data set to test our MA method, finding that we could correctly recover $\rhodm$ and $\rhos$ within our 90\% confidence interval (for eight sample Solar neighbourhood-like volumes) even in the face of disc inhomogeneities, non-isothermal tracers, asymmetric distance errors and a non-separable tracer distribution function. Furthermore, we confirmed the result from our Paper I that assuming a separable distribution function (as has been typically done in the modern literature) leads to a biased determination of $\rhodm$. Applying our MA method to the K dwarf data, we obtain a new measurement of the local dark matter density: $\rhodm= 0.025^{+ 0.014}_{- 0.013}$\,\msun\,pc$^{-3}$ ($0.95^{+0.53}_{-0.49}$\,GeV\,cm$^{-3}$); which, after adding a correction term for the local non-flatness of the rotation curve ($\simeq -0.0033\pm0.0050$, see Sections \ref{sec:MA} and \ref{sec:Res}), gives: $\rhodm=0.022^{+0.015}_{-0.013}$\,\msun\,pc$^{-3}$ ($0.85^{+0.57}_{-0.50}$\,GeV\,cm$^{-3}$). Our new value is systematically larger than the results from KG89II and KG91 derived from the same data. We show that this primarily owes to our new MA modelling method (and the fact that it does not assume a separable distribution function for the tracers); our new distance determination for the K dwarfs plays a more minor role. At the same time we determine a value of the local visible matter density of $\rhos = 0.098^{+0.006}_{-0.014}$\,\msun\,pc$^{-3}$ that largely reflects the prior from our baryonic mass model. Our error bars are larger than is often quoted in the literature; however, they reflect the full combination of model systematic, measurement and statistical uncertainties. Other recent determinations either rely on the rotation curve and therefore a strong assumption about the Milky Way halo shape, or require a large number of assumptions with associated (and typically unmodelled) systematic errors.
In addition to measuring $\rhodm$ and $\rhos$, we also obtain an estimate of the baryonic disc mass up to $z = 1.1$\,kpc above the disc plane: $\Sigma_\mathrm{s}=45.5^{+5.6}_{-5.9}$\,\msun\,pc$^{-2}$ at 90\% confidence. This is slightly lower than the mean of the prior from our baryonic mass model: $\Sigma_\mathrm{vis} = 49.4 \pm 4.6$\,\msun\,pc$^{-2}$. Splitting this into the contribution from stars and stellar remnants: $\Sigma_* = 33.4^{+5.5}_{-5.2}$\,\msun\,pc$^{-2}$ and gas: $\Sigma_\mathrm{g} = 12.0^{+1.9}_{-2.0}$\,\msun\,pc$^{-2}$, we see that our model favours a slightly lower surface density in both the gas and the stars than the mean of our priors ($\Sigma_{\mathrm{g}}^\mathrm{obs}=13.3\pm3.4$\,\msun\,pc$^{-2}$, $\Sigma_{*}^\mathrm{obs}=36.1\pm3.0$\,\msun\,pc$^{-2}$). It is this tendency for our models to favour a lower disc surface density that leads to our high median value for $\rhodm$ (see figure \ref{fig:chi} and Appendix \ref{app:low-high-z}). Unfortunately, current estimates of the stellar and gaseous inventory in the Solar neighbourhood are too uncertain to confirm or rule out our favoured $\Sigma_\mathrm{s}$ \citep[e.g.][]{bovy_rix_2012,ferriere2001}. Our median value of the local dark matter density is larger at 90\% confidence than the Standard Halo Model value of $\rhodm^\mathrm{SHM}=0.008$\,\msun\,pc$^{-3}$ ($0.30$\,GeV\,cm$^{-3}$) usually adopted in the literature. This could be a statistical fluctuation; in one out of eight patches in our simulated mock data, our method overestimated $\rhodm$ by $\sim90\%$. However, if our high median value is confirmed by future data then it has some interesting implications. Firstly, it is particularly important for direct detection experiments because it implies a larger flux of dark matter particles and therefore a greater chance of detection. Secondly, our result is in mild tension with the value of $\rhodm^\mathrm{ext}$ extrapolated from the rotation curve measurements, assuming a spherical dark matter halo. This suggests that the halo of our Galaxy is oblate and/or that we have a disc of dark matter, as predicted by recent cosmological simulations (see upper panel of figure \ref{fig:KGlik}).
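As a quick sanity check on the unit conversions quoted above, the following snippet converts \msun\,pc$^{-3}$ to GeV\,cm$^{-3}$ using standard physical constants (1\,\msun\,pc$^{-3} \approx 38$\,GeV\,cm$^{-3}$).

# Unit-conversion check: Msun pc^-3 -> GeV cm^-3, using standard constants.
Msun_kg = 1.989e30       # solar mass in kg
GeV_kg  = 1.783e-27      # mass of 1 GeV/c^2 in kg
pc_cm   = 3.086e18       # parsec in cm

f = (Msun_kg / GeV_kg) / pc_cm**3   # ~38.0 GeV cm^-3 per Msun pc^-3
print(0.025 * f, 0.022 * f)         # ~0.95 and ~0.84 GeV cm^-3, matching the
                                    # quoted values up to rounding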
| 12
| 6
|
1206.0015
|
1206
|
1206.6523_arXiv.txt
|
We present the results from a detailed analysis of photometric and spectrophotometric data on five Seyfert 1 galaxies observed as a part of a recent reverberation mapping program. The data were collected at several observatories over a 140-day span beginning in 2010 August and ending in 2011 January. We obtained high sampling-rate light curves for Mrk~335, Mrk~1501, 3C\,120, Mrk~6, and PG\,2130+099, from which we have measured the time lag between variations in the 5100\,\AA \ continuum and the \Hbeta \ broad emission line. We then used these measurements to calculate the mass of the supermassive black hole at the center of each of these galaxies. Our new measurements substantially improve previous measurements of \mbh and the size of the broad line-emitting region for four sources and add a measurement for one new object. Our new measurements are consistent with photoionization physics regulating the location of the broad line region in active galactic nuclei.
|
In the past two decades, several correlations have been observed between various properties of galaxies and the masses of their central supermassive black holes (BHs). Two of the best-studied correlations, in both active and quiescent galaxies, are the relations between the mass of the central black hole ($M_{\rm BH}$) and the stellar velocity dispersion of the host bulge, commonly known as the \msigma relation (e.g., \citealt{Ferrarese00}; \citealt{Gebhardt00a}, \citealt{Tremaine02}; \citealt{Onken04}; \citealt{Woo10}), and the relation between \mbh and the luminosity of the host bulge, also referred to as the \mlum relation (e.g., \citealt{Kormendy95}; \citealt{Magorrian98}; \citealt{Bentz09b}; \citealt{Gultekin09}). The existence of these correlations suggests that there is a connection between supermassive BH growth and galaxy evolution. If this connection exists, simulations or theories of galaxy and BH growth must naturally produce these observed correlations. Explanations for the observed $M_{\rm BH}$--galaxy correlations have ranged from hierarchical mergers and quasar feedback to self-regulated BH growth (e.g., \citealt{Silk98}; \citealt{Dimatteo05}; \citealt{Hopkins09a}), although there are also arguments that it is simply a consequence of random mergers (e.g., \citealt{Peng07}; \citealt{Peng10}; \citealt{Jahnke11}). A large sample of accurate direct \mbh measurements is crucial to understanding this BH--galaxy connection. Because the BH sphere of influence is much too small to be resolved in all but the nearest galaxies, the only direct method of measuring \mbh in distant galaxies is reverberation mapping (\citealt{Blandford82}; \citealt{Peterson93}), which is applicable to Type 1, or broad-line, active galactic nuclei (AGNs). Reverberation mapping relies on the correlation between variations of the AGN continuum emission and the subsequent response of the broad emission lines. By monitoring AGN spectra over a period of time, one can measure the radius of the broad line region by observing the time delay, or ``lag'', between fluctuations in the continuum and emission-line fluxes, which is due to light travel time between the continuum source and the BLR. Assuming the gas is in virial motion, this BLR radius, $R_{\rm BLR}$, can be combined with some measure of the BLR gas velocity from the Doppler-broadened emission-line widths to obtain an estimate of $M_{\rm BH}$. To date, this method has been applied to measure BLR radii and \mbh in nearly 50 AGNs (e.g., \citealt{Peterson04}; \citealt{Bentz09c}; \citealt{Denney10}). See \cite{Marziani12} for a recent review on using the BLR to measure \mbh. These measurements have confirmed the existence of a correlation predicted by photoionization theory between the radius of the BLR and the AGN continuum luminosity, known as the \radlum \ relation (e.g., \citealt{Davidson72}; \citealt{Davidson79}). This correlation allows one to obtain both velocity and \rblr estimates from a single calibrated spectrum, and has been used to calculate \mbh in large samples of AGNs (e.g., \citealt{YShen08}). Such samples can be used to investigate the evolution of the BH mass function (e.g., \citealt{Greene07}; \citealt{Vestergaard08}; \citealt{Vestergaard09}; \citealt{Kelly10}), the growth of BHs compared to their hosts, the Eddington ratios of quasars (e.g., \citealt{Kollmeier06}; \citealt{Kelly10}), and even the dependence of accretion disk sizes on BH mass (\citealt{Morgan10}).
The existence of local correlations between host properties and \mbh provides another means of exploring BH populations, where BH masses can be inferred from the properties of their hosts. However, there has recently been some discussion on the nature of these correlations, especially the \msigma relation. In these applications, the \msigma relation is assumed to be similar in quiescent and active galaxies, but there are claims that many AGNs lie below or above the \msigma relation at both the high- and low-luminosity ends (see, for example, \citealt{Dasyra07}; \citealt{Greene10}; \citealt{Mathur11}). Whether or not \mbh estimates based on these relationships are reliable is openly debated. Continuing to make new and improved \mbh measurements using reverberation mapping is one way to investigate this. Light curve quality, in terms of sampling density, duration, and flux precision, is a very important factor in reverberation measurements. In particular, light curves that are too short in duration or inadequately sampled can result in incorrect lag measurements (e.g., \citealt{Perez92}; \citealt{Welsh99}; \citealt{Grier08}). Since the 1990s, our view of what constitutes ``adequately sampled'' has changed dramatically, and we now know that some of the early measurements need to be redone, as their sampling rates are low enough that we have serious doubts about their suitability in recovering BLR radii. In a continuing effort to improve the database of reverberation-mapped objects, we carried out a massive reverberation mapping program at multiple institutions beginning in 2010 August and running until 2011 January. The main goals of our program were (1) to re-observe old objects lacking well-sampled light curves, (2) to expand the reverberation-mapped sample by observing new objects, (3) to obtain velocity-delay maps for several of the targets, and (4) if possible, to measure a reverberation lag in the high-ionization \HeII \ emission line in a narrow-line Seyfert 1 galaxy (Mrk 335 in this case, with results published in \citealt{Grier12a}). We limited our target list to galaxies with expected time lags that were short enough to allow successful measurements during our four-month long campaign. Our final target list included eight objects, and we succeeded in measuring lags for six. Two objects, NGC\,4151 and NGC\,7603, were dropped due to weather-related time losses. Here we present lag measurements for five of the six remaining objects, while the sixth object, NGC\,7469, presents us with a number of interesting challenges and will be discussed in a future work. These five targets and their basic properties are listed in Table \ref{Table:obj_info}. We will discuss velocity-delay maps in a forthcoming study.
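For reference, the virial mass estimate described above is $M_{\rm BH} = f\, c\,\tau\, \sigma_{\rm line}^2 / G$, with $f$ a dimensionless virial scale factor. The sketch below evaluates it for illustrative inputs; $f \approx 5.5$ is a commonly adopted mean value, and the line dispersion used is an assumed number for illustration rather than a measurement from this paper.

# Sketch of the virial black hole mass estimate, M_BH = f c tau sigma^2 / G.
# f = 5.5 is a commonly adopted mean scale factor (an assumption here);
# sigma_line is an illustrative value, not a measurement from this paper.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units
DAY = 86400.0

def mbh(tau_days, sigma_kms, f=5.5):
    """Virial black hole mass in Msun from an Hbeta lag and line dispersion."""
    r_blr = c * tau_days * DAY                       # BLR radius from the lag
    return f * r_blr * (sigma_kms * 1e3)**2 / (G * Msun)

# e.g. a 15.5-day lag with sigma_line ~ 3300 km/s gives ~1.8e8 Msun,
# comparable to the mass reported below for Mrk 1501
print("%.2e Msun" % mbh(15.5, 3300.0))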
|
\subsection{The Radius--Luminosity Relationship} We compute the average 5100\,\AA \ luminosities of our sources, correcting for host galaxy contamination following \cite{Bentz09a}. We measure the observed-frame host-galaxy flux in our aperture for each source using HST images (Table \ref{Table:static}). With these measurements, we calculate the host-subtracted, rest-frame 5100\,\AA \ AGN luminosity for placement on the radius-luminosity relationship. The final host-subtracted AGN luminosities are given in Table \ref{Table:mbh}. Note that we do not currently have HST images from which to measure the host luminosity for two of our objects, Mrk 6 and Mrk 1501. As a consequence, the luminosities listed for these objects are the total 5100\,\AA\ luminosities rather than just that of the AGN, and we expect them to fall to the right of the \radlum relationship. Figure \ref{fig:f5} shows the \cite{Bentz09a} \radlum relationship and the placement of our new measurements. Previous measurements from \cite{Bentz09a} are represented as open shapes, while our new measurements are represented by filled shapes, varying in shape and color by object. We have not re-fit the best-fit trend including our new data; we leave this to a future work. Mrk 335 and 3C\,120 both fall very close to their positions from \cite{Bentz09a}, but we have increased the precision of their $R_{\rm BLR}$ measurements. PG\,2130+099 continues to lie somewhat to the right of the relation. Both Mrk 6 and Mrk 1501 also lie noticeably below the relationship, as is expected since we were unable to subtract the host galaxy starlight --- we therefore show these luminosity measurements as upper limits. Host measurements for these galaxies will shift both of them to lower luminosities and hence closer to the existing \radlum relation. To see where we expect Mrk 1501 and Mrk 6 to lie on the relation after host subtraction, we examined the host galaxy light fraction in galaxies with BLR sizes (i.e. lags) similar to these two objects. Using measurements from \cite{Bentz09a}, we calculated the average fraction of host galaxy light among galaxies with similar lags, and used this fraction to calculate the expected host galaxy fluxes, and hence the expected host-subtracted luminosities, in Mrk 1501 and Mrk 6. Host galaxies in objects with lags similar to Mrk 1501 contributed on average 34\% of the total luminosity, so we expect Mrk 1501 to change from log $\lambda L_{5100}$ = 44.32 $\pm$ 0.05 to around 44.10. Host galaxies in objects with lags similar to Mrk 6 contributed on average 56\% of the total luminosity. If we applied this to Mrk 6, the host-subtracted luminosity would then be log $\lambda L_{5100}$ = 43.40. Both of these objects will likely continue to lie below the current \radlum relation, but within the normal range of scatter currently observed. However, it is important to note that there is a very large scatter in the fraction of the luminosity contributed by the host galaxies in general, so these numbers are used only as very rough estimates. \subsection{Comments on Individual Objects} \subsubsection{Mrk 335} Previous reverberation measurements of Mrk 335 were made by \cite{Kassebaum97} and \cite{Peterson98} and reanalyzed by \cite{Peterson04} and \cite{Zu11}. Previous \Hbeta \ measurements for this object are quite good, and it was included in this study mainly for the potential to measure the size of the high ionization component of the BLR.
Details from our study have been reported by \cite{Grier12a}, and the data have been included in this study for completeness. Our new measurement of \rblr = 14.1$^{+0.4}_{-0.4}$ days is consistent with the previous measurement of \rblr = 15.3$^{+3.6}_{-2.2}$ days (\citealt{Zu11}) when taking into account the luminosity change of Mrk 335 between these two campaigns. In other words, the position of Mrk 335 on the \radlum relationship changed predictably given the expected photoionization slope of $R\sim L^{1/2}$ (i.e., $\tau \sim L^{1/2}$). \subsubsection{Mrk 1501} No previous reverberation mapping measurements exist for Mrk 1501. We measure $\tau$~=~15.5$^{+2.2}_{-1.9}$ days and a resulting black hole mass of $M_{\rm BH}$~=~(1.84~$\pm$~0.27)~$\times$~10$^{8}$~\Msun. As noted above, this object lies noticeably to the right of the \radlum relation, which is expected since we have not yet subtracted the host galaxy contribution to the 5100\,\AA \ luminosity due to the lack of HST imaging data. As mentioned above, once we have corrected for the host contribution we expect the object to lie below the relation, but still within the normal scatter. \subsubsection{3C\,120} 3C\,120 was observed by \cite{Peterson98} and reanalyzed by \cite{Peterson04}. The latter study reported $\tau_{\rm cent}$~=~ 39.4$^{+22.1}_{-15.8}$~days, corresponding to \mbh~=~ 5.55$^{+3.14}_{-2.25}$~$\times$~10$^{7}$~\Msun. We included 3C\,120 in our campaign in an effort to reduce the large uncertainties in $R_{\rm BLR}$. Our new measurement of $\tau$~=~27.2$^{+1.1}_{-1.1}$ days leads to $M_{\rm BH}$~=~(6.7~$\pm$~0.6)~$\times$~10$^{7}$~\Msun, which is consistent with the previous measurements, but has much smaller uncertainties due to both better-sampled light curves and the improved techniques of measuring lags using SPEAR. Our new measurements place this object slightly below the \radlum relation, consistent with its previously-measured position. \subsubsection{Mrk 6} Mrk 6 was observed in reverberation studies by \cite{Sergeev99}, \cite{Doroshenko03}, and \cite{Doroshenko12}, who measured \Hbeta \ time lags using cross correlation. \cite{Doroshenko12} report $\tau_{\rm cent}$ = 21.1~$\pm$~1.9~days. This measurement was used to calculate \mbh~=~(1.8~$\pm$~0.2)~$\times~10^{8}$ \Msun. This study used light curves that cover a very long time period with sparser sampling than our campaign. Because of our dense time sampling, our light curves are sensitive to lags as small as a day or two. We measure a \Hbeta \ time lag of 9.2~$\pm$~0.8~days and \mbh~=~(1.36~$\pm$~0.13)~$\times~ 10^{8}$~\Msun. Our new $\tau$ measurement is substantially lower than the previous measurement -- however, varying BLR sizes are expected if the luminosity of the object changes, in accordance with the \radlum relation. In this case, the previous study reports lower AGN luminosity measurements than we find, and by the \radlum relation we would also expect a smaller $\tau$ measurement in their data. However, they measure a lag on the order of twice the length of ours, so this difference cannot be explained by a change in luminosity state. To investigate, we ran the light curves from \cite{Doroshenko12} through both the CCF and SPEAR analysis software, and obtained results that are generally consistent with theirs to within errors when using cross correlation. However, we do note that the lags we measure using SPEAR are noticeably lower than the lags they report when we confine our attention to their better-sampled light curves.
For example, with their best-sampled light curves that cover the end of their observing period, we measure $\tau$~=~11.5$^{+1.2}_{-0.8}$ days, where they report $\tau$ = 20.4$^{+4.6}_{-4.1}$ days for the same light curves. The median spacing between observations in the \cite{Doroshenko12} light curves is always above 10 days, which we suspect renders their light curves insensitive to lags shorter than this. We are confident that our measurement of $\tau$~=~9.2 days is accurate for our data set, as the lag signal is clearly visible in our light curves and the sampling rate is very high in both the continuum and \Hbeta \ light curves. Mrk 6 has a very interesting \Hbeta \ profile (see Figure \ref{fig:f1}) that has been observed to change dramatically both in flux and shape (\citealt{Doroshenko03}, \citealt{Sergeev99}). The rms line profile from our study is clearly double-peaked and shows significant blending of the \heii \ emission with the \Hbeta \ emission. To verify that our line width measurement is not affected by the \heii \ component, we fit a second-order polynomial to the \heii \ feature in the rms spectrum and subtracted it from the total rms spectrum. We then re-measured the line width from this new spectrum and obtained a measurement consistent with that taken from the entire rms spectrum. This suggests that the \heii \ blending did not affect our measurement of $\sigma_{\rm line}$, so we adopted our original measurement for use in the \mbh calculations. There are a variety of physical models that can produce this double-peaked profile, many of which we expect would show clear velocity-resolved signatures in our data. This analysis is beyond the scope of this paper and will be explored in detail in a future work. \subsubsection{PG\,2130+099} Initial reverberation results for PG\,2130+099 were published by \cite{Kaspi00}, who measured a value of $\tau$ on the order of 200 days and thus inferred a black hole mass of 1.4~$\times$~10$^8$~\Msun. It was a significant outlier on both the \msigma and \radlum relations. However, PG\,2130+099 was later re-observed and measured to have $R_{\rm BLR}$~=~22.9$^{+4.4}_{-4.3}$~days and \mbh~=~(3.8~$\pm$~1.5)~$\times$~$10^{7}$~\Msun \ (\citealt{Grier08}), both of which are about an order of magnitude smaller than the original measurements. The discrepancy was attributed to undersampled light curves in the first measurements, as well as long-term secular changes in the \Hbeta \ equivalent width. While the 2008 data showed a clear reverberation signal, the amplitude of the variability in the study was quite low and the campaign was short in duration, rendering it insensitive to lags above 50 days, which made the light curves less than ideal. We included this object in our study in hopes of obtaining a better-sampled light curve sensitive to a wide range of time lags that would yield a more definitive result. Our new measurements of $\tau$~=~$12.8^{+1.2}_{-0.9}$~days and \mbh~=~(4.6~$\pm$~0.4)~$\times$~$10^{7}$~\Msun \ are consistent with those of \cite{Grier08}, but with higher precision. Note that PG\,2130+099 is in a noticeably different position on the \radlum relation --- it has moved nearly parallel to the relation from its previous location, since its luminosity has also changed. As with Mrk 335, this behavior is consistent with the expectations from photoionization models of the BLR.
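The photoionization expectation invoked for Mrk 335 and PG\,2130+099 is simply $\tau \propto L^{1/2}$. A short sketch makes the arithmetic explicit (the 0.5 slope is the idealized photoionization value; the empirically measured \radlum slope is close to, but not exactly, 0.5):

# Photoionization scaling tau ~ L**0.5: lag expected after a luminosity change.
def expected_lag(tau_old, lum_ratio, slope=0.5):
    """Lag expected after the luminosity changes by a factor lum_ratio."""
    return tau_old * lum_ratio**slope

# e.g. for Mrk 335, moving from tau = 15.3 d to 14.1 d corresponds to a
# luminosity ratio of (14.1/15.3)**2 ~ 0.85 under this scaling
print((14.1 / 15.3)**2)           # ~0.85
print(expected_lag(15.3, 0.85))   # ~14.1 days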
| 12
| 6
|
1206.6523
|
1206
|
1206.3595_arXiv.txt
|
Observations of the thermal X-ray emission from old radio pulsars imply that the size of hot spots is much smaller than the size of the polar cap that follows from the purely dipolar geometry of the pulsar magnetic field. A plausible explanation of this phenomenon is the assumption that the magnetic field at the stellar surface differs substantially from the purely dipolar field. Using the conservation of the magnetic flux through the area bounded by open magnetic field lines we can estimate the surface magnetic field to be of the order of $10^{14}$\,G. Based on observations showing that the hot spot temperature is about a few million Kelvins, the Partially Screened Gap (PSG) model was proposed, which assumes that the temperature of the actual polar cap equals the so-called critical temperature. We discuss the correlation between the temperature and the corresponding area of the thermal X-ray emission for a number of pulsars. We have found that, depending on the conditions in the polar cap region, the gap breakdown can be caused either by Curvature Radiation (CR) or by Inverse Compton Scattering (ICS). When the gap is dominated by ICS, the density of secondary plasma with Lorentz factors $10^{2}-10^{3}$ is at least an order of magnitude higher than in a CR scenario. We believe that the two different gap breakdown scenarios can explain the mode-changing phenomenon and in particular pulse nulling. Measurements of the characteristic spacing between sub-pulses ($P_{2}$) and the period at which a pattern of pulses crosses the pulse window ($P_{3}$) allowed us to determine more stringent conditions for avalanche pair production in the PSG.
|
The standard model of radio pulsars assumes that an Inner Acceleration Region (IAR) exists above the polar cap, where the electric field has a component along the open magnetic field lines. In this region particles (electrons and positrons) are accelerated in both directions: outward and toward the stellar surface (\cite{1975_Ruderman}). Consequently, the outflowing particles are responsible for the generation of the magnetospheric emission (radio and high-frequency), while the backflowing particles heat the surface and provide the energy required for the thermal emission. The Vacuum Gap model assumes that ions cannot be extracted from the stellar surface due to the huge surface magnetic field of a pulsar. On the other hand, it predicts a surface temperature of a few million Kelvins (heating by backflowing particles). As shown by \cite{2007_Medin}, for such high temperatures the extraction of ions from the surface cannot be ignored. In fact, for surface temperatures of a few million Kelvins the gap can form only if the surface magnetic field is much stronger than the dipolar component ($B_s=10^{14}$\,G). The analysis of X-ray radiation is an excellent method to gain insight into the most intriguing region of the neutron star (NS). X-ray emission seems to be a quite common feature of radio pulsars. In general, X-ray radiation from an isolated NS can consist of two distinguishable components: the thermal emission and the nonthermal emission. The thermal emission can originate either from the entire surface of a cooling NS or from the spots around the magnetic poles on the stellar surface (polar caps and adjacent areas). The nonthermal component is usually attributed to synchrotron radiation and/or inverse Compton scattering from charged relativistic particles accelerated in the pulsar magnetosphere. For most observations it is very difficult to distinguish the contributions of the different components (thermal and nonthermal). To obtain information about the polar caps of radio pulsars we analysed X-ray radiation from old pulsars, as their surfaces have already cooled down and their magnetospheric radiation (the nonthermal component) is also significantly weaker. The blackbody fit allows us to obtain directly the temperature ($T_s$) of the hot spot. Using the distance ($D$) to the pulsar and the luminosity of thermal emission ($L_{bol}$) we can estimate the area ($A_{pc}$) of the hot spot. In most cases $A_{pc}$ differs from the conventional polar cap area $A_{dp} \approx 6.2 \times 10^4 P^{-1} \,{\rm m^2}$, where $P$ is the pulsar period. We use the parameter $b = A_{dp} / A_{pc}$ to describe the difference between $A_{dp}$ and $A_{pc}$. Pulsars for which it is possible to determine the polar cap size (old NSs) show that the actual polar cap size is much smaller ($b\gg 1$) than the size of the conventional polar cap (see Tab. \ref{tab:results}). The surface magnetic field can be estimated from the magnetic flux conservation law as $b = A_{dp} / A_{pc} = B_s / B_d$, where $B_d = 2.02 \times 10^{12} \left ( P \dot{P}_{-15} \right ) ^{0.5}$, and $\dot{P}_{-15} = \dot{P}/10^{-15}$ is the period derivative. The X-ray observations suggest that the surface magnetic field strength at the polar cap should be of the order of $10^{14}$ G. On the other hand, we know from radio observations that the magnetic field at the altitudes where the radio emission is generated should be dipolar. To meet both of these requirements, the Partially Screened Gap model assumes the existence of crust-anchored local magnetic anomalies which affect the magnetic field only over short distances.
According to our model, the actual surface temperature equals the critical value ($T_s \sim T_{crit}$), which leads to the formation of a Partially Screened Gap.
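The chain of estimates above is simple enough to script. The sketch below is ours, not the authors' code, and the input values are hypothetical; it turns a blackbody fit ($T_s$, $L_{bol}$) and the timing parameters ($P$, $\dot{P}$) into the hot-spot area, the ratio $b=A_{dp}/A_{pc}$, and the surface field $B_s = b\,B_d$ implied by flux conservation.
\begin{verbatim}
# A minimal sketch (not the authors' code) of the polar-cap estimates
# quoted in the text; all input numbers below are hypothetical.
import numpy as np

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def polar_cap_estimates(P, Pdot, T_s, L_bol):
    """P [s], Pdot [s/s], T_s [K], L_bol [W] -> (A_pc, b, B_s)."""
    A_pc = L_bol / (SIGMA_SB * T_s**4)         # hot-spot area [m^2]
    A_dp = 6.2e4 / P                           # conventional cap area [m^2]
    b = A_dp / A_pc
    B_d = 2.02e12 * np.sqrt(P * Pdot / 1e-15)  # dipolar field [G]
    return A_pc, b, b * B_d                    # B_s = b * B_d [G]

# Hypothetical old pulsar: P = 1 s, Pdot = 1e-15, T_s = 3 MK, L_bol = 1e22 W.
print(polar_cap_estimates(1.0, 1e-15, 3e6, 1e22))
\end{verbatim}
For these example numbers the script returns $b \approx 29$ and $B_s \approx 6\times10^{13}$~G, i.e., of the order of the $10^{14}$~G quoted above.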
|
To accommodate both theoretical predictions and observational data, the PSG model was proposed. Recent studies of the model showed that the cascade scenario in the gap (CR or ICS) depends strongly on the spark width. X-ray observations, combined with an analysis of sub-pulse drift, allowed us to determine that for the observed pulsars ICS is responsible for gamma-ray photon generation in the gap. The exact density of the secondary plasma can be calculated only by performing a full cascade simulation that includes heating by backstreaming particles. Nevertheless, we can still find the dependence of the multiplicity factor on the number of photons upscattered by a single primary particle. We were able to identify two populations of secondary plasma with different energy distributions. It turns out that an ICS-dominated gap creates conditions suitable for the generation of radio emission at altitudes of several tens of stellar radii.
| 12
| 6
|
1206.3595
|
1206
|
1206.3076_arXiv.txt
|
Many models of dark matter contain more than one new particle beyond those in the Standard Model. Often heavier particles decay into the lightest dark matter particle as the Universe evolves. Here we explore the possibilities that arise if one of the products in a (Heavy Particle) $\rightarrow$ (Dark Matter) decay is a positron, and the lifetime is shorter than the age of the Universe. The positrons cool down by scattering off the cosmic microwave background and eventually annihilate when they fall into Galactic potential wells. The resulting 511 keV flux not only places constraints on this class of models but might even be consistent with that observed by the INTEGRAL satellite.
|
Although there is ample evidence for the existence of non-baryonic dark matter, the properties of the particle[s] that make up the dark matter are not well determined. This is not surprising as the evidence for dark matter to date is from observations of its gravitational effects. But this situation may change soon as more powerful direct and indirect detection experiments come online; indeed, there are already numerous hints from both sectors. Direct detection experiments DAMA/LIBRA \cite{Bernabei:2010mq}, COGENT \cite{Aalseth:2010vx}, CRESST \cite{Angloher:2011uu} have events consistent with a dark matter signal and there are several hints of indirect detection from the Fermi Gamma Ray Satellite and radio observations~\cite{Hooper:2010mq,Linden:2011au}. There are also two sets of observations of positrons that could be explained by dark matter: the observed excess of positrons over electrons in PAMELA \cite{Adriani:2008zr} and INTEGRAL observations of the 511\kev line from the centre of the galaxy (see, e.g., \cite{Jean:2003ci,Knodlseder:2003sv,Boehm:2003bt} and \cite{Prantzos:2010wi} for a review). It has been suggested~\cite{Hooper:2004qf,Picciotto:2004rp,Pospelov:2007xh,Cembranos:2007vk} that the INTEGRAL observations can be explained by dark matter decays\footnote{See, however, Ref.~\cite{Lingenfelter:2009kx} for problems with the dark matter interpretation. While the model proposed here circumvents some of these (the positrons being quite cold as they enter the disk and bulge), tension remains in the bulge to disk ratio.}. The observed flux of photons $F_{\rm INTEGRAL}\simeq 10^{-3}$ ph cm$^{-2}$ sec$^{-1}$ is apparently produced by the annihilation of positrons with galactic electrons almost at rest. Known astrophysical sources cannot account for the totality of these positrons \cite{Prantzos:2010wi}, so it is natural to consider positrons produced by dark matter annihilations or decays. In the decay scenarios considered so far, a small fraction $(t_0/\tau_{\rm DM})$ of dark matter particles decay at the present time into low momenta positrons ($\sim$few MeV) in the bulge, and the positrons subsequently annihilate with electrons to produce the 511 keV line. The required lifetime is larger than the age of the universe $\tau_{\rm DM}\sim 10^{20}\sec \(100\, \gev/m_{\rm DM}\)$. Here we explore the possibility that the required positrons were produced at earlier times. For concreteness, we assume that the sector containing dark matter has a stable component $\chi$ (which is the dark matter today) and an unstable component, $X$, that decays into $\chi$ and positrons some time after recombination. Even within this simple scenario, there are a number of dials to turn: the masses of $\chi$ and $X$; the lifetime of $X$; the charge of $X$; the relative abundance of $X$ (since $\chi$ is the dark matter today, its abundance is fixed by observations); and the branching ratios for the $X$ decay into positrons and photons. What happens to the decay-produced positrons depends on the values of these parameters, and we find a rich spectrum of observational consequences. Thus, although motivated by the 511 keV excess, we will analyze the constraints and possibilities of the generic idea of a heavy particle decaying into dark matter and positrons. The main difference between our proposal and others is that the positrons in this class of models cool in the early universe very efficiently before they can annihilate. Therefore, when they enter our Galaxy, they are already very cold. 
This contrasts with models in which the positrons are produced in decays or annihilations of Galactic dark matter, typically concentrated near the center of the Galaxy. In those models, cooling the positrons before they can annihilate is a central problem. Section II introduces notation, the constraints, and physical processes that are relevant for any positron scenario. The next sections elaborate on the details of cosmological down-scattering (III); energy loss in our Galaxy (IV); and capture in the bulge (V). We conclude in \S VI with a discussion of viable particle physics models.
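To make the cooling argument concrete, the following back-of-the-envelope script (ours, not from the paper; the injection energy and redshift are arbitrary choices) evaluates the Thomson-limit inverse-Compton cooling time of a positron on the CMB, $t_{\rm cool} = E/\dot{E}$ with $\dot{E} = (4/3)\sigma_T c \gamma^2\beta^2 U_{\rm CMB}(z)$ and $U_{\rm CMB}(z) = U_0(1+z)^4$.
\begin{verbatim}
# Rough estimate of the positron cooling time on the CMB (illustrative;
# gamma and z below are arbitrary example values).
import numpy as np

SIGMA_T = 6.652e-29   # Thomson cross section [m^2]
C = 2.998e8           # speed of light [m/s]
ME_C2 = 8.187e-14     # electron rest energy [J]
U_CMB0 = 4.17e-14     # CMB energy density today, ~0.26 eV/cm^3 [J/m^3]

def cooling_time(gamma, z):
    beta2 = 1.0 - 1.0 / gamma**2
    u = U_CMB0 * (1.0 + z)**4
    power = (4.0 / 3.0) * SIGMA_T * C * gamma**2 * beta2 * u  # dE/dt [W]
    return gamma * ME_C2 / power                              # [s]

# A ~5 MeV positron (gamma ~ 10) injected at z = 100:
print(cooling_time(10.0, 100.0) / 3.15e7, "yr")   # ~2e3 yr
\end{verbatim}
For $\gamma = 10$ at $z = 100$ the cooling time is of order $10^3$ yr, vastly shorter than the Hubble time at that epoch, which is the sense in which the positrons are already very cold by the time they enter the Galaxy.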
|
High energy positrons produced at early times via the decay of a second dark matter species cool down and will eventually get trapped in galaxies. These cooled positrons would contribute to the 511 keV flux in our Galaxy and -- with a suitable choice of parameters -- might explain the observed bulge to disk ratio of this flux. We conclude with a brief discussion of the possible models that can generate this kind of mass hierarchy and couplings in the dark sector. First consider the case where the unstable DM component is neutral. In supersymmetric models, while the possibility of two neutral species is quite natural (with both a neutralino and gravitino), the heavier species will often decay predominantly into photons, leading to very tight constraints from the diffuse flux~\cite{Boubekeur:2010nt}. This traces back to the fact that the lightest neutralino usually has a large photino component. More generally, we can use effective field theory to write down operators that lead to decays. If the dark matter species consists of fermions, the coefficient is of order $\Lambda^{-2}$ where $\Lambda$ is the UV scale above which the effective theory breaks down. The long lifetimes required then point to $\Lambda\sim10^{11}$ GeV. For scalar dark matter, the operator is suppressed by only a single power of $\Lambda$, so the long lifetimes require a UV scale of order the Planck mass. In this effective field theory context, too, operators leading to decay to photons must be suppressed. Models that accomplish this typically rely on a secluded dark sector, which communicates with the Standard Model through suppressed interactions. In all models, it is a challenge to obtain the correct relic density since couplings are so small. Likely, $\chi$ cannot be a conventional thermal relic and other mechanisms (such as ``freeze-in'' \cite{Hall:2009bx} or dilution via a short phase of thermal inflation \cite{Lyth:1995hj}) need to be explored. Getting the correct relic density is even more difficult in the case of charged dark matter. In the charge symmetric case, dark matter is likely to annihilate quickly into photons, leaving no appreciable $X$'s to decay at late times. So a mechanism to suppress annihilations will be essential to any successful symmetric model. Asymmetric models do not suffer from this problem, but generating an asymmetry -- which of course requires out-of-equilibrium dynamics -- might prove challenging in a sector that interacts electromagnetically. Although we have touched on particle physics realizations of our scenario, in principle our constraints apply to any source of positrons operating from recombination up to now. Most of the constraints (apart from heavy water, colliders, and possibly direct decays to photons) apply to any mechanism that produces positrons in the early universe.
| 12
| 6
|
1206.3076
|
1206
|
1206.1073_arXiv.txt
|
We investigate a spatially flat Friedmann-Robertson-Walker (FRW) cosmological model with cold dark matter coupled to a dark energy given by the modified holographic Ricci cutoff. The interaction used is linear in both dark-sector energy densities, the total energy density, and its derivative. Using the statistical $\chi^2$-function method for the Hubble data, we obtain $H_0=73.6\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\omega_s=\gamma_s -1=-0.842$ for the asymptotic equation of state, and $z_{acc}= 0.89$. The estimated values of $\Omega_{c0}$ which fulfill the current observational bounds correspond to a dark energy density varying in the range $0.25R < \ro_x < 0.27R$.
|
Many different observational sources, such as the Supernovae Ia \cite{astro-ph/9805201}-\cite{astro-ph/9812133}, the large scale structure from the Sloan Digital Sky Survey \cite{arXiv:0707.3413}, and the cosmic microwave background anisotropies \cite{arXiv:1001.4538}, have corroborated that our universe is currently undergoing an accelerated phase. The cause of this behavior has been attributed to a mysterious component called dark energy, and several candidates have been proposed to fulfill this role. For example, a positive cosmological constant $\Lambda$ explains the accelerated behavior very well, but it is in deep mismatch with the theoretical value predicted by quantum field theory. Another issue of debate is the coincidence problem, namely: why do the dark energy and dark matter energy densities happen to be of the same order precisely today? In order to overcome both problems, a dynamical framework has been proposed in which the dark energy varies with cosmic time. This proposal has led to a great variety of dark energy models, such as quintessence \cite{astro-ph/9807002}, exotic quintessence \cite{arXiv:0706.4142}, N--quintom \cite{arXiv:0811.3643}, and the holographic dark energy (HDE) models \cite{hep-th/0403127} based on an application of the holographic principle to cosmology. According to this principle, the entropy of a system does not scale with its volume but with its surface area, and so in a cosmological context it sets an upper bound on the entropy of the universe \cite{hep-th/9806039}. In \cite{hep-th/9803132} it was suggested that in quantum field theory a short-distance cut-off is related to a long-distance cut-off (the infra-red cut-off $L$) due to the limit set by the formation of a black hole. Further, if the quantum zero-point energy density caused by a short-distance cut-off is taken as the dark energy density in a region of size $L$, it should not exceed the mass of a black hole of the same size, so $\rho_{\Lambda}=3c^{2}M^{2}_{~P}L^{-2}$, where $c$ is a numerical factor. In the cosmological context, the size $L$ is usually taken as a large scale of the universe: the Hubble horizon, the particle horizon, the event horizon, or a generalized IR cutoff. Among all the interesting holographic dark energy models proposed so far, here we focus our attention on a modified version of the well known Ricci scalar cutoff proposed in \cite{arXiv:0810.3663}. Besides, there could be a hidden non-gravitational coupling between dark matter and dark energy that does not violate current observational constraints, and it is thus interesting to develop ways of testing an interaction in the dark sector. Interaction within the dark sector has been studied mainly as a mechanism to solve the coincidence problem. We will consider an exchange of energy, or interaction, between dark matter and dark energy which is a linear combination of the dark energy density $\ro_x$, the total energy density $\ro$, the dark matter energy density $\ro_c$, and the first derivative of the total energy density $\ro'$, as studied in \cite{arXiv:0911.5687}.
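As a quick numerical check of the holographic bound quoted above (our own illustration, not part of the paper's analysis, which uses the modified Ricci cutoff), one can evaluate $\rho_{\Lambda}=3c^{2}M^{2}_{~P}L^{-2}$ with the simplest IR cutoff, the Hubble radius $L=1/H_0$, in natural units.
\begin{verbatim}
# Evaluate rho_Lambda = 3 c^2 M_P^2 / L^2 for L = 1/H_0 (natural units).
# "c_num" is the O(1) numerical factor of the formula, not the speed of
# light; the H0 value below corresponds to ~70 km/s/Mpc.
M_P = 2.435e18   # reduced Planck mass [GeV]
H0 = 1.5e-42     # Hubble constant expressed in GeV
c_num = 1.0      # O(1) numerical factor

rho = 3.0 * c_num**2 * M_P**2 * H0**2   # [GeV^4], since 1/L^2 = H0^2
print(f"rho_Lambda ~ {rho:.1e} GeV^4")  # ~4e-47 GeV^4
\end{verbatim}
The result, $\sim 4\times10^{-47}\,{\rm GeV^4}$, is of the order of the observed critical density, which is the basic reason holographic cutoffs are viable dark energy candidates.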
|
We have examined a modified holographic Ricci dark energy coupled with cold dark matter and found that this scenario describes satisfactorily the evolution of both dark components. We have shown that the compatibility between the modified and the global conservation equations constrains the equation of state of the dark energy component. From the observational point of view, we have obtained the best-fit values of the cosmological parameters $z_{acc}=0.89$, $H_0=73.6\,{\rm km\,s^{-1}\,Mpc^{-1}}$, and $\gamma_s=0.158$, with $\chi^{2}_{dof}=0.761 < 1$ per degree of freedom. The $H_{0}$ value is in agreement with that reported in the literature \cite{Riess:2009pu}, and the critical redshift $z_{acc}=0.89$ is consistent with BAO and CMB data \cite{Li:2010da}. We have found that the age crisis at high redshift cannot be alleviated, so another kind of interaction will be needed. \begin{theacknowledgments} M.I. Forte is grateful for the invitation and for financial support provided by I GAC. M. G. Richarte is partially supported by CONICET. The authors are grateful to Prof. L.P. Chimento for a careful reading of the manuscript. \end{theacknowledgments}
| 12
| 6
|
1206.1073
|
1206
|
1206.2841_arXiv.txt
|
We have examined the structure of hot accretion flows with a large-scale magnetic field. The importance of outflows/winds and thermal conduction for the self-similar structure of hot accretion flows has been investigated. In comparison with an accretion disk without winds/outflows, our results show that the radial and rotational velocities of the disk become faster, while the disk becomes cooler because of the angular momentum and energy carried away by the winds/outflows. Thermal conduction opposes the effect of the winds/outflows: it not only decreases the rotational velocity but also increases the radial velocity as well as the sound speed of the disk. In addition, we have studied the effect of a global magnetic field on the structure of the disk. We have found that all three components of the magnetic field have a noticeable effect on the structure of the disk, such as on the velocities and the vertical thickness.
|
Black hole accretion disks provide the most powerful energy-production mechanism in the universe. It is well accepted that many astrophysical systems are powered by black hole accretion. Most of the observational features of black hole accretion systems can be explained with significant success through standard thin disks (geometrically thin and optically thick accretion disks; Shakura \& Sunyaev, 1973). The ultraviolet and optical emission observed in quasars is frequently attributed to the thermal radiation of standard disks (SD) surrounding the massive black holes in quasars (e.g., Sun \& Malkan, 1989). On the other hand, the SD model seems unable to reproduce the spectral energy distributions (SEDs) of many other luminous sources accreting at low rates (e.g., Sgr A*), and advection dominated accretion flows (ADAFs) were proposed to be present in these sources (Narayan \& Yi, 1994, 1995). In the ADAF model, most of the energy released by the infalling gas in the accretion flow is converted into internal energy of the gas. Only a small fraction of the energy in ADAFs gets radiated away; thus, their radiative efficiency is much lower than that of an SD (see Narayan et al., 1998, for a review and references therein; Kato, Fukue \& Mineshige 2008). Outflows/winds have been found in both magnetohydrodynamic (MHD) and hydrodynamic (HD) numerical simulations of hot accretion flows. Pioneering reports of outflows in HD simulations are those of Igumenshchev \& Abramowicz (1999, 2000) and Stone, Pringle \& Begelman (1999). Igumenshchev \& Abramowicz (2000) have shown that convective accretion flows and flows with large-scale circulations have significant outward-directed energy fluxes, which have important implications for the spectra and luminosities of accreting black holes. The most recent work focusing on convective outflows is by Yuan \& Bu (2010), who have shown that the mass accretion rate decreases inward, i.e., only a small fraction of the accreted gas falls onto the black hole, while the rest circulates in convective eddies or is lost in convective outflows. Stone \& Pringle (2001) reported the appearance of winds and outflows in their MHD simulations. In their simulations the net mass accretion rate is small compared with the mass inflow and outflow rates at large radii associated with turbulent eddies. The variety of models indicates that modelling hot accretion flows is a challenging and controversial problem that deserves further investigation. Thermal conduction is one of the important physical phenomena neglected in the modelling of ADAFs. Recent observations of hot accretion flows around AGNs indicate that they are in a collisionless regime (Tanaka \& Menou 2006). Chandra observations provide tight constraints on the physical parameters of the gas. Tanaka \& Menou (2006) used these constraints (Loewenstein et al. 2001; Baganoff et al. 2003; Di Matteo et al. 2003; Ho et al. 2003) to calculate the mean free path of the gas and suggested that thermal conduction is a viable mechanism for transporting the extra heating required in hot advection dominated accretion flows. It is therefore important to consider the role of thermal conduction in ADAF solutions. Shadmehri (2008), Abbassi et al.
(2008, 2010), Tanaka \& Menou (2006), and Faghei (2012) have studied hot accretion flows with thermal conduction using semi-analytical methods; the dynamics of such systems have been studied in simulations (e.g. Sharma et al. 2008; Wu et al. 2010). Shadmehri (2008) has shown that thermal conduction opposes the rotational velocity but increases the temperature. Abbassi et al. (2008) have shown that this problem admits two types of solution, with high and low accretion rates, and have plotted the radial velocity for both solutions, showing that it is modified by thermal conduction. Previous studies of black hole accretion disks suggest that a large-scale magnetic field, rooted in the ISM or even in the central engine, will be dragged inward and compressed close to the black hole by the accreting material (Bisnovatyi-Kogan \& Ruzmaikin 1974, 1976). Therefore, a large-scale magnetic field plays an important role in the structure of the hot flow, since the flow is highly ionized. The effects of magnetic fields on the structure of ADAFs were also reported by Balbus \& Hawley (1998), Kaburaki (2000), Shadmehri (2004), Meier (2005), Shadmehri \& Khajenabi (2005, 2006), Ghanbari et al. (2007), Abbassi et al. (2008, 2009), and Bu, Yuan \& Xie (2009). Akizuki \& Fukue (2006) and Abbassi et al. (2008) presented self-similar solutions of the flow based on vertically integrated equations. They stressed the intermediate case, in which the magnetic force is comparable to the other forces, under the assumption that the physical variables of the accretion disk are functions of radius alone, and discussed the toroidal magnetic field of the disk. In this paper, we first extend the work of Akizuki \& Fukue (2006), Zhang \& Dai (2008), and Abbassi et al. (2008, 2010) by considering a general large-scale magnetic field with all three components in cylindrical coordinates $(r, \varphi, z)$, and then discuss the effects of the global magnetic field on flows with thermal conduction and wind. We adopt the treatment in which the flow variables are functions of the disk radius, neglecting the vertical structure except in the $z$-component of the momentum equation. We also compare our results with previous studies, in which a large-scale magnetic field, thermal conduction, and wind are neglected.
|
\label{sect:conclusion} In this paper we have studied an accretion disc in the advection dominated regime, considering a large-scale magnetic field in the presence of wind/outflow and thermal conduction. Some approximations were made to simplify the main equations. We assumed a static, axially symmetric disc with the $\alpha$-prescription for the viscosity, $\nu = \alpha c_{s} H$, and presented a set of self-similar solutions for this configuration. We have extended the self-similar solutions of Akizuki \& Fukue (2006), Zhang \& Dai (2008), and Abbassi et al. (2010) to describe the structure of advection dominated accretion flows (ADAFs). We ignored the self-gravity of the disc and general relativistic effects. Our results show that discs with strong winds/outflows have lower temperatures, in agreement with the results presented by Kawabata \& Mineshige (2009). The most important finding of these self-similar solutions is that the accreting flow is affected not only by mass loss but also by the energy carried away by the wind/outflow. There are some limitations to these solutions. One is that the accretion flow with conduction is treated as a single-temperature structure. If one instead uses a two-temperature structure for the ions and electrons in the disc, the ion and electron temperatures are expected to decouple at the inner edge, which will modify the role of conduction. The anisotropic character of conduction in the presence of a magnetic field is another limitation of our solution (Balbus 2001). Although our preliminary self-similar solutions are highly simplified, they clearly improve our understanding of the physics of hot accretion flows around compact objects. To obtain a more realistic picture of a hot accretion flow, a global solution is needed rather than a self-similar one. In future studies we intend to investigate the effect of thermal conduction on the observational appearance and properties of a hot magnetized flow.
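For orientation, the sketch below evaluates the classic non-magnetized, wind-free self-similar ADAF scalings of Narayan \& Yi (1994), which the solutions discussed above generalize; the wind/outflow, conduction, and magnetic terms modify the numerical coefficients but not these power laws. The amplitudes are placeholders, not fitted values.
\begin{verbatim}
# Standard self-similar ADAF power laws (Narayan & Yi 1994), with the
# alpha-prescription nu = alpha c_s H used in the paper. Illustrative
# only; v0, cs0, rho0 are arbitrary normalizations.
import numpy as np

def self_similar_profiles(r, alpha=0.1, v0=1.0, cs0=1.0, rho0=1.0):
    """r in units of a reference radius (G M = 1 units)."""
    v_r = -v0 * r**-0.5        # radial inflow velocity ~ r^(-1/2)
    c_s = cs0 * r**-0.5        # sound speed ~ r^(-1/2)
    rho = rho0 * r**-1.5       # density ~ r^(-3/2)
    omega_K = r**-1.5          # Keplerian angular velocity
    H = c_s / omega_K          # vertical scale height, H ~ r
    nu = alpha * c_s * H       # alpha-viscosity nu = alpha c_s H
    return v_r, c_s, rho, H, nu

r = np.logspace(0, 3, 50)
v_r, c_s, rho, H, nu = self_similar_profiles(r)
\end{verbatim}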
| 12
| 6
|
1206.2841
|
1206
|
1206.4306_arXiv.txt
|
Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of $r \gtrsim 24$. Star-galaxy separation poses a major challenge to such surveys because galaxies---even very compact galaxies---outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM $<0.2$~arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven Support Vector Machines (SVM). For template fitting, we use a Maximum Likelihood (ML) method and a new Hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider i) a best-case scenario (SVM$_{best}$) where the training data is (unrealistically) a random sampling of the data in both signal-to-noise and demographics, and ii) a more realistic scenario where training is done on higher signal-to-noise data (SVM$_{real}$) at brighter apparent magnitudes. Testing with COSMOS $ugriz$ data we find that HB outperforms ML, delivering $\sim80\%$ completeness, with purity of $\sim60-90\%$ for both stars and galaxies. We find that no algorithm delivers perfect performance, and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the Receiver Operating Characteristic curve, we find a best-to-worst ranking of SVM$_{best}$, HB, ML, and SVM$_{real}$. We conclude, therefore, that a well trained SVM will outperform template-fitting methods. However, a normally trained SVM performs worse. Thus, Hierarchical Bayesian template fitting may prove to be the optimal classification method in future surveys.
|
Until now, the primary way that stars and galaxies have been classified in large sky surveys has been a morphological separation \citep[e.g.,][]{kron80,yee91,vasconcellos11a,henrion11a} of point sources (presumably stars) from resolved sources (presumably galaxies). At bright apparent magnitudes, relatively few galaxies will contaminate a point source catalog and relatively few stars will contaminate a resolved source catalog, making morphology a sufficient metric for classification. However, resolved stellar science in the current and next generation of wide-field, ground-based surveys is being challenged by the vast number of unresolved galaxies at faint apparent magnitudes. To demonstrate this challenge for studies of field stars in the Milky Way (MW), we compare the number of stars to the number of unresolved galaxies at faint apparent magnitudes. Figure~\ref{fig:stellarfraction} shows the fraction of COSMOS sources that are classified as stars as a function of $r$ magnitude and angular size. The COSMOS catalog \citep[($l,b$) $\sim$ (237,43) degrees, ][]{capak07a,scoville07b,ilbert09} relies on 30-band photometry plus HST/ACS morphology for source classification (see Section 4 for details). In Figure~\ref{fig:stellarfraction} we plot separately relatively bluer ($g-r < 1.0$) and redder ($g-r > 1.0$) sources because bluer stars are representative of the old, metal-poor main sequence turnoff (MSTO) stars generally used to trace the MW's halo while redder stars are representative of the intrinsically fainter red dwarf stars generally used to trace the MW's disk. We will see that the effect of unresolved galaxies on these two populations is different, both because of galaxy demographics and because the number density of halo MSTO stars decreases at faint magnitudes while the number density of disk red dwarf stars increases at faint magnitudes. \begin{figure} \centering \includegraphics[clip=true, trim=0cm 0cm 0.0cm 0.cm,width=8cm]{fig1.pdf} \caption{The stellar fraction of COSMOS sources as a function of magnitude, for sources with $g-r<1$ (left) and $g-r>1$ (right). Only stars and galaxies were included in this figure; Only a few percent of the COSMOS point sources are AGN. Colored curves indicate the upper limit in intrinsic full-width half-maximum (FWHM) allowed in the sample. Even in an optimistic scenario where galaxies with FWHM $\gtrsim 0.2$~arcsec can be morphologically distinguished from stars, unresolved galaxies will far outnumber stars in point source catalogs at faint magnitudes. This challenge is much greater for blue stars than for red stars.} \label{fig:stellarfraction} \end{figure} In an optimistic scenario in which galaxies with FWHM $\gtrsim 0.2$~arcsec can be morphologically resolved (the blue line in Figure~\ref{fig:stellarfraction}, second from the top), unresolved galaxies will still greatly outnumber field MW stars in a point source catalog. For studies of blue stars, field star counts are dominated by unresolved galaxies by $r\sim23.5$ and are devastated by unresolved galaxies at fainter magnitudes. The problem is far less severe for studies of red stars, which may dominate point source counts for $r\lesssim24.5$. Although morphological identification of galaxies with FWHM as small as $0.2$~arcsec is better than possible for the Sloan Digital Sky Survey (median seeing $\sim1.3$~arcsec), future surveys with higher median image quality (for example, $0.7$~arcsec predicted for LSST) may approach this limit. 
Utilizing the fundamental differences between the SEDs of stars and galaxies can mitigate the contamination of unresolved galaxies in point source catalogs. In general, stellar SEDs are more sharply peaked (close to blackbody) than those of galaxies, which exhibit fluxes more broadly distributed across wavelength. Traditionally, color-color cuts have been used to eliminate galaxies from point source catalogs \citep[e.g.,][]{gould92a,reitzel98a,daddi04a}. Advantages of the color-color approach include its simple implementation and its flexibility to be tailored to the goals of individual studies. Disadvantages of this approach can include its simplistic treatment of measurement uncertainties and its limited use of information about both populations' expected demographics. Probabilistic algorithms offer a more general and informative approach to photometric classification. The goal of probabilistic photometric classification of an astronomical source is to use its observed fluxes $\datavector{F}$ to compute the probability that the object is of a given type. For example, a star ($S$)--galaxy ($G$) classification algorithm produces the posterior probabilities $p(S|\datavector{F})$ and $p(G|\datavector{F})$ and decides classification by comparing the ratio of the probabilities \begin{eqnarray}\displaystyle \Omega = \frac{p(S|\datavector{F})}{p(G|\datavector{F})} \quad . \label{eqn:oddsratio} \end{eqnarray} A natural classification threshold is an odds ratio, $\Omega$, of 1, which may be modified to obtain more pure or more complete samples. Algorithmically, there are a large number of approaches that produce probabilistic classifications. Generally, these fall into i) physically based methods---those which have theoretical or empirical models for what type of physical object a source is, or ii) data driven methods---those which use real data with known classifications to construct a model for new data. Physically based Bayesian and $\chi^2$ template fitting methods have been extensively used to infer the properties of galaxies \citep[e.g.,][]{coil04a, ilbert09, xia09, walcher11a,hildebrandt10}. However, in those studies relatively little attention has been paid to stars, which contribute marginally to overall source counts (although see \citealt{robin07}). Several groups have recently investigated data driven, support vector machine based star--galaxy separation algorithms \citep[e.g.,][]{saglia12,solarz12a,tsalmantza12a}. In this paper, we describe, test, and compare two physically based template fitting approaches to star--galaxy separation (maximum likelihood and hierarchical Bayesian), and one data driven (support vector machine) approach. In Section 2, we present the conceptual basis for each of the three methods. In Section \ref{sec:data}, we describe the COSMOS data set with which we test the algorithms. In Section \ref{sec:specifics}, we discuss the specific details, choices, and assumptions made for each of our classification methods. Finally, in Section \ref{sec:results} we show the performance of the algorithms, and discuss the advantages and limitations related to their use as classifiers.
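A minimal sketch of the odds-ratio decision rule of Eq.~(\ref{eqn:oddsratio}) is given below. It is illustrative only: the posterior values are hypothetical placeholders, and in practice they would come from ML or HB template fitting or from a trained SVM.
\begin{verbatim}
# Classify sources by the log odds ratio ln(Omega) = ln p(S|F) - ln p(G|F).
# Raising the cut yields a purer (less complete) star sample; lowering it
# yields a more complete (less pure) one.
import numpy as np

def classify_star(p_star, p_galaxy, ln_omega_cut=0.0):
    ln_omega = np.log(p_star) - np.log(p_galaxy)
    return ln_omega > ln_omega_cut

# Hypothetical posterior probabilities for three sources:
p_s = np.array([0.9, 0.4, 0.05])
print(classify_star(p_s, 1.0 - p_s))   # -> [ True False False]
\end{verbatim}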
|
Imminent and upcoming ground-based surveys are observing large portions of the sky in optical filters to depths of $r\gtrsim24$, requiring significant amounts of money, resources, and person power. In order for such surveys to best achieve some of their science goals, accurate star--galaxy classification is required. At these new depths, unresolved galaxy counts increasingly dominate the number of point sources classified through morphological means. To investigate the usefulness of photometric classification methods for unresolved sources, we examine the performance of photometric classifiers using {\it ugriz} photometry of COSMOS sources with intrinsic FWHM $<0.2$~arcsec, as measured with {\it HST}. We have focused our analysis on the classification of full survey datasets with broad science goals, rather than on the classification of subsets of sources tailored to specific scientific investigations. Our conclusions are as follows: \begin{itemize} \item Maximum Likelihood (ML) template fitting methods are simple, and return informative classifications. At $\ln(\Omega)=0$, ML methods deliver high galaxy completeness ($\gtrsim90\%$) but low stellar completeness ($\sim50\%$). The purity of these samples ranges from $\sim50-95\%$, and is a strong function of the relative sample fraction. \item We present a new, basic Hierarchical Bayesian (HB) approach to template fitting which outperforms ML techniques, as shown by the Receiver Operating Characteristic (ROC). HB algorithms have no need for training, and have nuisance parameters that are tuned according to the likelihood of the data itself. Further improvements to this basic algorithm are possible by hierarchically modeling the redshift distribution of galaxies, the SEDs of the input templates, and the distribution of apparent magnitudes. \item Support Vector Machine (SVM) algorithms can deliver excellent classification, which outperforms template fitting methods. Successful SVM performance relies on having an adequate set of training data. For optimistic cases, where the training data is essentially a random sample of the data (with known classifications), SVM will outperform template fitting. In a more-realistic scenario, where the training data samples only the higher signal-to-noise sources in the data to be classified, SVM algorithms perform worse than the simplest template fitting methods. \item Since it is unclear when, if ever, adequate training data will be available for SVM-like classification, HB algorithms are likely the optimal choice for next-generation classifiers. \item A downside of the paucity of sufficient training data is the inability to assess the performance of both supervised (SVM) and unsupervised (ML, HB) classifiers. If knowing the completeness and purity in detail is critical to the survey science goals, it may be necessary to seek out expensive training/testing sets. Otherwise, users will have to select the best unsupervised classifier (HB here), and rely on performance assessments extrapolated from other studies. \item Ground-based surveys should deliver probabilistic photometric classifications as a basic data product. ML likelihoods are useful and require very little computational overhead, and should be considered the minimal delivered quantities. Basic or refined HB classifications require more overhead, but can be run on small subsets of data to learn the priors and then run quickly on the remaining data, making them a feasible option for large surveys.
Finally, if excellent training data is available, SVM likelihoods should either be computed or the data should be made available. In any scenario, we strongly recommend that likelihood values, not binary classifications, be delivered so that they may be propagated into individual analyses. \end{itemize} The future of astronomical studies of unresolved sources in ground-based surveys is bright. Surveys like PanSTARRS, DES, and LSST will deliver data that, in conjunction with the approaches discussed here, will expand our knowledge of stellar systems, the structure of the Milky Way, and the demographics of distant galaxies. We have identified troublesome spots for classification in single-epoch $ugriz$ photometric data, which may hinder studies of M-giant and metal-poor main-sequence turnoff stars in the Milky Way's halo. Future studies could improve upon our preliminary results by implementing more-sophisticated prior distributions, by identifying crucial improvements needed in current template models or training data, or by pursuing complementary non-SED based classification metrics.
| 12
| 6
|
1206.4306
|
1206
|
1206.2400.txt
|
We study \DEWSB~in the setting of ultra-high--energy cosmic rays. The additional gauge interactions in these extensions of the standard model can significantly modify the cosmic-ray--air cross section and thus lead to the direct detection or exclusion of a particular model, as well as to explanations for features of the cosmic-ray spectrum.
|
The present study is concerned with the quest for the origin of mass in our universe and with the phenomenology of ultra-high--energy cosmic rays: On the one hand, the huge energies available in cosmic rays might give evidence for \DEWSB. On the other hand, some observed cosmic-ray characteristics might be natural in the framework of \DEWSB. ~ The standard model answers the question of the origin of mass of the weak gauge bosons and the standard model fermions by the elementary scalar Higgs mechanism. We have, however, never seen any elementary scalar particle in nature, and the elementary Higgs remains only a prediction of the standard model. Furthermore, all scalar degrees of freedom that have been encountered so far turned out to be composites. In particular, this is also true for the scalar of Ginzburg and Landau \cite{Ginzburg:1950sr}, which describes the generation of the photon mass (the inverse London screening length \cite{London:1935}) in superconductivity in a gauge invariant manner and thus the Meissner-Ochsenfeld effect \cite{Meissner:1933}. It is the Abelian prototype for the standard model Higgs, but was identified by Bardeen, Cooper, and Schrieffer (BCS) \cite{Cooper:1956} to be a composite of two electrons, the Cooper pairs. For more guidance, consider the standard model without the Higgs sector. There, the chiral symmetry breaking in quantum chromodynamics (QCD) would break the electroweak symmetry dynamically. (In order to avoid the subtleties arising from the alignment of the vacuum, for this gedanken experiment we constrain ourselves to one generation of fermions, especially to only two quarks.) With the scale of quantum chromodynamics kept fixed, this would result in massive weak gauge bosons, which are roughly 1000 times too light. The pions of quantum chromodynamics would become the longitudinal degrees of freedom of the weak gauge bosons and would, thus, not appear in the spectrum. This suggests employing an additional sector of electroweakly charged fermions that interact under a new force that grows strong at low energy scales, leads to chiral symmetry breaking among the fermions, and, simultaneously, breaks the electroweak symmetry. This approach is known as technicolour \cite{Susskind:1978ms}, which is to the elementary Higgs mechanism what the BCS theory is to the description by Ginzburg and Landau. As will be explained in detail in Sect.~\ref{sec:dewsb}, the difference between theories of dynamical symmetry breaking and those with some kind of elementary Higgs mechanism that will be crucial vis-\`a-vis cosmic-ray physics is the presence of additional gauge interactions. In that section, we are also going to discuss several different implementations of the dynamical electroweak symmetry breaking paradigm in the context of cosmic-ray physics. ~ The dynamical electroweak symmetry breaking mechanism can manifest itself directly only at LHC energies or above. Hence, if we would like to see its traces, we have to look at features at the high-energy end of the cosmic-ray spectrum, i.e., at cosmic-ray energies of at least 10$^{18}$eV. The most prominent feature below this energy range is the ``knee'', an abrupt steepening of the spectrum between 10$^{15}$ and 10$^{16}$eV. (Here and in the following, see, for example, Fig.~24.8 in \cite{Nakamura:2010}.)
Under the assumption that all the cosmic rays up to 10$^{18}$eV have their origin inside the Milky Way, the knee could be the manifestation of the fact that all possible acceleration mechanisms within our galaxy have reached their maximal attainable energy. From 10$^{18}$eV onward, cosmic rays are expected to be of extragalactic origin. In an interval of one order of magnitude around 10$^{19}$eV the spectrum is flatter than after the knee. The place where this plateau begins is known as the ``ankle.'' In case one merely extrapolates cosmic-ray--air cross sections straightforwardly to higher energies, the ankle structure must be explained by a changing composition of the cosmic rays. Under this assumption, however, the directions of change (to lighter or to heavier particles) inferred by the HiRes \cite{Abbasi:2005} and Auger \cite{Unger:2007} experiments, respectively, from the depth at which the shower maximum occurs in the atmosphere, do not agree. Conclusions about the composition change considerably once the assumption of the validity of a straightforward extrapolation of the scattering cross sections is dropped \cite{Nakamura:2010}. Another attempt to explain the ankle feature invokes a higher-energy (e.g., cosmological) flux that overtakes one of lower energy (e.g., a galactic flux) \cite{Nakamura:2010}, which, however, also would not overcome the composition issue. At energies beyond $5\times10^{19}$eV the spectrum should fall off rapidly if the cosmic rays are of extragalactic origin because, according to Greisen, Zatsepin, and Kuzmin (GZK) \cite{Greisen:1966}, at these energies the electrically charged nuclei should lose a considerable fraction of their energy by pion production after scattering off photons from the cosmic microwave background (CMB) when propagating over extragalactic distances. (The analogous production of electron-positron pairs sets in at lower energies, around 10$^{17}$eV, but is much less efficient in dispersing the cosmic-ray energy than the heavier pions.) Trans-GZK events have been observed \cite{Bird:1994,Takeda:2003,Abbasi:2008}. The hypothesis of an isotropic distribution of the highest-energy cosmic rays is consistent with observations, and a correlation with the positions of known active galactic nuclei, which would give an indication of their origin, is not confirmed \cite{Tsunesada:2011}. The AGASA experiment does not confirm the GZK suppression \cite{Takeda:2003}, whereas other experiments see some suppression \cite{Abbasi:2008,Tsunesada:2011,Abreu:2011}. Barring instrument errors, the explanation of trans-GZK events requires either so far unknown galactic sources or particles that are not affected by the GZK cutoff. The latter would have to be sufficiently weakly interacting particles from cosmological sources, which still have a cross section that allows them to be accelerated to the required energies and to interact with the atmosphere in order to be detected. Taking stock, the three kinds of mechanisms brought forward to explain the ankle feature are changes in the cosmic-ray--air cross section, changes in the composition of the cosmic rays, and/or changes in their flux. In the following, we are going to investigate which of these mechanisms are expected naturally in the presence of dynamical electroweak symmetry breaking. Moreover, we will study whether the GZK cutoff would still have any bearing in this context. ~ The paper is organised as follows. In section \ref{sec:dewsb} we recall those details about \DEWSB~we need in order to proceed with our analysis.
Thereafter, in section \ref{sec:main} we describe in detail the effect of \DEWSB~expected for the interaction of hadronic cosmic-ray particles with air. In subsection \ref{sec:bsm}, we also allow for a component of the cosmic-ray flux that is not made from standard model particles but from particles associated with \DEWSB. In the second part of that subsection, we extend our considerations to cases where the air shower is initiated by such a particle, residing in a halo around the Earth, that is struck by a cosmic-ray particle. Finally, in Sect.~\ref{sec:sum} we summarise our findings.
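As a back-of-the-envelope check of the GZK energy scale used throughout this discussion (ours, not part of the paper), the threshold for photopion production in a head-on collision, $s = (m_p + m_\pi)^2$, gives $E_p \simeq m_\pi(2m_p + m_\pi)/(4E_\gamma)$.
\begin{verbatim}
# GZK threshold estimate for p + gamma_CMB -> p + pi (head-on collision).
M_PROTON = 0.9383   # proton mass [GeV]
M_PION = 0.1350     # neutral pion mass [GeV]
E_GAMMA = 6.3e-13   # typical CMB photon energy, ~2.7 kT [GeV]

E_p = M_PION * (2.0 * M_PROTON + M_PION) / (4.0 * E_GAMMA)  # [GeV]
print(f"E_p ~ {E_p:.1e} GeV ~ {E_p * 1e9:.1e} eV")
# ~1e20 eV for typical photons; the high-energy tail of the Planck
# distribution pushes the effective cutoff down toward ~5e19 eV.
\end{verbatim}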
|
We have studied \DEWSB~in the setting of ultra-high--energy cosmic rays. We have considered the possibility that ultra-high--energy cosmic rays offer the energy required to find direct signs of dynamical symmetry breaking and that some of the inconclusively explained features of the ultra-high--energy cosmic-ray spectrum might have a natural explanation in such an extension of the standard model. First of all, this necessitates evaluating the contribution to ultra-high--energy cosmic-ray events from within the standard model, which is dominated by quantum chromodynamics. The decisive question is whether quantum chromodynamics already explains the entire cross section deduced from the observation of ultra-high--energy cosmic rays, e.g., in air showers, or not. There are still considerable uncertainties in the extrapolation of the quantum chromodynamics cross section from the regime where we were able to measure it reliably, i.e., in collider experiments. In this context, it would be of utmost importance to obtain data from proton-nucleus collisions at the Large Hadron Collider (LHC). This system would come closest to the situation expected in air showers and would considerably reduce the uncertainties from having to extrapolate the part of the cross section stemming from quantum chromodynamics. Furthermore, extensions of the standard model that break the electroweak symmetry dynamically introduce new strongly interacting sectors into our description of the world, which---if they involve additional gauge interactions, and these are the ones we have concentrated on in this paper---should share with quantum chromodynamics the feature of a rapidly growing cross section at increasing centre-of-mass energies. As a consequence, better knowledge from better data about the cross section of quantum chromodynamics will allow us to better judge the extensions of the standard model. In fact, there are two scenarios. Either, based on further studies and measurements, it becomes clear that quantum chromodynamics does explain the entire cosmic-ray--air cross section, in which case we can use this information to rule out extensions that would have further modified said cross section; or it turns out that the cross section cannot be explained by quantum chromodynamics alone, which implies that extra contributions are needed, and these could be supplied by additional gauge interactions like the ones present in \DEWSB. It should also be emphasised that this is the point where the ultraviolet sector of \DEWSB~can be probed and where it can be told apart from an effective description in terms of composite fields in four-dimensional or extra-dimensional set-ups. In our investigation, we have distinguished different systems. We have started by studying the contribution of extensions to the case where the cosmic-ray projectile as well as the target in the atmosphere are ordinary hadrons. There the contribution from the extension is suppressed by the mere fact that, unlike quantum chromodynamics, it cannot connect directly to the constituents of the hadrons. Therefore, the onset of a HERA-like rapid growth of the cross section in the extension is probably hidden by the large contribution from quantum chromodynamics.
Nevertheless, at energies around the ankle of the cosmic-ray spectrum a low-scale extension like technicolour can still form a dense system of gauge bosons (here technigluons) which, as they mediate much harder interactions than the ordinary gluons, can contribute a comparable amount of produced transverse momentum. For a given projectile species and incident energy this would lead to a shallower shower maximum. As the same average amount of transverse momentum comes from relatively rare events, the event-by-event fluctuations should become larger. Moreover, the range of incident energies where technicolour may impact the hadron-hadron cross section lies in the vicinity of the ankle structure of the cosmic-ray spectrum. Sectors that break the electroweak symmetry dynamically also come with other candidates for the projectiles. These can be bound states of the new strongly interacting sector but also new heavy leptons needed to cancel anomalies. First of all, they do not see the GZK cutoff predicted for ordinary hadrons. (On photons from the cosmic microwave background, nuclei photo-disintegrate, while protons are excited and radiate off pions.) The bound states of a new sector are much more deeply bound. Hence, the scale of the cutoff is correspondingly higher. If, in addition, they are electrically neutral, any such process is delayed even further. (A similar statement holds for neutrinos.) As far as the cross section is concerned, their constituents can couple directly to the technigluons, and the corresponding suppression of the cross section drops out. On the contrary, quantum chromodynamics receives a two-fold penalty: apart from models where the techniquarks also carry ordinary colour, it cannot couple directly to the constituents anymore, and, moreover, the momentum scale set by the bound-state wave function is much larger than the fundamental scale of quantum chromodynamics, which makes its gauge coupling very small. If one could isolate such events, even the onset of the HERA-like growth of the cross section could be visible. If the GZK cutoff were effective for ordinary hadrons, all trans-GZK events could be of the nature described here. Taken the other way round, given that bound states of a new strongly interacting sector do not see the GZK cutoff predicted for ordinary hadrons, and in the absence of a fundamental reason why particles can practically not be accelerated to trans-GZK energies, the small number of observed trans-GZK events seems to indicate that these bound states are not stable enough to make it to our atmosphere in great numbers. The heavy leptons couple without large suppression to the technigluon sector through the analogue of a large Yukawa coupling. The direct coupling of all these non-hadronic projectiles to the strongly interacting sector also allows for their efficient acceleration at the source. Light neutrinos offer a different view of \DEWSB~extensions of the standard model. In the absence of (the analogue of) a large Yukawa coupling, they couple democratically to quarks and techniquarks through the exchange of weak gauge bosons. Despite the suppression by the weak coupling, this offers a relatively clean look at standard-model and non--standard-model components of the cross section \cite{CooperSarkar:2011gf}, also at neutrino observatories such as IceCube \cite{ref:icecube}. In technicolour the connection between the gauge sector and electroweak symmetry breaking is most direct, and consequently the energy scales involved are the smallest possible.
In all other realisations of \DEWSB~the connection is less direct and the energy scale for probing the additional gauge sector is delayed. That makes finding an increase of the corresponding contribution to the cross section energetically more and more difficult. Concretely, in topcolour the gauge interactions generate four-fermion interactions, which later break the electroweak symmetry. This is very similar to the extended technicolour sector; the part that gives the masses to the heaviest standard-model fermion generation comes in at the lowest scales. In composite and little Higgs theories a priori no gauge interaction is specified, but under the assumption that the Goldstone sector of these theories is generated in this way, it is detached from the electroweak scale by virtue of the Goldstone theorem. In the so-called technicolour limit of composite Higgs theories, where the pion decay constant and the electroweak scale coincide, a similar cosmic-ray phenomenology could be expected. A concrete analysis necessitates, however, specifying an ultraviolet completion for these models. This also holds for little Higgs theories, except that there are additional gauge fields above at least 2 TeV that generate the Higgs potential. Additional space for discovering \DEWSB~through ultra-high--energy cosmic rays becomes available once we take into account the possibility that the target is not a hadron but a bound state of the extension or a heavy lepton in the halo of the Earth. To begin with, for a given lab-frame energy the centre-of-mass energy grows like the square root of the rest mass of the target. For the increase from the proton to the Z mass this already provides an additional order of magnitude. Thus, the prospect of discovering new gauge interactions by scattering on bound states they hold together is improved, as for these targets reaching the energy frontier is postponed. (See Fig.~\ref{fig:emissions}.) There are several paths that could be taken to further elaborate on the above results. It would be very beneficial to obtain data on the growth of the quantum chromodynamics cross section with energy, at larger energies and in a clean environment, which can be achieved in proton-nucleus collisions at the Large Hadron Collider. This is needed as a better starting point for extrapolating the part of the cross section coming from quantum chromodynamics and as a milestone for understanding the growth of the cross section in an additional gauge sector. Before these data become available, the above computations can already be developed further, e.g., taken to higher orders and/or beyond the mean-field approximation in the initial conditions, as well as corrected for quantum fluctuations in the evolution of the parton distribution functions. That will also serve to achieve a better understanding of how to translate the quantum chromodynamics measurements to new gauge sectors. Moreover, the findings can then be implemented in a full shower simulation in order to quantify the statements about the shower profile and lateral spread. Furthermore, here we have mostly compared overall numbers. More pieces of information from ultra-high--energy cosmic-ray observations could be compared with our predictions, such as the secondary composition of the shower or the shower direction. An investigation of event-by-event fluctuations was already mentioned above, but requires a large number of recorded cosmic-ray events.
Combining these investigations with information obtained from considering the acceleration mechanism at the source and/or the transport through space to the Earth could also give additional insight \cite{Hooper:2008pm}. ~ In conclusion, studying \DEWSB~in the setting of ultra-high--energy cosmic rays can allow for the direct detection or exclusion of such a mechanism and may naturally explain structures in the cosmic-ray spectrum.
| 12
| 6
|
1206.2400
|
1206
|
1206.3710_arXiv.txt
|
{Continuum radiative-transfer simulations are necessary for the interpretation of observations of dusty astrophysical objects and for relating the results of magnetohydrodynamical simulations to observations. The calculations are computationally difficult, and simulations of objects with high optical depths in particular require considerable computational resources.} {Our aim is to show how radiative transfer calculations on adaptive three-dimensional grids can be accelerated.} {We show how the hierarchical tree structure of the model can be used in the calculations. We develop a new method for calculating the scattered flux that employs the grid structure to speed up the computation. We describe a novel subiteration algorithm that can be used to accelerate calculations with strong dust temperature self-coupling. We compute two test models, a molecular cloud and a circumstellar disc, and compare the accuracy and speed of the new algorithms against existing methods.} {An adaptive model of the molecular cloud with fewer than 8 \% of the cells of the uniform grid produces results in good agreement with the full-resolution model. The relative root-mean-square (RMS) error of the surface brightness is $\la 4$ \% at all wavelengths, and in regions of high column density the relative RMS error is only $\sim 10^{-4}$. Computation with the adaptive model is faster by a factor of $\sim 5$. Our new method for calculating the scattered flux is faster by a factor of about four in large models with a deep hierarchy structure, when images of the scattered light are computed towards several observing directions. The efficiency of the subiteration algorithm is highly dependent on the details of the model. In the circumstellar disc test the speed-up is a factor of two, but much larger gains are possible. The algorithm is expected to be most beneficial in models where a large number of small, dense regions are embedded in an environment of low mean density.} {}
|
Radiative transfer modelling is an indispensable tool in the interpretation of observations of dusty astrophysical objects such as circumstellar discs \citep[e.g.,][]{Acreman2010}, molecular cloud cores \citep[e.g.,][]{Steinacker2005}, spiral galaxies \citep[e.g.,][]{deLooze2012}, and galaxy mergers \citep[e.g.,][]{Hayward2011}. Solving the radiative transfer equation is numerically a very difficult problem, requiring considerable computational resources. It is sometimes possible to use one-dimensional or two-dimensional geometries, but for a realistic representation of inhomogeneous structures such as turbulent molecular clouds, a fully three-dimensional (3D) model is needed. Moreover, an accurate description of the structure often requires the inclusion of a large variety of scales. For instance, a model of a circumstellar disc may require a resolution of $\sim R_{\sun}$ near the star, while, to include the whole disc, the total extent of the model needs to be several hundred AU. On a uniform Cartesian grid, such a model would comprise hundreds of billions of cells. Furthermore, the radiative transfer problem needs to be solved at several wavelengths and, if the dust is hot and the model is not optically thin, iteratively, to calculate the thermal dust emission self-consistently. To reduce the computational cost, several 3D radiative-transfer codes have been developed with support for adaptive resolution, i.e., the possibility of using a higher resolution in some parts of the model. With adaptive resolution, it is possible to use the finest resolution only where necessary, thereby reducing the number of cells in the model, in some cases by many orders of magnitude. With Cartesian grids, the most commonly used structure has been the oct-tree \citep[e.g.,][]{Jonsson2006, Acreman2010}. In an oct-tree, every model cell can be divided into eight subcells, which can then be divided further. A completely different approach was chosen by \citet{Ritzerveld2006}, who dispensed with the Cartesian grid and moved the photons along the edges of Delaunay triangles in a point cloud. Regardless of the method used to calculate the radiation field, iteration is needed in cases where the dust self-coupling is significant. The simplest and most commonly used method is the $\Lambda$ iteration, but this suffers from very slow convergence in models with a high optical depth. Convergence can be improved with accelerated $\Lambda$ iteration (ALI) at the cost of increased computer memory requirements and additional computation at each iteration step \citep{Cannon1973,Rybicki1991}. Accelerated lambda iteration was reformulated for use with Monte Carlo methods in \citet{Hogerheijde2000}, where it was called an accelerated Monte Carlo method \citep[see also][]{Juvela1999}. These methods are based on treating separately the part of the radiation emitted by a cell that is absorbed in the same cell or, in some variations of the method, in its immediate neighbourhood. Nevertheless, models with optical depths of several thousand, such as dense circumstellar envelopes, can require tens of iterations even when using ALI \citep{Juvela2005}. For a large model, the computation of a single iteration can be very time-consuming, making solving the full problem infeasible. The programme described in this article is based on the Monte Carlo method.
The main difference from other Monte Carlo radiative-transfer codes is the use of a hierarchical tree structure of nested grids, closely resembling that employed in patch-based adaptive mesh refinement (AMR) hydrodynamics codes. Although hierarchical grids have been used in radiative transfer calculations before \citep[e.g.,][]{Robitaille2011}, the method described here differs from the previous ones in some key aspects. In the Monte Carlo simulation, the programme works grid-by-grid, moving to the next grid only after all photon packages in the current one have been processed, instead of following one photon package at a time through the whole model. The most important new feature is the possibility of using subiterations, i.e., iterating separately those parts of the model that suffer from slow convergence. Although the current implementation uses the Monte Carlo method to compute the formal solution, the subiteration algorithm is independent of the solution method. The programme described here has already been used in the study of molecular cloud cores \citep{Malinen2011, Juvela2011} and galaxy mergers (Karl et al., in preparation). We describe how the Monte Carlo radiation transfer is performed on a hierarchical grid in Sect. 2, while the use of subiterations is explained in Sect. 3. In Sect. 4, we present results from some tests of the new method and compare them with a radiative transfer code that uses a regular 3D grid. Section 5 discusses possible future extensions, and Section 6 presents the conclusions.
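To illustrate the grid-by-grid scheduling idea in isolation (only the scheduling; the transport physics of the actual programme is far more elaborate, and all names and quantities below are our own placeholders), a minimal sketch could look as follows:
\begin{verbatim}
import math
import random
from collections import deque

class Grid:
    def __init__(self, name, tau):
        self.name = name
        self.tau = tau          # toy optical-depth scale of the grid (> 0)
        self.neighbours = []    # grids a package may cross into
        self.inbox = deque()    # photon packages queued in this grid
        self.absorbed = 0.0     # energy deposited in this grid

def propagate(grid, energy):
    # Toy transport step: deposit part of the package energy here and
    # either terminate the package or hand it to a neighbouring grid.
    deposited = energy * (1.0 - math.exp(-grid.tau))
    grid.absorbed += deposited
    remaining = energy - deposited
    if remaining < 1e-6 or not grid.neighbours:
        return None
    return random.choice(grid.neighbours), remaining

def run(grids, sources):
    for grid, energy in sources:
        grid.inbox.append(energy)
    active = deque(g for g in grids if g.inbox)
    while active:
        grid = active.popleft()
        while grid.inbox:          # finish every package queued here first
            out = propagate(grid, grid.inbox.popleft())
            if out is not None:
                target, energy = out
                if not target.inbox:
                    active.append(target)   # re-activate the receiving grid
                target.inbox.append(energy)
\end{verbatim}
Because each grid is processed only while its own cell data are in use, the cache hit rate is higher than when single packages wander across the whole hierarchy, and the full radiation field is known at each grid boundary before the next grid is entered.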
|
We have presented new algorithms for radiative transfer on hierarchical grids. We have tested the algorithms in realistic test cases and compared the results with existing methods. Our main conclusions are: \begin{itemize} \item The grid-by-grid processing provides some computational benefits owing to the higher cache hit rate. In Monte Carlo calculations, knowing the full radiation field at the grid boundary allows us to use adaptive resampling methods. \item Results from a hierarchical model built from a uniform grid are close to the full-resolution results, although the number of cells in the hierarchical model is smaller by more than an order of magnitude. Calculations with the hierarchical model are also faster than with the uniform grid. \item Pre-calculated extinction tables can be used to accelerate the calculation of the scattered flux. In a large model with a deep hierarchy structure, and when images of the scattered light are calculated for several observer directions, the speed-up can in practice be a factor of at least four. \item Although in the circumstellar-disc test case using the subiterations only provided a speed-up of a factor of two, in other cases the gain can be far more significant. The subiteration algorithm is most beneficial in cases where the model contains several small, dense regions with a high optical depth embedded within a medium with a much lower mean density. \end{itemize}
| 12
| 6
|
1206.3710
|
1206
|
1206.1715_arXiv.txt
|
The correct amplitude and phase modulation formalism of the Blazhko modulation is given. The harmonic-order-dependent amplitude and phase modulation form is equivalent to the Fourier decomposition of multiplets. The amplitude and phase modulation formalism used in the electronic transmission technique, as introduced by Benk\H o, Szab\'o and Papar\'o (2011, MNRAS 417, 974) for Blazhko stars, oversimplifies the amplitude and phase modulation functions; thus, it does not describe the light variation in full detail. The results of the different formalisms are compared and documented by fitting the light curve of a real Blazhko star, CM UMa.
|
The periodic modulation of the light curve of a large percentage of RR Lyrae stars, the so-called Blazhko effect, is a hundred-year-old enigma. The height and time of maximum light of these stars oscillate with the same period of several days, weeks, months or even years. \citet{bl} was the first who noticed that no constant period could satisfy the observed times of maximum light of RW Dra, an RR Lyrae type star\footnote{At the beginning of the $20^{\mathrm {th}}$ century the origin of the light variation of RR Lyrae (cluster-type) variables was unknown and they were classified according to the shape of their light curves as `antalgol' type.}, and an oscillation in the fundamental period with a cycle length of 41.6 days had to be postulated. The striking changes in the height of the light maximum and in the shape of the light curve of RR Lyr, which turned out to be periodic with a 40-day period, were discovered by \cite{sh}. Following these discoveries, the Blazhko effect has been detected and studied in a number of RR Lyrae stars. Large-scale surveys such as MACHO and OGLE \citep{alcock,scz03,scz11} detected a 10\,--\,30 per cent incidence rate of the modulation among RRab stars in the Magellanic Clouds and the Galactic bulge, while a recent ground-based multicolour photometric survey \citep{kbs} and the highly accurate observations of the CoRoT and {\it Kepler} space missions \citep{ch09,k10} revealed that a significantly larger fraction of the fundamental-mode RR Lyrae stars (about 50 per cent) shows the effect. The high occurrence rate of modulated RR Lyrae stars makes the Blazhko effect an even more intriguing problem of pulsation theory. No wonder that it has captivated researchers' interest again in recent years. In spite of the fact that the highly accurate space data and the ground-based multicolour photometries have given new insights into the Blazhko phenomenon \citep[for a recent review see][]{k11}, no generally accepted theory exists that is able to explain the observed features of the effect \citep{ko09}. The connection between the shock waves in the atmosphere and the light-curve modulation is particularly interesting. The shock waves play a significant role in the appearance of the bump/hump features on the light curves of RR Lyrae stars. \cite{psp}, investigating the spectral-feature variations during the 41-day modulation cycle of RR Lyrae, suggested a model in which `a critical level of shock-wave formation moves up and down' in the star's atmosphere. Later, \cite{cg97} found that the amplitude of the shock waves is strongly correlated with the Blazhko modulation. To describe the light variations of Blazhko stars, mathematical models have been proposed. A model that aimed to describe the full Fourier spectrum of the modulation was suggested by \cite{bk}. The triplet was interpreted as the non-linear coupling of the main pulsation mode and a non-radial mode close to it, and many of their combination terms. However, no detailed confrontation of the possible predictions of this model with the observations (e.g. on the amplitudes of the components of the multiplets) has been performed. In other approaches \citep[][hereafter BSP11]{szj,b11} the multiplet structure is the simple result of the modulation of purely radial pulsation. In their comprehensive study, BSP11 presented an analytical formalism (applied in electronic signal transmission) for the description of the light curves of Blazhko RR Lyrae stars.
Their model shows several light-curve characteristics similar to those observed in real Blazhko stars. It was also claimed that the new light-curve solution drastically reduced the number of necessary parameters compared to the traditional methods. However, in BSP11, the suggested method was not tested on real observational data. Recently, \cite{gug} made an attempt at applying the proposed new analytic modulation formalism to the Kepler data of the complex Blazhko star KIC 6186029 = V445 Lyr. Although the model light curve showed the global properties of the observed one, the fitted light curve deviated from the observed data significantly in certain phases of the pulsation and the modulation \citep[see figure 16 in][]{gug}. The surprisingly high variance of the residuals was explained by method-specific and object-specific reasons. The method did not describe the migration of the bumps and humps, and, as any `stationary' model, it was unable to follow an irregular, time-dependent phenomenon \citep{gug}. In this paper we look into the possible shortcomings of the formalism introduced in BSP11. The mathematical formalisms are given in Sect. 2, and the results are documented by the different fits of the light curve of a Blazhko star, CM UMa, in Sect. 3. The modulation of CM UMa is quite simple and regular: no secondary modulation is detected, and no modulation components of higher order than quintuplets are present in the Fourier spectrum. Therefore, no bias of the results arises from any time-dependent irregularity of the modulation, as was the case for V445 Lyr.
|
The Fourier sequence describing the light curve of a Blazhko star (Eq.~\ref{eq:four}) has been transformed into the form of an amplitude and angle modulated signal (Eq.~\ref{eq:summod}). The two descriptions are fully equivalent to each other. The correctly deduced amplitude and angle modulation functions, $f_{\mathrm{A}i}(t)$ and $f_{\mathrm{F}i}(t)$, depend upon the harmonic order of the unmodulated light curve ($i$). Since these are periodic functions, they can be expressed as Fourier sums (Eqs.~\ref{amod} and \ref{pmod}). A priori, there is no knowledge about the parameters of these equations; they can be determined only through observations. The possible correlations among them can be revealed by observations as well. In their study, BSP11 employed an amplitude and angle modulation pattern used in electronic signal transmission. (In this technique, the coding of the modulation of a signal is known in advance and can be controlled.) They assume that the amplitude modulation function does not depend upon the harmonic order of the unmodulated signal (light curve) and that the difference between the angle modulation functions in the different orders is simply multiplication by the harmonic-order number. This oversimplified procedure leads to a drastically reduced number of parameters, but at the cost of an unacceptably poor fit. The analysis of the light curve of the Blazhko RR Lyrae star CM~UMa (Sect.~\ref{cmu}) clearly shows that the BSP11 formalism gives a poor fit, especially around minimum and maximum light and on the ascending branch of the light curve. Even if the number of parameters is increased, the fit does not improve. It should be emphasized that these parts of the light curve reflect the changing strength and occurrence of the shock waves during the Blazhko cycle \citep{psp,cg97}. We thus conclude that the BSP11 approach does not take the nonlinear interactions, which determine the final form of the light variation, fully into account because of the strong restrictions of the formalism. Its capability to describe the changes of the shape of the pulsation light curve is seriously limited, since it only shifts the pulsation light curve in phase and scales it in amplitude periodically during the modulation. Plots that show the amplitude and phase variations of the different harmonic orders during the Blazhko cycle (Figs.~\ref{egg} and \ref{four}) reveal convincingly that the amplitude and angle modulation functions given in BSP11 are unsuited to describing the Blazhko modulation correctly. Nevertheless, the Blazhko modulation can be correctly interpreted as an amplitude- and angle-modulated signal (Eqs.~\ref{eq:summod}--\ref{pmod}). This fact hints at the possibility that during the Blazhko cycle the radial pulsation of an RR Lyrae star is modulated in phase/frequency and amplitude, and the fundamental period is subject to real oscillation. Up to now, the only model that is in conformity with this scenario has been suggested by \cite{st1}, although the underlying physics of this model has recently been strongly criticized by \cite{sm} and \cite{mol}.
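To make the contrast explicit, the two descriptions can be written schematically as follows (the notation is ours and only sketches the structure of the equations referred to above; $\omega_0$ is the pulsation frequency and $\omega_\mathrm{m}$ the modulation frequency): $$ m(t)=\sum_i a_i\left[1+f_{\mathrm{A}i}(t)\right] \sin\left[i\omega_0 t+\varphi_i+f_{\mathrm{F}i}(t)\right], \qquad f_{\mathrm{A}i}(t)=\sum_j \alpha_{ij}\sin\left(j\omega_\mathrm{m}t+\phi^{\mathrm{A}}_{ij}\right), \qquad f_{\mathrm{F}i}(t)=\sum_j \beta_{ij}\sin\left(j\omega_\mathrm{m}t+\phi^{\mathrm{F}}_{ij}\right), $$ with independent Fourier coefficients for each harmonic order $i$, whereas the BSP11 simplification amounts to $f_{\mathrm{A}i}(t)=f_{\mathrm{A}}(t)$ and $f_{\mathrm{F}i}(t)=i\,f_{\mathrm{F}}(t)$ for all $i$, i.e., a single pair of modulation functions for the whole light curve.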
| 12
| 6
|
1206.1715
|
1206
|
1206.4289_arXiv.txt
|
We report the first results of a multi-epoch search for wide (separations greater than a few tens of AU), low-mass tertiary companions of a volume-limited sample of 118 known spectroscopic binaries within 30~pc of the Sun, using the 2MASS Point Source Catalog and follow-up observations with the KPNO and CTIO 4m telescopes. Note that this sample is volume-limited rather than volume-complete, and, thus, there is incompleteness in our reported companion rates. We are sensitive to common proper motion companions with separations from roughly 200~AU to 10,000~AU ($\sim10\arcsec$ to $\sim10\arcmin$). From 77 sources followed up to date, we recover 11 previously known tertiaries and three previously known candidate tertiaries, of which two are spectroscopically confirmed and one rejected, as well as three new candidates, of which two are confirmed and one rejected. This yields an estimated wide tertiary fraction of $19.5^{+5.2}_{-3.7}\%$. This observed fraction is consistent with predictions set out in star formation simulations, where the fraction of wide, low-mass companions to spectroscopic binaries is $>$10\%, and is roughly twice the wide companion rate of single stars.
|
Formation simulations have had a difficult time modeling the very close separations of many spectroscopic binaries. A mechanism is needed to draw angular momentum away from an already close pair of objects \citep{kis98}. Recent star formation simulations \citep{sd03,dd04,umb05} show that one potential mechanism for the transfer of angular momentum is through three-body interactions. The third bodies used, in these cases, are cool dwarfs. Cool dwarfs are stellar and sub-stellar objects with spectral types $\gtrsim$ M and masses less than a few tenths of a solar mass. Interactions between loosely bound or totally unbound low-mass objects can dramatically tighten already-close orbits. These results predict that spectroscopic binary systems should have a larger fraction of wide cool companions than so-called `single' stars. The dynamic interaction simulations of \citet{sd03} and \citet{dd04} produce some testable predictions. The simulations of \citet{dd04} predict that, if a cool dwarf is found to be in a stable, $>$10~AU orbit, its primary is frequently ($\sim75\%$) a tight spectroscopic binary. A similar qualitative result is found in the dynamic simulations of \citet{sd03}. Recent work by \citet{law10}, which studied a sample of known, very wide M dwarf binary systems, found that $45^{+18}_{-16}\%$ were higher-order multiples. This supports the simulation predictions. Note that `wide' in this work means separations of $\gtrsim$ tens of AU. The Delgado-Donate et~al.\ simulations found that $\sim$10\% of their tight multiple systems survive with wide, cool dwarf companions by the end of their simulations, at 10.5~Myr. There are many empirical measurements of the cool dwarf companion frequency of stars. \citet{gizis01} estimated a wide cool dwarf companion frequency of $18\%{\pm}14\%$, based on only three L and T dwarf companions. \citet{jc09} examined 21 FGK stars within 20~pc and found no companions in the range 20~AU -- 250~AU down to masses of 50~$M_J$. \citet{pl05} used the Hubble Space Telescope to examine 45 young stars ($\sim$0.15~Gyr) at separations from 15~AU to 200~AU and masses down to typically 30~$M_J$, finding or confirming eight cool companions. \citet{mz04} examined a sample of $\sim$300 single G, K, and M stars at separations from 75 AU to 1200 AU and masses as low as $\sim$5~$M_J$ and found a cool dwarf binary frequency of 1-2\%. \citet{mh04,mh06,mh09} examined $\sim$250 `Solar Analogs' (F5-K5 stars). They found two new brown dwarf and 24 new stellar companions with separations from 28~AU to 1590~AU and probed masses down to ${\sim}10~M_J$. They calculate an ultracool companion frequency of $\sim3\%$, which is marginally higher than that of \citet{mz04}. All of the above works are consistent with a low, wide cool dwarf companion fraction of only a few percent. However, the primaries are all apparently single stars, which does not test the simulation results on tertiary companions. \citet{tok06} conducted such a study of 165 spectroscopic binaries, in which they searched for wide companions using the 2MASS database. A subset of those objects (62) were observed at high spatial resolution with NACO on the VLT. They found a very high tertiary rate, adjusted for incompleteness, of 63\%. They also found that the fraction of spectroscopic binaries with tertiary components is a strong function of spectroscopic binary period. Those with very short periods (less than 12 days) almost all have wide companions (96\%).
It should be noted that other groups have conducted common proper motion (CPM) comparisons between various wide-field surveys, though none has specifically targeted wide companions to spectroscopic binaries. The Brown Dwarf Kinematics Project \citep{jf09,jf10} has studied the kinematics of ultracool dwarfs and found or confirmed several wide cool companions to stellar primaries, including one spectroscopic binary. \citet{lb07} performed a re-analysis of the Digitized Sky Survey using the custom-built software package SUPERBLINK. They found that ${\sim}9.5\%$ of Hipparcos stars have companions at separations wider than 1000 AU and proper motions greater than $0{\farcs}15/yr$. There have been several papers cross-correlating 2MASS and SDSS that have reported individual discoveries, such as \citet{met08} and \citet{kg11}. There have also been large systematic cross-correlations, such as \citet{jdk10} and \citet{sc09}, 2MASS to SDSS; or \citet{nd09}, 2MASS to UKIDSS. However, these works focused on finding individual field objects with large proper motions, not on finding multiple systems. Finally, there is the Slowpokes survey \citep{slowpokes}, which cross-correlates USNO-B with SDSS and was designed to look for CPM companions. It focused on the general field population and searched only separations $\le 180\arcsec$; this is narrower than our search radius, and, being limited to the optical, it does not have the same sensitivity to extremely cool objects as our near-infrared survey. Here we report the first results from a study to measure the wide tertiary fraction around spectroscopic binaries via CPM, using 2MASS as a first epoch and our own deep near-infrared imaging as the second epoch, a proper-motion confirmation that \citet{tok06} did not carry out. Section 2 describes the experimental setup and sample selection, while Section 3 outlines the wide-field NIR imaging campaign and the data reduction procedures. Sections 4 and 5 detail our CPM analysis techniques and discuss the results, respectively. Section 6 summarizes the current work and our results.
|
\label{sec:disc} \subsection{Sensitivity} \label{sec:sens} We made preliminary estimates of our survey sensitivity, both in separation from the bright spectroscopic binary primary and in magnitude. This analysis was performed by creating a PSF star for each field. This PSF star was then randomly placed at ten locations within the field, each with a random magnitude. The {\it daophot} IRAF package was used to create the PSF star and place the random, `fake' stars. The resultant images were searched for point sources using {\it starfind} parameters identical to those used in the initial search for candidates, as described in Section \ref{sec:cpmtech}. The insertion of ten fake sources at a time was repeated 1000 times for each field, for a total of 10,000 fake sources. The magnitude limit was then determined as a function of separation from the central binary by comparing the number of fake sources inserted to the number recovered by our search parameters; a schematic sketch of this injection-recovery procedure is given below. The left-hand panel of Figure \ref{fig:sens} displays the sensitivity curve generated for the field of HIP 39064 at the 50\% and 90\% completeness levels. For this field, we see that the inner 10--15 arcseconds are mostly lost due to the bright primary. However, outside of $\sim$15 arcseconds, our sensitivity is fairly uniform, with an average 50\% completeness of J = 18.3 mag and a 90\% completeness of J = 17.6 mag. In comparison to the field of HIP 39064, which corresponds to the typical brightness of our central binaries (V = 7.70 mag), the sensitivity curve of HIP 44248 (V = 3.97 mag) is displayed in the right-hand panel of Figure \ref{fig:sens}. This is one of the brightest objects in our sample. The average 50\% completeness is 18.1 mag and the average 90\% completeness is 16.7 mag. However, as expected for a brighter central object, the radius at which a uniform sensitivity is reached is much larger, $30-40$ arcseconds. It should also be noted that the effects of the bright central binaries are not limited to the central 10-15 arcseconds. Among the brighter primaries, we noted a ringing effect in our images. These features looked like ripples in a pond, centered on the primary. This ringing also caused wide swings in sensitivity across the FOV. This effect can be seen in the sensitivity curve of HIP 44248 in the right-hand panel of Figure \ref{fig:sens}. It is particularly noticeable between 20 and 50 arcsecond separations, where the 90\% completeness limit jumps by several magnitudes a couple of times. While we are not completely certain what the cause of the `ringing' is, it is likely due to scattered light within the camera from the extremely bright central objects in our fields. Figure \ref{fig:senshist} displays a histogram of the 50\% and 90\% completeness limits for all fields observed in this work. The median 50\% limit is 19.1 mag and the median 90\% limit is 18.2 mag. From these data, we can see that we did not achieve our expected sensitivity in many fields (a J-band limit of 20 mag), but can consistently recover objects one to two magnitudes brighter. These sensitivity curves as a function of radial separation from the central binary will be used in a future statistical analysis of the sample. \subsection{Overall Wide Companion Rate} Our primary goal in this initial study was to test whether known spectroscopic binaries have an enhanced wide tertiary rate, compared to that of other stars.
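The following is a minimal sketch of the injection-recovery completeness estimate described in Sect.~\ref{sec:sens}; the function and parameter names are our own placeholders (the actual analysis used the IRAF {\it daophot} and {\it starfind} tasks), and the magnitude zeropoint is arbitrary.
\begin{verbatim}
import numpy as np

def add_psf(frame, psf, y, x, mag, zeropoint=25.0):
    # Add a scaled PSF at (y, x); nearest-pixel placement for simplicity.
    flux = 10.0 ** (-0.4 * (mag - zeropoint))
    h, w = psf.shape
    y0, x0 = int(round(y)) - h // 2, int(round(x)) - w // 2
    frame[y0:y0 + h, x0:x0 + w] += flux * psf

def completeness(image, psf, find_sources, n_trials=1000, n_fake=10,
                 mag_range=(14.0, 21.0), match_radius=2.0):
    # Returns (separation, magnitude, recovered) for every fake star.
    ny, nx = image.shape
    h, w = psf.shape
    cy, cx = ny / 2.0, nx / 2.0          # the bright central binary
    records = []
    for _ in range(n_trials):
        frame = image.copy()
        fakes = [(np.random.uniform(h, ny - h), np.random.uniform(w, nx - w),
                  np.random.uniform(*mag_range)) for _ in range(n_fake)]
        for y, x, mag in fakes:
            add_psf(frame, psf, y, x, mag)
        found = find_sources(frame)       # same parameters as the real search
        for y, x, mag in fakes:
            hit = any(np.hypot(y - fy, x - fx) < match_radius
                      for fy, fx in found)
            records.append((np.hypot(y - cy, x - cx), mag, hit))
    return np.array(records)
\end{verbatim}
Binning the returned records in separation and magnitude, and taking the recovered fraction in each bin, yields completeness curves of the kind shown in Figure \ref{fig:sens}.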
Table \ref{tab:sptcf} lists the frequency of wide tertiary companions in our sample, broken down by the spectral type of the primary member of the spectroscopic binary system. With the exception of the G and B stars, of which we observed only four, the typical wide tertiary fraction is $\sim20-25\%$. The overall rate, which we determined by assuming that these systems are drawn from a binomial distribution, is $19.5^{+5.2}_{-3.7}\%$. This is a preliminary rate, as we do not account for incompleteness, selection effects, etc. \citet{tok06} probes the same range of primary spectral types as we do, but extends to smaller separations. In order to compare consistent samples, we selected only those companions found by \citet{tok06} that we could have found in our sample, i.e., those in the same separation range ($>10\arcsec$), or 27 companions. This yields a tertiary fraction of $16.8\pm2.9\%$ from a sample of 165 spectroscopic binaries, which is comparable to our findings. Our wide companion fraction can also be compared to that of apparently single stars over a similar range of primary spectral types. For example, \citet{mz04} found a wide companion rate of $\sim$11\%-12\%, which is still lower than either our own results or those of \citet{tok06}. Thus, both this work and that of \citet{tok06} are consistent in finding an enhanced tertiary companion fraction for spectroscopic binaries with respect to `single' stars. This trend continues into the M dwarf regime as well. \citet{law10} surveyed 36 known wide M dwarf binary systems to look for close companions to either member. They find that $45^{+18}_{-16}\%$ of their wide binaries are actually high-order multiple systems. This agrees with the simulation predictions of \citet{sd03}, \citet{dd04}, and \citet{umb05}, as well as with our results. \citet{mz04} surveyed nearby solar-type stars (FGK) over a wide range of separations and found a substellar companion rate of $\sim1\%$. This is similar, within the uncertainties, to our observed substellar companion rate of $3.8\pm2.2\%$. \citet{mh09} performed an exhaustive survey of 266 FGK stars with both AO imaging and wide-field studies and found a substellar companion rate of $3.2^{+3.1}_{-2.7}\%$ for separations of up to $\sim$1600 AU. This rate is statistically indistinguishable from the rate we found. The substellar companion fraction we report is most likely a lower limit, as we should find more faint companions when we obtain our second-epoch deep imaging. Those data will have sensitivities comparable to our first epoch ($J_{lim}{\sim}18.2$ at 90\% completeness for most fields outside of 10'' to 20'' separations) and will be sensitive to very faint T and later dwarfs. This is because we rely on 2MASS for our first-epoch astrometry. 2MASS is fairly complete to distances of 25 - 30 pc for L dwarfs \citep{kc07}. However, T dwarfs, which all have absolute J magnitudes of $\sim 14 - 16.5$, are only detectable to distances of 15 - 20 pc, given the J-band 2MASS limit of $\sim$16. So, at this time, it is difficult to tell whether the substellar companion rate is enhanced. Finally, we examined possible correlations between the mass of the primaries and the mass of the secondaries by using spectral type as a proxy for mass. There was no obvious correlation, although the majority of the tertiary companions were of spectral type M for all primary spectral types.
This was not true for G-type primaries, whose only confirmed companions were L dwarfs. In any case, the large majority of wide tertiary companions noted in this work were of a considerably lower mass than their primaries, which fits with the predictions of \citet{sd03} and \citet{dd04}. \subsection{Background Interlopers} \label{sec:bl} We found two background interlopers in our CPM sample, one distant G dwarf (HIP 101769C) and one background M subdwarf (HIP 72603C). HIP 101769 has a proper motion near our minimum-motion limit of $0\farcs1$/yr, while HIP 72603 has a much higher proper motion (${\sim}0\farcs2$/yr). It has been found that the number density of moving objects rises as the inverse cube of the magnitude of the proper motion \citep{ls05}. Thus, it is not surprising that the interlopers we found lie at the lower end of our motion spectrum. Further work by \citet{lb07} provides a quantitative mechanism for determining the likelihood that a given CPM candidate is a chance alignment of a background object when the overall motion of the object is $\ge 0{\farcs}15/yr$. This analysis was based on the fact that the number of objects at a given proper motion increases with smaller motions and that the chance of a random alignment increases with greater angular separation. They derived the following formula to quantify these correlations: $${\Delta}X = [(\mu/0.15)^{-3.8}\Delta\theta\Delta\mu]^{\frac{1}{2}}$$ where $\mu$ is the magnitude of the proper motion of the primary in arcseconds per year, $\Delta\theta$ is the difference in angular position between the primary and the candidate CPM companion in arcseconds, and $\Delta\mu$ is the difference in proper motion in arcseconds per year. The quantity ${\Delta}X$ measures the likelihood that a given candidate is a chance alignment rather than a genuine companion. \citet{lb07} found that, when the value of ${\Delta}X$ is around 1, there is a 50\% chance of the candidate companion being a chance alignment. This increases to well above 90\% for values of ${\Delta}X > 1.2$. The values of ${\Delta}X$ for the two interlopers we find in our sample are 1.2 for HIP 101769C and 3.9 for HIP 72603C. Since the motion of HIP 101769C is below the limit of the analysis in \citet{lb07}, it is not clear how effectively this formula can be applied. All of our other candidates have values under 1. \subsection{Spectroscopic Binary Periods} The comparison between the target sample of this work and that of \citet{tok06} produced some interesting results, particularly when comparing the orbital periods of the target spectroscopic binaries. Tokovinin's program set out with the same goal as ours: examination of the tertiary fraction of spectroscopic binaries as a means of testing binary star formation simulations. They wanted to maximize the chances of detecting tertiaries; so, they selected only spectroscopic binaries with periods of less than 30 days. The simulation predictions of \citet{sd03} and \citet{dd04} argue that the Kozai mechanism can tighten these systems by transferring angular momentum from the tight system to a wide third member. Thus, the tighter systems should, preferentially, have a higher tertiary companion rate than wider spectroscopic binaries. In the \citet{tok06} study, they do find that tighter spectroscopic binaries tend to have a higher fraction of tertiaries and that the tertiary fraction rises to nearly 100\% for spectroscopic binaries with periods of less than a day.
Our selected sample is limited in volume and minimum proper motion, while the period of the spectroscopic binary is unconstrained. Table \ref{tab:percf} lists the companion fraction as a function of the period of the spectroscopic binaries in the observed sample, while Figure \ref{fig:percf} displays a histogram of the data in Table \ref{tab:percf}. It is broken down into log-period bins centered on the listed values, with $\pm0.5$~dex widths. We calculated the fractions assuming that the data are binomially distributed (Table \ref{tab:percf}). Our number of primaries as a function of binary period is fairly flat from 1 day to 10,000 days. The sample of \citet{tok06} does not examine spectroscopic binaries with periods longer than 30 days, which corresponds to the last three bins of Table \ref{tab:percf} and Figure \ref{fig:percf}. The tertiary companion rate, however, peaks at the smallest period bin for which we have significant data, $50\% \pm 13.3\%$ for spectroscopic binaries with periods between 1 and 10 days. The companion fraction then decreases for spectroscopic binaries with periods between 10 and 100 days, but increases again to between 15\% and 30\%. Note that the most significant of these variations from our baseline 19.5\% companion rate is the 50\% rate at small periods, which is a ${\sim}2\sigma$ event. Thus, we can marginally confirm the result of \citet{tok06}, who found that the tertiary companion rate drops by about a factor of two from very close binaries (periods less than 7 days) to wider binaries (periods between 7 and 30 days). A larger sample size is needed to provide a more robust measurement of the variation of the companion fraction as a function of spectroscopic binary period. \citet{tok06} also find that there is no correlation between the period of the binary and the separation of the tertiary, which we find as well (see Figure \ref{fig:pvs}). However, our result of an increase in companion fraction for spectroscopic binaries with periods longer than 100 days is quite different from that of \citet{tok06}. We found that a significant fraction of wider spectroscopic binaries have tertiary companions. This demonstrates that enhanced wide companion rates also apply to wider spectroscopic binaries (Table \ref{tab:percf}). However, this result is preliminary, as there is a large region of separation space that this work has not yet explored, particularly tertiary companions with separations less than 10\arcsec. It should be noted that \citet{tok06} found nearly half of their tertiary companions in this separation range.
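As an illustration of the chance-alignment statistic of Sect.~\ref{sec:bl}, the following minimal sketch evaluates ${\Delta}X$; the input values in the example are made-up round numbers, not the measured quantities for our candidates.
\begin{verbatim}
def delta_x(mu, d_theta, d_mu):
    # mu and d_mu in arcsec/yr, d_theta in arcsec (Lepine & Bongiorno 2007).
    # Delta X ~ 1 implies a ~50% chance-alignment probability; values
    # above ~1.2 imply well above 90%.
    return ((mu / 0.15) ** -3.8 * d_theta * d_mu) ** 0.5

# Example with illustrative inputs:
print(delta_x(mu=0.20, d_theta=100.0, d_mu=0.02))  # ~0.82, i.e. likely genuine
\end{verbatim}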
| 12
| 6
|
1206.4289
|
1206
|
1206.5841_arXiv.txt
|
Recent observational surveys of trans-neptunian binary (TNB) systems have dramatically increased the number of known mutual orbits. Our Kozai Cycle Tidal Friction (KCTF) simulations of synthetic trans-neptunian binaries show that tidal dissipation in these systems can completely reshape their original orbits. Specifically, solar torques should have dramatically shortened the semimajor axis decay and circularization timescales of primordial (or recently excited) TNBs. As a result, the initially random distribution of TNBs in our simulations evolved to have a large population of tight circular orbits. This tight circular population appears for a range of TNO physical properties, though a strong gravitational quadrupole can prevent some systems from fully circularizing. We introduce a stability parameter to predict the effectiveness of KCTF on a TNB orbit, and show that a number of known TNBs must have a large gravitational quadrupole to be stable.
|
Trans-neptunian binary systems (TNBs) constitute at least 10\% of the objects between 30 and 70 AU \citep{Stephens2006}, and up to 30\% of the Cold Classical Kuiper Belt \citep{Noll2008b}. As of spring 2012, 72 TNBs have been reported in the literature, with full mutual orbits available for 18 objects and partial, ambiguous orbits for 30 more \citep[e.g.][and the list at http://www2.lowell.edu/users/grundy/tnbs]{Noll2008,Grundy2009,Grundy2011,Parker2011}. These observations show that the majority of detected TNB systems have a separation of less than 2\% of the Hill Radius ($r_{Hill}$), defined as: \begin{equation} \label{rHill} r_{Hill} = a_{helio}(1-e_{helio})\sqrt[3]{\frac{M_{binary}}{3 M_{Sun}}} \end{equation} where $a_{helio}$ and $e_{helio}$ are the semimajor axis and eccentricity of the heliocentric orbit. Even more striking is the very small fraction of TNB systems which are widely separated ($>$10\% $a/r_{Hill}$), despite their being easier to detect. This implies that TNBs are generally in very close mutual orbits, and the fraction of orbits that are very close has only increased with better detection methods. In addition, most known TNBs are of almost equal brightness \citep{Noll2008}, implying near-equal masses. \\\\ Several formation methods have been proposed to create TNBs, though none as yet can fully describe the observed population, nor account for any post-formation orbital evolution. Large impacts are an obvious contender for formation, but tend to produce smaller satellites (and thus more unequal mass ratios) than are observed. Indeed, \citet{Canup2005} showed the Charon-forming impact required a very slow relative velocity ($v_{imp}\approx{v_{esc}}\approx0.7$ km/s), and even then only allowed a mass ratio of approximately 10:1. Dynamical captures can also produce TNBs \citep[e.g.][]{Goldreich2002,Lee2007}. These methods do favor near-equal mass ratios (as near-equal masses provide a deeper gravity well for a given primary size), but have a strong preference for producing wide ($>$5\% $r_{Hill}$) binaries on eccentric orbits. \citet{Funato2004} combines a small impact and dynamical capture to efficiently produce TNBs, but only at very high eccentricities. \citet{Nesvorny2010} shows that binaries formed by gravitational collapse also tend to have near-equal mass ratios, but again have wide, moderately eccentric orbits. The unbinding of binaries by impacts \citep{Petit2004} or Neptune encounters \citep{Parker2010} would reduce the number of wide TNBs, but would not correspondingly increase the number of tight systems. The deficit of these wide systems, and the abundance of tight ones, therefore hints at the existence of some non-disruptive post-formation processing of TNB mutual orbits. \\\\ In this paper, we propose that Kozai Cycle Tidal Friction \citep[KCTF, after][]{Eggleton2006} may be the method by which these orbits were tightened and circularized. We will show through several sets of Monte Carlo simulations that KCTF can transform a large fraction of primordial TNB systems into very close and circular orbits. In addition, we show that the implied tidal evolution makes Kozai cycles very inefficient at destroying TNB systems. We also show how KCTF is influenced by the physical properties of the TNB system, such as the tidal $Q$ and $k_L$, density, $J_2$, rotation rate, and mass ratio. \\\\ \textbf{Figure 1 here}
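As a quick numerical illustration of Eq.~(\ref{rHill}), the following sketch evaluates $r_{Hill}$ for a representative classical-belt binary; the mass and orbital elements are made-up round numbers, not those of any real system.
\begin{verbatim}
M_SUN = 1.989e30  # kg

def r_hill_au(a_helio_au, e_helio, m_binary_kg):
    # Eq. (1): Hill radius of the binary's heliocentric orbit, in AU.
    return (a_helio_au * (1.0 - e_helio)
            * (m_binary_kg / (3.0 * M_SUN)) ** (1.0 / 3.0))

# A ~1e18 kg pair on a typical classical-belt orbit:
rh = r_hill_au(a_helio_au=44.0, e_helio=0.05, m_binary_kg=1.0e18)
print(rh)  # ~0.0023 AU (~3.4e5 km); 2% of r_Hill is then only ~7000 km
\end{verbatim}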
|
KCTF can significantly transform the orbits of trans-neptunian binaries. At least 90\% of random synthetic TNB systems survive 4.5 Ga of KCTF evolution. A third to a half of the surviving TNB systems decay to circular orbits at less than 1\% of their mutual Hill radius. Some of these systems can have values of J/J' similar to impact-generated systems. The remaining systems are stable in eccentric orbits over the lifetime of the solar system. All resulting systems preserve their initial prograde/retrograde preference. \\\\ The inclusion of $J_2$ lowers the effectiveness of KCTF, but does not eliminate it, especially for rubble-pile objects. In addition, $J_2$ creates an island of stability that allows otherwise unstable observed systems to be in permanent eccentric orbits. A slower initial rotation rate or a 10:1 mass ratio also slightly lowers the effectiveness of KCTF, but does not change the basic trends. \\\\ The observed population of TNB orbits fits well to our simulations with $J_2$. These simulations predict that, as high-resolution observational capabilities improve, a large number of TNBs will be detected with very tight, circular orbits. Indeed, considering the fraction of known wider binaries, tight near-equal-mass TNBs may be extremely common.
| 12
| 6
|
1206.5841
|
1206
|
1206.2650_arXiv.txt
|
We present new observations of the \neii\ emission from the ionized gas in \sgraw\ with improved resolution and sensitivity. About half of the emission comes from gas with kinematics indicating it is orbiting in a plane tipped about 25\degree\ from the Galactic plane. This plane is consistent with that derived previously for the circumnuclear molecular disk and the northern arm and western arc ionized features. However, unlike most previous studies, we conclude that the ionized gas is not moving along the ionized features, but on more nearly circular paths. The observed speeds are close to, but probably somewhat less than expected for orbital motions in the potential of the central black hole and stars and have a small inward component. The spatial distribution of the emission is well fitted by a spiral pattern. We discuss possible physical explanations for the spatial distribution and kinematics of the ionized gas, and conclude that both may be best explained by a one-armed spiral density wave, which also accounts for both the observed low velocities and the inward velocity component. We suggest that a density wave may result from the precession of elliptical orbits in the potential of the black hole and stellar mass distribution.
|
The center of the Milky Way Galaxy has been the subject of intense study since the observation of infrared emission from the central star cluster \citep{bn68} and radio wavelength emission from the ionized gas in \sgraw\ \citep{downes71} and the central compact object \sgras\ \citep{balick74}. Being nearly 100 times closer than any other major galactic nucleus, our Galactic center provides the best opportunity to observe the interaction of stars and gas with a super-massive black hole (SMBH). Numerous authors have reviewed the contents and phenomena found in the Galactic center. \citet{morris96}, \citet{mezger96}, and \citet{genzel10} discuss observations of the interstellar gas that is most relevant for this paper. Observations of stellar proper motions and radial velocities \citep{ghez08, gillessen09} give a distance to the Galactic center of 8.3\,kpc (corresponding to an image scale of 25\asec/pc) and a black hole mass of 4.3\ee{6}\,\msun. Stellar spectra and imaging give evidence for recent star formation or the capture of a recently formed star cluster \citep{levin03, gerhard01}. The stellar mass distribution in \sgraw\ is not well known. If there is an equilibrium stellar cusp, as expected for a cluster around a SMBH \citep{bahcall76}, the radial dependence of the stellar density can be described by a broken power law with a slope $\gamma \approx 1.3$ for the cusp and $\gamma \approx 1.8$ outside the cusp \citep{genzel03, schodel07, genzel10}. \citet{merritt10} describes how the absence of a Bahcall-Wolf cusp is plausible, assuming that the mass in the inner parsec is traced by old stars, which would indicate a low-density core with radius $\approx$0.5\,pc. Some recent papers \citep{buchholz09, do09, bartko10} have suggested that there may be a relatively flat stellar density inside of $\sim$1~pc. The most prominent interstellar matter in the central 1.5\,pc is the ionized gas and the associated warm dust \citep{rieke88}. This gas has the appearance of a clumpy, filamentary, multi-armed spiral \citep{lo83,serabyn85}. The mass of the ionized gas is several tens of \msun. Neutral atomic gas is also present in the inner few pc \citep{jackson93}. Although it is more difficult to observe, its mass is a factor of $\sim$10 larger than that of the ionized gas. Beyond $\sim$1.5~pc and extending out to $\sim$10~pc, the interstellar gas is mostly molecular, and is referred to as the circumnuclear disk or CND \citep{becklin82,gusten87,christopher05,montero09,oka11}. Estimates for the mass of the CND range from a few \eten{4}\,\msun, based on millimeter dust emission \citep{mezger89,davidson92,etxaluze11}, to \eten{6}\,\msun, based on virial masses of molecular clumps \citep{christopher05,montero09}. Observations of infrared and radio hydrogen recombination lines (RRLs) \citep{roberts93,herbst93,paumard03,zhao09} and infrared fine-structure lines \citep{wollman77,lacy80,serabyn88,lacy91} provide information on the motion of the ionized gas through Doppler shifts. The overall pattern is consistent with expectations for orbital motions in a potential dominated by the massive black hole: the highest velocities are found within a few arcseconds of \sgras, and velocities tend to decrease going outward. Much of the gas appears to be near a plane tipped $\sim 25^{\circ}$ from the Galactic plane, with redshifts seen toward positive Galactic longitudes and blueshifts generally seen toward negative longitudes.
The motions of the molecular gas in the CND are also mostly in the sense of Galactic rotation, but with a roughly flat rotation curve, as distributed mass makes a larger contribution to the gravitational potential farther from the center. Several models have been proposed to explain the gas kinematics. \citet{lacy80} originally saw the gas as being in a number of independently orbiting clouds, but better imaging, especially with the VLA \citep{lo83}, showed that the ionized gas was better described as a collection of streamers, with the `clouds' being peaks in the emission along the streamers. \citet{serabyn85} and \citet{serabyn88} found that Doppler shifts vary smoothly along the streamers. They modeled the `western arc' (see Fig. 1) as the ionized inner rim of the CND in a nearly circular orbit around the center, and the `northern arm' as a flow of gas approaching the center. \citet{lacy91} obtained a complete data cube of the \neii\ emission from the inner 60\asec\,$\times$\,90\asec\ and concluded that the gas kinematics of the western arc and northern arm were better modeled with circular motions, rather than motions along the streamers. They argued that the western arc and northern arm are orbiting in the same plane as the CND and that they could be joined at their north ends to form a single spiral feature. The main problem with their interpretation was the lack of a physical explanation for the spiral. They suggested that it could be a density wave or a spiraling inflow affected by both gravitational and viscous forces, but in both cases it was hard to identify the forces responsible for organizing the gas into a spiral pattern. Observations of infrared and radio hydrogen recombination line emission led various authors \citep{sanders98,vollmer00,liszt03,paumard04} to return to the tidally stretched cloud model. \citet{zhao09} strengthened this model by including proper motions of the ionized gas streamers. They fitted observations of the western arc, northern arm, and eastern arm with elliptical Keplerian orbits in the potential of the central black hole. Non-gravitational forces may also influence the gas distribution and motions. \citet{aitken91,aitken98} and \citet{glasse03} observed polarized emission from the dust in the northern arm and bar region, indicating that mGauss magnetic fields are aligned along the ionized streamers. \citet{aitken98} interpreted variations in the polarization to give a measure of the inclination of the magnetic fields from the plane of the sky. Assuming that the flows are along the field lines, they obtained information about the 3-dimensional structure of the gas orbits. Stellar winds apparently also affect the ionized gas. In several cases bow shocks are seen around stars, presumably as the stars move through the ionized medium or a wind from the central region blows past the stars \citep{serabyn91, geballe04}. We have made new observations of the \neii\ emission from \sgraw\ with improved spectral and spatial resolution, as well as improved sensitivity. In this paper, we present these observations and compare them to the different models of the ionized gas kinematics.
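As a toy illustration of the expected Doppler pattern, the sketch below evaluates the line-of-sight speed of a circular Keplerian orbit around the black hole, viewed in a plane inclined to the sky. The black hole mass follows the value quoted above, but the inclination and radius in the example are arbitrary illustrative choices (not the fitted geometry), and the distributed stellar mass is neglected.
\begin{verbatim}
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m
M_BH = 4.3e6 * M_SUN   # black hole mass quoted in the text

def v_los(r_pc, phase, incl_deg):
    # Line-of-sight speed (km/s) of a circular Keplerian orbit of radius
    # r_pc (pc), at orbital phase `phase` (rad), in a plane inclined
    # incl_deg to the plane of the sky; distributed mass is neglected.
    v_circ = np.sqrt(G * M_BH / (r_pc * PC)) / 1.0e3
    return v_circ * np.sin(np.radians(incl_deg)) * np.cos(phase)

print(v_los(1.0, 0.0, 65.0))  # ~123 km/s at r = 1 pc for this geometry
\end{verbatim}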
|
Before discussing theoretical models and implications from our observations, we state the conclusions we have drawn that are independent of those models. Approximately half of the ionic line emission from \sgraw\ comes from gas orbiting in a plane tipped about 25\degree\ from the Galactic plane. This plane is coincident within uncertainties with that of the molecular circumnuclear disk. The gas in the disk plane moves on nearly circular orbits, with only a small inward velocity component. The Doppler pattern is not consistent with motion along the northern arm ionized streamer. The observed speeds are close to, but probably somewhat less than expected for orbital motions in the gravitational potential of the central super-massive black hole and the distributed mass, as derived from the orbital motions of stars near the black hole and the distribution of stars. The spatial distribution of the ionized gas in the western arc and northern arm could be described by two ellipses in a plane close to that derived from the [Ne~II] line kinematics, but is somewhat better fitted with a single, approximately linear (Archimedean) spiral.
| 12
| 6
|
1206.2650
|
1206
|
1206.5300_arXiv.txt
|
Turbulent properties of the quiet Sun represent the basic state of surface conditions, and a background for various processes of solar activity. Therefore, understanding the properties and dynamics of this `basic' state is important for the investigation of more complex processes and of the formation and development of observed phenomena in the photosphere and atmosphere. To characterize the turbulent properties, we compare kinetic energy spectra on granular and sub-granular scales obtained from infrared TiO observations with the New Solar Telescope (Big Bear Solar Observatory) and from 3D radiative MHD numerical simulations ('SolarBox' code). We find that the numerical simulations require a high spatial resolution, with a 10 - 25~km grid step, in order to reproduce the inertial (Kolmogorov) turbulence range. The observational data require an averaging procedure to remove noise and potential instrumental artifacts. The resulting kinetic energy spectra show a good agreement between the simulations and observations, opening new perspectives for detailed joint analysis of more complex turbulent phenomena on the Sun, and possibly on other stars. In addition, using the simulations and observations, we investigate the effects of the background magnetic field, which is concentrated in self-organized complex structures in the intergranular lanes, and find that the magnetic field increases the turbulent energy on small scales and decreases it on larger scales.
|
Understanding and characterization of turbulent solar convection is a key problem of heliophysics and astrophysics. The solar turbulence driven by convective energy transport determines the dynamical state of the solar plasma, and leads to the excitation of acoustic waves \cite{kiti2011}, the formation of magnetic structures \cite{Brand2011,kiti2010b}, and other dynamical phenomena. Realistic numerical simulations of solar magnetoconvection are an important tool for understanding many observed phenomena, for the verification and validation of theoretical models, and for the interpretation of observations. Simulations of this type were started in the pioneering work of Stein and Nordlund \cite{stein1998}, with the main idea of constructing numerical models based on first physical principles. The `quiet Sun' describes a background state of the solar surface layers without sunspots and active regions, that is, without large-scale magnetic flux emergence and other strong magnetic field effects, which can significantly change the properties of the turbulent convection. Quiet-Sun regions are characterized by a weak mean magnetic field of 1 - 10~G, which is usually concentrated in small-scale flux tubes in the intergranular lanes, observed as bright points in molecular absorption lines. Previous investigations of the solar turbulent spectra from observations were presented by Abramenko {\it et al.} \cite{abramenko2001}, Goode {\it et al.} \cite{goode2010a}, Matsumoto and Kitai \cite{Matsumoto2010}, Rieutord {\it et al.} \cite{Rieutord2010}, Stenflo \cite{Stenflo2012}, and others. A comparison of observations with numerical simulation data, initially done by Stein and Nordlund \cite{stein1998}, showed a good agreement between correlation power spectra obtained from smoothed simulation data and high-resolution observations from La Palma. Such comparison of the results of realistic-type MHD modeling with high-resolution observations gives us an effective way of understanding observed phenomena. Recently, advanced computational capabilities have made it possible to construct numerical models of the solar turbulent convection with a high level of realism. On the other hand, modern high-resolution observational instruments with adaptive optics, such as the 1.6-m New Solar Telescope at the Big Bear Solar Observatory \cite{goode2010b}, have allowed us to capture the small-scale dynamics of the surface turbulence \cite{abramenko2011,goode2010a,yurch2011}. In this paper, we compare the turbulent kinetic energy spectra from observed and simulated data sets for the conditions of quiet-Sun regions, and investigate the properties of solar turbulence and background magnetic field effects. We use two types of data: 1) high-resolution observations of horizontal flows from the New Solar Telescope (NST/BBSO, \cite{goode2010a}), and 2) high-resolution 3D radiative MHD and hydrodynamic simulations \cite{kiti2012}. \begin{figure} \centerline{\includegraphics[width=1\textwidth]{kitiashvili_fig1.eps}} \caption{A quiet-Sun region observed in the TiO filter with the New Solar Telescope (NST) on August 3, 2010. Squares in panel {\it a}) show two subregions: subregion $A$, without magnetic bright points, and subregion $B$, with conglomerates of magnetic bright points concentrated in the intergranular lanes. In panel {\it b}), subregion $B$ is shown in detail, with the overplotted velocity field derived by a Local Correlation Tracking (LCT) method.}\label{fig:TiO} \end{figure}
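For reference, a bare-bones sketch of the correlation-tracking idea behind LCT is given below; it is our simplified illustration, not the pipeline actually applied to the NST data, and the cadence and pixel scale are placeholder values.
\begin{verbatim}
import numpy as np

def patch_shift(p0, p1):
    # Displacement (dy, dx), in pixels, that maximizes the cross-
    # correlation between two image patches (FFT-based, integer shift).
    f0 = np.fft.fft2(p0 - p0.mean())
    f1 = np.fft.fft2(p1 - p1.mean())
    cc = np.fft.ifft2(f0.conj() * f1).real
    k = np.array(np.unravel_index(np.argmax(cc), cc.shape))
    n = np.array(cc.shape)
    return (k + n // 2) % n - n // 2   # wrap to signed shifts

def lct_flow(frame0, frame1, win=16, step=8, dt=10.0, pix_km=25.0):
    # Horizontal velocity field (km/s) on a coarse grid of windows;
    # dt is the cadence (s) and pix_km the pixel scale (km), assumed.
    ny, nx = frame0.shape
    vy, vx = [], []
    for y in range(0, ny - win, step):
        row_vy, row_vx = [], []
        for x in range(0, nx - win, step):
            dy, dx = patch_shift(frame0[y:y+win, x:x+win],
                                 frame1[y:y+win, x:x+win])
            row_vy.append(dy * pix_km / dt)
            row_vx.append(dx * pix_km / dt)
        vy.append(row_vy)
        vx.append(row_vx)
    return np.array(vy), np.array(vx)
\end{verbatim}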
|
Investigation of solar convection is interesting from the point of view of the hydrodynamic turbulent properties of the highly stratified medium, and also for understanding and characterizing the effects of background magnetic fields on the turbulent energy transport between different scales. Recent numerical simulations have shown that the presence of a weak magnetic field can increase the level of nonlinearity and can have different effects at large and small scales, due to the increasing inhomogeneity of convective properties and the magnetic coupling of plasma motions. In particular, the decrease of the turbulent kinetic energy at sub-granular scales in the simulations with magnetic field (black curve, Fig.~\ref{fig:comp}{\it a}) can be caused by local suppression of turbulent motions near convective granular edges, where the magnetic field is collapsed into small-scale field concentrations ($\sim 1$~kG). A recent investigation of quiet-Sun data from the Hinode space mission showed a relatively high contribution of the collapsed field to the magnetic energy density distribution, with a maximum at the 80~km scale, and an increase of the magnetic energy density on granular scales (see histogram in Fig.~8 of \cite{Stenflo2012}). Thus, the comparison of the energy density spectra for the hydrodynamic and weakly magnetized convection at the solar surface in Figure~\ref{fig:comp}{\it a} shows a higher kinetic energy density on scales of less than 50~km in the presence of magnetic field, and the opposite on larger scales. Actually, a similar effect of the collapsed magnetic flux was found by Stenflo \cite{Stenflo2012}, but with an exponential decrease of the energy density on small scales. Thus, on the small scales (less than 50~km) the increase of the kinetic energy density reflects an interplay of the collapsing flux dynamics and, probably, a small-scale dynamo action. Perhaps the increase of the kinetic energy density on the small scales contributes to the quasi-periodic flow ejections into the solar atmosphere by small-scale vortex tubes, as discussed by Kitiashvili {\it et al.} \cite{kiti2012}. This potential relationship needs to be investigated. \begin{figure} \centerline{\includegraphics[width=0.95\textwidth]{kitiashvili_fig5.eps}} \caption{Effect of the background magnetic field (panel {\it a}) and comparison of the kinetic energy spectra for the simulations (with $B_{z0}=10$~G) and observations using the horizontal flow velocities reconstructed by the Local Correlation Tracking method from NST/BBSO observations (panel {\it b}).} \label{fig:comp} \end{figure} As discussed earlier, the data averaging allows us to filter out noise and make the data sets more homogeneous. However, increasing the averaging window size can also filter out short-lived features and cause smearing of granules. Therefore, the effect of averaging on the energy spectra, when most of the energy on the smallest scales is filtered out, is mostly a steepening of the spectral slope. Averaging over two or more minutes brings the slope of the energy spectrum into correspondence with the Kolmogorov power law ($k^{-5/3}$, \cite{Kolmogorov1941}). Such behavior of the energy spectra reflects Landau's `Kazan remark', famous in the turbulence literature \cite{Frisch1995}, in which Landau drew attention to the absence of localized small-scale turbulent fluctuations in the Kolmogorov theory. Only when such fluctuations are filtered out does the spectrum become of the Kolmogorov type, as happens in our case.
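A simplified recipe for the azimuthally integrated kinetic energy spectrum used in such comparisons is sketched below; it assumes a square, uniformly sampled velocity map, and the pixel scale is a placeholder value.
\begin{verbatim}
import numpy as np

def energy_spectrum(vx, vy, pix_km=25.0):
    # Azimuthally integrated kinetic energy spectrum E(k) of a 2D
    # horizontal velocity field; k is in cycles per km.
    n = vx.shape[0]
    e2d = 0.5 * (np.abs(np.fft.fft2(vx))**2
                 + np.abs(np.fft.fft2(vy))**2) / n**4
    k1d = np.fft.fftfreq(n, d=pix_km)
    kx, ky = np.meshgrid(k1d, k1d)
    kr = np.hypot(kx, ky)
    edges = k1d[1:n // 2]              # positive frequencies as bin edges
    e_k = np.array([e2d[(kr >= lo) & (kr < hi)].sum()
                    for lo, hi in zip(edges[:-1], edges[1:])])
    return edges[:-1], e_k

# An inertial (Kolmogorov) range appears as E(k) ~ k**(-5/3) in log-log.
\end{verbatim}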
Comparison of the kinetic energy spectra calculated from the observational and simulated data sets shows a higher contribution of flows with small wavenumbers in the simulations than in the observed data (Fig.~\ref{fig:comp}{\it b}). The extra power in the simulated data on these scales can come from the geometry of our numerical setup, in which convection is confined in a box with periodic boundary conditions in the lateral directions, which can cut off the energy transfer to larger-scale convective modes (e.g. through inverse cascades). Also, this deviation can be caused by an underestimation of the velocity magnitude due to the degraded spatial resolution of the LCT (Local Correlation Tracking) data analysis procedure. We have also analyzed the effects of the averaging procedure with different parameters (Table~1) on the resulting power spectra. The ensemble averaging with a minimal window size ($T_w=20$~s) and window shift ($T_s=10$~s, for 10~s cadence data) filters out most of the noise signal, and shows good qualitative and quantitative agreement with the simulated high-resolution data on scales of less than $\sim 150$~km (Fig.~\ref{fig:comp}{\it b}). In terms of the general properties of the energy spectra, the time-averaging in short bins (2 or 5 min) shows good qualitative agreement with the spectral profile obtained from the simulated data for all scales resolved in the observations. Such good qualitative agreement of the kinetic energy spectra between the simulations and the filtered observational data can also be due to the removal of additional observational artifacts (such as local uncorrelated deformations of images and other instrumental effects), which can have time scales of up to several minutes. The comparison of the energy spectra observed on the small scales (with wavenumbers larger than 30~Mm$^{-1}$) with the spectra calculated from the simulation data degraded to the observational resolution shows in all cases an increase of the energy density. For the investigation of magnetic field effects, we compared the kinetic energy spectra for the two selected regions (Fig.~\ref{fig:TiO}{\it a}), one of which (region $B$) was filled with magnetic bright points, while the other (region $A$) almost did not have these features. Comparison of the energy spectra of these regions shows almost identical behavior, with a smaller total energy for region $B$. Because the difference between the two spectra is mainly in the energy magnitude, we can conclude that there was no significant difference in the turbulent dynamics. Because the background magnetic field is present everywhere on the Sun, in order to get a clearer identification of magnetic effects we compare the spectra from the hydrodynamic and weakly magnetized surface turbulence simulations, and can see changes of the energy balance on different scales due to magnetic effects, namely: suppression of turbulent motions on the granular scales, caused by the accumulation of magnetic field concentrations in the intergranular lanes, and an increase of the kinetic energy density at large wavenumbers, probably due to small-scale dynamo action (Fig.~\ref{fig:comp}{\it a}).
| 12
| 6
|
1206.5300
|
1206
|
1206.2702_arXiv.txt
|
Multiple $\Lambda$CDM cosmology is studied in a way that is formally a classical analog of the Casimir effect. Such a cosmology corresponds to a time-dependent dark fluid model or, alternatively, to its scalar field presentation, and it is motivated by the string landscape picture. The future evolution of the several dark energy models constructed within the scheme is carefully investigated. It turns out to be almost always possible to choose the parameters in the models so that they match the most recent and accurate astronomical values. To this end, several universes are presented which mimic (multiple) $\Lambda$CDM cosmology but exhibit Little Rip, asymptotically de Sitter, or Type I, II, III, and IV finite-time singularity behavior in the far future, with disintegration of all bound objects in the cases of Big Rip, Little Rip and Pseudo-Rip cosmologies.
|
Astronomical observations indicate that our Universe is currently in an accelerated phase \cite{Dat}. This acceleration in the expansion rate of the observable cosmos is usually explained by introducing the so-called dark energy (for a recent review, see \cite{review}). In the most common models considered in the literature, dark energy comes from an ideal fluid with a specific equation of state (EoS), often exhibiting rather strange properties, such as a negative pressure and/or a negative entropy, as well as the fact that its action was invisible in the early universe while it dominates in our epoch. According to the latest observational data, dark energy currently accounts for some 73\% of the total mass-energy of the universe (see, for example, Ref.~\cite{Kowalski}). In an attempt to save General Relativity and, at the same time, explain the cosmic acceleration, one is led to conjecture some exotic dark fluids (although other variants are still being considered, see e.g. \cite{vari1}). Actually, General Relativity with an ideal fluid can be rewritten, in an equivalent way, as some modified gravity. Moreover, the introduction of a fluid with a complicated equation of state is to be seen as a phenomenological approach, since no explanation for the origin of such a dark fluid is usually available. However, the interesting possibility that the dark fluid origin could be related to some fundamental theory, such as string theory, opens new possibilities, through the sequence: string or M-theory is approximated by modified (super)gravity, which is finally observed as General Relativity with an exotic dark fluid. If such a conjecture were (even partially) true, one might expect that some string-related phenomena could be traceable in our dark energy universe. One celebrated stringy effect possibly related to the early universe comes from the string landscape (see, for instance, \cite{land}), which may lead to some observational consequences (see, e.g., \cite{mar}), since it could be responsible for the actual discrete mass spectrum of scalar and spinorial equations \cite{igor}. The EoS parameter $w_\mathrm{D}$ for dark energy is negative: \begin{equation} w_\mathrm{D}=p_\mathrm{D}/\rho_\mathrm{D}<0\, , \end{equation} where $\rho_\mathrm{D}$ is the dark energy density and $p_\mathrm{D}$ the pressure. Although astrophysical observations favor the standard $\Lambda$CDM cosmology, the uncertainties in the determination of the dark energy EoS parameter $w$ are still too large, namely $w=-1.04^{+0.09}_{-0.10}$, to determine without doubt which of the three cases, $w < -1$, $w = -1$, or $w >-1$, is actually realized in our universe \cite{PDP,Amman}. The phantom dark energy case, $w < -1$, is the most interesting but is poorly understood theoretically. A phantom field violates all four energy conditions and is unstable from the quantum field theoretical viewpoint, although it could still be stable in classical cosmology. Some observations hint at a possible crossing of the phantom divide in the near past or in the near future. A very unpleasant property of phantom dark energy is the appearance of a Big Rip future singularity \cite{kam}, where the scale factor becomes infinite at finite time in the future. A less dangerous future singularity, caused by phantom or quintessence dark energy, is the sudden (Type II) singularity \cite{barrow}, where the scale factor is finite at the Rip time.
Closer examination shows, however, that the condition $w<-1$ is not sufficient for a singularity to occur. First of all, a transient phantom cosmology is quite possible. Moreover, one can easily construct models where $w$ asymptotically tends to $-1$ and the energy density increases with time, or remains constant, but there is no finite-time future singularity; such models were extensively studied in Refs.~\cite{kam,Nojiri-3,barrow,Stefanic,Sahni:2002dx} (for a review, see \cite{review}, and for their classification, \cite{Nojiri-3}). A clear case is when the Hubble rate tends to a constant (a cosmological constant or asymptotically de Sitter space), which may also correspond to a Pseudo-Rip situation \cite{Frampton-3}. Also to be noted is the so-called Little Rip cosmology \cite{Frampton-2}, where the Hubble rate tends to infinity in the infinite future (for further details, see \cite{Frampton-3,LR}). The key point is that if $w$ approaches $-1$ quickly enough, then it is possible to have a model in which the time required for the singularity to appear is infinite, so that the singularity never forms in practice. Nevertheless, it can be shown that even in this case the disintegration of bound structures takes place, in a way similar to the Big Rip phenomenon. Such models are known as Little Rip, and they have both a fluid and a scalar field description \cite{Frampton-2,As1}. In the present paper we investigate a dark fluid model with a time-dependent EoS which can be considered as a simple classical analog of the string landscape \cite{land_odin}. The Casimir effect may lead to a similar picture, in which several vacuum states appear that can be realized with the help of the landscape. Moreover, we will study multiple $\Lambda$CDM cosmology as a classical analog of the Casimir effect (for a review see \cite{Caz1}). This cosmology is also motivated by the string landscape picture. We demonstrate that such multiple $\Lambda$CDM cosmology may lead to various types of future universe: not only the asymptotically de Sitter one, but also a Little Rip cosmology or a finite-time future singularity of any of the four known types \cite{Nojiri-3}. The equivalent description of multiple $\Lambda$CDM cosmology in terms of a scalar theory is also further developed.
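To make the Big Rip case concrete: for a flat FRW universe dominated by a fluid with constant $w<-1$, the Friedmann and continuity equations combine into $\dot{H}=-\frac{3}{2}(1+w)H^2$, whose solution diverges at the finite time $t_{\rm rip}-t_0 = 2/\left[3|1+w|H_0\right]$. A minimal numerical illustration (the parameter values are ours, chosen only for demonstration):
\begin{verbatim}
w = -1.1                    # constant phantom EoS parameter (illustrative)
H0 = 0.0715                 # Hubble rate in 1/Gyr (about 70 km/s/Mpc)
t_rip = 2.0 / (3.0 * abs(1.0 + w) * H0)   # analytic rip time, ~93 Gyr

# Euler integration of dH/dt = -(3/2)(1+w) H^2 until H blows up
t, H, dt = 0.0, H0, 1e-3
while H < 1e6 * H0:
    H += -1.5 * (1.0 + w) * H**2 * dt
    t += dt
print("numerical divergence near t = %.1f Gyr, analytic %.1f Gyr" % (t, t_rip))
\end{verbatim}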
|
We have built in this paper several dark energy models, with a time-dependent equation of state, which can be viewed as simple classical analogs of the string landscape. The possible (simultaneous) existence of several cosmological constants can be interpreted as the presence of several vacuum states one has to choose from, which could bring into play Casimir effect considerations. Their simultaneous occurrence may indicate a future transition to a $\Lambda$CDM epoch with a different value for the effective cosmological constant. It is very interesting to realize that the freedom we actually have in those models allows us in many cases, on top of providing a reasonable description of the different epochs of the universe evolution, to also adjust for their right behavior in the far future: the universe turns out to be (asymptotically) de Sitter, or exhibits one of the four types of finite-time future singularities, or shows Little Rip behavior. Moreover, up to some exceptions, it is possible to choose the parameters so that they match the astronomical data, providing a very realistic description of $\Lambda$CDM cosmology. This is not difficult to do by assuming that, at the current moment of its evolution, the universe is in a phase corresponding to a given effective cosmological constant. Remarkably, the different models, which correspond to different cosmological constants, could coexist at the same moment, which definitely hints at an intriguing classical analogy with the cosmological landscape picture. From another viewpoint, the rich structure of the cosmological (singular) behavior of the models under discussion indicates that similar phenomena could perhaps be typical in the string landscape. The important lesson to be taken from the current investigation is that, even if our universe may look like one described with the help of an effective cosmological constant, its finite-time future may be singular, so that its evolution might effectively come to an end. This opens the problem of the interpretation of the more precise observational data to come, which should be tailored with the specific purpose of understanding which future is favored by the cosmological bounds these data will undoubtedly impose.
| 12
| 6
|
1206.2702
|
1206
|
1206.2644_arXiv.txt
|
The first direct detection limits on dark matter in the MeV to GeV mass range are presented, using XENON10 data. Such light dark matter can scatter with electrons, causing ionization of atoms in a detector target material and leading to single- or few-electron events. We use 15 kg-days of data acquired in 2006 to set limits on the dark-matter--electron scattering cross section. The strongest bound is obtained at 100\,MeV where $\sigma_e < 3\times 10^{-38} \unit{cm^2}$ at 90\% CL, while dark matter masses between 20\,MeV and 1\,GeV are bounded by $\sigma_e < 10^{-37}\unit{cm^2}$ at 90\% CL. This analysis provides a first proof-of-principle that direct detection experiments can be sensitive to dark matter candidates with masses well below the GeV scale.
|
The results above demonstrate, for the first time, the ability of direct detection experiments to probe DM masses far below a GeV. It is encouraging that with only 15\,kg-days of data, and no attempt to control single-electron backgrounds, the \xenon experiment places meaningful bounds down to masses of a few MeV. It should be emphasized that this analysis lacks the ability to distinguish signal from background. One promising method for such discrimination is the expected annual modulation of the signal. As discussed in~\cite{Essig:2011nj}, additional discrimination may be possible via the collection of individual photons, phonons~\cite{Formaggio:2011jt}, or ions, although at present such technologies have yet to be established. Independently, this type of search could be significantly improved with a better understanding of few-electron backgrounds. A quantitative background estimate was not made in~\cite{Angle:2011th}, making background subtraction impossible. Single-electron ionization signals have been studied, and potential causes discussed, by XENON10~\cite{Aprile:2010bt}, ZEPLIN-II~\cite{2008edwards}, and ZEPLIN-III~\cite{Santos:2011ju}. Possible sources include photo-dissociation of negatively charged impurities, spontaneous emission of electrons that have become trapped in the potential barrier at the liquid-gas interface, and field emission in the region of the cathode. The former two processes would not be expected to produce true two- or three-electron events, although single-electron events may overlap in time, giving the appearance of an isolated, double-electron event. With a dedicated study, these backgrounds could be quantitatively estimated and reduced. With larger targets and longer exposure times, ongoing and upcoming direct detection experiments such as XENON100, XENON1T, LUX, and CDMS should be able to improve on the sensitivity reported here. Such improvements may require optimization of the triggering thresholds, and will strongly benefit from additional studies of the backgrounds. \vskip 0.2mm \begin{center} {\bf Acknowledgements} \end{center} \vskip -1mm R.E.~acknowledges support from NSF grant PHY-0969739. J.M.~is supported by a Simons Postdoctoral Fellowship. T.V.~is supported in part by a grant from the Israel Science Foundation, the US-Israel Binational Science Foundation and the EU-FP7 Marie Curie, CIG fellowship. \vskip -5mm
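For orientation, limits of the kind quoted above follow the standard Poisson counting prescription: the 90\% CL upper limit $\mu_{90}$ on the expected signal satisfies $P(\leq N_{\rm obs}\,|\,\mu_{90})=0.10$, and scales into a cross-section limit through the predicted event yield. A generic sketch (the numbers are placeholders, not values from this analysis):
\begin{verbatim}
from scipy.stats import poisson
from scipy.optimize import brentq

def poisson_upper_limit(n_obs, cl=0.90):
    """Upper limit on a Poisson mean, with no background subtraction."""
    f = lambda mu: poisson.cdf(n_obs, mu) - (1.0 - cl)
    return brentq(f, 1e-9, 100.0 + 10.0 * n_obs)

n_obs = 23                        # placeholder event count in some bin
mu90 = poisson_upper_limit(n_obs)
sigma_ref, n_pred = 1e-37, 60.0   # placeholder reference cross section/yield
print(mu90, sigma_ref * mu90 / n_pred)   # limit scales linearly with mu90
\end{verbatim}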
| 12
| 6
|
1206.2644
|
|
1206
|
1206.0477_arXiv.txt
|
{} {In order to study the acceleration and propagation of bremsstrahlung-producing electrons in solar flares, we analyze the evolution of the flare loop size with respect to energy at a variety of times. A GOES M3.7 loop-structured flare starting around 23:55 on 2002~April~14 is studied in detail using \textit{Ramaty High Energy Solar Spectroscopic Imager} (\textit{RHESSI}) observations.} {We construct photon and mean-electron-flux maps in 2-keV energy bins by processing observationally-deduced photon and electron visibilities, respectively, through several image-processing methods: a visibility-based forward-fit (FWD) algorithm, a maximum entropy (MEM) procedure and the uv-smooth (UVS) approach. We estimate the sizes of elongated flares (i.e., the length and width of flaring loops) by calculating the second normalized moments of the intensity in any given map. Employing a collisional model with an extended acceleration region, we fit the loop lengths as a function of energy in both the photon and electron domains.} {The resulting fitting parameters allow us to estimate the extent of the acceleration region which is between $\sim 13~\rm{arcsec}$ and $\sim 19~\rm{arcsec}$. Both forward-fit and uv-smooth algorithms provide substantially similar results with a systematically better fit in the electron domain.} {The consistency of the estimates from these methods provides strong support that the model can reliably determine geometric parameters of the acceleration region. The acceleration region is estimated to be a substantial fraction ($\sim 1/2$) of the loop extent, indicating that this dense flaring loop incorporates both acceleration and transport of electrons, with concurrent thick-target bremsstrahlung emission.}
|
Solar flares are known to produce large quantities of accelerated particles, in particular electrons in the deka-keV to deci-MeV range. However, the location and physical properties of the acceleration region are yet to be well constrained. An intrinsic complication is that the radiation produced by energetic particles emanates not only from the acceleration region itself, but also from other locations in the flare into which the accelerated particles propagate. Indeed, the oft-used ``thick-target'' model \citep{brown71} exploits this very complication by deriving properties of the hard X-ray emission that are completely independent of the location, extent, or physical properties of the acceleration region. Hence, determining the properties of the acceleration region from spatially-integrated observations of flare emission is not straightforward. The reader is referred to recent reviews on electron properties inferred from hard X-rays \citep{kontar2011review} and their implications for electron transport \citep{holman2011review}. With the availability of high-quality hard X-ray imaging spectroscopy data from the \textit{RHESSI} instrument \citep{linetal02}, the situation has much improved \citep[see, e.g.,][]{emslie2003rhx}. Higher-energy electrons are able to propagate farther from the acceleration region and hence produce hard X-ray emission over a greater spatial extent than is seen in lower-energy bands. \citet{xuetal08} and \citet{kohabi11} analyzed a set of events characterized by simple coronal flare loop sources located near the solar limb. In order to determine the spatial properties of the flare loops, they fitted the {\textit{RHESSI}} visibilities with the geometric parameters of the loops and determined the size of the acceleration regions by fitting the source extents as a function of photon energy with a collisional acceleration and propagation model. The present paper extends this kind of analysis. Specifically, for the simple coronal loop event observed by \textit{RHESSI} on 2002~April~14, we study the variation of source extent with energy not only in the {\textit{photon}} energy domain, but also, for the first time, in the {\textit{electron}} domain, using the procedure for generating weighted mean electron flux maps first enunciated by \cite{pianaetal07}. This extension to the electron domain not only admits a simpler description of the source size with energy $E$, but also allows us to exploit the ``rectangular'' nature of the spectral inversion process (in bremsstrahlung, photons with a given energy $\epsilon$ are produced by electrons of all higher energies $E \geq \epsilon$), making it possible to reconstruct electron maps at energies higher than the maximum observed photon energy \citep[see][]{kontar2004}. Moreover, the regularization algorithm used to invert count visibilities into electron visibilities introduces a natural smoothing over energy $E$, which leads to a smoother behavior of source size with energy and thus a more reliable estimate of pertinent parameters such as the acceleration region length. For a given time interval, we use both photon and electron maps to examine the form of the variation of loop length with energy. The analysis employs three different visibility-based imaging algorithms for both photon and electron maps: visibility forward-fit \citep{schmahl2007}, maximum entropy \citep{cornwell1985, bong2006}, and uv-smooth \citep{massone2009}.
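For reference, the relation in question can be written schematically (our notation, following \citealt{brown71} and \citealt{pianaetal07}) as \begin{equation} I(\epsilon) = \frac{\bar{n} V}{4\pi R^2} \int_{\epsilon}^{\infty} \bar{F}(E)\, Q(\epsilon,E)\, dE , \end{equation} where $\bar{F}(E)$ is the mean electron flux spectrum in a source of mean density $\bar{n}$ and volume $V$, $R$ is the Sun-Earth distance, and $Q(\epsilon,E)$ is the bremsstrahlung cross section, which vanishes for $E<\epsilon$; it is this lower integration limit that gives the inversion its ``rectangular'' character.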
In the visibility-forward-fit procedure, the loop sizes are determined as model parameters. For the other methods, we use the standard deviation -- the square root of the second normalized moment -- of the intensity map as a measure of the loop half-length. We then use the "tenuous acceleration region" model of \cite{xuetal08} to derive the longitudinal and lateral extents of the acceleration site, as sketched below. In Section~\ref{visibilities}, we describe the inversion algorithms used to derive electron flux visibilities and the imaging techniques employed to create the corresponding electron maps for a given time interval and electron energy range. In Section~\ref{data}, we fit the variation of loop size with electron energy $E$ to a simple parametric model \citep{xuetal08} in order to determine the longitudinal and lateral extents of the acceleration region. In Section~\ref{result}, we compare the values obtained through different imaging techniques and from different map domains (photon and electron). The inferred length of the acceleration region ($\sim 15$~arcsec) is approximately half the total length of the loop. This suggests that the standard model of solar flares, in which electrons are initially accelerated at a reconnection site near/above the loop top \citep[e.g.,][]{kopp1976}, is not appropriate for certain types of flares. In such flares, the acceleration instead takes place over a large region inside the flare loop.
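A minimal sketch of the moment-based size estimate and the quadratic fit used in this class of analyses (assuming the map has been rotated so the loop axis lies along $x$; names are illustrative, not from the actual analysis software):
\begin{verbatim}
import numpy as np

def loop_half_length(image, dx=1.0):
    """Loop half-length from the second normalized moment along the axis.

    image : 2D intensity map (photon or electron), loop axis along x
    dx    : pixel size in arcsec
    """
    profile = image.sum(axis=0)            # collapse across the loop width
    x = np.arange(profile.size) * dx
    mean = (x * profile).sum() / profile.sum()
    var = ((x - mean)**2 * profile).sum() / profile.sum()
    return np.sqrt(var)                    # standard deviation, in arcsec

def fit_quadratic_growth(E, L):
    """Least-squares fit of L(E) = L0 + a*E**2 (collisional-model form)."""
    A = np.vstack([np.ones_like(E), E**2]).T
    (L0, a), *_ = np.linalg.lstsq(A, L, rcond=None)
    return L0, a   # L0 sets the acceleration-region extent
\end{verbatim}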
|
We have analyzed an extended coronal hard X-ray source in the event of 2002~April~14 using photon maps constructed from count visibilities (two-dimensional spatial Fourier transforms of the source geometry). Furthermore, using a regularized spectral inversion technique that generates electron visibilities, we have studied the dependence of the electron flux image on electron energy $E$. The source lengths derived from both photon and electron maps generally grow quadratically with energy, in agreement with a collisional model involving an extended acceleration region \citep{xuetal08, kohabi11,bian2011}. Fitting this model allows estimation of the length and volume of the acceleration region: $\sim 15$ arcsec and $\sim600$ arcsec$^3$, respectively. We compare the results obtained by different algorithms (FWD and UVS) and in both the photon and electron visibility domains. The plausible and consistent estimates of the acceleration-region lengths and widths strongly suggest that the proposed model is reliable. The systematic change of the behavior of loop length with time indicates that the loop density may increase throughout the event, due to chromospheric "evaporation" \citep{acton1982}. The inclusion of a thermal component in the current model is required for a comprehensive description of the chromospheric evaporation process. Detailed and statistical studies of more dense-loop flares will be carried out in future work, in which we will also investigate other properties of the acceleration region, e.g., the loop density and its evolution with time.
| 12
| 6
|
1206.0477
|
1206
|
1206.6067_arXiv.txt
|
We report the first \chandra\ detection of emission out to the virial radius in the cluster Abell~1835 at $z=0.253$. Our analysis of the soft X-ray surface brightness shows that emission is present out to a radial distance of 10~arcmin or 2.4~Mpc, and the temperature profile has a factor of ten drop from the peak temperature of 10~keV to the value at the virial radius. We model the \chandra\ data from the core to the virial radius and show that the steep temperature profile is not compatible with hydrostatic equilibrium of the hot gas, and that the gas is convectively unstable at the outskirts. A possible interpretation of the \chandra\ data is the presence of a second phase of \emph{warm-hot} gas near the cluster's virial radius that is not in hydrostatic equilibrium with the cluster's potential. The observations are also consistent with an alternative scenario in which the gas is significantly clumped at large radii.
|
The large-scale halo of hot gas provides a unique way to measure the baryonic and gravitational mass of galaxy clusters. The baryonic mass can be measured directly from observations of the hot X-ray emitting intra-cluster medium (ICM) and of the associated stellar component \citep[e.g.][]{giodini2009,gonzales2007}, while measurements of the gravitational mass require the assumption of hydrostatic equilibrium between the gas and the dark matter. Cluster cores are subject to a variety of non-gravitational heating and cooling processes that may result in deviations from hydrostatic equilibrium; in regions beyond the core, the ICM is expected to be in hydrostatic equilibrium with the dark matter potential. At the outskirts, the low density of the ICM and the proximity to the sources of accretion result in the onset of new physical processes, such as departures from hydrostatic equilibrium \citep[e.g.,][]{lau2009}, clumping of the gas \citep{simionescu2011}, different temperatures for electrons and ions \citep[e.g.,][]{akamatsu2011}, and flattening of the entropy profile \citep{sato2012}, leading to possible sources of systematic uncertainty in the measurement of masses. The detection of hot gas at large radii is limited primarily by its intrinsically low surface brightness, by uncertainties associated with the subtraction of background (and foreground) emission, and by the ability to remove contamination from compact sources unrelated to the cluster. Thanks to its low detector background, \suzaku\ provided measurements of ICM temperatures to \rtwo\ and beyond for a few nearby clusters \citep[e.g.][]{akamatsu2011,walker2012a,walker2012b,simionescu2011,burns2010,kawaharada2010, bautz2009,george2009}; to date \a1835\ has not been the target of a \suzaku\ observation. In this paper we report the \chandra\ detection of X-ray emission in \a1835\ beyond \rtwo, using three observations for a total of 193~ksec exposure time, extending the analysis of these \chandra\ data performed by \cite{sanders2010}. The radius $r_{\Delta}$ is defined as the radius within which the average mass density is $\Delta$ times the critical density of the universe at the cluster's redshift for our choice of cosmological parameters. The virial radius of a cluster is defined as the equilibrium radius of the collapsed halo, approximately equivalent to one half of its turnaround radius \cite[e.g.][]{lacey1993, eke1998}. For an $\Omega_{\Lambda}$-dominated universe, the virial radius is approximately $r_{100}$ \citep[e.g.][]{eke1998}. \a1835\ is the most luminous cluster in the \cite{dahle2006} sample of clusters at $z=0.15-0.3$ selected from the \emph{Bright Cluster Survey}. The combination of high luminosity and the availability of deep \chandra\ observations with a local background makes \a1835\ an ideal candidate for a study of its emission out to the virial radius. \a1835\ has a redshift of $z=0.253$, which for an $H_{0}=70.2$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\Lambda}=0.73$, $\Omega_M=0.27$ cosmology \citep{komatsu2011} corresponds to an angular-size distance of $D_A=816.3$~Mpc and a scale of 237.48 kpc per arcmin.
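The quoted distance and scale can be reproduced with a standard cosmology calculator, e.g. (a cross-check, not the code used for the analysis):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.2, Om0=0.27)    # flat, so Omega_Lambda ~ 0.73
z = 0.253
print(cosmo.angular_diameter_distance(z))   # ~816 Mpc
print(cosmo.kpc_proper_per_arcmin(z))       # ~237.5 kpc per arcmin
\end{verbatim}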
|
In this paper we have reported the detection of emission from \a1835\ with \chandra\ out to the cluster's virial radius. The cluster's surface brightness is significantly above the background level out to a radius of approximately 10 arcminutes, which corresponds to $\sim$2.4 Mpc at the cluster's redshift. We have investigated several sources of systematic error in the background subtraction process, and determined that the significance of the detection in the outer region (450-600") is $\geq 4.7$~$\sigma$, so the emission cannot be explained by fluctuations in the background. Detection out to the virial radius is also implied by the \xmm\ temperature profile reported by \cite{snowden2008}. The superior angular resolution of \chandra\ made it straightforward to identify and subtract sources of X-ray emission that are unrelated to the cluster. In addition to a large number of point sources, we have identified X-ray emission from two low-mass clusters that were selected from the SDSS data, MAXBCG J210.31728+02.75364 \citep{koester2007} and WHL J140031.8+025443 \citep{wen2009}. The two clusters have photometric and spectroscopic redshifts that make them likely to be associated with \a1835. These are the only two SDSS-selected clusters in the vicinity of \a1835. The outer regions of the \a1835\ cluster show a sharp drop in the temperature profile, a factor of about ten from the peak temperature. The sharp drop in temperature implies that the hot gas cannot be in hydrostatic equilibrium, and that it would be convectively unstable. A possible scenario to explain the observations is the presence of \emph{warm-hot} gas near the virial radius that is not in hydrostatic equilibrium with the cluster's potential, with a mass budget comparable to that of the entire ICM. The data are also consistent with an alternative scenario in which significant clumping of the gas at large radii is responsible for the apparent negative gradients of the mass and entropy profiles at large radii.
| 12
| 6
|
1206.6067
|
1206
|
1206.4915_arXiv.txt
|
In early October 2008, the Soft Gamma Repeater \sgr~(\esrc, \ax, \psr) became active, emitting a series of bursts which triggered the \fe~Gamma-ray Burst Monitor (GBM), after which a second, especially intense activity period commenced in 2009 January, and a third, less active period was detected in 2009 March-April. Here we analyze the GBM data of all the bursts from the first and last active episodes. We performed temporal and spectral analysis for all events and found that their temporal characteristics are very similar to those of other SGR bursts, as well as to those reported for the bursts of the main episode (average burst durations $\sim170$\,ms). In addition, we used our sample of bursts to quantify the systematic uncertainty of the GBM location algorithm for soft gamma-ray transients as $\lesssim 8\degrees$. Our spectral analysis indicates significant spectral evolution between the first and last set of events. While the 2008 October events are best fit with a single blackbody function, for the 2009 bursts an Optically Thin Thermal Bremsstrahlung (OTTB) model is clearly preferred. We attribute this evolution to changes in the magnetic field topology of the source, possibly due to effects following the very energetic main bursting episode.
|
Soft Gamma Repeaters (SGRs) together with Anomalous X-ray Pulsars (AXPs) comprise a small group of X-ray pulsars with many observational similarities. They have slow spin periods clustered in a narrow range ($P\sim$ $2-12$\,s) and relatively large period derivatives ($\dot P \sim 10^{-13}-10^{-10}$\,s\,s$^{-1}$). Their inferred surface dipole magnetic fields of $10^{14}-10^{15}$\,G (\citealt{kou98,kou99}) place these sources at the extreme end of the distribution of magnetic fields in astrophysical objects. Such objects were predicted theoretically by \citet{duncan92}, \citet{bohdan92}, and by \citet{usov92}; the former team named such high $B-$field sources ``magnetars''. Phenomenologically, a distinct difference between AXPs and SGRs is manifested in their bursting activity. SGRs have been observed to undergo active episodes with a multitude of short, soft bursts with energies upwards of $10^{36}$ erg, reaching over $10^{45}$ erg in the case of the very rare Giant Flares. AXPs, on the other hand, are not such prolific and energetic bursters. As a result, SGRs were historically discovered through their burst activity, while AXPs were distinguished from rotation-powered X-ray pulsars by their timing properties, without bursts. This changed in 2002, when \cite{gavriil02} discovered bursts from the AXP\,1E1048.1$-$5937; today almost all AXPs have also been shown to emit SGR-like bursts, albeit fainter ones. As a group, magnetars are persistent X-ray emitters with X-ray luminosities ranging between $10^{32}-10^{36}$\,\ergs, larger than can be supplied by rotational energy losses, supporting the hypothesis that magnetic field dissipation powers their X-ray emission. Alternative models, such as accretion from a fossil supernova fall-back disk, have also been invoked to explain the magnetar emission; for comprehensive reviews see \cite{woo06} and \cite{mer08} and references therein. \sgr~is a source that has undergone multiple identity changes. The source was discovered in 1980 with the \einst~observatory (\einst~source: \esrc) during a search for X-ray counterparts of {\it COS-B} unidentified $\gamma$-ray sources \citep{lam81}. The Advanced Satellite for Cosmology and Astrophysics (\asca) confirmed the detection during a Galactic Plane survey in 1998 (\asca~source: AX J155052-5418, \citealt{sug01}). \cite{gel07} proposed, on the basis of {\it XMM-Newton} and {\it Chandra} X-ray Observatory ({\it CXO}) observations in 2004 and 2006, a potential magnetar/SNR association between this X-ray source and the Galactic radio shell G$327.24-0.13$. The location in the center of the SNR candidate, the relatively soft X-ray spectrum, and the X-ray variability favored a magnetar interpretation; however, the {\it XMM} observation only set an upper limit on the peak-to-peak pulsed fraction. The crucial piece of evidence was finally provided by radio observations (PSR\,J$1550 - 5418$; \citealt{cam07}). Their measurement of the spin period of 2.07 s and period derivative of $2.3 \times 10^{-11}$ s s$^{-1}$ led to an estimate for the surface magnetic dipole field of $B \sim 2.2 \times 10^{14}$~G, which confirmed the magnetar nature of the source. Due to the lack of bursting activity, the source was initially characterized as an AXP \citep{cam07}. However, in 2008/2009 the source entered an extremely active period, emitting a plethora of SGR-like bursts in three active episodes, of which the first and the last were the least prolific.
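The dipole field quoted above follows from the standard magnetic-dipole spin-down estimate $B \simeq 3.2\times10^{19}\sqrt{P\dot{P}}$~G; a one-line numerical check (illustrative only):
\begin{verbatim}
P, Pdot = 2.07, 2.3e-11         # spin period (s) and its derivative (s/s)
B = 3.2e19 * (P * Pdot)**0.5    # standard dipole spin-down estimate, gauss
print("B ~ %.1e G" % B)         # ~2.2e14 G
\end{verbatim}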
Based on the main episode behavior, during which several hundred bursts were emitted within a 24-hour period, similar to the burst ``storms'' of SGRs $1627-41$ \citep{esposito08}, SGR $1900+14$ \citep{gogus99,israel08} and SGR $1806-20$ \citep{gogus00}, the source was renamed \sgr~(\citealt{pal09}, \citealt{kou09}). A detailed study of the 2008 October \swift~X-ray bursts from SGR\,J$1550-5418$ is presented in \cite{isr10}. A detailed analysis of $\sim200$ bursts detected within a few hours on January 22 with the International Gamma-Ray Astrophysics Laboratory (\INT) is presented in \cite{mer09} and \cite{sav10}. Finally, \cite{ter09} present the analysis of over 250 bursts (50 keV to 5 MeV) detected with the {\it Suzaku}/Wide-band All-sky Monitor (WAM), while the observations of \swift/BAT, Konus-Wind, and {\it RHESSI} are published by \cite{gro09}, \cite{gol09}, and \cite{bel09}, respectively. \chandra~and {\it RXTE} observations, performed after the 2008 and 2009 outbursts, provided evidence of a decoupling between magnetar spin and radiative properties \citep{ng11}. The pulsar spin-down was observed to increase by a factor of 2.2 during the first 2008 event, in the absence of a corresponding spectral change. During the more energetic 2009 events no such variation was found. A comprehensive analysis by \cite{ber11} of multi-instrument X-ray data recorded since the 1980 discovery of the source showed that the X-ray flux history can be grouped into three levels: low, intermediate and high. The observed persistent spectra harden when transitioning from the low to the high state, with the power-law component becoming flatter and the temperature of the blackbody increasing. During the high flux state the pulsed fraction decreases with energy and shows an anti-correlation with the X-ray flux. The Gamma-ray Burst Monitor (GBM) onboard {\it Fermi} detected all three active episodes of the source. A detailed temporal and (integrated) spectral study of the main (second) burst episode (2009 January 22 to 29), comprising 286 bursts, was published by \cite{AvdH12}. A search in the \fe/LAT data in the $0.1-10$ GeV energy range did not reveal significant gamma-ray emission \citep{abd09}. During this episode, the first GBM trigger on 2009 January 22 showed $\sim 150$~s of persistent emission with intriguing timing and spectral properties. \cite{kan10} identified coherent pulsations up to $\sim 110$~keV at the spin period of the neutron star, and an additional (to a power-law) blackbody component was required to model the enhanced emission spectra. The favored emission scenario is a surface hot spot with the dimensions of the magnetically confined plasma near the neutron star surface (roughly a few $\times 10^{-5}$ of the neutron star area). \cite{tie10} derived two best-fit source distances of 3.9~kpc and $\sim 5$~kpc, using the X-ray rings around \sgr~produced by dust scattering. Here we adopt a distance of 5~kpc. Such extensive burst activity, with hundreds of bursts within a few days, has only been observed from three other sources in the past: SGRs\,$1806-20, 1900+14$ and $1627-41$. However, lower-level burst activity prior to the main bursting episode was only seen from SGRs\,$1900+14$ and $1806-20$. Detailed spectral analyses of the bursts from these sources showed that the spectral properties of the earlier bursts were not distinct from those occurring during their peak activity episodes (G{\"o}{\u g}{\"u}{\c s} et al. 1999, 2000).
In contrast, for SGR\,J$1550-5418$ the ``before'' and ``after'' burst episodes (2008 October and 2009 March -- April) were significantly different from the main ``storm'' of activity: the burst rates and event intensities were much lower. We report here our spectral analysis results for the first and last bursting episodes of \sgr, in which we see for the first time a clear distinction between the spectral shapes of events detected before and after the main active episode of the source. We therefore find an intriguing new type of behavior that has not been observed before in other SGRs. In \S\ref{sec:obs} we present an overview of the source burst activity as seen with GBM, while \S\ref{sec:analysis} presents a detailed temporal and spectral analysis of the bursts. Since this is the first soft transient source detected with GBM, we also discuss the GBM location accuracy for soft sources. We discuss our results, and in particular the implications of the spectral differences between the two episodes, in \S\ref{sec:discussion}.
|
\label{sec:discussion} Our main new finding in this work is that the observed GBM spectrum of the bursts in the energy range $8-200\;$keV changed from being BB-like in the 2008 October active period to a broader, steeper OTTB-like form with less curvature during the 2009 March activity episode (see Table~\ref{tbl-2}). In particular, the photon index below the peak, $\lambda$, defined through $dN/dE \propto E^\lambda$, changed from $\lambda \sim 1$ during 2008 October to $\lambda\sim -1$ during 2009 March. In this context, it is interesting to note that during the most active bursting period of \sgr, in 2009 January, the GBM burst spectra were typically not well fit by a single BB spectrum, but were instead equally well fit by an OTTB, Comptonized or BB+BB spectrum \citep{AvdH12}. When Swift/XRT data are included in the spectral fits of those 2009 January bursts for which they are available, however, a BB+BB spectrum is usually preferred \citep{Lin12}, in which the cool BB component has a factor of $\sim 2$ or so smaller fluence than the hot BB component (interestingly, when fit with a Comptonized spectrum, the implied values of $\lambda$ typically ranged between $\sim -1$ and $\sim 0$). The 2008 October bursts are best fit by a BB component of temperature $\sim 11-14\;$keV and effective area $\sim 0.2-2\;$km$^2$, similar to the hot BB component found for the 2009 January bursts. It is possible that a cool BB component could still be present in the 2008 October bursts. If we assume a cool-BB to hot-BB flux ratio similar to that of the 2009 January bursts, then in order to avoid clear detection in the GBM spectra presented here the cool BB temperature typically needs to be below a few keV, which would correspond to an effective area of $\gtrsim 10^3\;$km$^2$. In this scenario both the hot and cold BB components in the 2008 October bursts would be near the cool end of the corresponding components from the BB+BB spectral fits for the 2009 January bursts. The differences between the spectroscopy of the bursts studied here and that of the bursts in 2009 January clearly indicate evolutionary patterns on a timescale of a few months or so. This behavior might arise, for example, from a change in the details of the energy release or containment between different episodes. The rather small effective areas we obtain for the BB spectrum ($\sim 0.2-2\;$km$^2$) imply a small emission region, which is therefore probably close to the NS surface. Such proximity to the surface would tend to result in a relatively high effective opacity, due to large plasma densities. The opacity is mainly controlled by the scattering cross section of the ordinary or O-mode photons, which is near the Thomson value. O-mode photons are those whose electric field vector lies in the plane defined by their momenta {\bf k} and the local magnetic field. Photons in the extra-ordinary polarization mode (or E-mode, where the photon electric field vector is normal to the local {\bf k}--{\bf B} plane) experience a dramatically reduced scattering cross section, suppressed because the photon energy is typically far below the cyclotron energy (e.g., \citealt{her79}). These E-mode photons contribute little to the overall opacity, which, being high, results in a local quasi-thermodynamic equilibrium (i.e., LTE) that should drive the spectrum towards a BB or BB+BB form.
Radiative transfer effects in a strong magnetic field ($B\gg B_{\rm QED}$) can cause a deviation from a pure BB spectrum (e.g., see \citealt{ulmer1994,lyubarski2002}) due to the increase in E-mode opacity with photon energy, which enables lower-energy (E-mode) photons to escape from a larger depth within the emission region, where the temperature (established by the O-mode photons) is higher, thus resulting in a softer-than-thermal spectral slope below the peak, where $dN/dE\propto E^\lambda$ with $\lambda \sim 0$. Somewhat harder low-energy spectral slopes might still be possible, e.g., due to resonant ion cyclotron absorption, which should be sensitive to the local value of the magnetic field. In order for such absorption to reach sufficiently high photon energies, a rather high local value of the magnetic field is required in the emission region ($\gtrsim 10^{15}\;$G), which for \sgr~is in excess of the surface dipole magnetic field strength (of $\sim 2.2\times 10^{14}\;$G) inferred from its $P\dot{P}$. This naturally suggests higher multipoles (e.g., \citealt{tho02}). A strong local magnetic field from quadrupole and higher multipole configurations could effectively confine the hot emitting plasma in relatively small closed magnetic flux tubes near the stellar surface. This is consistent with the small effective areas inferred from our spectroscopic analysis.\footnote{Some hint of the presence of higher multipoles might be found in the irregular shape of the pulse profiles of \sgr, which change as a function of energy and time (e.g., \citealt{kan10,Lin12}), but this is by no means conclusive.} Thus, the spectral slope below the peak energy might be related to the local field strength and topology near the magnetic dissipation region that gives rise to the bursting episode. This might potentially be the factor that is common to different bursts within the same active period but varies between different activity episodes, i.e., the field topology evolves significantly on timescales of a month or so. It is also possible that the trigger from the crustal regions may relocate to different colatitudes during this evolution, thereby precipitating a sampling of disparate field topologies within the magnetospheric dissipation zones involved. The broader OTTB-like spectra of the 2009 March bursts might reflect a Comptonized spectrum from an emitting region with a modest optical depth. As discussed, e.g., in \cite{Lin11,Lin12} \citep{1979rpa..book.....R}, the simplest form of such a model yields $\lambda = 1/2-\sqrt{9/4-4/y_B}$, where $y_B= 4kT_e/(m_ec^2)\max[\tau_B,\tau_B^2]$ is the magnetic Compton y-parameter, $\max[\tau_B,\tau_B^2]$ is the mean number of scatterings per photon by the hot electrons, and $\tau_B$ is the effective optical depth for scattering, which in our case is significantly modified by the strong magnetic field (and is thus dubbed the magnetic optical depth).
For \sgr, the inferred peak energies of the 2009 March bursts, typically $E_{\rm peak}\sim 30-45\;$keV, suggest $4kT_e/(m_ec^2) \sim 0.23-0.35$, which would imply $\lambda \sim -0.8$ and $\sim -0.95$ for $\tau_B \sim 5$ and $\sim 10$, respectively.\footnote{Note that relativistic corrections for such large temperatures, such as Klein-Nishina reductions, only influence these inferred indices to a modest extent; see \cite{Lin11} and references therein.} Such modest values of $\tau_B$ would result in $y_B\gg 1$, so that $\lambda$ approaches the value of $-1$, approximately coinciding with the low-energy slope of the inferred OTTB-like spectrum of the 2009 March bursts. Much larger optical depths would result in saturated Comptonization or true thermalization, and a spectrum closer to a BB, though still generally different from a BB, as mentioned above. Moreover, the 2009 March bursts are equally well fit by a BB+BB spectrum, so that in principle it is possible that the underlying spectrum during all three bursting periods discussed above (2008 October, 2009 January and 2009 March) is in fact BB+BB or multi-blackbody, as discussed in detail in \cite{AvdH12}. Different bursts within the same activity period may exhibit a relationship between the spectrum and the timing of the bursts. A given active period might be triggered by the yielding of the crust to magnetic stresses at a particular location on the NS. The magnetic field structure in that region could affect the details of the energy release and the confinement of the hot plasma that is produced, and thereby influence the resulting spectrum of the bursts. If the emitting region is small and near the surface, then it might be obscured during certain rotational phases (since the burst duration is smaller than the rotational period), resulting in a non-uniform distribution of bursts with rotational phase, as was found by \cite{Lin12}. If the bursts indeed span a reasonable range of rotational phases, then it is likely that they sample substantially different viewing angles with respect to the well-localized emission region. This would then suggest that the viewing angle is not the dominant factor in determining the overall spectral shape.
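As a numerical check of the indices quoted above, the Comptonization formula can be evaluated directly; the saturation of $\lambda$ toward $-1$ at large $y_B$ is apparent (a sketch, with the parameter grid chosen to match the values in the text):
\begin{verbatim}
import numpy as np

def photon_index(theta, tau):
    """Photon index lambda (dN/dE ~ E**lambda), unsaturated Comptonization.

    theta : 4 k T_e / (m_e c^2)
    tau   : magnetic scattering optical depth tau_B
    """
    y = theta * max(tau, tau**2)      # magnetic Compton y-parameter
    return 0.5 - np.sqrt(2.25 - 4.0 / y)

for tau in (5.0, 10.0):
    print(tau, [round(photon_index(th, tau), 2) for th in (0.23, 0.35)])
# tau =  5 -> lambda ~ -0.75 ... -0.84  (i.e. ~ -0.8)
# tau = 10 -> lambda ~ -0.94 ... -0.96  (i.e. ~ -0.95)
\end{verbatim}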
| 12
| 6
|
1206.4915
|
1206
|
1206.1701_arXiv.txt
|
We study the melting of a $K^-$ condensate in hot and neutrino-trapped protoneutron stars. In this connection, we adopt relativistic field theoretical models to describe the hadronic and condensed phases. It is observed that the critical temperature of antikaon condensation is enhanced as baryon density increases. For a fixed baryon density, the critical temperature of antikaon condensation in a protoneutron star is smaller than that of a neutron star. We also exhibit the phase diagram of a protoneutron star with a $K^-$ condensate. \pacs{26.60.+c, 21.65.+f, 97.60.Jd, 95.30.Cq}
|
Recent observation of a 2$M_{\odot}$ neutron star puts stringent conditions on the equation of state (EoS) of dense matter in the neutron star interior \cite{Demo}. It has long been conjectured that neutron star interiors might contain exotic phases of dense matter such as hyperons, Bose-Einstein condensates of antikaons, and quarks. In certain cases, exotic components of matter make the EoS softer, resulting in maximum neutron star masses below 2$M_{\odot}$. For example, equations of state involving hyperons lead to maximum masses below 2$M_{\odot}$ when the hyperon-scalar meson coupling is determined from existing hypernuclei data and the hyperon-vector meson couplings are estimated from the SU(6) quark model \cite{mish,cbb}. Recently, it has been demonstrated that a neutron star mass of 2$M_{\odot}$ could be achieved by making the hyperon-vector meson coupling stronger \cite{dcj1,dcj2,schm}. On the other hand, it was shown that an EoS including antikaon condensates might result in 2$M_{\odot}$ neutron stars if the magnitude of the attractive antikaon optical potential depth is 140 MeV or less \cite{pal}. The idea of antikaon condensation in dense baryonic matter, formed in heavy ion collisions as well as in neutron stars, started with the seminal work of Kaplan and Nelson \cite{Kap}. They investigated the problem within $SU(3)_L\times SU(3)_R$ chiral perturbation theory. Later, detailed studies of antikaon condensation in the neutron star interior were carried out in chiral perturbation theory \cite{Bro92,Tho,Ell,Lee,Pra97} as well as in meson exchange models \cite{mish,pal,Gle99,Mut,Kno,Li,Bani1,Bani2,Bani3,Bani4}. The first-order phase transition from nuclear matter to antikaon condensed matter was studied using either the Maxwell construction or Gibbs' phase equilibrium rules. It was observed that the threshold density for $K^-$ condensation is sensitive to the nuclear EoS as well as to the strength of the attractive antikaon optical potential depth. Here we are interested in the melting of the antikaon condensate in newly born, hot and neutrino-trapped protoneutron stars. The study of antikaon condensation has been extended to finite temperatures in connection with the metastability of protoneutron stars \cite{Pons} as well as the dynamical evolution of the condensation \cite{Muto}. The critical temperature of antikaon condensation in hot neutron stars, just after the emission of trapped neutrinos, was investigated for the first time in Ref.~\cite{Bani5}. We have recently investigated the thermal nucleation of droplets of antikaon condensed matter in neutron stars after neutrinos were emitted, as well as in protoneutron stars \cite{Bani6,Bani7}. Further, we made use of the results of our earlier calculation of the critical temperature of antikaon condensation in the thermal nucleation of antikaon droplets in neutron stars \cite{Bani5,Bani6}. However, there are no calculations of the critical temperature of antikaon condensation in hot and neutrino-trapped protoneutron stars. This might have important implications for understanding whether a droplet of the antikaon condensed phase would melt or not during thermal nucleation in protoneutron stars. This motivates us to investigate the critical temperature of $K^-$ condensation in protoneutron stars. The paper is organized as follows. We discuss the model used to compute the composition and EoS involving a $K^-$ condensate at finite temperature in Section II. Results are discussed in Section III. Section IV gives a summary.
|
We use the GM1 parameter set for the nucleon-meson coupling constants, which reproduces the nuclear matter saturation properties, i.e., binding energy $-16$ MeV, saturation density ($n_0$) 0.153 $fm^{-3}$, asymmetry energy coefficient 32.5 MeV, effective nucleon mass ($m_N^*/m_n$) 0.70, and incompressibility $K = 300$ MeV \cite{Bani2,Gle91}. Kaon-vector meson couplings are obtained from the quark model and isospin counting rules \cite{Gle99,Bani5}. Further, the kaon-scalar meson coupling is estimated from the real part of the antikaon optical potential depth at normal nuclear matter density \cite{pal,Gle99,Bani2}. It was already noted that the value of the antikaon optical potential depth varies widely, from $U_K = -180 \pm 20$ MeV as obtained from the analysis of kaonic atom data \cite{Fri94,Fri99,Gal07} to $\sim -60$ MeV in the chiral model \cite{Tol}. Earlier, we observed that the EoS involving the $K^-$ condensate became very soft when the antikaon potential was highly attractive. Such an EoS resulted in maximum neutron star masses below 2$M_{\odot}$ \cite{pal}. After the discovery of a 2$M_{\odot}$ neutron star, the EoS including exotic matter is severely constrained. Therefore we perform this calculation for $U_K = -120$ MeV, which results in a maximum neutron star mass of 2.08$M_{\odot}$ \cite{pal}. Kaon-scalar coupling constants are taken from Table II of Ref. \cite{Bani2}. We consider a set of values of the entropy per baryon $S$ in this calculation. Further, we take the lepton fraction $Y_L = Y_e + Y_{\nu_e} = 0.35$ in the protoneutron star. These correspond to several snapshots in the evolution of a neutrino-trapped and hot protoneutron star. For a fixed entropy per baryon, the temperature varies from the center to the surface of the protoneutron star. This is demonstrated in Figure 1 for entropy per baryon $S=2$. The temperature increases with baryon density in this case. We also show the corresponding scenario in a hot and neutrino-free neutron star for $S=2$; a higher temperature is obtained in the neutron star. Further, we note that as soon as the antikaon condensate appears in the protoneutron star, there is a drop in the temperature compared with the case without the condensate. Populations of different particle species in $\beta$-equilibrated protoneutron star matter are shown as functions of baryon density for the $S=2$ case in Figure 2. We observe that the matter is populated with thermal kaons well before the onset of the $K^-$ condensate. This also leads to an enhancement of the proton fraction. The threshold density of $K^-$ condensation in hot and neutrino-trapped protoneutron star matter for entropy per baryon $S=2$ is 4.2$n_0$. On the other hand, $K^-$ condensation in neutrino-trapped matter at zero temperature sets in at 3.68$n_0$. Similarly, the threshold densities of antikaon condensation in neutrino-free neutron stars with $U_{K}=-120$ MeV are 3.16$n_0$ for $S=2$ and 3.05$n_0$ for $S=0$. The role of finite temperature is to shift the threshold of antikaon condensation to higher density. After the appearance of the condensate, the $K^-$ density ($n_K^C$) increases rapidly, resulting in a higher proton number density in the protoneutron star, as evident from Fig. 2. Next we discuss the thermal effects on the EoS and protoneutron star masses. Pressure versus energy density is plotted for entropy per baryon $S=2$ and $S=0$ in Figure 3. We do not find any appreciable change in the EoS due to thermal effects.
This result has important significance for the calculation of the thermal nucleation of the antikaon condensed phase in protoneutron stars and justifies the use of a zero-temperature EoS in that case \cite{Bani7}. Maximum protoneutron star masses for the $S=2$ and $S=0$ cases are 2.228 and 2.214$M_{\odot}$, respectively. The corresponding central density for the $S=2$ case is 5.3$n_0$, and it is 5.59$n_0$ for $S=0$. Next we investigate the melting of the condensate in protoneutron star matter as the interior temperature increases. As we follow the evolution of a protoneutron star through several snapshots, we consider different values of the entropy per baryon. We demonstrate the variation of entropy per baryon with temperature for several fixed values of baryon density and $U_K = -120$ MeV in Figure 4. The condensate density ($n_K^C$) is a function of both baryon density and temperature. The condensate disappears above a certain temperature, known as the critical temperature ($T_C$). To obtain this critical temperature, we show the ratio of the condensate density at finite temperature ($n_K^C(T)$) to that at zero temperature ($n_K^C(T=0)$) as a function of temperature for several fixed baryon densities and $U_K = -120$ MeV in Figure 5. In each case, corresponding to a fixed baryon density, the condensate density diminishes with increasing temperature, ultimately resulting in the melting of the condensate at the critical temperature of the condensation. We find that the critical temperature increases as the baryon density increases. In an earlier calculation, we estimated the critical temperature of $K^-$ condensation for hot and neutrino-free neutron stars \cite{Bani5}. When we compare the critical temperatures in both cases at a given baryon density, for example 4.4$n_0$, the critical temperature in the protoneutron star has a smaller value than that in the neutron star, as evident from Figure 6. This may be attributed to the larger heat content of the protoneutron star compared with the neutron star. For example, at temperature $T=40$ MeV, the entropy per baryon values corresponding to $Y_L=0.35$ and $Y_{\nu_e}=0$ are 1.6 and 1.3, respectively. This finding might be very useful in understanding the thermal nucleation of droplets of antikaon condensed matter in protoneutron stars \cite{Bani7}. Knowing the critical temperature as a function of baryon density, we can construct a phase diagram of protoneutron star matter involving a $K^-$ condensate. The phase diagram, temperature versus baryon density, is shown in Figure 7. The solid line representing the critical temperatures separates the condensed phase (shaded region) from the hadronic phase.
| 12
| 6
|
1206.1701
|
1206
|
1206.5228_arXiv.txt
|
An occulter is a large diffracting screen which may be flown in conjunction with a telescope to image extrasolar planets. The edge is shaped to minimize the diffracted light in a region beyond the occulter, and a telescope may be placed in this dark shadow to view an extrasolar system with the starlight removed. Errors in position, orientation, and shape of the occulter will diffract additional light into this region, and a challenge of modeling an occulter system is to accurately and quickly model these effects. We present a fast method for the calculation of electric fields following an occulter, based on the concept of the boundary diffraction wave: the 2D structure of the occulter is reduced to a 1D edge integral which directly incorporates the occulter shape, and which can be easily adjusted to include changes in occulter position and shape, as well as the effects of sources---such as exoplanets---which arrive off-axis to the occulter. The structure of a typical implementation of the algorithm is included.
|
One of the major goals of the coming decades is to directly image a terrestrial exoplanet. Direct imaging allows spectral information about the planet's atmosphere to be extracted from the spectrum of reflected light, including potential biomarkers: gases whose presence and abundance may indicate the presence of life. Detecting planets directly is a difficult task, however, for two reasons: intensity ratio (\emph{contrast}) and angular resolution. An Earth-twin which orbits a Sun-twin at 1AU emits $10^{10}$ times less flux than the parent star in the visible band; if the system were located at 10 parsecs from Earth, the angular separation of the two objects would be 100mas, which for most proposed space telescopes puts the separation under $4 \lambda/D$. Specialized methods are required to remove this flux at such small working angles. One proposed method of suppressing the starlight is an \emph{occulter}, a spacecraft with a shaped edge flown in front of the telescope to block the starlight before it arrives at the telescope. The occulter size (tens of meters) and distance (tens of thousands of kilometers) are chosen so that the angular extent of the occulter is smaller than some desired angle, on the order of 100 milliarcseconds, so that exoplanets in orbit about the star will still be visible. The edge is shaped to control diffraction, with the form chosen to suppress the light by a factor of $10^{10}$ across the telescope aperture and over a wide spectral passband. A major advantage of this approach, compared to an internal coronagraph, is that it significantly loosens the tolerances on the optical quality and thermal stability of the telescope optics, as well as eliminating the need for a wavefront control system for active correction. Conversely, the occulter itself must be built, deployed, and flown in formation to its own set of tolerances, and demonstrating this capability is an area of active research \mycitet{Kas11}. These tolerances include limits on permanent errors in the occulter shape due to manufacturing and deployment errors; transient shape deformations from thermal fluctuations and vibration; and position and orientation changes of the occulter relative to the telescope-target axis from formation flying \mycitep{Sha10, Gla10}. The scale of an occulter---and the distance it must maintain with respect to its target---precludes optical testing on the ground. All of these effects must be modeled in order to build an error budget and verify that an occulter can satisfy these requirements. The capture of the large dynamic range in the resulting field---$10^{5}$ or greater in amplitude---drives the accuracy of the propagator used for the models, while the modeling of time-varying thermal shape deformations and closed-loop formation flying in broadband light drives the speed of the calculation. Some of the most in-depth simulated observations can take from hours to days to run, and the ability to quickly and accurately model propagation past an occulter is thus a major enabling factor in the characterization of occulter system performance. Section \ref{sec:current} gives an overview of the current techniques; the new technique presented in this work is described in Sections \ref{sec:bdw} and \ref{sec:vecpot}, and a suggested implementation scheme, along with a pseudocode overview of the algorithm and some computational results, are given in Section \ref{sec:imp}.
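The propagation regime these models must handle can be characterized by the Fresnel number $N_F = a^2/(\lambda z)$; for representative occulter parameters it is of order tens, deep in the near field, where neither ray optics nor the far-field Fraunhofer approximation applies. An order-of-magnitude sketch (the numbers are representative, not a specific design):
\begin{verbatim}
wavelength = 500e-9    # m, mid-visible
a = 25.0               # m, occulter radius (representative)
z = 5.0e7              # m, occulter-telescope separation (50,000 km)
N_F = a**2 / (wavelength * z)
print(N_F)             # ~25: full Fresnel diffraction treatment required
\end{verbatim}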
|
We present a new approach to modeling fields following planet-finding occulters, based on the boundary-diffraction-wave formulation laid out by Miyamoto and Wolf. This approach makes it simple to modify the occulter shape and recalculate to include various types of occulter errors, and it is fast to evaluate, improving on the current best method by 2--3$\times$ with no loss in precision. An efficient implementation has been laid out, and we hope it proves useful to others in the field.
| 12
| 6
|
1206.5228
|
1206
|
1206.0061_arXiv.txt
|
{ We present H$\alpha3$ (acronym for H$\alpha-\alpha\alpha$), an H$\alpha$ narrow-band imaging survey of $\sim 400$ galaxies selected from the HI Arecibo Legacy Fast ALFA Survey (ALFALFA) in the Local Supercluster, including the Virgo cluster.} {By using hydrogen recombination lines as a tracer of recent star formation, we aim to investigate the relationships between atomic neutral gas and newly formed stars in different environments (cluster and field), morphological types (spirals and dwarfs), and over a wide range of stellar masses ($\sim 10^{7.5}-10^{11.5}$ M$_\odot$).} {We image in H$\alpha$+[NII] all the galaxies that contain more than $10^{7}$ M$_\odot$ of neutral atomic hydrogen in the sky region $\rm 11^h < R.A. <16^h\, ; \,4^o< Dec. <16^o; 350<$cz$<2000$ $\rm km ~s^{-1}$ using the San Pedro Martir 2m telescope. This survey provides a complete census of the star formation in HI-rich galaxies of the local universe.} {We present the properties of the galaxy sample, together with H$\alpha$ fluxes and equivalent widths. We find excellent agreement between the fluxes determined from our images in apertures of 3 arcsec diameter and the fluxes derived from the SDSS spectral database. From the H$\alpha$ fluxes, corrected for galactic and internal extinction and for [NII] contamination, we derive the global star formation rates (SFRs).} {}
|
The large amount of multi-wavelength data from recent and ongoing surveys is providing a wealth of information on the different phases of the interstellar medium (ISM), the stellar content and the present day star formation rates (SFRs) in nearby galaxies. Complemented with results from numerical simulations and theory, these observations contribute to our understanding of the basic process that regulates the life of a galaxy: the conversion of gas into stars. However, crucial questions remain open on which gas phase (on which scale) is ultimately responsible for new star formation, on which tracers of the SFR are unbiased, and on the role of very massive stars and of the environment in shaping the observed luminosity of local galaxies. Half a century has passed since Schmidt (1959) discovered a fundamental relation between the surface density of star formation and that of the gaseous component in galaxies\footnote{The 50th anniversary of the original Schmidt (1959) paper was celebrated during the conference SFR@50 held in Spineto in 2009}, today known as the Kennicutt-Schmidt (KS) law (Schmidt 1959, Kennicutt 1989, 1998). Since then, a large number of theoretical and observational studies have addressed the origin of this correlation. Modern observations reveal a relation between molecular gas and star formation rate surface density (Wong \& Blitz 2002, Kennicutt et al. 2007, Bigiel et al. 2008) within the optical radius, where CO seems to be a reliable tracer of molecular hydrogen. While this contrasts with the original form of the KS law, in which a correlation between the SFR and the much more extended atomic gas is evident over several orders of magnitude in SFR, this finding is consistent with the basic picture of star formation in giant molecular clouds. But it is unclear whether molecular hydrogen drives this correlation (Krumholz et al. 2011, Glover \& Clark 2011), and departures from this observed relation are still a matter of debate (Fumagalli \& Gavazzi 2008, Bigiel et al. 2010, Schruba et al. 2011). Whereas there is unanimous consent that high luminosity late-type galaxies display low specific star formation rates (SSFRs), i.e. SFR per unit stellar mass, as expected from \emph{downsizing} (e.g. Gavazzi et al. 1996), the behavior of dwarf galaxies, whose SSFR spans a range exceeding two orders of magnitude (Lee et al. 2007), is poorly understood. In addition, the SFR inferred from the H$\alpha$ hydrogen recombination line in these systems, or in the outskirts of disks, systematically underpredicts the one derived from the UV light (Meurer et al. 2009, Lee et al. 2009), to the point that doubts have been cast on the universality of the initial mass function (IMF; e.g. Meurer et al. 2009) and on the reliability of hydrogen recombination lines as tracers of star formation (Pflamm-Altenburg et al. 2007). However, uncertainties in the dust extinction (Boselli et al. 2009), star formation history (Weisz et al. 2011) and a stochastic star formation rate (Fumagalli et al. 2011) can explain equally well the observed luminosity even for a universal IMF. \begin{figure*}[!t] \centering \includegraphics[width=19cm]{gavazzi_fig1.ps} \caption{Bottom panel. Sky distribution of 409 HI selected galaxies observed in the present survey, 383 with $F_{HI} > 0.7 {\rm~Jy~km~s^{-1}}$ (filled circles) and 26 with $<$ 0.7 $\rm~Jy~km~s^{-1}$ (big empty circles). Red symbols refer to 233 new sources observed in 2006-2009 whose fluxes are presented in this paper. Top panel.
68 HI targets that match our selection criteria but were not observed because: 8 lie too close to bright stars; 38 are either debris of ram-pressure stripped gas or their associated galaxy is too faint to be seen on SDSS plates (triangles); 20 will be considered in future runs (crosses). The two vertical broken lines mark the adopted boundaries of the Virgo cluster.} \label{campione} \end{figure*} Similarly, the debate on the role of the environment in shaping the star formation properties of galaxies is still open (see the review by Boselli \& Gavazzi 2006). While it is observed that atomic (e.g. Gavazzi et al. 2002b, Cortese \& Hughes 2009, Rose et al. 2010) and, in highly perturbed systems, molecular (Vollmer et al. 2008, Fumagalli et al. 2009, Vollmer et al. 2009) gas depletion results in a low level of star formation, simulations of ram-pressure stripping have suggested different degrees of enhancement in the SFR of perturbed galaxies (e.g. Kronberger et al. 2008, Kapferer et al. 2009, Tonnesen \& Bryan 2009). Moreover, studies of the H$\alpha$ morphology of galaxies within rich groups and clusters show a mix of global suppression and truncation of the H$\alpha$ disks (Vogt et al. 2004, Koopmann \& Kenney 2004, Fumagalli \& Gavazzi 2008, Welikala et al. 2008, Rose et al. 2010), but a definitive assessment of the relative importance of these different perturbations is still lacking. To address some of these open issues, we have recently completed H$\alpha3$, an H$\alpha$ narrow-band imaging survey at the 2.1m telescope of the San Pedro Martir (SPM) Observatory. Our sample includes $\sim 400$ nearby galaxies, selected from the ongoing blind HI Arecibo Legacy Fast ALFA Survey (ALFALFA; Giovanelli et al. 2005) in the Spring sky of the Local Supercluster, including the Virgo cluster, in the velocity window $350<cz<2000$ $\rm km~s^{-1}$. Together with ancillary multifrequency data and complemented by similar surveys (Meurer et al. 2006) or by optically selected samples (James et al. 2004, Kent et al. 2008), these observations provide a complete census of the SFR in the local universe traced by hydrogen recombination lines (see also Bothwell et al. 2009). The present paper, the first of a series, presents the basic properties of the H$\alpha3$ dataset in the Local Supercluster (Sect. \ref{sample}). After a description of the observations (Sect. \ref{observ}) and data reduction (Sect. \ref{data}), we list previously unpublished H$\alpha$ fluxes and equivalent widths for 235 galaxies. A summary and future prospects follow in Sect. \ref{summary}, while in Appendix \ref{atlas} we give an Atlas of the images of the sampled galaxies.\\ Paper II of this series will contain the analysis of the integrated quantities (global SFR) produced by H$\alpha3$ in the Local Supercluster, and will investigate the relationships between atomic neutral gas and newly formed stars in different environments (cluster and field), morphological types (spirals and dwarfs), and over a wide range of stellar masses ($\sim 10^{7.5}-10^{11.5}$ M$_\odot$). \\ Paper III will contain the extension of H$\alpha3$ to the Coma Supercluster ($\rm 10^h < R.A. <16^h\, ; \,24^o< Dec.
<28^o; 3900<cz<9000$ $\rm km ~s^{-1}$).\\ The analysis of the H$\alpha$ morphology from H$\alpha3$ in both the Local and the Coma Superclusters will be carried out in Paper IV, which will address the comparison of the effective radii at H$\alpha$ and $r$ band as a function of morphological type, and the determination of other structural parameters such as the Concentration index, the Asymmetry and the Clumpiness parameters introduced by Conselice (2003). Throughout the paper we adopt $H_0=73 \rm ~km~s^{-1}~Mpc^{-1}$.
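As a worked illustration of how the integrated, corrected H$\alpha$ fluxes presented here translate into SFRs, the following sketch (ours) applies the standard Kennicutt (1998) calibration, ${\rm SFR} = 7.9\times10^{-42}\,L({\rm H}\alpha)$ with $L$ in erg s$^{-1}$, together with the $H_0$ adopted above; the input flux and recession velocity are hypothetical.
\begin{verbatim}
import numpy as np

# Hypothetical inputs: a corrected H-alpha flux and a recession velocity.
H0 = 73.0                     # km/s/Mpc, as adopted in this paper
cz = 1500.0                   # km/s
f_ha = 2.0e-13                # erg/s/cm^2, extinction- and [NII]-corrected

d_cm = (cz / H0) * 3.0857e24  # distance in cm (1 Mpc = 3.0857e24 cm)
L_ha = 4.0 * np.pi * d_cm**2 * f_ha       # erg/s
sfr = 7.9e-42 * L_ha                      # Msun/yr, Kennicutt (1998)
print(f"L(Ha) = {L_ha:.2e} erg/s -> SFR = {sfr:.3f} Msun/yr")
\end{verbatim}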
|
\label{summary} This is the first paper of a series devoted to H$\alpha3$, the H$\alpha$ narrow-band imaging survey of galaxies carried out with the San Pedro Martir 2.1m telescope (Mexico), selected from the HI Arecibo Legacy Fast ALFA Survey (ALFALFA). The first sample includes $\sim 400$ targets in the Local Supercluster in the sky region $\rm 11^h < R.A. <16^h\, ; \,4^o< Dec. <16^o; 350< \rm cz<2000$ $\rm km ~s^{-1}$, including the Virgo cluster. At the distance of Virgo (17 Mpc), and given the sensitivity of ALFALFA, the targets selected for the H$\alpha$ follow-up contain more than $10^{7.7}$ M$_\odot$ of neutral atomic hydrogen. H$\alpha3$, complete for $M_{HI}>10^{8}$ M$_\odot$, provides a full census of the star formation in HI-rich galaxies of the local universe over a broad range of stellar masses, from dwarf galaxies with $10^{7.5}$ $M_\odot$ up to giants with $10^{11.5}$ $M_\odot$. Not unexpectedly, only a handful of detections are identified with galaxies on the red sequence, while the majority are late-type, from giant spirals (Sa-Sd) to dwarf Irr-BCDs. In this paper, we present the properties of the H$\alpha$ galaxy sample, together with H$\alpha$ fluxes and equivalent widths for the as yet unpublished subsample observed between 2006 and 2009. The integrated H$\alpha$ fluxes are corrected for galactic and internal extinction and for [NII] contamination, providing us with the global star formation rates (SFRs). Given the sensitivity of the present H$\alpha$ observations, we detect galaxies with an unobscured SFR density above $3\times10^{-9}$ M$_\odot$ yr$^{-1}$ pc$^{-2}$ at $1\sigma$. The analysis of the integrated quantities (global SFR) produced by H$\alpha3$ will be carried out in Paper II of this series (Gavazzi et al. 2012). By using hydrogen recombination lines as a tracer of recent star formation, we aim to investigate the relationships between atomic neutral gas and newly formed stars in different environments (cluster and field), morphological types (spirals and dwarfs), and over a wide range of stellar masses ($\sim 10^{7.5}-10^{11.5}$ M$_\odot$). Paper III will contain the extension of H$\alpha3$ to the Coma supercluster ($\rm 10^h < R.A. <16^h\, ; \,24^o< Dec. <28^o; 3900<cz<9000$ $\rm km ~s^{-1}$). Since Coma is approximately 6 times more distant than Virgo, galaxies selected by ALFALFA there contain about 35 times more HI than those at Virgo. Hence ALFALFA will be complete for $\geq 10^{9.5}$ M$_\odot$, i.e. for giant galaxies. The cost of completely missing the population of dwarf galaxies will be compensated by the fact that at $cz>5000$ the shredding problem affecting the SDSS completeness is much less severe than at Virgo, making it possible to extract a catalogue of optically selected candidates from the SDSS database. This will allow us to investigate in detail the differences between the optical and the radio selection functions. \appendix
| 12
| 6
|
1206.0061
|
1206
|
1206.6192_arXiv.txt
|
{The Universe has a gravitational horizon, coincident with the Hubble sphere, that plays an important role in how we interpret the cosmological data. Recently, however, its significance as a true horizon has been called into question, even for cosmologies with an equation-of-state $w\equiv p/\rho\ge -1$, where $p$ and $\rho$ are the total pressure and energy density, respectively. The claim behind this argument is that its radius $R_{\rm h}$ does not constitute a limit to our observability when the Universe contains phantom energy, i.e., when $w<-1$, as if somehow that mitigates the relevance of $R_{\rm h}$ to the observations when $w\ge -1$. In this paper, we reaffirm the role of $R_{\rm h}$ as the limit to how far we can see sources in the cosmos, regardless of the Universe's equation of state, and point out that claims to the contrary are simply based on an improper interpretation of the null geodesics. } \begin{document}
|
A newly recognized horizon (dubbed the ``Cosmic Horizon" in [1]) is beginning to play a crucial role in our understanding of how the Universe evolves. The distance to this horizon is the Universe's gravitational radius, i.e., the radius $R_{\rm h}$ at which a proper sphere encloses sufficient mass-energy to turn it into a Schwarzschild surface for an observer at the origin of the coordinates (see also [2]). Regardless of which specific cosmological model one adopts, $R_{\rm h}$ is always given by the expression \begin{equation} R_{\rm h}={2GM(R_{\rm h})\over c^2}\;, \end{equation} where $M(R_{\rm h})=(4\pi/3) R_{\rm h}^3\,\rho/c^2$, in terms of the total energy density $\rho$. Thus, $R_{\rm h}$ is given as $({3c^4/8\pi G\rho})^{1/2}$ which, for a flat universe, may also be written more simply as $R_{\rm h}(t)=c/H(t)$, where $H(t)$ is the (time-dependent) Hubble parameter. Although not defined in this fashion, the Hubble radius $c/H(t)$ is therefore clearly a manifestation of the gravitational horizon $R_{\rm h}$, emerging directly from the Robertson-Walker metric written in terms of the proper distance $R=a(t)r$, the expansion factor $a(t)$, and the co-moving radius $r$. What this means, of course, is that the Hubble sphere is not merely an empirical artifact of the expanding universe, but actually encloses a volume centered on the observer containing sufficient mass-energy for its surface to function as a {\it static} horizon. That also means that the speed of expansion a distance $R_{\rm h}$ away from us must be equal to $c$, just as the speed of matter falling into a compact object reaches $c$ at the black hole's horizon---and this is in fact the criterion used to define the Hubble radius in the first place. Very importantly, it may also help the reader to know that since the gravitational radius coincides identically with the Hubble radius, all of the known equations and constraints (e.g., 2.4 below) satisfied by the latter must also be satisfied by the former. Since its introduction, however, some confusion regarding the properties of $R_{\rm h}$ has crept into the literature, due to a misunderstanding of the role it plays in our observations. For example, it has sometimes been suggested (see, e.g., the earlier paper [3], and the more recent work [4]) that sources beyond $R_{\rm h}(t_0)$ are observable today (at cosmic time $t_0$), which is certainly not the case. To eliminate misconceptions such as this, some effort was made [5] to elaborate upon what the gravitational radius $R_{\rm h}$ is---and what it is not. Though $R_{\rm h}$ was first defined in [1], an unrecognized form of it actually appeared in de Sitter's own account of his spacetime metric [6]. In reference [5], photon trajectories were calculated for various well-studied Friedmann-Robertson-Walker (FRW) cosmologies, including $\Lambda$CDM, demonstrating that the null geodesics reaching us at $t_0$ have never crossed $R_{\rm h}(t_0)$. (To be very precise, what we mean by this is that the proper distance $R_{\gamma}$ of photons travelling along these null geodesics never attained a value exceeding $R_{\rm h}[t_0]$.) More generally---and more conclusively---they proved that $R_{\rm h}(t_0)$ is a real limit to our observability for any cosmology that does not contain phantom energy [7], i.e., any cosmology whose equation of state is $p\ge-\rho$, in terms of the total pressure $p$ and total energy density $\rho$. (This is a very simple proof and, for completeness, we will reproduce it in the discussion below.)
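As a quick numerical aside of ours (not part of the original argument), the equivalence between the gravitational radius and the Hubble radius can be verified directly for a flat universe, for which $\rho$ equals the critical energy density $3H^2c^2/8\pi G$; $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ is an assumed fiducial value.
\begin{verbatim}
import numpy as np

G  = 6.674e-11                   # m^3 kg^-1 s^-2
c  = 2.998e8                     # m/s
Mpc = 3.0857e22                  # m
H0 = 70.0 * 1.0e3 / Mpc          # assumed 70 km/s/Mpc, converted to 1/s

# Critical (total) energy density of a flat universe, in J/m^3:
rho = 3.0 * H0**2 * c**2 / (8.0 * np.pi * G)
R_h = np.sqrt(3.0 * c**4 / (8.0 * np.pi * G * rho))
print(R_h / Mpc, (c / H0) / Mpc)   # both ~4283 Mpc: R_h(t0) = c/H0
\end{verbatim}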
In spite of this, the issue has been revisited [8] with the argument that a Universe with phantom energy violates this limit, thereby demoting $R_{\rm h}$ from the status of a true horizon. Of course, even if it were true that null geodesics could reach us from beyond $R_{\rm h}$ when $p<-\rho$, that would simply highlight a peculiarity of phantom cosmologies, which have other interesting features, such as the acausal transfer of energy. But even in that scenario, it would have nothing to say about the role played by $R_{\rm h}$ in delimiting what we can see in any cosmology that does not contain phantom energy. The purpose of this paper is to reiterate the physical meaning of the Universe's gravitational radius and to demonstrate that, even in cases where $p<-\rho$, no null geodesics reaching us have ever crossed the maximum extent of our cosmic horizon.
|
We have reaffirmed the physical significance of the Universe's gravitational horizon, which also defines the size of the Hubble sphere. For any non-phantom cosmology, we have proven that its current value, $R_{\rm h}(t_0)$, is the maximum proper distance to any source producing light in the past that is reaching us today. In a Universe with unbounded energy growth, $R_{\rm h}$ eventually shrinks to zero when the Universe ends its life in a ``big rip." But even in this case, we have demonstrated that no null geodesic reaching us at $R_\gamma=0$ will have ever attained a proper distance greater than the maximum extent of $R_{\rm h}$. When $w\ge -1$, $R_{\rm h}$ grows indefinitely, and therefore its maximum extent simply happens to be its value today, $R_{\rm h}(t_0)$. When $w<-1$, however, $R_{\rm h}$ shrinks at late times and its maximum extent therefore occurred in our past. Under no circumstances will any null geodesic ever reach us after having propagated to proper distances exceeding our cosmic horizon.
| 12
| 6
|
1206.6192
|
1206
|
1206.1298_arXiv.txt
|
{ We reconsider the role of pre-main sequence (pre-MS) Li depletion on the basis of new observational and theoretical evidence: i) new observations of H$\alpha$ emission in young clusters show that mass accretion could be continuing until the first stages of the MS; ii) theoretical implications from helioseismology suggest large overshooting values below the bottom of the convective envelopes. We argue here that significant pre-MS $^7$Li destruction, caused by efficient overshoot mixing, could be followed by matter accretion after $^7$Li depletion has ceased, thus restoring Li almost to its pristine value. As a test case we show that a halo dwarf of 0.85 $M_{\odot}$ with an extended overshooting envelope, starting with an initial abundance of A$(Li) = 2.74$, would burn Li completely, but an accretion rate of the type $ 1 \times 10^{-8} e^{-t/3\times 10^6}$ $M_{\odot}$ yr$^{-1}$ would restore Li, ending with A$(Li) = 2.31$. A self-regulating process is required to produce similar final values over a range of stellar masses in order to explain the PopII Spite plateau. However, this framework could explain why open cluster stars have lower Li abundances than the pre-solar nebula, the absence of Li in the most metal-poor dwarfs, and a number of other features which lack a satisfactory explanation.
|
\begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics[clip=true,angle=-90]{fig_1.eps}} \caption{\footnotesize Li observations in PopII stars. The red line marks the WMAP Li prediction} \label{label:fig_pop2} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics[clip=true,angle=-90]{fig_2.eps}} \caption{\footnotesize Li observations in the Hyades, black dots and squares, and the Pleiades, red dots. The dotted line shows the level of the meteoritic $^7$Li abundance } \label{label:fig_clusters} \end{figure} \subsection{The PopII-WMAP puzzle} The WMAP value for $\Omega_b h^2$ yields $\eta_{10} = 10^{10} (n_{b}/n_{\gamma})_0 = 6.16$ (Komatsu et al. 2011), which fixes the abundances of the light elements in the framework of SBBN. The predicted primordial $^7$Li abundance is A(Li)=2.72, on the scale where A(Li)=$\log[N(Li)/N(H)]+12$ (Coc et al. 2012). On the other hand, almost 3 decades of Li observations in PopII stars have provided an abundance of $A$(Li)$\approx$ 2.26 (cf. Molaro 2008), missing the WMAP prediction by about a factor of 3, as emphasized in Fig. \ref{label:fig_pop2}. The data in the figure are drawn from the 1D non-LTE abundances with IRFM temperatures from Bonifacio and Molaro (1997) and Sbordone et al. (2010). The solution of the Li problem was widely debated at this conference: very different approaches have been considered, from the nuclear reaction rates in SBBN (Coc et al. 2012) to new physics beyond the standard model (Olive 2012, Kajino 2012) or depletion in stars by diffusion (Richard 2012, Korn 2012). \subsection{The PopI-meteoritic puzzle} The PopII-WMAP problem is mirrored at high metallicity by the difference between the meteoritic $^7$Li abundance and that observed in open clusters. The $^7$Li abundance at the time of the formation of the solar system, as obtained from meteorites, is $A$(Li)$=3.34 \pm0.02$ (Anders and Grevesse 1989). For recently born stars, the initial $^7$Li abundance has been estimated in young T-Tauri stars, where no Li depletion could yet have taken place: A$(Li)\, = 3.2 \pm 0.1$ (Cunha et al. 1995). The hotter F stars of slightly older clusters such as the Pleiades ($\approx$ 100 Myr) and the Hyades (670 Myr) show a top value of $A$(Li)$\approx 3.0$ (Thorburn 1994, Soderblom 1993) and never reach the meteoritic value, as shown in Fig. \ref{label:fig_clusters}. The behavior of several other young clusters is quite similar to that of the Pleiades, but the data are omitted from the figure for clarity (Sestito and Randich 2005). Considering that the present Li abundance could be higher than the meteoritic value due to Galactic Li enrichment, the PopII-WMAP disagreement and that between the cluster plateaus and the meteoritic value can be considered of comparable scale. We examine here the possibility that they may share a common origin in the pre-MS $^7$Li evolution.
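The oft-quoted ``factor of 3'' follows directly from the logarithmic abundance scale defined above; a one-line check (ours) using the quoted values:
\begin{verbatim}
A_sbbn  = 2.72                  # WMAP+SBBN prediction (Coc et al. 2012)
A_spite = 2.26                  # Spite plateau (cf. Molaro 2008)
print(10**(A_sbbn - A_spite))   # ~2.9, i.e. about a factor of 3 in N(Li)/N(H)
\end{verbatim}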
|
Pre-MS Li depletion is presently invoked to explain the scatter and the depletion on the low temperature side of the Li diagrams of young clusters, but it is thought to be minimal for the hotter cluster members (Tognelli et al. 2012). It is generally considered absent in PopII dwarfs. D'Antona \& Mazzitelli (1984) allow a rather small amount of Li depletion in the pre-MS, of the order of 0.15 dex. Instead, we argue for effective Li burning during the pre-MS phase. The apparent contrast with the observations, which do not show clear evidence for pre-MS depletion, could be removed if the destruction is substantially balanced by Li accretion occurring in the late stages of the pre-MS evolution, after Li has been burned in the stellar convective zones. We have shown a test case of a 0.85 $M_{\odot}$ PopII dwarf with [Fe/H]$=-2.2$ with an original Li at the WMAP value. A self-regulating process, not yet identified, should be at work in order to produce almost the same dilution factor for the variety of stellar masses which are on the Spite plateau or among the hotter stars of the open clusters. The onset of the stellar wind could counteract the lithium restoration both by removing the outer layers and by dissipating the accretion disk. This is a working hypothesis, but we note that it has several specific assets, namely: \begin{itemize} \item{Providing a possible explanation of the disagreement between PopII Li abundances and WMAP predictions.} \item{Providing a possible explanation of why models with no pre-MS Li depletion fail to explain open cluster Li diagrams, and in particular of why F star Li abundances are always below the solar-meteoritic value.} \item{Failure or enhancement of the accretion process could provide an explanation for the few PopII $^7$Li-depleted stars ($\approx$ 3\%) or for the few PopII stars showing $^7$Li abundances at the WMAP level, respectively. Among the latter is BD +23 3912, with $A(Li)$=2.60, which stands out from the others in Fig. \ref{label:fig_pop2} (Bonifacio and Molaro 1997).} \item{The extremely metal-poor dwarfs of Caffau et al. (2011) and Frebel et al. (2005) without detectable Li could have formed in a fragmentation process and never accreted material after a complete $^7$Li depletion in the pre-MS phase. Caffau et al. (2012), by using a set of different isochrones to match the observed colors, estimate a mass in the range 0.62-0.70 $M_{\odot}$. Since for $M<0.6\,M_{\odot}$ all the Li is burned in the pre-MS, the absence of Li could be explained by an extended pre-MS depletion in a star with an initial mass below this threshold. In fact, the absence of Li in these stars can be considered evidence of pre-MS Li depletion in extremely metal-poor stars. The meltdown of Li abundances for [Fe/H] $\le$ -3.0 observed by Sbordone et al. (2010) could be another manifestation of the decrease of stellar mass at lower metallicity for a given effective temperature: towards the low metallicity tail, stars have progressively smaller masses, and the mass accretion may not be enough to restore Li to the level of the more massive stars.} \item{It could provide an explanation of the Li paradox in the metal-poor spectroscopic binaries. The PopII spectroscopic binaries CS 22876-032 (Gonzales Hernandez et al. 2008) and G 166-45 (Aoki et al. 2012) are composed of dwarfs with temperatures characteristic of the plateau, but they show slightly different Li abundances: the hotter component is on the Spite plateau, while the cooler one shows a lower Li abundance, by $\approx$ 0.4 dex and $\approx$ 0.2 dex respectively.
The masses of the CS 22876-032 system are $m_1 =0.70 \pm 0.02$ and $m_2 = 0.64 \pm 0.02$, and those of the G166-45 system are $m_1 =0.76 \pm 0.02$ and $m_2 = 0.67 \pm 0.02$. The cooler component also has the smaller mass, very close to the fully convective limit. Thus it is quite possible that, with the same accreted mass, the restoration of Li has been lower in the lower-mass star due to its relatively larger convection zone.} \end{itemize}
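As a consistency check on the test case above (our sketch, not part of the original computation), the total mass delivered by the quoted accretion law follows from integrating $\dot{M}(t)=10^{-8}\,e^{-t/3\times10^{6}}$ $M_{\odot}$ yr$^{-1}$, which analytically gives $\dot{M}_0\tau = 0.03$ $M_{\odot}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

mdot0, tau = 1.0e-8, 3.0e6      # Msun/yr and yr, from the accretion law above
m_acc, _ = quad(lambda t: mdot0 * np.exp(-t / tau), 0.0, 50.0 * tau)
print(m_acc)                    # ~0.03 Msun, accreted onto the 0.85 Msun dwarf
\end{verbatim}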
| 12
| 6
|
1206.1298
|
1206
|
1206.3124_arXiv.txt
|
{Accreting X-ray pulsars are among the most luminous objects in the X-ray sky. In highly magnetized neutron stars ($B\sim10^{12}$\,G), the flow of matter is dominated by the strong magnetic field. The general properties of accreting X-ray binaries are presented, focusing on the spectral characteristics of the systems. The use of cyclotron lines as a tool to directly measure a neutron star's magnetic field and to test the theory of accretion is discussed. We conclude with the current and future prospects for accreting X-ray binary studies.}
|
Accreting X-ray binaries were discovered in the 1970s (\citealt{giacconi71_1}, \citealt{tananbaum72}) and are among the most luminous objects in the X-ray sky. Forty years after their discovery, more than 400 X-ray binaries have been observed, and there is today a much deeper knowledge of the physical processes involved in the accretion of matter onto the compact object. In X-ray binaries, matter is accreted from a donor star onto a compact object (white dwarf, neutron star or black hole). X-ray emission originates from the conversion of the gravitational energy of the accreted matter into kinetic energy, which is released as radiation when the matter is decelerated near the compact object. According to the nature of the donor star, X-ray binaries can be classified as High-Mass X-ray Binaries (hereafter HMXBs) or Low-Mass X-ray Binaries (hereafter LMXBs). Typically, HMXBs have young optical companions of spectral type O or B and mass $M\gtrsim5M_{\odot}$, and host neutron stars with high magnetic fields, $B\sim10^{12}\,$$\mathrm{G}$. LMXB systems have older optical companions, with masses in general $M\leq 1\,M_{\odot}$, and lower magnetic fields, $B\sim10^{9-10}\,$G. X-ray binaries are numerous: 114 HMXBs are known in the Galaxy and 128 in the Magellanic Clouds, along with 187 LMXBs in the Galaxy and the Magellanic Clouds \citep{liu05,liu06,liu07}. In accreting neutron stars, due to the inclination of the neutron star's rotational axis with respect to the magnetic axis, the X-ray emission appears pulsed to a distant observer. \\ \indent In this review, we concentrate on the properties of X-ray binaries with a highly magnetized neutron star ($B\sim10^{12}\,$$\mathrm{G}$). The basic aspects of accreting X-ray pulsars are described, and the current status of the observations and theory of these systems is presented.
|
We have reviewed the general properties of accreting X-ray pulsars, in particular spectral formation and cyclotron lines. Many aspects of accreting X-ray pulsars have not been discussed in this review, such as pulse profile formation. Modeling of pulse profiles has been performed, for instance, by \citet{wang81}, \citet{meszaros85}, and \cite{leahy91}. An alternative pulse profile decomposition method has been proposed by \cite{kraus95}, which allows one to infer geometrical parameters of the neutron star and to reconstruct the emission beam pattern, which can then be compared to model predictions (see \cite{kraus03} and e.g. \citealt{sasaki10}). Also not discussed in this review are the relativistically broadened lines that have been observed in several neutron star X-ray binaries. These can be used to constrain neutron star parameters and the equation of state; see, e.g., \cite{cacket08} but also \cite{done2010}. Forty years after the discovery of the first accreting neutron star, the continuum formation is now better understood, but many questions remain open. There are now 16 neutron stars with direct magnetic field determinations: enough data for both individual studies and study of the sample as a class. The behavior of the cyclotron line with X-ray luminosity allows deeper probing of accretion column theory. Good numerical models for cyclotron line formation are available, and the behavior of the lines is roughly in agreement with the predictions of Monte Carlo computations. In the future, \textsl{MAXI} \citep{matsuoka09} and Swift/BAT \citep{barthelemy05} will continue to monitor the X-ray sky and discover neutron star outbursts. \textsl{ASTROSAT} \citep{OBrien2011} will allow routine measurements of broadband spectral shape and magnetic fields, \textsl{NuStar} \citep{harrison05} will start to resolve the shape of cyclotron lines, while \textsl{LOFT} \citep{feroci10} and \textsl{Athena} will allow studies at the characteristic variability timescales. With the current and future missions, we have reached a very exciting era in which high quality data are available together with physical models that will allow a deeper understanding of the extreme physics involved in the accretion of matter onto a neutron star.
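To illustrate how a cyclotron line yields a direct field measurement, a minimal sketch (ours) of the standard ``12-B-12'' relation, $E_{\rm cyc}\simeq 11.6\,{\rm keV}\times B_{12}/(1+z)$, where $B_{12}=B/10^{12}$ G; the line energy and gravitational redshift below are assumed, illustrative values.
\begin{verbatim}
E_cyc  = 38.0    # keV: hypothetical fundamental cyclotron line energy
z_grav = 0.3     # assumed canonical neutron-star gravitational redshift
B12 = E_cyc * (1.0 + z_grav) / 11.6
print(f"B ~ {B12:.1f} x 10^12 G")    # ~4.3e12 G for these inputs
\end{verbatim}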
| 12
| 6
|
1206.3124
|
1206
|
1206.5794.txt
|
We statistically analyse Johnson UBVR observations of \object{V1285 Aql} obtained over three observing seasons and discuss both the activity level and the behaviour of the star in light of the results. We also discuss the out-of-flare variation due to rotational modulation. A total of 83 flares were detected in the U band observations of the 2006 season. First, based on statistical analyses using the independent-samples t-test, the flares were divided into two classes, fast and slow. According to the results of the test, there is a difference of about 73 s between the flare-equivalent durations of slow and fast flares; this difference is likely the one predicted by theoretical models. Second, the distribution of the flare-equivalent durations versus the flare total durations was modelled using the One Phase Exponential Association function. From the model, parameters such as the $Plateau$ and $Half$-$Life$ values, the mean flare-equivalent duration, and the maximum flare rise and total duration times are derived. The $Plateau$ value, which is an indicator of the saturation level of white-light flares, was derived as 2.421$\pm$0.058 s in this model, while the $Half$-$Life$ is computed as 201 s. The analyses showed that the observed maximum flare total duration is 4641 s, while the observed maximum flare rise time is 1817 s. According to these results, although the computed energies of the flares occurring on the surface of \object{V1285 Aql} are generally lower than those of other stars, the length of its flaring loop can be greater than those of more active stars. Moreover, the out-of-flare variation was analysed using three methods of time series analysis, and a sinusoidal-like variation with a period of $3^{d}.1265$, due to rotational modulation, was found for the first time in the literature. Considering the variations of the V-R colour, these variations must be caused by dark spot(s) on the surface of the star. In addition, using the ephemeris obtained from the time series analyses, the distribution of the flares was examined. The phase of the maximum mean flare occurrence rate and the phase of the rotational modulation were compared to investigate whether there is any longitudinal relation between stellar flares and spots. The analyses show a tendency for a longitudinal relation between stellar flares and spot(s). Finally, we tested whether slow flares are fast flares occurring on the far side of the star relative to the observer, as posited in the hypothesis developed by \citet{Gur86}. The flare occurrence rates reveal that both slow and fast flares can occur at any rotational phase.
|
Flares and flare processes observed on the surfaces of UV Ceti type stars are not yet fully understood, although they are heavily studied subjects of astrophysics \citep{Ben10}. In this study, we obtained a large data set in UBVR bands from observations of \object{V1285 Aql}. These extensive photometric data, well suited to a statistical analysis of flare properties, yield some remarkable results. The observed star, \object{V1285 Aql}, is classified as a UV Ceti type star of spectral type dM3e in the SIMBAD database. According to \citet{Vee74}, the star seems to belong to the young disk population of the Galaxy and is classified as a young flare star. The flare activity of \object{V1285 Aql} was discovered by \citet{Sha70}. Apart from flare activity, \citet{And88} showed that \object{V1285 Aql} exhibits sinusoidal-like variations out-of-flare with a period of 30 s. However, it was later found that the star exhibits the same variations with periods of 1.2 and 1.4 minutes \citep{And89}. In fact, whether \object{V1285 Aql} exhibits any rotational modulation at all is a matter of debate. Moreover, the equatorial rotation period has been found to be $2^{d}.9$, another debated issue for \object{V1285 Aql} \citep{Doy87, Ale97, Mes01}. \object{V1285 Aql} was observed in U band for flare patrol in 2006, and 83 white-light flares were detected. Considering the studies of \citet{Har69, Osa68, Mof74, Gur88} and following the method described by \citet{Dal10}, we first analysed the large U band flare data set in order to classify the flares. The classification of the flare light variations is important for modelling the event \citep{Gur88, Ger05}. In the literature, white-light flare events observed on the surfaces of UV Ceti type stars are usually classified into two types, slow and fast flares \citep{Har69, Osa68}. On the other hand, both \citet{Osk69} and \citet{Mof74} classified flares into more than two types. According to Kunkel, the observed flare light variations should be a combination of slow and fast flares \citep{Ger05}. Finally, \citet{Dal10} developed a rule based on the ratio of flare decay time to flare rise time. According to \citet{Dal10}, if the decay time of a flare is at least 3.5 times longer than its rise time, the flare is a fast flare; otherwise, it is a slow flare. They demonstrated that the value of 3.5 is a boundary limit between the two types of flare. In the second step, we analysed the flare data set to find the general properties of the flare events occurring on the surface of \object{V1285 Aql}, following the method described by \citet{Dal11a}. The energy limit and the timescales of the flare events occurring on a star are as important as the types of these events. In the literature, \citet{Gur88} described two processes, thermal and non-thermal, and noted that there must be a large energy difference between the corresponding flare types. Moreover, \citet{Ger72, Lac76, Wal81, Ger83, Pet84} and \citet{Mav86} studied the distributions of flare energy spectra of UV Ceti type stars. There are significant differences between the energy levels of stars of different ages. Flare activity seen on the surfaces of dMe stars is generally modelled on the processes of solar flare events, which is why the magnetic reconnection process is accepted as the source of the energy in these events \citep{Ger05, Hud97}.
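The decay-to-rise classification rule above reduces to a one-line test; the following minimal sketch is ours, with hypothetical flare timings, and the default of 3.5 follows \citet{Dal10}.
\begin{verbatim}
def classify_flare(rise_s, decay_s, ratio_limit=3.5):
    """Fast/slow white-light flare classification from the decay-to-rise
    time ratio; this study finds a limit of ~2.0 for V1285 Aql."""
    return "fast" if decay_s / rise_s >= ratio_limit else "slow"

print(classify_flare(rise_s=120.0, decay_s=600.0))    # ratio 5.0  -> fast
print(classify_flare(rise_s=900.0, decay_s=1400.0))   # ratio ~1.6 -> slow
\end{verbatim}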
According to both models and observations, some parameters of magnetic activity can reach saturation \citep{Ger05, Sku86, Vil83, Vil86, Doy96a, Doy96b}. Recently, \citet{Dal11a} examined the distributions of flare-equivalent durations versus flare total durations. In their analyses, the distributions of flare-equivalent durations were modelled by the One Phase Exponential Association function (hereafter the OPEA). In the models, it is seen that flare-equivalent durations cannot exceed a specific value, no matter how long the flare total duration is. According to \citet{Dal11a}, this level, the $Plateau$ parameter, is in some respects an indicator of the saturation level of the flare process occurring on the surfaces of the program stars. In fact, white-light flares are detected in some large active regions, such as the compact and two-ribbon flares occurring on the surface of the Sun \citep{Rod90, Ben10}. It is therefore plausible that the energies, or the flare-equivalent durations, of white-light flares can reach saturation. Moreover, it is well known that some UV Ceti stars exhibit sinusoidal-like variations outside of flare activity; stars such as EV Lac and V1005 Ori are well-known examples \citep{Dal11b}. In this respect, apart from the flare patrol, \object{V1285 Aql} was observed in BVR bands from 2006 to 2008 in order to examine whether there is any sinusoidal-like variation due to rotational modulation. The out-of-flare variation was analysed using three different methods of time series analysis, and indeed a sinusoidal-like variation due to rotational modulation was found.
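The OPEA function has the form $y = y_0 + (Plateau - y_0)(1-e^{-kx})$, with $Half$-$Life = \ln 2/k$. The following fitting sketch is ours, using synthetic data as a stand-in for the real 83-flare catalogue, with parameters seeded from the results reported in the following sections.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def opea(x, y0, plateau, k):
    """One Phase Exponential Association: rises from y0 toward plateau."""
    return y0 + (plateau - y0) * (1.0 - np.exp(-k * x))

# Synthetic stand-in for the 83-flare catalogue: flare total durations (s)
# versus log10 flare-equivalent durations, scattered about the reported model.
rng = np.random.default_rng(0)
total_dur = rng.uniform(50.0, 4641.0, 83)
k_true = np.log(2.0) / 201.1                      # Half-Life = 201.1 s
log_eq = opea(total_dur, 0.6, 2.421, k_true) + rng.normal(0.0, 0.15, 83)

popt, _ = curve_fit(opea, total_dur, log_eq, p0=[0.5, 2.0, 1.0e-3])
print(f"Plateau = {popt[1]:.3f}, Half-Life = {np.log(2.0)/popt[2]:.1f} s")
\end{verbatim}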
|
\subsection{Flare Activity and Flare Types} In this study, we detected 83 flares in the U band observations of \object{V1285 Aql}. Samples of fast and slow flares with equal rise times were selected as suitable sets for analysis with the t-test. The results of the statistical t-test analyses show that there are distinctive differences between the two data sets. The slope of the linear fit is 0.932 for slow flares, which are low energy flares, and 1.150 for fast flares, which are high energy flares. These values are close to each other, demonstrating that the flare-equivalent durations increase with the flare rise times in similar ways for both groups. When the averages of the equivalent durations for the two types of flares were computed in the logarithmic scale, the average equivalent duration was found to be 1.479 for slow flares and 2.205 for fast flares. The difference of 0.536 between these values in the logarithmic scale corresponds to a 73.384 s difference between the equivalent durations. As can be seen from Equation (2), this difference between the average equivalent durations translates directly into the energies, so the same difference separates the energies of these two types of flares. This difference must be the one mentioned by \citet{Gur88}. On the other hand, according to \citet{Dal10}, this difference between the two flare types is about 157 s. The value found in this study is about half of that found by \citet{Dal10}, but there are several reasons for this difference. The value given by \citet{Dal10} is an average derived from five flare stars. Moreover, EV Lac and EQ Peg are among those five stars, and these two could bias the values, because they are seen to be dramatically different from the other stars, as shown in Figures 5 and 6 of \citet{Dal11a}. There must be some differences in the flare processes occurring on the surfaces of these stars. Comparing the $y$-intercept values of the linear fits, there is a difference of 0.100 in the logarithmic scale, while there is a difference of 0.536 between the general averages. Also, considering Figure 5, the equivalent durations of fast flares can increase more than those of slow flares toward long rise times. Some other effects should be involved in the fast flare process for long rise times; these effects can make fast flares seem more powerful than they actually are. There is another difference between these two types in both the lengths of their rise times and their amplitudes. The rise times can reach 1817 s for slow flares, but are not longer than 510 s for fast flares. In addition, when the flare amplitudes are examined for both types of flares, the opposite trend is seen. While the amplitudes of slow flares reach at most $0^{m}.480$, the amplitudes of fast flares can exceed $1^{m}.430$. Finally, computing the ratios of flare decay times to flare rise times for the two types of flares, the ratios do not exceed a specific value for any of the slow flares, while they are always above this value for the fast flares. This means that the type of an observed flare can be determined by considering this ratio. The limit value of this ratio is about 2.0 for the flares detected from \object{V1285 Aql}.
However, \citet{Dal10} demonstrated that the limit value of the ratio of flare decay time to flare rise time is 3.5. As with the equivalent durations, the limit values of the decay-to-rise time ratios found by \citet{Dal10} and in this study differ, and this must again be for the same reasons. Adopting this limit value of the ratio to separate the flare types, fast flares make up 58.5$\%$ of the 83 flares observed in this study, while slow flares make up 41.5$\%$. In other words, roughly one of every two flares is a fast flare and the other a slow flare. This result diverges from what \citet{Gur88} stated: according to \citet{Gur88}, slow flares with low energies and low amplitudes make up 95$\%$ of all flares, and the remainder are fast flares. It must be noted that, comparing the correlation coefficients of the linear fits, the correlation coefficient of the slow flares is considerably higher than that of the fast flares. The same result was found by \citet{Dal10}. The difference is due to the equivalent durations of fast flares taking values over a wide range. Considering that non-thermal processes dominate in fast flare events, \citet{Dal10} attempted to explain this with magnetic reconnection processes. \subsection{The Saturation Level in the Detected White-Light Flares} The distributions of flare-equivalent durations versus flare total durations were modelled by the OPEA function expressed by Equation (5) for the 83 flares detected in the U band observations of \object{V1285 Aql}. The regression calculations demonstrated that the OPEA function is the best model for these distributions. The derived model shows that flare-equivalent durations increase with flare total duration up to a specific total duration value, beyond which the flare-equivalent durations are constant no matter how long the flare total duration is. According to the OPEA model of the flares detected from \object{V1285 Aql}, the observed flare-equivalent durations start to reach their maximum value at a flare total duration of 402.2 s (note that the $Half$-$Life$ value is 201.1 s). The maximum value of the flare-equivalent durations is 2.421 s in the logarithmic scale for the flares detected from \object{V1285 Aql}. This means that the flare processes occurring on the surface of \object{V1285 Aql} do not generate flares more powerful than this specific value, which can be defined as a saturation level for the white-light flares occurring on the surface of \object{V1285 Aql}. White-light flares occur in the regions where the compact and two-ribbon flare events are seen \citep{Rod90, Ben10}. In the analyses, we used data obtained with the same method and the same optical system. In addition, we used the flare-equivalent durations instead of the flare energies. In fact, the derived $Plateau$ values depend only on the power of the white-light flares. Considering the $Plateau$ values, the flare-equivalent durations cannot be higher than a particular value no matter how long the flare total duration is. Instead of the flare duration, some other parameters, such as the magnetic field flux and/or the particle density in the volumes of the flare processes, must be more decisive in determining the power of the flares.
Considering both thermal and non-thermal flare events, both of these parameters can play an efficient role. However, both \citet{Doy96a} and \citet{Doy96b} suggested that the saturation in active stars does not have to be related to the filling factor of magnetic structures on the stellar surfaces or to the dynamo mechanism below the surface. It can be related to radiative losses in the chromosphere, where the temperature and density increase in the case of fast rotation. This phenomenon can occur in the chromosphere due to the flare process instead of fast rotation, and this would cause the $Plateau$ phase in the distributions of flare-equivalent duration versus flare total duration. On the other hand, the $Plateau$ phase cannot be due to radiative losses in the chromosphere with increasing temperature and density, because \citet{Gri83} demonstrated the effects of such radiative losses on the white-light photometry of flares. According to \citet{Gri83}, the negative hydrogen ion (H$^{-}$) opacity in the chromosphere causes the radiative losses, and these are seen as a pre-flare dip in the light curves of white-light flares. Considering the results of \citet{Dal11a}, however, the $Plateau$ values vary from one star to the next. This indicates that the parameters, or their efficacies, that set the $Plateau$ level change from star to star; \object{V1285 Aql} falls among its analogues in this respect. According to the Standard Magnetic Reconnection Model developed by \citet{Pet64}, several important parameters give shape to flare events, such as the Alfv\'{e}n velocity ($\nu_{A}$), $B$, the emissivity of the plasma ($R$) and, most importantly, the electron density of the plasma ($n_{e}$) \citep{VanB88, VanA88}. All these parameters are related to both the heating and the cooling processes in a flare event. \citet{VanA88, VanB88} defined the radiative loss timescale ($\tau_{d}$) as $E_{th}/R$, where $E_{th}$ is the total thermal energy and $R$ is the emissivity of the plasma. $E_{th}$ depends on the magnetic energy, defined as $B^{2}/8\pi$, and $R$ depends on the electron density ($n_{e}$) of the plasma. $\tau_{d}$ is firmly correlated with $B$ and $n_{e}$, while the rise timescale ($\tau_{r}$) is longer for larger loop lengths ($\ell$) and smaller $B$ values. Consequently, both the shape and the power of a flare event depend mainly on two parameters, $n_{e}$ and $B$. In addition, the maximum flare duration obtained for \object{V1285 Aql} is 4641 s. This duration is comparable to those found by \citet{Dal11a}: the observed maximum duration is 2940 s for \object{EV Lac} and 3180 s for \object{EQ Peg}. The flares of both stars are dramatically shorter than those of \object{V1285 Aql}, whose maximum flare duration is almost 1.5 times theirs. This reveals some clues about the flaring loop geometry on these stars \citep{Ree02, Ima03, Fav05, Pan08}. \subsection{Rotational Modulation and the Flare Distributions versus Rotational Phase} Apart from U band, \object{V1285 Aql} was observed in BVR bands. Using the DFT method \citep{Sca82}, the data sets of the BVR bands were analysed separately and together for each observing season. The photometric period of the sinusoidal-like variation due to rotational modulation out-of-flare was found to be $3^{d}.1269$ in B band, $3^{d}.1265$ in V band, and $3^{d}.1268$ in R band. All the results obtained from the DFT method were tested with the PDM and CLEANest methods.
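A minimal period-search sketch in the spirit of this analysis (ours): synthetic, unevenly sampled data stand in for the observed V band light curve, and a Lomb-Scargle periodogram replaces the exact DFT/PDM/CLEANest chain used here.
\begin{verbatim}
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled V-band light curve standing in for the data.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 90.0, 200))              # days
mag = 0.02 * np.sin(2.0*np.pi * t / 3.1265) + rng.normal(0.0, 0.005, 200)

periods = np.linspace(2.0, 5.0, 4000)                 # trial periods, days
power = lombscargle(t, mag - mag.mean(), 2.0*np.pi / periods)
print(periods[np.argmax(power)])                      # ~3.13 d
\end{verbatim}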
Examining the light variations, the amplitudes are somewhat low, but all of them are above the $3\sigma$ level. There are also variations in the colour curves. Considering the B-V index of \object{V1285 Aql}, the sinusoidal-like variation due to rotational modulation out-of-flare must be due to a heterogeneous temperature distribution on the surface of \object{V1285 Aql}; there must be dark stellar spots on the surface. The variation of the V-R colour also supports this interpretation. However, we do not see any variation above the $3\sigma$ level in the B-V colour. In this study, the B-V colour index was found to be $1^{m}.469$, whereas it was given as $1^{m}.53$ by \citet{Pet91} and as $1^{m}.75$ by \citet{Mes01}. Thus, three different B-V indexes have been given for \object{V1285 Aql}. This could be partly due to the methods and parameters used to derive the B-V index: if the methods and/or parameters differ slightly, the obtained B-V can differ slightly. On the other hand, the main reason for these differences must be the magnetic activity seen on the surface of the star. The level of the flare activity observed on the star is very high. Although the same activity level is not seen in the rotational modulation light curves, the whole surface of the star could be covered by dark stellar spots. If this is the case, the level of the magnetic activity is rather higher than observed in the out-of-flare light curves, which could cause the observed B-V colour indexes to vary from one study to the next. Apart from the B-V colour index, the photometric period found here is also a bit different from the period found by \citet{Doy87}: about $3^{d}.127$ in this study versus $2^{d}.19$. This difference must be due to variations in the locations of the spotted area(s), which could cause the photometric period to change. Using the models shown with dashed lines in Figure 8, derived by analysing each V band data set, the minimum, maximum and mean levels and the amplitudes of the light curves were computed. Examining these parameters, listed in Table 5, the brightness levels vary slowly from season to season. This must be because of the gradual evolution of the spot structures on the surface of the star. There are many studies on whether the flares of UV Ceti type stars, which exhibit the BY Dra syndrome, occur at the same longitudes as stellar spots \citep{Dal11b}. A common longitude for flares and spots is expected for these stars, because solar flares mostly occur in active regions, where spots are located on the Sun \citep{Ben10}. In view of the stellar-solar connection, a result of the $Ca$ $II$ $H\&K$ Project of Mount Wilson Observatory \citep{Wil78, Bal95}, if the areas of flares and spots are related on the Sun, the same might be expected for other stars. In fact, \citet{Mon96} found some evidence demonstrating this relation. Besides, \citet{Let97} found that both the rotational modulation and the phase distribution of flare occurrence rates varied in the same way in the observations of the year 1970. On the other hand, no clear relation between stellar flares and spots was found by \citet{Bop74, Pet83} and \citet{Dal11b}.
However, \citet{Pet83} did not draw firm conclusions because of a non-uniqueness problem. In this study, using the method described in Section 2.5, we derived the distribution of flares versus rotational phase for all the flares detected in the 2006 observing season. The derived distribution is shown in Figure 9. According to the distribution, the phase of the maximum flare occurrence rate (MFOR) lies between $0^{P}.40$ and $0^{P}.50$: almost 6 flares per hour were detected in this phase interval, while at most 2 flares per hour were detected in all other phase intervals. This is an important result, because this phase interval is where the minimum of the sinusoidal-like variation due to rotational modulation out-of-flare is seen in the V band light curves of the 2006 observing season; the phase of that minimum is $0^{P}.47$. Based on the inverse Compton effect, \citet{Gur86} developed the Fast Electron Hypothesis. According to this hypothesis, UV Ceti type stars should generate only fast flares on their surfaces; however, the flare light variations can appear as slow flares depending on the direction of the observer \citep{Gur86}. The Fast Electron Hypothesis therefore predicts that the fast and the slow flares should group at phases separated by $0^{P}.50$ from each other. In this study, using the method described by \citet{Dal11b}, 48 fast and 35 slow flares were identified and their phases were computed. The analyses show that the slow flares occur more frequently around $0^{P}.30$, while the fast flares occur more frequently around $0^{P}.45$. However, the difference between the phases of the two flare types is $0^{P}.15$ instead of $0^{P}.50$, and, apart from this unexpected value, both the fast and the slow flares occur in almost all phase intervals. In conclusion, various parameters can be computed from flares observed with photoelectric photometry, and if the relations between these parameters are analysed with suitable methods, flare types can be determined. In this study, we analysed the distributions of equivalent durations versus flare rise times by using a t-test (a schematic of this comparison is sketched below). Moreover, using the ratios of flare decay times to flare rise times, flares can be classified, and there are considerable differences between the two types of flares. The analyses also demonstrated that the detected flares have a critical energy level, which no flare greatly exceeds. In addition, the analyses demonstrated that \object{V1285 Aql} exhibits stellar spot activity apart from flares. Comparing the phase distribution of the flares with the phase of the sinusoidal-like variation demonstrated that the flares have a tendency to occur at the same longitudes as the stellar spots. On the other hand, according to the statistical analyses, the slow and fast flares are not separated from each other by any definite rule in rotational phase. In this respect, more data are required, extending the B-V range of the observed stars and drawing on flare patrols of many different stars spanning many years, in order to obtain more reliable results.
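A schematic of that independent-samples comparison (ours): synthetic log-scale equivalent durations are centred on the reported means, and the scatter value is an assumption.
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
slow = rng.normal(1.479, 0.30, 35)    # synthetic log10 equivalent durations
fast = rng.normal(2.205, 0.30, 48)    # centred on the reported means
t_stat, p_val = ttest_ind(fast, slow, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.2e}")   # a highly significant split
\end{verbatim}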
| 12
| 6
|
1206.5794
|
1206
|
1206.7106_arXiv.txt
|
We present an approach to spectropolarimetry which requires neither moving parts nor time-dependent modulation, and which offers the prospect of achieving high sensitivity. The technique applies equally well, in principle, in the optical, UV or IR. The concept, which is one of those generically known as channeled polarimetry, is to encode the polarization information at each wavelength along the spatial dimension of a 2D data array using static, robust optical components. A single two-dimensional data frame contains the full polarization information and can be configured to measure either two or all of the Stokes polarization parameters. By acquiring full polarimetric information in a single observation, we simplify polarimetry of transient sources and in situations where the instrument and target are in relative motion. The robustness and simplicity of the approach, coupled with its potential for high sensitivity and applicability over a wide wavelength range, is likely to prove useful for applications in challenging environments such as space.
|
The polarization of light provides a versatile suite of remote sensing diagnostics. In astronomy, polarization is used to study the Sun and Solar System, stars, dust, supernova remnants, and high-energy extragalactic astrophysics\cite{SnikKeller2012}. The astrophysical mechanisms by which polarized light is produced range from scattering phenomena to the interaction between high-energy charged particles and magnetized plasmas. Beyond astronomy, polarization is used in remote sensing, medical diagnostics, defense, biophysics, microscopy, and fundamental experimental physics, e.g. \cite{Goldstein2011}. Accurate, precision polarimetric methods usually require rapidly modulating, often fragile, parts and are inherently monochromatic, e.g. photoelastic modulators (PEMs), ferroelectric liquid crystals or liquid crystal variable retarders (LCVRs) in tandem with phase locked photomultipliers, or synchronized charge shuffling on a charge-coupled device (CCD) detector for area detection\cite{SnikKeller2012}. Lower accuracy techniques typically require sequential measurements of the target using rotating waveplates and polarization analyzers. Here we describe a method to encode polarimetric information over a wide spectrum in a single data frame, using static optics. This approach alleviates errors introduced by the need to match sequentially acquired data, and eliminates the need for fragile or rapid modulation, yet may be able to accomplish high-accuracy, high-precision measurements. The methods, of course, have their own implicit sensitivities and concerns, as we discuss below. A particular interest of the authors, which serves as a useful illustrative example, is the use of precision circular polarization spectroscopy as a remote sensing biosignature and a potentially valuable tool in searches for biological processes elsewhere in the Universe. The circular polarization spectrum is sensitive to the presence of molecular homochirality, a strong biosignature, through the combined optical activity and homochirality of biological molecules\cite{Sparks2009pnas, Sparks2009jqsrt}. Biologically-induced degrees of circular polarization have been found in the range $10^{-2}$ to $10^{-4}$ for a variety of photosynthetic samples, with an important correlation between the intensity spectrum and polarization spectrum\cite{Sparks2009pnas}. Hence, precision full Stokes polarimetry and wide spectral coverage are required. Furthermore, the target scene and instrumentation may be in rapid relative motion, compounding the difficulties of acquiring the data using traditional polarimetric techniques. A large number of photons must be accumulated in a short period of time. The techniques presented in this paper may provide a means to make this type of polarization measurement, in addition to providing a robust method for acquiring less precise spectropolarimetry in a straightforward fashion. Furthermore, the approach is applicable across a wide wavelength range: besides the visible, it can work equally well in the ultraviolet, where for example chiral electronic signatures are generally strongest, and in the infrared, where polarimetry goes hand in hand with probes into the geometry and physical characteristics of dusty regions of the universe. A variety of similar concepts are available under the generic title of ``channeled polarimetry''\cite{Goldstein2011}. These typically fall into two classes: channeled imaging polarimetry (CIP) and channeled spectropolarimetry (CS), following the terminology of \cite{Goldstein2011}.
To simplify, the CS methods typically encode the polarization information as an amplitude modulation directly on the spectrum, derived from a polarization optic whose retardance is a function of wavelength. As an example, the spectral modulation principle for linear spectropolarimetry\cite{Snik2009} can reach a precision of at least $2\times 10^{-4}$\cite{vanHarten2011}. The CIP methods, by contrast, use a polarization optic whose retardance is spatially varying, so that the polarization information is encoded as a set of spatial fringes onto an image\cite{Oka2003}. These two approaches, as well as a number of technical issues that arise in each case, are described in some detail in \cite{Goldstein2011}. Previous authors have used multiorder retarders, birefringent wedges, pairs of birefringent wedges, and Savart plates individually or in combination for these two applications \cite{Snik2009, Oka2003, Serkowski1972, Nordseick1974, Oka1999, Oka2006, Mujat2004, Wakayama2008, Howard2008, Kudenov2008, Snik2010}. Typically, the polarization information is extracted from the data using Fourier methods. Another approach to single-shot imaging polarimetry and spectropolarimetry is the wedged double Wollaston device, which yields multiple images on a detector with polarization axes at different angles and allows retrieval of the Stokes parameters through combinations of the images \cite{Geyer1996,Oliva1997,Pernechele2003}. The approach explored in the current paper is to disperse the spectral and polarimetric information along two orthogonal directions, a ``spectral'' dimension for the spectroscopy and a ``spatial'' dimension for the polarimetry. The amplitude modulation of the encoding of the polarization information is independent of the choice of spectral resolution. The two aspects of the measurement, the spectroscopy and the polarimetry, may be optimized independently. The complete spectropolarimetric information is encoded on a single data frame, and may be derived using straightforward analytical techniques. Poisson photon counting statistics play a critical role in astronomical polarimetry. To measure a polarization degree of $10^{-n}$, it is necessary to collect (at least) $10^{2n}$ photons. For example, to measure $p\approx 10^{-4}$, it is necessary to accumulate $10^8$ photons. A typical astronomical CCD has a well-depth of $\sim 10^5$ electrons per pixel, requiring $10^3$ pixel readouts. If the data are needed in, say, 1~s in one pixel, this multi-readout approach becomes prohibitive. A solution is to spread the illumination across many pixels, as is done for high signal-to-noise-ratio photometry with the Hubble Space Telescope \cite{Brown2001}. Making a virtue of necessity, if we use optics that spread the light of a spectrum perpendicular to the spectrum, then we can exploit the width of the broadened spectrum to encode the polarimetric data. Sections 2 and 4 discuss a variety of configurations that accomplish this goal. Sec.~2 starts with linear polarization (equivalently, any two of the Stokes polarization parameters), followed by a discussion of our analysis methods in Sec.~3. Configurations that enable full Stokes spectropolarimetry are presented in Sec.~4. Sec.~5 describes practical implementation, sensitivities, and an approach based on calibration. Sec.~6 provides an example application. Finally, we draw some conclusions in Sec.~7. The different embodiments of the underlying approach described in Secs.~2 and 4 highlight different aspects of the method.
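As a concrete restatement of the photon-budget arithmetic above, the following sketch (with the target precision and well depth as free parameters) computes the required number of photons and pixel wells: \begin{verbatim}
import math

def photon_budget(p_target, well_depth=1.0e5):
    """Photon-limited polarimetry: sigma_p ~ 1/sqrt(N), so a target
    precision p_target requires N >= 1/p_target**2 photons, spread
    over enough pixel wells of the given depth."""
    n_photons = 1.0 / p_target**2
    n_pixels = math.ceil(n_photons / well_depth)
    return n_photons, n_pixels

for p in (1.0e-2, 1.0e-3, 1.0e-4):
    n_gamma, n_pix = photon_budget(p)
    print(f"p = {p:.0e}: {n_gamma:.0e} photons over {n_pix} pixel wells")
\end{verbatim}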
In the end, we anticipate that the most useful realizations of the concept will be the double wedge for linear polarimetry, Sec.~2B, and the double-double wedge for full Stokes spectropolarimetry, Sec.~4.B.3. The other subsections introduce new ideas incrementally, while these two sections capture the final products for the two types of polarimetry. We use the conventional Stokes vector formalism to quantify the polarization of light with $S\equiv (I,Q,U,V)$, where $I$ is the total intensity; $Q, U$ describe the linear polarization and $V$ the circular polarization. The normalized Stokes parameters $(q,u,v)=(Q,U,V)/I$ represent the fractional polarization state. The degree of polarization is given by $p=\sqrt{q^2+u^2+v^2}$ and the direction of linear polarization by $\psi = {1\over 2}\tan^{-1}(u/q)$.
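These definitions translate directly into code; a minimal sketch (using a quadrant-safe arctangent for $\psi$): \begin{verbatim}
import numpy as np

def degree_and_angle(q, u, v):
    """Degree of polarization p and linear polarization direction psi
    from the normalized Stokes parameters defined above."""
    p = np.sqrt(q**2 + u**2 + v**2)
    psi = 0.5 * np.arctan2(u, q)   # quadrant-safe form of arctan(u/q)/2
    return p, psi

print(degree_and_angle(0.01, -0.005, 0.002))
\end{verbatim}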
|
We have described an approach to polarization measurement that uses no moving parts and that relies on simple, robust optical components. Either linear polarimetry or full Stokes polarimetry can be carried out. The method depends on the use of an area detector, such as a CCD, with the light spread across a region of the detector. If the system can be made photon-limited in sensitivity, this spreading of the light improves the polarimetry, since typical CCD well-depths are only of order $10^5$ electrons per pixel. With modest spreading of the light, a single photon-limited frame should be able to reach a precision of order $10^{-4}$ in polarization. The influence of departures from ideal circumstances still remains largely to be explored. Hence, we do not know at this stage whether this approach will be able to achieve extremely high accuracy. However, the robustness and simplicity of the components involved offers cause for optimism. Other approaches, such as the spectral modulation method for linear polarimetry \cite{Snik2009}, offer alternative methods for static polarimetry in hostile environments. Compared to that approach, the methods presented here yield a cleaner separation of the spectroscopy and polarimetry, at the expense of additional detector surface area requirements. The methods may be applied in the UV or IR as well as in the visible wavelength range. Since the entire polarization information is contained within a single data frame, the method is well-suited to measuring the polarization of transient sources and scenes where the polarimeter and target are in rapid relative motion. Since the optics are robust, simple and require no moving parts, we anticipate that these methods will prove useful for application in space. \vspace{0.5in} \noindent {\bf Acknowledgments:} We acknowledge support from the STScI JWST Director's Discretionary Research Fund JDF grant number D0101.90152. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Patent pending, all rights reserved. \vspace{1.0in} \noindent {\bf Appendix: General Linear Least Squares Methods} \vspace{12pt} We follow Bevington\cite{Bevington1968,Bevington2002} and let the general problem to be solved be \begin{equation*} y(x_i)=ai_c(x_i)+bq_c(x_i)+cu_c(x_i)+dv_c(x_i), \end{equation*} where measurements $y_i$, either the intensity $I$ in the single beam case or $(I_{\parallel}-I_{\perp})/(I_{\parallel}+I_{\perp})$ in the dual beam case, are made at points $x_i$ and $y_i=y(x_i)+\epsilon_i$, with $y$ the true underlying value and $\epsilon_i$ its error (assumed random, independent) at location $x_i$. The terms $i_c$, $q_c$, $u_c$, and $v_c$ are trigonometric functions that encode the Stokes parameters $I$, $Q$, $U$, and $V$ (or $q$, $u$, and $v$), and their coefficients $a$, $b$, $c$, and $d$ are the Stokes parameters to be derived. The mapping and specific functions depend on the chosen configuration, but all configurations discussed here can be expressed in this way. Sometimes the functions are identically zero, implying no sensitivity to that parameter.
The $\chi^2$ function is then \begin{equation*} \chi^2=\sum^{N}_{i=1}\epsilon_i^2/\sigma_i^2=\sum^{N}_{i=1}{1\over\sigma_i^2}[y_i-y(x_i)]^2=\sum^{N}_{i=1}{1\over\sigma_i^2}[y_i-ai_c(x_i)-bq_c(x_i)-cu_c(x_i)-dv_c(x_i)]^2, \end{equation*} and to solve, we set the partial derivatives of $\chi^2$ with respect to each of $a$, $b$, $c$, and $d$ equal to zero: \begin{eqnarray*} {\partial\chi^2\over\partial a}&=0=-2\sum {1\over\sigma_i^2} i_c\left(y_i-ai_c-bq_c-cu_c-dv_c\right),\\ {\partial\chi^2\over\partial b}&=0=-2\sum {1\over\sigma_i^2} q_c\left(y_i-ai_c-bq_c-cu_c-dv_c\right),\\ {\partial\chi^2\over\partial c}&=0=-2\sum {1\over\sigma_i^2} u_c\left(y_i-ai_c-bq_c-cu_c-dv_c\right),\\ {\partial\chi^2\over\partial d}&=0=-2\sum {1\over\sigma_i^2} v_c\left(y_i-ai_c-bq_c-cu_c-dv_c\right).\\ \end{eqnarray*} We require the curvature matrix $\mathbf{B}$ and summation vector $s_y$, \begin{equation*} {\bf B}\equiv\left( \begin{array}{cccc} \sum{1\over\sigma_i^2}i_c^2 & \sum{1\over\sigma_i^2}i_cq_c & \sum{1\over\sigma_i^2}i_cu_c & \sum{1\over\sigma_i^2}i_cv_c \\ \sum{1\over\sigma_i^2}i_cq_c & \sum{1\over\sigma_i^2}q_c^2 & \sum{1\over\sigma_i^2}q_cu_c & \sum{1\over\sigma_i^2}q_cv_c \\ \sum{1\over\sigma_i^2}i_cu_c & \sum{1\over\sigma_i^2}q_cu_c & \sum{1\over\sigma_i^2}u_c^2 & \sum{1\over\sigma_i^2}u_cv_c \\ \sum{1\over\sigma_i^2}i_cv_c & \sum{1\over\sigma_i^2}q_cv_c & \sum{1\over\sigma_i^2}u_cv_c & \sum{1\over\sigma_i^2}v_c^2 \\ \end{array} \right), \end{equation*} \begin{eqnarray*} s_y\equiv\left(\sum{1\over\sigma_i^2}i_cy_i, \sum{1\over\sigma_i^2}q_cy_i, \sum{1\over\sigma_i^2}u_cy_i, \sum{1\over\sigma_i^2}v_cy_i\right), \end{eqnarray*} respectively. With this terminology, the least squares equations become \begin{equation*} s_y={\mathbf B}\cdot\left( \begin{array}{c} a\\ b\\ c\\ d\\ \end{array} \right). \end{equation*} We solve for the vector $(a,b,c,d)$, \begin{equation*} {\bf a}=(a,b,c,d)={\bf B}^{-1}\cdot s_y. \end{equation*} Following standard procedures, e.g. \cite{Bevington1968},\cite{Bevington2002}, ignoring covariances, the uncertainties on these parameters are \begin{equation*} \sigma^2_{a_i}=[B^{-1}]_{ii}, \end{equation*} where $a_i$ represents $a$, $b$, $c$, or $d$, and $[B^{-1}]_{ii}$ is the corresponding diagonal term of ${\bf B}^{-1}$. Our application is precision polarimetry, for which it is presumed (\romannumeral1)~the degree of polarization is small, and (\romannumeral2)~light levels are relatively high. Hence, the intensity across the spatial segment is approximately constant and obeys Poisson counting statistics. That is, we assume the uncertainty is the same for each bin, $\sigma_i = \sigma =(N_{\gamma}/nx)^{1/2}$, where $N_{\gamma}$ is the total number of detected photons and $nx$ is the number of bins across which the photons are distributed, i.e., the number of $x$ sampling points. For cases where the trigonometric functions embodied by $i_c$, $q_c$, $u_c$, $v_c$ are orthogonal (we approximate the summations by integrals over integer numbers of periods), ${\bf B}$ is diagonal. Hence its inverse is also diagonal, and the Stokes parameter solutions are independent of one another. We choose as a simple example the single wedge with its fast axis at 45$\degrees$ to the slit direction, which, in turn, defines the direction for Stokes $Q$. In general, we use Mueller matrix algebra to solve for the system. As in Sec. 2.A above, Eq.~(1) gives the expression for the intensity at points $x_i$: $y_i\equiv I(x_i)=0.5(I+Q\cos\phi_i+U\sin\phi_i)$, where $\phi_i=2\pi (x_i/X)$.
In the formalism above, $y_i=ai_c(x_i)+bq_c(x_i)+cu_c(x_i)+dv_c(x_i)$, so $i_c(x_i)=0.5$, $q_c(x_i)=0.5\cos(2\pi x_i/X)$, $u_c(x_i)=0.5\sin(2\pi x_i/X)$ and $v_c(x_i)=0$. In the absence of $V$ the matrix ${\bf B}$ is reduced to the $3\times 3$ matrix \begin{equation*} {\bf B}\equiv{1\over\sigma^2}\left( \begin{array}{ccc} \sum i_c^2 & \sum i_cq_c & \sum i_cu_c \\ \sum i_cq_c & \sum q_c^2 & \sum q_cu_c \\ \sum i_cu_c & \sum q_cu_c & \sum u_c^2 \\ \end{array} \right). \end{equation*} The summations run across $nx$ pixels. Applying the expressions for $i_c$, $q_c$, and $u_c$, we have \begin{equation*} {\bf B}={1\over 4\sigma^2}\left( \begin{array}{ccc} nx & \sum \cos(2\pi x_i/X) & \sum \sin(2\pi x_i/X) \\ \sum \cos(2\pi x_i/X) & \sum \cos^2(2\pi x_i/X) & \sum \cos(2\pi x_i/X)\sin(2\pi x_i/X) \\ \sum \sin(2\pi x_i/X) & \sum \cos(2\pi x_i/X)\sin(2\pi x_i/X) & \sum \sin^2(2\pi x_i/X) \\ \end{array} \right). \end{equation*} We assume the summations cover an integer number of periods and approximate the sums using integrals, using $\int_0^Xf(x)dx\approx \Delta x\sum_1^{nx} f(x_i)$, where $\Delta x = X/nx$ (the width of a bin in $x$). Hence, $\sum f(x_i)\approx {nx\over X}\int f(x)dx$. Thus, it can be shown for this example that \begin{equation*} {\bf B}={nx\over 8\sigma^2}\left( \begin{array}{ccc} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), \end{equation*} and \begin{equation*} {\bf B}^{-1}={4\sigma^2\over nx}\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \\ \end{array} \right). \end{equation*} Now, we can go back to the expression for $s_y$ and solve for the Stokes parameters, noting that $\sum y_i = N_{tot}$, the total number of photons collected, and $s_y = {1\over 2\sigma^2} \left[\sum y_i, \sum \cos(2\pi x_i/X)y_i, \sum \sin(2\pi x_i/X)y_i \right]$, omitting the zero $V$ term. Hence the solutions from $(a,b,c,d)={\bf B}^{-1}\cdot s_y$ are \begin{eqnarray*} I\equiv a &= \left({N_{tot}\over 2\sigma^2}\right)\left({4\sigma^2\over nx}\right)={2N_{tot}\over nx}=2\langle y\rangle,\\ Q\equiv b &= \left({4\over nx}\right)\sum y_i \cos(2\pi x_i/X),\\ U\equiv c &= \left({4\over nx}\right)\sum y_i \sin(2\pi x_i/X). \end{eqnarray*} Similarly, we can derive the uncertainties of the Stokes parameters. The uncertainty $\sigma$ is given by $\sigma^2=N_{tot}/nx$ and is assumed to be constant. Hence, reading directly from the expression for ${\bf B}^{-1}$, we have \begin{eqnarray*} \sigma(I)\equiv\sigma_a &={2\sqrt{N_{tot}}\over nx},\\ \sigma(Q)\equiv\sigma_b &={2\sqrt{2N_{tot}}\over nx},\\ \sigma(U)\equiv\sigma_c &={2\sqrt{2N_{tot}}\over nx}. \end{eqnarray*} The uncertainties in the normalized Stokes parameters $q$ and $u$ are \begin{equation*} \sigma(q)=\sigma(u)=\sqrt{{2\over N_{tot}}}. \end{equation*} Ignoring bias terms, it follows that the uncertainty on the degree of polarization is \begin{equation*} \sigma(p)=\sqrt{{2\over N_{tot}}}. \end{equation*} The expressions for the trigonometric functions $i_c$, $q_c$, $u_c$, $v_c$ depend on the configuration and on whether a dual beam formalism is adopted or not. We derived the expressions for these functions using Mueller matrix algebra for a selection of configurations, as presented in Table~1. In a similar fashion to this example, though with more complex manipulations, we can analytically invert the corresponding curvature matrix to derive both the solution and the uncertainty estimates, taking only the diagonal terms as the uncertainty.
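The closed-form solutions above are easily verified numerically; a minimal sketch, simulating a single-wedge fringe pattern over one period and recovering $(I,Q,U)$ with the sums just derived: \begin{verbatim}
import numpy as np

# Simulate y_i = 0.5*(I + Q cos(phi_i) + U sin(phi_i)) over one full
# period and recover the Stokes parameters with the closed-form
# least-squares solutions derived above.
nx = 1000
phi = 2.0 * np.pi * (np.arange(nx) + 0.5) / nx
I_true, Q_true, U_true = 1.0, 1.0e-2, -5.0e-3
y = 0.5 * (I_true + Q_true * np.cos(phi) + U_true * np.sin(phi))

I_est = 2.0 * y.mean()                        # a = 2 <y>
Q_est = (4.0 / nx) * np.sum(y * np.cos(phi))  # b = (4/nx) sum y_i cos
U_est = (4.0 / nx) * np.sum(y * np.sin(phi))  # c = (4/nx) sum y_i sin
print(I_est, Q_est, U_est)                    # ~ (1.0, 0.01, -0.005)
\end{verbatim}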
There are cases where the off-diagonal terms of ${\bf B}$ are non-zero, as discussed in the text. \begin{table} \caption{Coefficients of Stokes parameters for different wedge configurations} \begin{center} {\tiny \begin{tabular}{llrrrr} \hline Wedges$^{\hbox{a}}$ & Beam & $i_c$ & $q_c$ & $u_c$ & $v_c$ \\ \hline $qw$ & single &0.5 & $0.5\cos\phi$ & $0.5\sin\phi$ & \\ &dual & & $\cos\phi$ & $\sin\phi$ & \\ $qww^\prime$ & single & 0.5 & $0.5\cos 2\phi$ & $0.5\sin 2\phi$ & \\ &dual& & $\cos 2\phi$ & $\sin 2\phi$ & \\ $wW$ & single& 0.5 & $0.5(\cos\phi\cos 2\theta+\sin\phi\sin 2\phi\sin 2\theta)$ & $0.5\cos 2\phi\sin 2\theta$ & $0.5(\cos\phi\sin 2\phi\sin 2\theta-\sin\phi\cos 2\theta)$ \\ & dual & & $\cos\phi\cos 2\theta+\sin\phi\sin 2\phi\sin 2\theta$ & $\cos 2\phi\sin 2\theta$ & $\cos\phi\sin 2\phi\sin 2\theta-\sin\phi\cos 2\theta$ \\ $wW^\prime$ & single & 0.5 & $0.5(\cos\phi\cos 2\theta+\sin\phi\sin(\zeta -2\phi)\sin 2\theta)$ & $0.5\cos(\zeta-2\phi)\sin 2\theta$ & $0.5(\cos\phi\sin(\zeta -2\phi)\sin 2\theta-\sin\phi\cos 2\theta)$ \\ & dual& & $\cos\phi\cos 2\theta+\sin\phi\sin(\zeta -2\phi)\sin 2\theta$ & $\cos(\zeta-2\phi)\sin 2\theta$ & $\cos\phi\sin(\zeta -2\phi)\sin 2\theta-\sin\phi\cos 2\theta$ \\ $ww^\prime WW^\prime$ & single & 0.5 & $0.5(\cos 2\phi\cos 2\theta+\sin 2\phi\sin 4\phi\sin 2\theta)$ & $0.5\cos 4\phi\sin 2\theta$ & $0.5(\cos 2\phi\sin 4\phi\sin 2\theta-\sin 2\phi\cos 2\theta)$ \\ & dual& & $\cos 2\phi\cos 2\theta+\sin 2\phi\sin 4\phi\sin 2\theta$ & $\cos 4\phi\sin 2\theta$ & $\cos 2\phi\sin 4\phi\sin 2\theta-\sin 2\phi\cos 2\theta$ \\ \end{tabular}} \end{center} \footnotesize{ $^a$Notation for wedge configurations: $q$ denotes an optional quarter wave retarder, $w$ denotes thickness gradient $\phi_w= 2\pi x/X$ and $W$ denotes twice the thickness gradient, $\phi_W= 4\pi x/X$. A primed symbol denotes antiparallel wedge direction relative to unprimed.
Wedges $w$ have fast axis at $45\degrees$ to the slit, $w^\prime$ at $-45\degrees$; wedges $W$ have fast axis at $0\degrees$, except in combination $WW^\prime$, when they are at $0\degrees$ and $90\degrees$ respectively.}\normalsize \end{table} \begin{table} \caption{Error estimates for normalized Stokes parameters for different wedge configurations} \begin{center} {\tiny \begin{tabular}{llccc} \hline Wedges&Beam & $\sigma(q)$ & $\sigma(u)$ & $\sigma(v)$ \\ \hline $qw$ & single & $(2/ N_{tot})^{1/2}$ & $(2/ N_{tot})^{1/2}$ & \\ &dual &$(2/ N_{tot})^{1/2}$ & $(2/ N_{tot})^{1/2}$ \\ $qww^\prime$ &single & $(2/ N_{tot})^{1/2}$ &$(2/ N_{tot})^{1/2}$ & \\ &dual &$(2/ N_{tot})^{1/2}$ & $(2/ N_{tot})^{1/2}$ \\ $wW$&single & $2(2/N_{tot})^{1/2}(3+\cos 4\theta +2\sin 4\theta)^{-1/2}$ &$(2/N_{tot})^{1/2}/|\sin 2\theta|$ & $2(2/N_{tot})^{1/2}(3+\cos 4\theta -2\sin 4\theta)^{-1/2}$ \\ &dual& $2(2/N_{tot})^{1/2}(3+\cos 4\theta +2\sin 4\theta)^{-1/2}$ & $(2/N_{tot})^{1/2}/|\sin 2\theta|$& $2(2/N_{tot})^{1/2}(3+\cos 4\theta -2\sin 4\theta)^{-1/2}$ \\ $wW^\prime$&single & $4/N_{tot}^{1/2}\left({3+\cos 4\theta + 2\cos\zeta\sin 4\theta\over 15+12\cos 4\theta + 5\cos 8\theta}\right)^{1/2}$ & $(2/N_{tot})^{1/2}/|\sin 2\theta|$& $4/N_{tot}^{1/2}\left({3+\cos 4\theta - 2\cos\zeta\sin 4\theta\over 15+12\cos 4\theta + 5\cos 8\theta}\right)^{1/2}$\\ &dual & $4/N_{tot}^{1/2}\left({3+\cos 4\theta + 2\cos\zeta\sin 4\theta\over 15+12\cos 4\theta + 5\cos 8\theta}\right)^{1/2}$ &$(2/N_{tot})^{1/2}/|\sin 2\theta|$ & $4/N_{tot}^{1/2}\left({3+\cos 4\theta - 2\cos\zeta\sin 4\theta\over 15+12\cos 4\theta + 5\cos 8\theta}\right)^{1/2}$ \\ $ww^\prime WW^\prime$&single& $2(2/N_{tot})^{1/2}(3+\cos 4\theta +2\sin 4\theta)^{-1/2}$ & $(2/N_{tot})^{1/2}/|\sin 2\theta|$& $2(2/N_{tot})^{1/2}(3+\cos 4\theta -2\sin 4\theta)^{-1/2}$ \\ &dual& $2(2/N_{tot})^{1/2}(3+\cos 4\theta +2\sin 4\theta)^{-1/2}$ & $(2/N_{tot})^{1/2}/|\sin 2\theta|$& $2(2/N_{tot})^{1/2}(3+\cos 4\theta -2\sin 4\theta)^{-1/2}$ \\ \end{tabular}} \end{center} \end{table} \begin{table} \caption{Error estimates for unnormalized Stokes parameters for different wedge configurations} \begin{center} {\tiny \begin{tabular}{llccccccc} \hline Wedges&Beam & $\sigma(I)$ & $\sigma(Q)$ & $\sigma(U)$ & $\sigma(V)$ \\ \hline $qw$ & single & ${2N_{tot}^{1/2}/ nx}$ & $2(2N_{tot})^{1/2}/ nx$ & $2(2N_{tot})^{1/2}/ nx$ & \\ &dual && & & \\ $qww^\prime$ &single & ${2N_{tot}^{1/2}/ nx}$ &$2(2N_{tot})^{1/2}/ nx$ & $2(2N_{tot})^{1/2}/ nx$& \\ &dual& & & & \\ $wW$&single & $2N_{tot}^{1/2}/ nx$ &${4(2N_{tot}/(3+\cos 4\theta +2\sin 4\theta))^{1/2}/ nx}$ &${2(2N_{tot})^{1/2}/(nx|\sin 2\theta|)}$ & ${4(2N_{tot}/(3+\cos 4\theta -2\sin 4\theta))^{1/2}/ nx}$ \\ &dual& & & & \\ $wW^\prime$&single & $2N_{tot}^{1/2}/ nx$& ${8\over nx}\left(N_{tot}{3+\cos 4\theta + 2\cos\zeta\sin 4\theta\over 15+12\cos 4\theta + 5\cos 8\theta}\right)^{1/2}$ &$2(2N_{tot})^{1/2}/(nx|\sin 2\theta|)$ &${8\over nx}\left(N_{tot}{3+\cos 4\theta - 2\cos\zeta\sin 4\theta\over 15+12\cos 4\theta + 5\cos 8\theta}\right)^{1/2}$\\ &dual & & & & & & \\ $ww^\prime WW^\prime$&single& $2N_{tot}^{1/2}/ nx$& ${4(2N_{tot}/(3+\cos 4\theta +2\sin 4\theta))^{1/2}/ nx}$ &${2(2N_{tot})^{1/2}/(nx|\sin 2\theta|)}$& ${4(2N_{tot}/(3+\cos 4\theta -2\sin 4\theta))^{1/2}/ nx}$ \\ &dual& & & & \\ \end{tabular}} \end{center} \end{table} \clearpage
| 12
| 6
|
1206.7106
|
1206
|
1206.2723_arXiv.txt
|
Motivated by the recent no-go result of homogeneous and isotropic solutions in the nonlinear massive gravity, we study fixed points of evolution equations for a Bianchi type--I universe. We find a new attractor solution with non-vanishing anisotropy, on which the physical metric is isotropic but the St\"uckelberg configuration is anisotropic. As a result, at the background level, the solution describes a homogeneous and isotropic universe, while a statistical anisotropy is expected from perturbations, suppressed by smallness of the graviton mass.
| 12
| 6
|
1206.2723
|
||
1206
|
1206.0726_arXiv.txt
|
Observations of distant galaxies play a key role in improving our understanding of the Epoch of Reionization (EoR). The observed Ly$\alpha$ emission line strength -- quantified by its restframe equivalent width (EW) -- provides a valuable diagnostic of stellar populations and dust in galaxies during and after the EoR. In this paper we quantify the effects of star formation stochasticity on the predicted Ly$\alpha$ EW in dwarf galaxies, using the publicly available code SLUG (used to `Stochastically Light Up Galaxies'). We compute the number of hydrogen ionizing photons, as well as the flux in the Far UV, for a set of models with star formation rates (SFR) in the range $10^{-3}$-$1$ \Msun\hs yr$^{-1}$. From these fluxes we compute the luminosity, $L_{\alpha}$, and the EW of the Ly$\alpha$ line. We find that stochasticity alone induces a broad distribution in $L_{\alpha}$ and EW at a fixed SFR, and that the widths of these distributions decrease with increasing SFR. We parameterize the EW probability density function (PDF) as an SFR--dependent double power law. We find that it is possible to have EW as low as $\sim$EW$_{0}/4$ and as high as $\sim 3\times$EW$_{0}$, where EW$_{0}$ denotes the expected EW in the absence of stochasticity. We argue that stochasticity may therefore be important when linking drop-out and narrow-band selected galaxies and when identifying population III galaxies, and that it may help to explain the large EW (EW$\gsim 100-200$ \AA) observed for a fraction of Ly$\alpha$ selected galaxies. Finally, we show that stochasticity can also affect the inferred escape fraction of ionizing photons from galaxies. In particular, we argue that stochasticity may simultaneously explain the observed anomalous ratios of the Lyman continuum flux density to the (non-ionizing) UV continuum density in so-called Lyman-Bump galaxies at $z=3.1$, as well as the absence of such objects among a sample of $z=1.3$ drop-out galaxies.
|
\label{sec:introduction} The Epoch of Reionization (EoR) represents a milestone in the evolution of our Universe, during which hydrogen gas in most of its volume changed from fully neutral to fully ionized. Observations of the cosmic microwave background support that reionization started at redshift $z\sim 20$, while quasar observations demonstrate that the process was completed at $z\sim 5-6$ \citep[e.g.][]{Mesinger10,Pritchard10}. One key to new insights in this area is observations of high redshift galaxies, as they will allow for the construction of a complete physical picture of the reionization process. Observations of distant Lyman Break Galaxies (LBGs) and Lyman $\alpha$ Emitters (LAEs) in the redshift range $4<z<10$ have provided the most prominent windows on the star formation history at these early epochs \citep[see e.g.][and references therein]{Robertson10}. LBGs are galaxies that have been selected using broad-band surveys, which are sensitive to the `Lyman break' in the spectra of star forming galaxies. This break is a consequence of the typical effective temperatures of O \& B stars, and of interstellar and intergalactic absorption of flux blueward of the Ly$\alpha$ wavelength \citep{Steidel96}. Surveys for LBGs (or `drop-out' galaxies) are most sensitive to the UV rest-frame continuum emission of galaxies. LAEs are galaxies that have been selected from narrow-band surveys, which are sensitive to high-redshift galaxies that have strong Ly$\alpha$ line emission \citep[e.g.][]{Rhoads00}. The Ly$\alpha$ line originates in the HII regions that surround O \& B stars. As a result, the Ly$\alpha$ line flux (and strength) in LAEs is more directly connected to the ionizing photon budget \citep[e.g.][]{2003A&A...397..527S}. The `strength' of the Lyman $\alpha$ emission line is quantified by its rest-frame equivalent width (EW), which corresponds to the ratio between the line intensity and the continuum. The Ly$\alpha$ EW is an important physical quantity associated with galaxies during the EoR for (at least) three reasons: ({\it i}) narrowband surveys for LAEs are typically sensitive to galaxies for which the Ly$\alpha$ EW exceeds some threshold value \citep[which can range between EW$\sim 20-65$ \AA, see e.g.][]{Ouchi08}. The Ly$\alpha$ EW is clearly an important physical quantity that links the LBG and LAE populations; ({\it ii}) the `first' galaxies that formed from pristine gas are expected to be characterised by unusually large EW (well in excess of EW$\sim 200$ \AA), which may be one of their most prominent observational signatures \citep{2003A&A...397..527S,2009MNRAS.399...37J,Raiter10}; ({\it iii}) recent observations of LAEs \& LBGs in the redshift range $z=3-7$ have revealed an apparent non-monotonic evolution in the EW distribution, with significantly less Ly$\alpha$ being observed from galaxies beyond redshift $z>6$ \citep{Stark11a,Stark11b,Pentericci11,Ono12,Schenker12}. The redshift evolution in the EW distribution at $z>6$ is likely to be connected with the reionization process rather than with secular evolution processes in the galaxies \citep[][also see \cite{Shimizu11}]{Forero12}. Although current observations do not allow for robust conclusions, analyzing EW distributions provides a powerful technique to probe the population of star forming galaxies -- as well as the ionization state of the IGM -- during and after reionization has been completed \citep[e.g.][also see \cite{Dayal2012}]{Dijkstra11}.
Recently, \citet{2011ApJ...741L..26F} showed that at low levels of star formation (SFR $< 1 M_{\odot}$ yr$^{-1}$), the effects of the statistical description of the stellar Initial Mass Function (IMF) and the Cluster Mass Function (CMF) emerge. Among the expected effects -- gathered under the name of `stochasticity' -- the most relevant for our discussion is that the flux ratio between ionizing and Far UV photons fluctuates around the mean expected value. \citet{2011ApJ...741L..26F} show how the clustering of star formation, together with the sampling of the IMF, causes the ratio of the H$\alpha$ flux to the FUV flux density to fluctuate significantly around the mean for local dwarf galaxies. This effect should also be noticeable in the ratio of the Ly$\alpha$ flux to the FUV flux density, i.e., the EW. In other words, stochasticity may affect the {\it intrinsic} Ly$\alpha$ EW-PDFs of faint LBGs ($-18 \lsim M_{\rm UV} \lsim -13$). Because the Ly$\alpha$ EW plays an important role in understanding high--redshift galaxy populations and constraining the reionization process, it is important to understand quantitatively the impact of stochasticity. Furthermore, observational constraints on the `escape fraction' of ionizing photons from star forming galaxies are derived from the observed ratio of the LyC flux density to the FUV flux density, which is subject to the clustering of star formation and stochastic sampling of the IMF, especially at SFR$\lsim 1\hs M_{\odot}$ yr$^{-1}$. It has long been realized that such galaxies can provide the bulk of the ionizing photons needed to reionize the IGM and to maintain it ionized \citep[at a fixed escape fraction, see e.g. Fig~31 of][]{Barkana01}. A more recent analysis by \cite{2012arXiv1201.0757K} echoes these findings. \citet{2012arXiv1201.0757K} show that in plausible scenarios for reionization, star forming galaxies with very faint magnitudes $M_{\lim}\sim -13$ (corresponding to SFR $\sim 10^{-2}$\Msun yr$^{-1}$) contribute significantly to the overall budget of ionizing photons. While high--redshift galaxies with such low star formation rates remain too faint to be detected even by future facilities, the observation of galaxies with SFRs$=10^{-2}-1\hs M_{\odot}$ yr$^{-1}$ at lower redshifts $z<4$ can help us to understand the physics of such systems. For instance, the escape fraction of ionizing photons from galaxies with SFR within the upper end of that range has already been constrained by observations at $z\sim 3.0$ \citep[][who constrained the escape fraction down to SFR$\sim 1 M_{\odot}$ yr$^{-1}$]{2009ApJ...692.1287I}. The possible dependence of these estimates on stochasticity provides an additional motivation to study the consequences of the stochastic sampling of the IMF. In this paper we set out to quantify the effects of stochasticity on the EW distribution of dwarf Ly$\alpha$ emitting galaxies. We implement highly simplified models for dwarf galaxies at high redshift to isolate the stochastic effect: the star formation history is constant, the metallicity is single-valued, and extinction effects are neglected. We stress that our objective is not to explain or interpret current observations of LAEs and LBGs, but instead simply to isolate and highlight the impact of stochasticity on predicted values for the Ly$\alpha$ EW. This paper is structured as follows. In \S \ref{sec:ew} we summarize the expected effects of stochasticity on the equivalent width and present the numerical experiments we perform to quantify this effect.
In \S \ref{sec:results} we present the raw results, which help us in \S \ref{sec:discussion} to assess the impact of stochasticity on different aspects of the interpretation of observations of LAEs and LBGs at high redshift. Finally, we summarize our conclusions in \S \ref{sec:conclusions}. Throughout this paper we will study the `Lyman continuum' ($\lambda \sim 700$\AA), the ionizing continuum ($\lambda<912$\AA), and the Far UV (FUV) continuum ($\sim 1500$\AA), where all wavelengths are rest frame. Whenever we refer to the UV we mean the FUV. Also, the Ly$\alpha$ equivalent widths are measured in the galaxies' restframe.
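To make the effect concrete, the following toy Monte Carlo (not the SLUG calculation itself; the IMF bounds and the mass scalings of the ionizing and FUV outputs are illustrative assumptions) shows why sparse sampling of the IMF makes the ionizing-to-FUV flux ratio -- and hence the EW -- fluctuate more strongly as the total stellar mass formed decreases: \begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_imf(n, m_lo=0.1, m_hi=100.0, alpha=2.35):
    """Draw n stellar masses (Msun) from a Salpeter-like power law."""
    a = 1.0 - alpha
    u = rng.random(n)
    return (m_lo**a + u * (m_hi**a - m_lo**a))**(1.0 / a)

def ionizing_to_fuv(total_mass):
    """Toy ionizing/FUV output ratio for a stellar population of the
    given total mass; the mass scalings below are assumptions for
    illustration, not the stellar models used by SLUG."""
    n_stars = int(total_mass / 0.35)     # mean Salpeter mass ~0.35 Msun
    m = sample_imf(n_stars)
    q_ion = np.sum(np.where(m > 15.0, m**4.0, 0.0))  # massive stars
    l_fuv = np.sum(np.where(m > 3.0, m**2.5, 0.0))
    return q_ion / l_fuv

for mass in (1.0e3, 1.0e5):              # low vs high SFR analogue
    r = [ionizing_to_fuv(mass) for _ in range(200)]
    print(mass, np.std(r) / np.mean(r))  # relative scatter shrinks
\end{verbatim}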
|
\label{sec:conclusions} In this paper we have investigated the quantitative impact of stochastic sampling of the stellar initial mass function (IMF) on the Ly$\alpha$ and ionizing photon emissivity of dwarf galaxies. We have used the SLUG code to simulate the spectral energy distribution for restframe wavelengths $\lambda \lsim 2000$ \AA\hs of galaxies with time-averaged star formation rates in the range $(10^{-3}-1)$ \Msun \hs yr$^{-1}$. From our predicted Far UV continuum flux density (at $\langle \lambda \rangle =1500$ \AA) and the ionizing flux density (at $\langle \lambda \rangle <912$ \AA) we construct the probability distribution function (PDF) for the variable ${\mathcal M}$ that represents the ratio of the measured restframe equivalent width (EW) of the Ly$\alpha$ line to the expected constant value in the absence of stochasticity (EW$_0$), i.e. $\mathcal{M}\equiv$EW/EW$_0$. We find that the $\mathcal{M}$--PDF can be represented by a double power law in ${\mathcal M}$, whose parameters are highly dependent on the SFR. We emphasize that the results derived from these experiments are sensitive to the degree of clustering of the stars. In the case of $f_{c}=0$, where the star formation is completely unclustered, the dispersion around the mean is reduced by a factor of 10 \citep{dasilva12}. The results are also sensitive to the star formation history assumed: in the case of a bursty star formation history, one could expect the scatter around the mean for $\mathcal{M}$ to be higher than what we derive here. Our results show that it is possible for galaxies to have both extremely low and extremely high ${\mathcal M}$ values -- especially at increasingly low SFR -- and we investigate the implications of this result. In particular, we show how the existence of galaxies with low ${\mathcal M}$ values can induce an observational bias if galaxies are selected by cuts in equivalent width (as in narrowband surveys for LAEs). This bias may make dwarfs with SFRs $<0.1$ \Msun yr$^{-1}$ more likely to be missed in narrowband surveys. On the other hand, the existence of galaxies with high ${\mathcal M}$ implies that a high EW cannot unambiguously be interpreted as a defining characteristic of primordial population III galaxies \citep{2003A&A...397..527S,2009MNRAS.399...37J,2011ApJ...731...54P}. Furthermore, we use the framework presented in Dijkstra \& Wyithe 2012 to quantify the impact of stochasticity on the predicted luminosity function and EW-PDF of a sample of Ly$\alpha$ emitters (LAEs, i.e. narrowband selected galaxies) at a fixed redshift. We find that while stochasticity does not appreciably affect LAE abundance modeling, it may help to explain the large EW (EW$\gsim 100-200$ \AA) observed for a fraction of LAEs. Finally, we have also shown that stochasticity can affect inferred constraints on the escape fraction of ionizing radiation, most strongly in galaxies that form stars at rates SFR$\lsim 1 M_{\odot}$ yr$^{-1}$. Given that such galaxies are thought to dominate the reionization process at $z>6$, it is important to consider the effects of stochasticity when using `local' observations to put constraints on their ionizing luminosities and escape fractions. In particular, the observed anomalies in the ratio of LyC to non-ionizing UV radiation in so-called `Lyman-Bump' galaxies at $z=3.1$, reported by \cite{2009ApJ...692.1287I}, can be explained naturally in the context of the stochastic sampling of the IMF, without resorting to unusual stellar populations.
Moreover, the absence of such objects in a sample of $15$ LBGs at $z\sim 1.3$ \citep{2010ApJ...723..241S} is also expected, as these galaxies are forming stars at rates at which the stochastic sampling of the IMF becomes negligible.
| 12
| 6
|
1206.0726
|
1206
|
1206.7089_arXiv.txt
|
The chemical composition of ultra high energy cosmic rays is still uncertain. The latest results obtained by the Pierre Auger Observatory and the HiRes Collaboration, concerning the measurement of the mean value and the fluctuations of the atmospheric depth at which showers reach their maximum development, $X_{max}$, are inconsistent. From comparison with air shower simulations it can be seen that, while the Auger data may be interpreted as a gradual transition to heavy nuclei for energies larger than $\sim 2-3\times10^{18}$ eV, the HiRes data are consistent with a composition dominated by protons. In Ref. \cite{Wilk:11} it is suggested that a possible explanation of the deviation of the mean value of $X_{max}$ from the proton expectation, observed by Auger, could originate in a statistical bias arising from the approximately exponential shape of the $X_{max}$ distribution, combined with the decrease of the number of events as a function of primary energy. In this paper we consider a better description of the $X_{max}$ distribution and show that the possible bias in the Auger data is at least one order of magnitude smaller than the one obtained when assuming an exponential distribution. Therefore, we conclude that the deviation of the Auger data from the proton expectation is unlikely to be explained by such a statistical effect.
|
The nature of the primary cosmic rays is intimately related to the astrophysical objects capable of accelerating these particles to such high energies. Also, propagation in the intergalactic medium depends on the composition, which affects the resulting spectral distribution of the flux observed at Earth. Knowledge of the composition is also very important for primary energy reconstruction and for anisotropy studies. One of the most important limitations of composition analyses comes from the lack of knowledge of the hadronic interactions at the highest energies. Composition studies are based on the comparison of experimental data with Monte Carlo simulations of atmospheric cosmic-ray showers, which make use of hadronic interaction models that extrapolate the available low-energy accelerator data to the energies of the cosmic rays. One of the parameters most sensitive to the mass of the primary cosmic ray is the atmospheric depth at which the showers reach their maximum development. Lighter primaries generate showers that are more penetrating, producing larger values of $X_{max}$. Also, the fluctuations of this parameter are smaller for heavier nuclei. The Pierre Auger Observatory and the HiRes experiment are able to observe directly the longitudinal development of the showers by means of fluorescence telescopes. Therefore, in both experiments, the $X_{max}$ parameter of each observed shower can be reconstructed from the data taken by the telescopes. The mean value and the standard deviation of $X_{max}$, as a function of primary energy, obtained by Auger \cite{Auger:10} and HiRes \cite{Hires:10} appear to be inconsistent. From the comparison with simulations, the Auger data suggest a transition to heavier nuclei starting at energies of order $2-3\times10^{18}$ eV, whereas the HiRes data are consistent with protons in the same energy range. In Ref. \cite{Wilk:11} a new parameter, the difference between the mean value and the standard deviation of $X_{max}$, was introduced in order to reconcile the Auger and HiRes results. This new parameter has the advantage of being much less sensitive to the first interaction point than the mean value and the standard deviation separately. From a comparison of the experimental values of this parameter, obtained by Auger and HiRes, with simulated data, they infer that the composition of the cosmic rays is dominated by protons. They argue that the energy dependence of the distribution of $X_{max}$, observed by Auger, seems to be caused by an unexpected change in the depth of the first interaction point, which can be explained by a rapid increase of the cross section and/or an increase of the inelasticity. Both possibilities require an abrupt onset of new physics in this energy range, which makes them questionable. They also suggest that the deviation of the distribution of $X_{max}$ from the proton expectation, present in the Auger data, could originate in the statistical techniques used to analyze the data. In particular, they suggest that the deviation of the mean value of $X_{max}$ from the proton expectation could be explained by a bias originating from the exponential nature of the $X_{max}$ distribution and the decreasing number of events as a function of primary energy. In this work we show that, considering a better description of the $X_{max}$ distribution, the bias in the determination of the mean value of $X_{max}$ becomes more than one order of magnitude smaller than the one obtained for the exponential distribution.
We find that the value of the bias in the last energy bin (the one with the smallest number of events) of the Auger data, published in Ref. \cite{Auger:10}, is $\lesssim 1.5$ g cm$^{-2}$, which is much smaller than the systematic errors on the determination of the mean value of $X_{max}$ estimated in Ref. \cite{Auger:10}.
|
In this work we studied in detail the statistical bias in the determination of the mean value of $X_{max}$, suggested in Ref. \cite{Wilk:11} as a possible explanation of the deviation of the Auger data from the proton expectation. We used two different functions to fit the $X_{max}$ distribution obtained from simulations: ($i$) the convolution of an Exponential distribution with a Gaussian and ($ii$) a shifted-Gamma distribution. We find that the bias obtained by using these two functions is more than one order of magnitude smaller than the corresponding one for the Exponential distribution, the one used in Ref. \cite{Wilk:11}. We find that the values of the bias obtained for the convolution of the Exponential function with the Gaussian are larger, because this distribution presents a more extended tail towards larger values of $X_{max}$ than the shifted-Gamma distribution. We also find that the bias diminishes when a Gaussian (symmetric) uncertainty on the determination of $X_{max}$ is included. We also calculated the expected bias, as a function of primary energy, using the actual number of events in each energy bin of the Auger data published in Ref. \cite{Auger:10}, for both hadronic interaction models considered in this work, QGSJET-II and EPOS 1.99. We find that the largest value of the bias, corresponding to the bin with the smallest number of events, is smaller than $1.5$ g cm$^{-2}$, much less than the systematic errors on the determination of $\langle X_{max} \rangle$ estimated in Ref. \cite{Auger:10}.
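As an illustration of the fitting procedure, here is a minimal sketch of a shifted-Gamma fit to simulated $X_{max}$-like values using scipy, where the location parameter plays the role of the shift; the parameter values are illustrative, not those obtained from QGSJET-II or EPOS 1.99. \begin{verbatim}
import numpy as np
from scipy import stats

# Toy X_max sample (g cm^-2) drawn from a shifted-Gamma distribution
# with illustrative parameters, standing in for air-shower simulations.
shape_true, shift_true, scale_true = 6.0, 650.0, 25.0
xmax = stats.gamma.rvs(shape_true, loc=shift_true, scale=scale_true,
                       size=5000, random_state=0)

# Three-parameter maximum-likelihood fit; 'loc' is the shift.
shape_fit, shift_fit, scale_fit = stats.gamma.fit(xmax)
mean_fit = shift_fit + shape_fit * scale_fit
print(shape_fit, shift_fit, scale_fit, mean_fit)  # ~(6, 650, 25, 800)
\end{verbatim}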
| 12
| 6
|
1206.7089
|
1206
|
1206.5874_arXiv.txt
|
We study the properties of waves of frequencies above the photospheric acoustic cut-off of $\approx$ 5.3 mHz around four active regions, through spatial maps of their power estimated using data from the \textit{Helioseismic and Magnetic Imager} (HMI) and the \textit{Atmospheric Imaging Assembly} (AIA) onboard the \textit{Solar Dynamics Observatory} (SDO). The wavelength channels 1600 \AA~ and 1700 \AA~ from AIA are now known to capture clear oscillation signals due to helioseismic {\em p}-modes as well as waves propagating up through to the chromosphere. Here we study in detail, in comparison with HMI Doppler data, the properties of the power maps, especially the so-called ``acoustic halos'' seen around active regions, as a function of wave frequency, of the inclination and strength of the magnetic field (derived from the vector-field observations by HMI), and of observation height. We infer possible signatures of (magneto)acoustic wave refraction from the changes, with observation height -- and hence with changing magnetic strength and geometry -- in the dependence of the power maps on the photospheric magnetic quantities. We discuss the implications for theories of {\it p}-mode absorption and mode conversion by the magnetic field.
|
\label{sec:intro} Enhanced power of high-frequency waves surrounding strong-magnetic-field structures such as sunspots and plages is one of several intriguing wave-dynamical phenomena observed in the solar atmosphere. This excess power, known as the ``acoustic halo'', was first observed in the early 1990's at photospheric \cite{brownetal92} as well as chromospheric \cite{braunetal92,tonerlabonte93} heights; it appears at frequencies above the photospheric acoustic cut-off of $\approx$ 5.3 mHz, in the range of 5.5 -- 7 mHz, and over regions of weak to intermediate strength (50 -- 250 G) photospheric magnetic field. A good number of observational studies \cite{hindmanandbrown98, thomasandstanchfield00, jainandhaber02, finsterleetal04, morettietal07, nagashimaetal07} since then have brought out additional features, and we refer the reader to \inlinecite{khomenkoandcollados09} for a succinct summary of them as known prior to 2009. On the theoretical side, no single model describing all of the observed features has been achieved yet, although there have been several focussed efforts \cite{kuridzeetal08,hanasoge08,shelyagetal09,hanasoge09,khomenkoandcollados09}. However, a large number of studies centered around modeling acoustic wave -- magnetic-field interactions over heights from the photosphere to the chromosphere, with relevance to the high-frequency power excess observed around sunspots, have been carried out \cite{rosenthaletal02,bogdanetal03,bogdanandjudge06,cally06,schunkerandcally06, khomenkoandcollados06,khomenkoandcollados08,jacoutotetal08,khomenkoetal09,vigeeshetal09,khomenkoandcally12}. A central theme of all the above theoretical studies, except that of \inlinecite{jacoutotetal08}, has been the conversion of acoustic wave modes (from below the photosphere) into magnetoacoustic wave modes (the fast and slow waves) at the magnetic canopy defined by the plasma $\beta$=1 layer. Enhanced acoustic emission by magnetically modified convection over weak and intermediate field regions, suggested as one possible mechanism by \inlinecite{brownetal92} and further advocated by \inlinecite{jainandhaber02}, was found to be viable, through 3D numerical simulations of magneto-convection, by \inlinecite{jacoutotetal08}. It should perhaps be noted here that Hindman and Brown (1998) suggested some form of field-aligned incompressible wave motions as agents for the excess power over magnetic regions, a suggestion implied by their finding, from SOHO/MDI observations, that these halos are visible in photospheric Doppler velocities but not in continuum intensities. This suggestion, however, seems to contradict chromospheric-intensity observations \cite{braunetal92, morettietal07}, which show that the halos at these heights, in terms of their dependence on magnetic field strength as well as their frequency behaviour, are clearly related to the photospheric halos; Hindman and Brown's reasoning that the chromospheric Ca K intensities observed by \inlinecite{braunetal92} have large cross-talk from Doppler shifts is, however, contradicted by the simultaneous velocity and intensity observations made by \inlinecite{morettietal07}.
In a recent study, \inlinecite{schunkerandbraun11} have brought out a few new properties, {\it viz.} i) the largest excess power in halos is at horizontal magnetic field locations, in particular, at locations between opposite-polarity regions, ii) the larger the magnetic-field strength the higher the frequency of peak power, and iii) the modal ridges over halo regions exhibit a shift towards higher wavenumbers at constant frequencies. Though none of the proposed theoretical explanations or mechanisms for the power halos is able to match all of the observed properties, and hence none yet provides an acceptable theory, the mechanism based on MHD fast-mode refraction in the canopy-like structure of strong expanding magnetic field, studied by Khomenko and Collados (2009), appears to match some major observed features. This theory also predicts certain other observable features that we will address here. From what has been learned so far, from observations as well as theoretical studies, it is clear that the transport and conversion of energy between magneto-acoustic wave modes, which are driven by acoustic waves and convection from below the photosphere and mediated by the structured magnetic field in the overlying atmosphere, provide a plausible approach for identifying the exact mechanism. A crucial diagnostic of such wave processes requires probing several heights in the atmosphere simultaneously, together with magnetic-field information. The instruments HMI and AIA onboard SDO, with photospheric Doppler and vector magnetic field information from the former and the upper-photospheric and lower-chromospheric UV emissions in the 1700 \AA~ and 1600 \AA~ wavelength channels of the latter, provide some interesting possibilities for such studies. We exploit this opportunity and make a detailed analysis of high-frequency power halos around four different active regions, over at least five different heights from the photosphere to the chromosphere. \begin{figure} \centerline{\includegraphics[width=1.5\textwidth,height=0.8\textheight,clip=]{fig1.pdf} } \caption{Average total magnetic field [$B$] (top panel, pixels above 200 G are saturated in the grey scale), a snapshot of continuum intensity [$I_{\rm{c}}$] (middle) and average magnetic-field inclination [$\gamma$] (bottom) of the four active regions studied. Averages of $B$ and $\gamma$ shown here are over the length of time (14 hours) used for power map estimation. Areas covered between white dashed lines and boundary axes in the top panel, in each region, are the largely non-magnetic ones used for estimation of quiet-Sun power of waves studied here. Each active region here covers a square area of 373 $\times$ 373 Mm$^{2}$. } \label{fig1} \end{figure}
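For orientation, the basic operation behind such power maps can be sketched as follows; this is a minimal example in which the cube shape, the 45-second cadence, and the random data are placeholders for an HMI Doppler time series. \begin{verbatim}
import numpy as np

def band_power_map(cube, dt, f_lo, f_hi):
    """Map of Fourier power summed over [f_lo, f_hi] (Hz), for a data
    cube of shape (n_time, ny, nx) sampled at cadence dt (seconds)."""
    cube = cube - cube.mean(axis=0)        # remove the temporal mean
    freqs = np.fft.rfftfreq(cube.shape[0], d=dt)
    power = np.abs(np.fft.rfft(cube, axis=0))**2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return power[band].sum(axis=0)

# Placeholder: 14 hours of 45 s cadence data over a 64 x 64 pixel patch.
cube = np.random.standard_normal((1120, 64, 64))
halo_map = band_power_map(cube, dt=45.0, f_lo=5.5e-3, f_hi=7.0e-3)
print(halo_map.shape)                      # (64, 64)
\end{verbatim}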
|
\label{sec:discuss} The physics of interactions between acoustic waves and magnetic fields in the solar atmosphere leads to a rich variety of observable dynamical phenomena, the understanding of which is crucial to mapping the thermal and magnetic structuring in height of the solar atmosphere. A large body of theoretical and observational studies of such physics exists, and a significant fraction of it, as explained in Section 1, is relevant for the detailed analyses we have made here of the high-frequency power halos around active regions. The spatial reorganization of the otherwise uniform injection of acoustic waves of frequency above the photospheric cut-off of $\approx$ 5.3 mHz from the subsurface and photospheric layers, by the overlying and arching magnetic field of active regions and strong-field flux elements, is clearly brought out in our analyses of the wave power distribution estimated from observables spanning a height range of 0 -- 430 km above the photosphere. In accordance with the central theme of the theoretical investigations referred to in Section 1 ({\it e.g.} \inlinecite{rosenthaletal02,bogdanetal03,khomenkoandcollados09}, see also the review by \inlinecite{khomenko09}), our results presented and discussed in detail in Section 3 clearly bring out the importance of interactions between fast-acoustic waves from the lower atmospheric high-$\beta$ regions and the expanding and spreading magnetic field canopy that separates the low-$\beta$ region above. Although we have not modeled the higher atmospheric magnetic field from the observed vector field at photospheric heights and have not estimated and mapped the $\beta$ = 1 layer, the results obtained from an analysis of the photospheric $B$- and $\gamma$-dependence of the power maps, especially for the newly identified high-frequency secondary power halo peaking at about $\nu$ = 8 mHz (Figures 2(b), 5, and 6), appear to agree with the theory and simulations of \inlinecite{khomenkoandcollados09}: refraction and subsequent reflection of fast magneto-acoustic waves around the $\beta$ = 1 layer act as agents of additional wave energy deposition at these layers and hence of the power excess. Such refracted fast magneto-acoustic waves, possibly depending on the angles of incidence of the acoustic waves prior to conversion, may converge towards the stronger-field base region that fans out from the sunspots and other strong-field structures, and cause a focussing effect, thereby increasing wave amplitudes and producing the compact halos seen in maps from $v$ (Figure 2(b)). The reduction in power encircling the compact halos is itself again identified as a signature of wave refraction under two possible scenarios: i) the refraction and horizontal propagation region ({\it i.e.}, the intermediate 100 -- 200 G field $\beta$ = 1 layer) being located at a height significantly above the $v$-formation height (of 140 km) and/or ii) vanishing velocity signals in the vertical direction due to horizontal propagation around the observation height (for $v$). The spatially extended and weak halo surrounding this reduced-power region is suggested to result from the field-aligned slow magneto-acoustic wave due to the fast-to-slow mode conversion. In regions where opposite-polarity canopy fields meet to produce neutral lines, it is conceivable that both the refracted fast waves and the converted slow waves moving in opposite directions toward the neutral lines deposit or focus wave energy, causing the greater power seen at these locations \cite{schunkerandbraun11}.
Other major results of this study can be summarised as follows: i) Visibility of enhanced power is a strong function of height in the atmosphere; the absence of power excess in continuum intensity [$I_{\rm{c}}$] is due to the corresponding height being significantly below the wave-mode conversions happening around the $\beta$ = 1 layers, and hence is not due to any incompressive wave; this is confirmed by the presence of strong halos in maps from the line core intensity $I_{\rm{co}}$, which forms at higher layers. ii) The well-observed 6 mHz halo is the strongest in maps from Doppler velocities [$v$] forming at about 150 km above the photosphere, and it spreads out (spatially) and gets weaker with height as seen from the AIA 1700 and 1600 \AA~ emissions; this feature reflects the spreading and weakening magnetic field (and hence the increasing height of the $\beta$ = 1 locations) with height. iii) Frequencies of peak power gradually shift to higher values, along with a spreading in frequency extent, as height increases in the atmosphere; in the upper photosphere, power halos (from $v$, $I_{\rm{co}}$ and $I_{\rm{uv1}}$) exhibit twin peaks, one centered around 6 mHz and the other around 8 mHz. iv) On the whole, the largest power excess is seen over horizontal magnetic-field locations (as inferred from the photospheric field) for each observable (or height), and the largest among these is seen from $I_{\rm{uv1}}$ at about 7.5 mHz. v) There are no significant changes in the peak positions (in wavenumber $k_{\rm{h}}$) of ridges in power spectra due to height-dependent changes in the high-frequency power excess. Overall, it is clear that the upper-photospheric and lower-chromospheric regions covered by the magnetic canopy spatially redistribute the incoming high-frequency acoustic-wave energy from below into a mixture of slow and fast magneto-acoustic waves, through mode conversions around the $\beta$ = 1 layer, so as to cause enhanced power around photospheric strong fields. The $B$ and $\gamma$ dependences of the power halos brought out in Figures 5 -- 11 possibly include more intricate signatures of the above wave interactions, which we have not been able to discern from our current analyses. As mentioned earlier, magnetic-field extrapolations above the photospheric layers and models of atmospheric structure need to be combined to estimate the height variation of $\beta$, and of the sound and Alfv\'en speeds, which in turn would lead to a clear and unambiguous identification of the physical mechanisms behind the power halos. We intend to follow up the present work with such attempts. \begin{acks} S. Couvidat, K. Hayashi, and Xudong Sun are supported by NASA grant NNG05GH14G to the SDO/HMI project at Stanford University. The data used here are courtesy of NASA/SDO and the HMI and AIA science teams. Data intensive computations performed in this work made use of the High Performance Computational facility of the Indian Institute of Astrophysics, Bangalore. Discussions with C.R. Sangeetha (IIA, Bangalore) are acknowledged. \end{acks}
| 12
| 6
|
1206.5874
|
1206
|
1206.4796_arXiv.txt
|
In this paper we discuss the restrictions on the spacetime of the standard model of cosmology by using results from the differential topology of 3- and 4-manifolds. The smoothness of the cosmic evolution is the strongest restriction. The Poincar\'e model (dodecahedron model), the Picard horn, and the 3-torus are ruled out by these restrictions, but the connected sum of two Poincar\'e spheres is allowed.
|
In the 1980s there was a growing understanding of 3- and 4-manifolds. Mike Freedman proved the (topological) Poincar\'e conjecture in dimension 4 and classified closed, compact, simply-connected, topological 4-manifolds in 1982 \cite{Fre:82}. Bill Thurston \cite{Thu:97} presented his geometrization conjecture in the same year (the geometrization conjecture was later proved by Perelman). Soon afterward, Simon Donaldson \cite{Don:83} found a large class of non-smoothable closed, compact, simply-connected 4-manifolds, leading to the first examples of exotic $\mathbb{R}^{4}$. Beginning with these developments, our understanding of 3- and 4-manifolds, as well as of their relation to each other, is now in a much better state. In physics, 4-manifolds are models for the spacetime and 3-manifolds are its spatial part (as in globally hyperbolic spacetimes $\Sigma\times\mathbb{R}$ with the 3-manifold $\Sigma$ as Cauchy surface). There are only a few papers \cite{Chernov2012} discussing the physical implications of these new 3- and 4-dimensional results, and we are not aware of any paper that examines their cosmological implications.
|
In this paper we discussed the differential-topological restrictions on the spacetime for the evolution of our universe, both with an explicit Big Bang singularity and for models with the Big Bounce effect. Surprisingly, the results are the same for both cases. Furthermore, we showed that the main restriction comes from the assumption of a smooth spacetime. Relaxing this smoothness assumption results in bad singularities, such as non-manifold points, or in complicated topology changes with causal discontinuities. In particular, the Poincar\'e sphere, the Picard horn, and the 3-torus are not favored.
| 12
| 6
|
1206.4796
|
1206
|
1206.4569_arXiv.txt
|
Ultra-high energy cosmic rays (UHECRs) are atomic nuclei with energies over ten million times those accessible to human-made particle \mbox{accelerators}. Evidence suggests that they originate from relatively nearby extragalactic sources, but the nature of the sources is unknown. We develop a multilevel Bayesian framework for assessing association of UHECRs and candidate source populations, and Markov chain Monte Carlo algorithms for estimating model parameters and comparing models by computing, via Chib's method, marginal likelihoods and Bayes factors. We demonstrate the framework by analyzing measurements of 69 UHECRs observed by the Pierre Auger Observatory (PAO) from 2004--2009, using a volume-complete catalog of 17 local active galactic nuclei (AGN) out to 15~megaparsecs as candidate sources. An early portion of the data (``period~1,'' with 14 events) was used by PAO to set an energy cut maximizing the anisotropy in period~1; the 69 measurements include this ``tuned'' subset, and subsequent ``untuned'' events with energies above the same cutoff. Also, the measurement errors are only approximately summarized. These factors are problematic for independent analyses of PAO data. Within the context of ``standard candle'' source models (i.e., with a common isotropic emission rate), and considering only the 55 untuned events, there is no significant evidence favoring association of UHECRs with local AGN vs. an isotropic background. The highest-probability associations are with the two nearest, adjacent AGN, Centaurus~A and NGC~4945. If the association model is adopted, the fraction of UHECRs that may be associated is likely nonzero but is well below 50\%. Our framework enables estimation of the angular scale for deflection of cosmic rays by cosmic magnetic fields; relatively modest scales of $\approx\!3^\circ$ to $30^\circ$ are favored. Models that assign a large fraction of UHECRs to a single nearby source (e.g., Centaurus~A) are ruled out unless very large deflection scales are specified a priori, and even then they are disfavored. However, including the period~1 data alters the conclusions significantly, and a simulation study supports the idea that the period~1 data are anomalous, presumably due to the tuning. Accurate and optimal analysis of future data will likely require more complete disclosure of the data.
|
Cosmic ray particles are naturally produced, positively charged atomic nuclei arriving from outer space with velocities close to the speed of light. The origin of cosmic rays is not well understood. The Lorentz force experienced by a charged particle in a magnetic field alters its trajectory. Simple estimates imply that cosmic rays with energy $E \lta10^{15}$~eV have trajectories so strongly bent by the Galactic magnetic field that they are largely trapped within the Galaxy.\setcounter{footnote}{1}\footnote{An electron volt (eV) is the energy gained by an electron accelerated through a 1 Volt potential; the upgraded Large Hadron Collider will accelerate protons to energies $\sim\!7\times10^{12}$~eV. We follow the standard astronomical convention of using ``Galaxy'' and ``Galactic'' (capitalized) to refer to the Milky Way galaxy.} The acceleration sites and the source populations are not definitively known but probably include supernovae, pulsars, stars with strong winds and stellar-mass black holes. For recent reviews, see \citet{Cronin99} and \citet{Hillas06}. More mysterious, however, are the highest energy cosmic rays. By 1991, large arrays of cosmic ray detectors had seen a few events with energies $\sim\!100$~EeV (where EeV $=10^{18}$~eV). In the 1990s the Akeno Giant Air Shower Array [AGASA; \citet{1992APh.....1...27C}] and the High Resolution Fly's Eye [HiRes; \citet{2002NIMPA.482..457B}] were built to target these ultra-high energy cosmic rays (UHECRs); each detected a few dozen cosmic rays with $E>10$~EeV. For recent reviews, see \citet{KO11-UHECRs,LS11-UHECRs,KW12-UHECRs-hist}. Detectable UHECRs likely emanate from relatively nearby extragalactic sources. On the one hand, their trajectories are only weakly deflected by galactic magnetic fields so they are unconfined to the galaxy from which they originate. On the other hand, they are unlikely to reach us from distant (and thus isotropically distributed) cosmological sources. Cosmic ray protons with energies above the Greisen--Zatsepin--Kuzmin (GZK) scale of $\sim\!50$ to 100~EeV should scatter off of cosmic microwave background photons, losing some of their energy to pion production with each interaction [\citet{G66-GZK,ZK66-GZK}]; heavier nuclei can lose energy from other interactions at similar energy scales. Thus, the universe is not transparent to UHECRs; they are not expected to travel more than $\sim\!100$ megaparsecs (Mpc; a parsec is $\approx\!3.26$ light years) before their energies fall below the GZK scale. Notably, over this distance scale there is significant anisotropy in the distribution of matter that should be reflected in the arrival directions of UHECRs. Astronomers hope that continued study of the directions and energies of UHECRs will address the fundamental questions of the field: What phenomenon accelerates particles to such large energies? Which astronomical objects host the accelerators? What sorts of nuclei end up being energized? In addition, UHECRs probe galactic and intergalactic magnetic fields. The flux of UHECRs is very small, approximately 1 per square kilometer per century for energies $E\gta50$~EeV. Large detectors are needed to find these elusive objects; the largest and most sensitive detector to date is the Pierre Auger Observatory [PAO; \citet{PAO04-Proto}] in Argentina. The observatory uses air fluorescence telescopes and water Cerenkov surface detectors to observe the air shower generated when a cosmic ray interacts with nuclei in the upper atmosphere over the observatory. 
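As an aside, the energy scale of the GZK suppression invoked above follows from simple relativistic kinematics; the following is a textbook order-of-magnitude estimate, not a result of the analyses presented here. For a head-on collision between a proton of energy $E_p$ and a CMB photon of energy $E_\gamma$, photopion production ($p + \gamma \rightarrow \Delta^{+} \rightarrow p + \pi^{0}$) requires
\[
E_p \gtrsim \frac{m_{\pi}\,(m_p + m_{\pi}/2)\,c^{4}}{2E_\gamma} \approx 1\times10^{20}~{\rm eV} \left(\frac{E_\gamma}{6\times10^{-4}~{\rm eV}}\right)^{-1},
\]
and photons in the Wien tail of the CMB, with several times the mean photon energy, bring the effective threshold down to the $\sim\!50$ to 100~EeV scale quoted above.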
The surface detectors (SDs) operate continuously, detecting energetic subatomic particles produced in the air shower and reaching the ground. The fluorescence detectors (FDs) image light from the air shower and supplement the surface detector data for events detected on clear, dark nights.\footnote{The FD on-time is about 13\% [\citet{PAO10-GZK}], but analysis can reveal complications preventing use of the data---for example, obscuration due to light cloud cover or showers with significant development underground---so fewer than 13\% of events have usable FD data. These few so-called \emph{hybrid} events are important for calibrating energy measurements and provide information about cosmic ray composition vs. energy.} PAO began taking data in 2004 during construction; by June 2008 the PAO array comprised~$\approx\!1600$ SDs covering $\approx\!3000$ km$^2$, surrounded by four fluorescence telescope stations (with six telescopes in each station) observing the atmosphere over the array. By 31 August 2007, PAO had detected 81 UHECRs with $E > 40$~EeV [see \citet{PAO07-Aniso}, hereafter PAO-07], finding clear evidence of an energy cutoff resembling the predicted GZK cutoff, that is, a sharp drop in the energy spectrum above~$\approx\!100$~EeV and a discernable pile-up of events at energies below that [\citet{PAO10-GZK}]. This supports the idea that the UHECRs originate in the nearby universe, although other interpretations are possible.\footnote{The PAO data also indicate that the composition of cosmic rays changes with energy, with protons dominant at $E\approx\!1$~EeV but heavier nuclei becoming predominant for $E\gta10$~EeV [\citet{PAO10-CRComposn,KU12-CRComposn}]. Astrophysically, it is natural to presume that the maximum energy a cosmic accelerator can impart to a nucleus of charge $Z$ grows with $Z$. Combined with the PAO composition measurements, this has motivated models for which the maximum energy for protons is $\sim\!1$~EeV, with the observed cutoff above 50~EeV reflecting the maximum energy for heavy nuclei [\citet{A+08-CRCompEmax,A+12-UHECRDis}]. In such models, there is no GZK suppression; the observed cutoff reflects properties of the cosmic ray acceleration process.} The PAO team searched for correlations between the cosmic ray arrival directions and the directions to nearby active galactic nuclei (AGN) [initial results were reported in PAO-07; further details and a catalog of the events are in \citet{PAO08-AGN}, hereafter PAO-08]. AGN are unusually bright cores of galaxies; there is strong (but indirect) evidence that they contain rapidly mass-accreting supermassive black holes that eject some material in energetic, jet-like outflows. AGN are theoretically favored sites for producing UHECRs; electromagnetic observations indicate particles are accelerated to high energies near AGN. The PAO team's analysis was based on a significance test that counted the number of UHECRs with best-fit directions within a critical angle, $\psi$, of an AGN in a catalog of local AGN (more details about the catalog appear below); the number was compared with what would be expected from an isotropic UHECR directional distribution using a \pval. A simple sequential approach was adopted. The earliest half of the data was used to tune three parameters defining the test statistic by minimizing the \pval. The parameters were as follows: $\psi$; a maximum distance, $\Dmax$, for possible hosts; and a minimum energy, $\Eth$, for UHECRs considered to be associated with AGN. 
With these parameters tuned ($\Eth=56$~EeV, $\psi=3.1^\circ$, $\Dmax=75$ Mpc), the test was applied to the latter half of the data; 13 UHECRs in that period had $E>\Eth$. The resulting \pval\ of $1.7\times10^{-3}$ was taken as indicating the data reject the hypothesis of isotropic arrival directions ``with at least a 99\% confidence level.'' The PAO team was careful to note that this result did not necessarily imply that UHECRs were associated with the cataloged AGN, but rather that they were likely to be associated with some nearby extragalactic population with similar anisotropy. Along with these results, the PAO team published a catalog of energy and direction estimates for the 27 UHECRs satisfying the $E>\Eth$ criterion, including both the earliest 14 events used to define $\Eth$ and the 13 subsequent events used to obtain the reported \pval\ (the PAO data are proprietary; measurements of the other 54 events used in the analysis were not published). Their statistical result spurred subsequent analyses of these early published PAO UHECR arrival directions, adopting different methods and aiming to make more specific claims about the hosts of the UHECRs. Roughly speaking, these analyses found similarly suggestive evidence for anisotropy, but no conclusive evidence for any specific association hypothesis. In late 2010, the PAO team published a revised catalog, including new data collected through 2009 [\citet{PAO10-AnisoUpdate}; hereafter PAO-10]. An improved analysis pipeline revised the energies of earlier events downward by 1~EeV; accordingly, the team adopted $\Eth=55$~EeV on the new energy scale. The new catalog includes measurements of 42 additional UHECRs (with $E>\Eth$) detected from 1 September 2007 through 31 December 2009. A repeat of the previous analysis (adding the new events but again excluding the early tuning events) produced a larger \pval\ of $3\times10^{-3}$, that is, \emph{weaker} evidence against the isotropic hypothesis. The team performed a number of other analyses (including considering new candidate host populations). Despite the growth of the post-tuning sample size from 13 to 55, they found that the evidence for anisotropy weakened. Time-resolved measures of anisotropy provided puzzling indications that later data might have different directional properties than early data, although the sample size is too small to demonstrate this conclusively. Various investigators have performed other analyses aiming to detect anisotropy in the distribution of detected UHECR directions, the vast majority also adopting a hypothesis testing approach (seeking to reject isotropy), but differing in choices of test statistic. Most such tests require some accounting for tuning parameters, and many do not explicitly account for measurement errors. See \citet{KK11-AGN-UHECR} for a recent example with references to numerous previous frequentist analyses. Here we describe a new framework for modeling UHECR data based on Bayesian multilevel modeling of cosmic ray emission, propagation and detection. A virtue of this approach is that physical and experimental processes have explicit representations in the framework, facilitating exploration of various scientific hypotheses and physical interpretation of the results.
This is in contrast to hypothesis testing approaches, where elements such as the choice of test statistic or angular and energy thresholds only implicitly represent the underlying physics, and potentially conflate astrophysical and experimental effects (e.g., magnetic scattering of trajectories and measurement errors in direction). Our framework can handle a priori uncertainty in model parameters via marginalization. Marginalization also accounts for the uncertainty in such parameters via weighted averaging, rather than fixing them at precise, tuned values. This eliminates the need to tune energy, angle and distance scales with a subset of the data that must then be excluded from a final analysis. Such parameters are allowed to adapt to the data, but the ``Ockham's razor'' effect associated with marginalization penalizes models for fine-tuned degrees of freedom, thereby accounting for the adaptation. Our approach builds on our earlier work on Bayesian assessment of spatiotemporal coincidences in astronomy (see Section~\ref{sec3}). A recent approximate Bayesian analysis of coincidences between UHECR and AGN directions independently adopts some of the same ideas [\citet{WMJ11-BayesUHECR}]; we discuss how our approach compares with this recent analysis in the supplementary material [\citet{S+13-UHECR-Supp}]. In this paper we describe our general framework, computational algorithms for its implementation and results from analyses based on a few representative models. Our models are somewhat simplistic astrophysically, although similar to models adopted in previous studies. We do not aim to reach final conclusions about the sources of UHECRs; the focus here is on developing new methodology and demonstrating the capabilities of the approach in the context of simple models. An important finding is that \emph{thorough and accurate independent analysis of the PAO data likely requires more data than has so far been publicly released} by the PAO collaboration. In particular, although our Bayesian approach eliminates the need for tuning, in the absence of publicly available ``untuned'' data (i.e., measurements of lower-energy cosmic rays), we cannot completely eliminate the effects of tuning from analyses of the published data (Bayesian or otherwise). Additionally, a~Bayesian analysis can (and should) use event-by-event (i.e., heteroskedastic) measurement uncertainties, but these are not publicly available. Finally, astrophysically plausible conclusions about the sources of UHECRs will require models more sophisticated than those we explore here (and those explored in other recent studies).
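To make the contrast with the counting test concrete, the PAO-style significance calculation described above reduces, in its simplest form, to a binomial tail probability. The sketch below is a minimal illustration with invented numbers; the event counts and the isotropic chance probability are placeholders, not the published PAO values, and the published analyses additionally involve the tuning of the angular, distance and energy thresholds, which this sketch ignores.
\begin{verbatim}
from math import comb

def counting_test_pvalue(n_events, n_hits, p_iso):
    """One-sided binomial tail probability for a counting test:
    the chance of >= n_hits events falling within the critical
    angle of a catalog source out of n_events, if directions were
    isotropic with per-event hit probability p_iso (the fraction
    of the exposure-weighted sky covered by the catalog circles)."""
    return sum(comb(n_events, k) * p_iso**k
               * (1.0 - p_iso)**(n_events - k)
               for k in range(n_hits, n_events + 1))

# Illustrative placeholder numbers only:
print(counting_test_pvalue(n_events=13, n_hits=8, p_iso=0.21))
\end{verbatim}
Such a tail probability is what the marginalization-based approach developed below replaces with Bayes factors.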
|
\label{sec:summary} We have described a new multilevel Bayesian framework for modeling the arrival times, directions and energies of UHECRs, including statistical assessment of directional coincidences with candidate sources. Our framework explicitly models cosmic ray emission, propagation (including deflection of trajectories by cosmic magnetic fields) and detection. This approach cleanly distinguishes astrophysical and experimental processes underlying the data. It handles uncertain parameters in these processes via marginalization, which accounts for uncertainties while allowing use of all of the data (in contrast to hypothesis testing approaches that optimize over parameters, requiring holding out a subset of the data for tuning). We demonstrated the framework by implementing calculations with simple but astrophysically interesting models for the 69 UHECRs with energies above 55~EeV detected by PAO and reported in PAO-10. Here we first summarize our findings based on these models, and then describe directions for future work. \subsection{Astrophysical results} We modeled UHECRs as coming from either nearby AGN (in a volume-limited sample including all 17 AGN within 15 Mpc) or an isotropic background population of sources; AGN are considered to be standard candles in our models. We thoroughly explored three models. In $M_0$ all CRs come from the isotropic background; in $M_1$ all CRs come from either a background or one of the 17 closest AGN; in $M_2$ all CRs come from either a background source or one of the two closest AGN (Cen A and NGC 4945, neighboring AGN at a distance of 5 Mpc). The data were reported in three periods. Data from period 1 were used to tune the energy threshold defining the published samples in all periods by maximizing an index of anisotropy in period 1. Out of concern that this tuning compromises the data in period 1 for our analysis, we analyzed the full data set and various subsamples, including an ``untuned'' sample omitting period 1 data. Using \emph{all} of the data, Bayes factors indicate there is strong evidence favoring either $M_1$ or $M_2$ against $M_0$ but do not discriminate between $M_1$ and $M_2$. The most probable models associate about 5\% to 15\% of UHECRs with nearby AGN and strongly rule out associating more than $\approx\!25$\% of UHECRs with nearby AGN. Most of the high-probability associations in the 17 AGN model are with the two closest AGN. However, if we use only the \emph{untuned} data, the Bayes factors are equivocal (although the most probable association models resemble those found using all data). If we subdivide the untuned data, we find positive evidence for association using the period 2 sample, but weak evidence \emph{against} association using the much larger period 3 sample. Together, these results suggest that the statistical character of the data may differ from period to period, due to tuning of the period 1 data or other causes. One way to explore this is to ask whether the data from the various periods are better explained using models with differing parameter values rather than a shared set of values. We investigated this via a change-point analysis that considered the time points bounding the periods as candidate change points. The results are consistent with the hypothesis that the parameters do \emph{not} vary between periods, justifying using the combined data for these models. This suggests the variation of the Bayes factors across periods is a consequence of the modest sample sizes.
However, the change-point analysis does not address the possibility that none of the models is adequate, with model misspecification being the cause of the apparently discrepant Bayes factors. We used simulated data from both the isotropic model and high-probability association models to perform predictive checks of our models, using the Bayes factors based on subsets of the data as test statistics. Simulations based on the isotropic model indicate that large Bayes factors favoring association are unlikely for \emph{untuned} samples of the size of the period 1 sample. Simulations based on representative association models indicate that such Bayes factors are not surprising for samples of the size of period 1, considered in isolation. But the observed pattern of large Bayes factors for the subsamples in periods 1 and 2, and a small Bayes factor for the much larger period 3 subsample, is very surprising. The full data set thus is not fit comfortably by either isotropic models or standard candle association models. Whether the effects of tuning could explain the apparent inconsistencies remains an open question that is not easy to address without access to the untuned data. Restricting to the untuned data (periods 2 and 3), the pattern of Bayes factors is consistent with both isotropic models and representative standard candle association models. The best-fitting association models assign a few percent of UHECRs to nearby AGN; at most $\approx\!20$\% may be associated with AGN, with the remainder assigned to sources drawn from an isotropic distribution. Magnetic deflection angular scales of $\approx\!3^\circ$ to $30^\circ$ are favored. Models that assign a large fraction of UHECRs to a single nearby source (e.g., Cen A) are ruled out unless very large deflection scales are specified a priori, and even then they are disfavored. Even restricting to results based on the untuned data, we hesitate to offer these models as astrophysically plausible explanations of the PAO UHECR data, both because of how important the problematic period 1 sample is in the analysis and because of astrophysical limitations of the models considered here and elsewhere. In particular, the high-probability models assign the vast majority of UHECRs to sources in an isotropic distribution. But the observation by PAO of a GZK-like cutoff in the energy spectrum of UHECRs suggests that UHECRs originate from within $\sim\!100$ Mpc, where the distribution of both visible matter (galaxies) and dark matter is significantly \emph{an}isotropic. If most or all UHECRs are protons, so that magnetic deflection is not very strong, an isotropic distribution of UHECR arrival directions is implausible. It then may be the case that some of the strength of the evidence for association with nearby AGN is due to the ``straw man'' nature of the isotropic alternative. On the other hand, if most UHECRs are heavy nuclei, then strong magnetic deflection could isotropize the arrival directions. The highest probability association models have relatively small angular deflection scales, but it could be that the few UHECRs that these models associate with the nearest AGN happen to be protons or very light nuclei. Future models could account for this by allowing a mixture of $\kappa$ values among cosmic rays, as noted in Section~\ref{sec:dflxn}. In addition, the standard candle cosmic ray intensity model adopted here and in other studies very likely artificially constrains inferences. 
\subsection{Future directions} All of these considerations indicate a more thorough exploration of UHECR production and propagation models is needed. We thus consider the analyses here to be a demonstration of the utility and feasibility of analyzing such models within a multilevel Bayesian framework, and not a definitive astrophysical analysis of the data. We are pursuing more complex models separately, expanding on the present analysis in four directions. First, we are considering larger, statistically well-characterized catalogs of potential hosts, for example, the recently compiled catalog of X-ray selected AGN detected by the Burst Alert Telescope (BAT) on the \emph{Swift} satellite, a catalog considered by PAO-10. Second, we are building more realistic background distributions, for example, by using the locations of nearby galaxy clusters or the entire nearby galaxy distribution, to build smooth background densities (e.g., via kernel density estimation, or fitting of mixture or multipole models). Third, we are considering richer luminosity function models, including models assigning a distribution of cosmic ray intensities to all candidate sources and models that place some sources in ``on'' states and the others ``off.'' The latter models are motivated both by the possibility of beaming of cosmic rays and by evidence for AGN intermittency in jet substructure, and could enable assignment of significant numbers of UHECRs to both distant and nearby sources. Finally, more complicated deflection models are possible. For example, we have developed a class of ``radiant'' models that produce correlated deflections (as seen in some astrophysical simulations). For a radiant model, each source has a single guide direction associated with it, drawn from a Fisher distribution centered at the source direction, with concentration $\kappa_g$; the guide direction serves as a proxy for the shared magnetic deflection history of cosmic rays from that source. Each cosmic ray associated with that source then has its arrival direction drawn from an independent Fisher distribution centered about the guide direction, with concentration potentially depending on cosmic ray energy and source distance; this distribution describes the effect of the deflection history unique to a particular cosmic ray. The resulting directions for a multiplet will cluster along a ray pointing toward the source. The resulting joint distribution for the directions in a multiplet (with the guide direction marginalized) is exchangeable but not independent. For the current, modest-sized UHECR catalog, the complexity of some of these generalizations is probably not warranted. But PAO is expected to operate for many years, and the sample is continually growing in size. Making the most of existing and future data will require not only more realistic models, but also more complete disclosure of the data. In particular, a fully Bayesian treatment---including modeling of the energy dependence in the UHECR flux and deflection scale---requires data uncorrupted by tuning cuts. Further, the most accurate analysis should use event-specific direction and energy uncertainties (likelihood summaries), rather than the typical error scales currently reported. We hope our framework helps motivate more complete releases of future PAO data.
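As a concrete sketch of the deflection modeling behind the radiant models just described, the snippet below draws unit vectors from a Fisher (von Mises--Fisher) distribution on the sphere: a guide direction is drawn around the source with concentration $\kappa_g$, and arrival directions are then drawn around the guide. The function and the concentration values are illustrative assumptions for exposition, not the code or parameters used in our analysis.
\begin{verbatim}
import numpy as np

def sample_fisher(mu, kappa, rng):
    """Draw one unit vector from a Fisher distribution with mean
    direction mu and concentration kappa (inverse-CDF sampling of
    the cosine of the offset angle, plus a uniform azimuth)."""
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)
    u = rng.uniform()
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = rng.uniform(0.0, 2.0 * np.pi)
    # Build an orthonormal basis perpendicular to mu.
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(mu, a)
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    s = np.sqrt(max(0.0, 1.0 - w * w))
    return w * mu + s * (np.cos(phi) * e1 + np.sin(phi) * e2)

rng = np.random.default_rng(0)
source = np.array([0.0, 0.0, 1.0])
guide = sample_fisher(source, kappa=50.0, rng=rng)   # shared deflection
arrivals = [sample_fisher(guide, kappa=200.0, rng=rng) for _ in range(5)]
\end{verbatim}
Marginalizing over the unobserved guide direction is what makes the arrival directions of a multiplet exchangeable but not independent.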
| 12
| 6
|
1206.4569
|
1206
|
1206.6336_arXiv.txt
|
\noindent Far ultraviolet emission has been detected from a knot of H$\alpha$ emission in the Horseshoe filament, far out in the NGC~1275 nebula. The flux detected relative to the brightness of the H$\alpha$ line in the same spatial region is very close to that expected from Hydrogen two-photon continuum emission in the particle heating model of \citet{Ferlandetal09} if reddening internal to the filaments is taken into account. We find no need to invoke other sources of far ultraviolet emission such as hot stars or emission lines from CIV in intermediate temperature gas to explain these data.
|
\label{intro} Many central galaxies in cool core clusters exhibit an extensive optical emission-line nebula. The best observed example of such a system is in NGC~1275, which lies at the centre of the Perseus Cluster. The optical emission-line spectrum shows a characteristic low-ionization state similar to that seen in Low-Ionization Nuclear Emission-Line Regions (LINERs); many authors have sought to understand the excitation and ionization mechanism of this gas. The line spectrum is rich and there are many potential diagnostics present. Recently, the field has become confused because, in the nearby systems that we can observe in detail (e.g.\ NGC~4696 in the Centaurus Cluster and NGC~1275 in the Perseus Cluster), it is clear that different processes are at work in different spatial regions. Depending on the redshift of the source and on the instrument and exposure times being used, different parts of the nebula will be detected. For example, in early works such as \citet{Johnstoneetal87}, which used smaller telescopes and relatively short exposure times, the emission lines were detected mainly from the very central part of the galaxy. There, it was found that an excess of blue light over that expected from a normal elliptical galaxy correlated quantitatively with the luminosity in H$\alpha$, strongly suggesting the presence of star formation. Star formation is also clearly at work in some regions which are much further out (e.g. \citealt{Canningetal10}). Most central cluster galaxies are host to a radio source, indicating the presence of an active nucleus. Although in most cases the accretion luminosity from the black hole is many orders of magnitude lower than expected from accretion at the Bondi rate (\citealt{Dimatteoetal01}), there are some sources whose central regions are clearly dominated by lines from the active nucleus (e.g.\ broad lines in Abell~1068, \citealt{Hatchetal07}; strong [OIII]$\lambda5007$ emission in NGC~1275, \citealt{JohnstoneFabian88}; strong [OIII]$\lambda5007$ emission in Abell~3581, \citealt{Farageetal12}). In a series of papers starting with \citet{Johnstoneetal07} we are focussing on the emission line spectrum from more typical regions of the nebulae, far away from the complex regions near the centres of the galaxies. Although the surface brightness is lower, the expectation is that the physical processes at work are simpler and therefore easier to understand. The Horseshoe filament in NGC~1275, or more particularly the bright knot in that region known as Region 11 in the notation of \citet{Conseliceetal01}, is at a radial distance of 22 kpc (62 arcsec) from the centre of the galaxy and has an emission-line spectrum characteristic of large areas of the emission line nebulae (\citealt{Hatchetal06}). There are several key diagnostics which set these regions apart from most other astrophysical emission line regions. First, the molecular hydrogen lines are very strong relative to the Balmer or Paschen lines of hydrogen, and certainly much stronger than can be produced in a photo-dissociation region such as the Orion bar (\citealt{Hatchetal05}; \citealt{Ferlandetal08}). Second, the forbidden [NI] doublet at $\lambda5199$\AA\ is unusually strong, being typically $0.25\times$H$\beta$ (\citealt{Ferlandetal09}). Third, the [NeIII] lines, both in the optical at $\lambda 3869$\AA\ and in the mid-infrared at $\lambda15.55\mu$m, are stronger than expected, while the [OIII]$\lambda\lambda4959,5007$ lines are typically very weak (\citealt{Ferlandetal09}).
|
We have shown that Region 11 in the Horseshoe filament of NGC~1275 is detected in the far ultraviolet at wavelengths of $\sim$1500\,\AA\ by the Hubble Space Telescope ACS/SBC camera. The observed count rate is consistent with that expected from hydrogen two-photon emission if the filaments are excited by ionizing particles. These particles could naturally originate from the cooler phases of the hot intracluster medium. In this region there is no requirement for any further components of the far ultraviolet emission, such as hot stars or CIV$\lambda1550$ emission lines from intermediate temperature ($10^5$K) gas. Our particle heating model predicts a C~I $\lambda1656$\AA\ emission line at about 0.2$\times$I(H$\alpha$), which is not expected in a nebula spectrum dominated by recombination processes.
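For context on the two-photon mechanism invoked above, the relevant atomic physics is standard hydrogen recombination theory rather than a new result of this paper: the 2s $\rightarrow$ 1s decay emits two photons with
\[
\nu_{1} + \nu_{2} = \nu_{{\rm Ly}\alpha}, \qquad h\nu_{{\rm Ly}\alpha} = 10.2\ {\rm eV},
\]
at a transition rate $A_{2\gamma} \approx 8.2\ {\rm s^{-1}}$, with a photon distribution symmetric about $\nu_{{\rm Ly}\alpha}/2$. The continuum therefore vanishes shortward of 1216\,\AA\ and varies smoothly through the $\sim$1500\,\AA\ band observed here, which is why the ratio of the far ultraviolet flux to H$\alpha$ is a clean diagnostic of the particle heating picture.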
| 12
| 6
|
1206.6336
|
1206
|
1206.6931_arXiv.txt
|
We present radial velocities and chemical abundances for nine stars in the old, distant open clusters Be 18, Be 21, Be 22, Be 32, and PWM 4. For Be 18 and PWM 4, these are the first chemical abundance measurements. Combining our data with literature results produces a compilation of some 68 chemical abundance measurements in 49 unique clusters. For this combined sample, we study the chemical abundances of open clusters as a function of distance, age, and metallicity. We confirm that the metallicity gradient in the outer disk is flatter than the gradient in the vicinity of the solar neighborhood. We also confirm that the open clusters in the outer disk are metal-poor with enhancements in the ratios [$\alpha$/Fe] and perhaps [Eu/Fe]. All elements show negligible or small trends between [X/Fe] and distance ($<$ 0.02 dex/kpc), but for some elements, there is a hint that the local (\rgc\ $<$ 13 kpc) and distant (\rgc\ $>$ 13 kpc) samples may have different trends with distance. There is no evidence for significant abundance trends versus age ($<$ 0.04 dex Gyr$^{-1}$). We measure the linear relation between [X/Fe] and metallicity, [Fe/H], and find that the scatter about the mean trend is comparable to the measurement uncertainties. Comparison with solar neighborhood field giants shows that the open clusters share similar abundance ratios [X/Fe] at a given metallicity. While the flattening of the metallicity gradient and enhanced [$\alpha$/Fe] ratios in the outer disk suggest a different chemical enrichment history to the solar neighborhood, we echo the sentiments expressed by Friel et al.\ that definitive conclusions await homogeneous analyses of larger samples of stars in larger numbers of clusters. Arguably, our understanding of the evolution of the outer disk from open clusters is currently limited by systematic abundance differences between various studies.
|
Our Galaxy's open clusters are valuable tools to study the disk \citep{friel95}. Accurate homogeneous distances and ages can be measured for samples of open clusters that span a wide range in parameters \citep{salaris04}. Additionally, metallicity estimates of open clusters can be readily obtained from a variety of methods (albeit with differing degrees of accuracy) thereby enabling studies of the structure, kinematics, and chemistry of the disk as well as any temporal variations of these properties. The atmospheres of low-mass stars retain, to a great extent, the chemical composition of the interstellar medium at the time and place of their birth. The nucleosynthetic yields of the chemical elements depend upon stellar mass and metallicity. Therefore, measurements of metallicity, [Fe/H], and chemical abundance ratios, [X/Fe], in stars in open clusters offer powerful insight into the formation and evolution of the disk \citep{janes79,freeman02,friel02,jbh10,kobayashi11}. In recent times there has been considerable effort to understand the evolution of the outer Galactic disk (e.g., \citealt{hou00,chiappini01,andrievsky02c,daflon04,costa04,cescutti07,magrini09}). It appears that the metallicity gradient (i.e., [Fe/H] versus Galactocentric distance, \rgc) in the outer disk (\rgc\ $>$ 13 kpc) is flatter than the metallicity gradient in the solar neighborhood (e.g., \citealt{twarog97,luck03,carraro04,carney05,y05,bragaglia08,sestito08,jacobson09,friel10}). Qualitatively similar behavior has now been found in external galaxies (e.g., \citealt{worthey05,bresolin09,vlajic09,vlajic11}) suggesting that the processes which govern chemical evolution in the outer Galactic disk have also operated in the outskirts of other disk galaxies. Another intriguing result is evidence for enhanced [$\alpha$/Fe] ratios in the outer Galactic disk from open clusters \citep{carraro04,y05}, field stars \citep{carney05,bensby11}, and Cepheids \citep{y06}. Such abundance ratios in objects spanning a large range in ages can be explained by infall of pristine gas in the outer disk and/or vigorous star formation. Either explanation requires that the outer disk has experienced a significantly different star formation history compared to the solar neighborhood and this has important implications for the formation and evolution of the Galactic disk. However, not all studies of the outer disk find enhanced [$\alpha$/Fe] ratios (e.g., \citealt{bragaglia08}). While one explanation for the discrepancy may be systematic differences in the analyses, studies of additional open clusters in the outer disk are necessary to clarify the situation. Therefore, to further explore the formation and evolution of the outer Galactic disk, we analyze the chemical abundances of a new sample of open clusters. This is the fourth, and final, paper in our series on elemental abundance ratios in stars in the outer Galactic disk. Paper I \citep{y05} concentrated upon open clusters, Paper II \citep{carney05} was dedicated to field red giants, and Paper III \citep{y06} focused upon Cepheids. In this paper, we present radial velocities, metallicities [Fe/H], and element abundance ratios [X/Fe] for the open clusters Be 18, Be 21, Be 22, Be 32, and PWM 4.
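As a sketch of how the gradient comparison motivated above is commonly quantified, the snippet below fits independent linear [Fe/H]-versus-\rgc\ slopes (in dex kpc$^{-1}$) inside and outside a break radius; the 13 kpc break, the toy data, and the function itself are illustrative choices for exposition, not the exact procedure or measurements of this series.
\begin{verbatim}
import numpy as np

def broken_gradient(rgc, feh, r_break=13.0):
    """Fit separate linear metallicity gradients (dex/kpc) for
    clusters inside and outside a fixed break radius, a simple way
    to quantify the flattening of the outer-disk gradient."""
    slopes = {}
    for label, mask in (("inner", rgc < r_break), ("outer", rgc >= r_break)):
        slope, intercept = np.polyfit(rgc[mask], feh[mask], 1)
        slopes[label] = slope
    return slopes

# Toy example with made-up cluster values:
rgc = np.array([8.0, 10.0, 12.0, 14.0, 17.0, 21.0])
feh = np.array([0.00, -0.15, -0.30, -0.45, -0.50, -0.55])
print(broken_gradient(rgc, feh))  # inner slope steeper than outer
\end{verbatim}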
| 12
| 6
|
1206.6931
|
|
1206
|
1206.4934_arXiv.txt
|
The Cosmic Infrared Background (CIB) provides an opportunity to constrain many properties of the high redshift ($z>6$) stellar population as a whole. This background, specifically from 1 to 200 microns, should contain information about the era of reionization and the stars responsible for its ionizing photons. In this paper, we look at the fractional anisotropy ($\delta I / I$) of this high redshift population, where $\delta I$ is the magnitude of the fluctuations and $I$ is the mean intensity. We show that this quantity can be used to constrain the escape fraction of the population as a whole, because the magnitude of the fluctuations of the CIB depends on the escape fraction, while the mean intensity does not. As a result, lower values of the escape fraction produce higher values of the fractional anisotropy. This difference is predicted to be larger at longer wavelength bands (above 10 microns), although it is also much harder to observe in that range. We show that the fractional anisotropy can also be used to separate a dusty from a dust-free population. Finally, we discuss the constraints provided by current observations on the CIB fractional anisotropy.
|
\label{sec:introduction} Modern cosmology is now able to constrain details about the era of reionization. Observations show that the reionization of the universe occurred early and was extended in time, equivalent to an instantaneous reionization at $z \sim 11$ \citep{komatsu/etal:2008, komatsu/etal:2011}. Stars are a likely candidate for being responsible for the majority of reionization because they are efficient producers of ultraviolet photons. Thanks to a wave of modern, sensitive telescopes, we can begin to observe and understand the frontier of reionization, along with these high redshift stellar populations ($z>6$). For example, we can observe these galaxies directly via high redshift surveys, which can now routinely identify a population of bright galaxies up to a redshift of $z \sim 8$. However, these surveys can only locate those galaxies that are both above the limiting magnitude and common enough to be present in the survey field. It is now thought \citep{Bouwens/etal:2010,Robertson/etal:2010,fernandez/shull:2011} that reionization required a large population of smaller galaxies below the current detection limits. Because we cannot yet observe these galaxies directly, we can instead look for their cumulative light, which should exist as background radiation. Because reionization is thought to have occurred around $z \sim 11$, the photons responsible for reionization should be present in the Cosmic Infrared Background (CIB). The spectral peak of this radiation is around the Lyman-$\alpha$ line, which will be redshifted to $1-4$ microns. However, continuum emission will create an extended tail at longer wavelengths. Here, we discuss the CIB from about 1 to 200 microns. The majority of the CIB will be emission from sources below $z \sim 6$, such as our Galaxy, foreground galaxies, and other sources of infrared light like zodiacal light. If these sources can be subtracted away to high precision, it is possible that the remainder could be from the era of reionization, and if so, could tell us about the properties of these high redshift stars. There have been many attempts to theoretically model the high redshift component of the CIB, especially in the near-infrared, from the mean \citep{santos/bromm/kamionkowski:2002, magliocchetti/salvaterra/ferrara:2003,salvaterra/ferrara:2003, cooray/yoshida:2004,madau/silk:2005, fernandez/komatsu:2006} to the fluctuations \citep{kashlinsky/etal:2002, kashlinsky/etal:2004,kashlinsky/etal:2005, kashlinsky/etal:2007, kashlinsky/etal:2012, kashlinsky:2005,magliocchetti/salvaterra/ferrara:2003, cooray/etal:2004,thompson/etal:2007a, thompson/etal:2007b, fernandez/etal:2010, fernandez/etal:2011}. In this paper, we examine another way to analyze the CIB: the fractional anisotropy, which is the ratio of the fluctuations to the mean. In the fractional anisotropy, many free parameters cancel, and more information about this elusive stellar population can be extracted. Specifically, we discuss using the CIB as a probe of the escape fraction of ionizing photons. Finding the escape fraction is important for understanding reionization and its duration. There have been several attempts to measure the escape fraction through analytical models, simulations, and observations (see \citet{fernandez/shull:2011} and references therein). These papers have shown that the escape fraction appears to vary greatly from galaxy to galaxy.
Therefore, instead of trying to measure the escape fraction of an individual galaxy, here we discuss the average escape fraction of all galaxies, which will give more of a global view of reionization. In addition, the fractional anisotropy can reveal information about the dust content of galaxies, which is mostly unknown at high redshifts. We describe our simulations in section \ref{sec:sims} and our models in section \ref{sec:models}. In section \ref{sec:anis}, we discuss our method for finding the mean CIB, the fluctuations of the CIB, and the fractional anisotropy. In section \ref{sec:results}, we discuss our results of the fractional anisotropy for various bands. In section \ref{sec:obs}, we discuss the most recent observations. We conclude in section \ref{sec:conc}. Throughout this paper, we use the cosmological parameters ($\Omega_{\rm m}$, $\Omega_\Lambda$, $\Omega_{\rm b}$, $h$)=(0.27, 0.73, 0.044, 0.7), consistent with the simulations from \citet{iliev/etal:2011}, which are based on the WMAP 5-year results and other available constraints \citep{komatsu/etal:2008}.
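For reference, one common convention (used, e.g., in the fluctuation papers cited above; the notation here is ours) expresses the fluctuation amplitude on angular scale $2\pi/q$ in terms of the angular power spectrum $P(q)$ of the background, so that the fractional anisotropy is
\[
\frac{\delta I}{I} = \frac{1}{I}\sqrt{\frac{q^{2}P(q)}{2\pi}}.
\]
Roughly speaking, ionizing photons that do not escape a halo are reprocessed there into nebular emission, while those that escape are reprocessed in the more diffuse intergalactic medium; the total emission, and hence the mean $I$, is nearly conserved, but the clustering of the two components differs, which is how $f_{esc}$ enters $P(q)$ but not $I$.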
|
\label{sec:conc} We have shown the observable signatures of high redshift populations with different values of the escape fraction $f_{esc}$ and dust content in the CIB. It is possible to distinguish these populations through observations of the fractional anisotropy of the CIB. The global escape fraction of high redshift galaxies is a main variable that can be probed in this way, since the angular power spectrum is dependent on it, while the mean is not. In addition, dust will transform the SED of the galaxy, thus leaving an imprint on the fractional anisotropy. Therefore, low values of the fractional anisotropy will be indicative of a population of stars with a high escape fraction and little dust. This will be more noticeable at longer wavelengths. While observations are still difficult, improved observations could be able to distinguish between these populations. \\ We would like to thank Masami Ouchi and Kristian Finlator for helpful discussions. In addition, we would like to thank Melanie Koehler and Laurent Verstraete for help with DustEM. This work was supported by the Science and Technology Facilities Council [grant numbers ST/F002858/1 and ST/I000976/1]; the ANR program ANR-09-BLAN-0224-02 , and The Southeast Physics Network (SEPNet). The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. URL: http://www.tacc.utexas.edu. This research was supported in part by the National Science Foundation through TeraGrid resources provided by TACC and NICS.
| 12
| 6
|
1206.4934
|
1206
|
1206.3808_arXiv.txt
|
Galaxy structure and morphology are nearly always studied using the light originating from stars; ideally, however, one would measure structure using the stellar mass distribution. Not only does stellar mass trace out the underlying distribution of matter, it also minimises the effects of star formation and dust on the appearance and structure of a galaxy. We present in this paper a study of the stellar mass distributions and structures of galaxies at $z < 1$ as found within the GOODS fields. We use pixel by pixel K-corrections to construct stellar mass and mass-to-light ratio maps of 560 galaxies of known morphology at magnitudes $z_{850} < 24$. We measure structural and size parameters using these stellar mass maps, as well as from ACS $BViz$-band imaging. This includes investigating the structural $CAS$-Gini-$M_{20}$ parameters and the half-light radius ($R_{\rm e}$) for each galaxy. We further identify and examine unusual galaxy types with this method, including compact and peculiar ellipticals, and peculiar galaxies in some mode of formation. We compare structural parameters and half-light radii in the ACS $z_{850}$-band and stellar mass maps, finding no systematic bias introduced by measuring galaxy sizes in $z_{850}$. We furthermore investigate relations between structural parameters in the ACS $BViz$ bands and stellar mass maps, and compare our results to previous morphological studies. Combinations of various parameters in stellar mass generally reveal clear separations between early and late type morphologies, but cannot easily distinguish between star-forming and dynamically disturbed systems. We also show that while ellipticals and early-type spirals have fairly constant CAS values at $z < 1$, late-type spiral and peculiar morphological types tend to have a higher $A(M_{*})$ at higher redshift. We argue that this, and the large fraction of peculiars that appear spiral-like in stellar mass maps, are possible evidence for either active bulge formation in some late-type disks at $z < 1$ or the presence of minor merger events.
|
One of the most intriguing questions in modern astronomy is how the universe assembled into the structures we see today. A major part of this question is understanding how galaxies formed over cosmic time. A popular and rapidly developing method of tracing the evolution of galaxies is through examining galaxy morphologies and structures over a range of redshifts (e.g. Conselice 2003; Conselice, Rajgor \& Myers 2008; Conselice et al. 2009; Lotz \etal 2008; Cassata et al. 2010; Ricciardelli et al. 2010; Weinzirl et al. 2011). While understanding how the morphologies of galaxies evolve, and how matter in galaxies is structured, is fundamental, structural studies of galaxies have thus far focused on measuring properties in one or more photometric bands, and using this as a tracer of evolution. These types of structural analyses have always been measured in terms of relative luminosities, such that brighter parts of galaxies contribute more towards their structure and morphology. To understand galaxies more fully, however, it is important to study the evolution of galaxy structure in terms of their stellar mass distribution, as this better reflects the underlying distribution of stars in a galaxy. Morphological studies have evolved from initial attempts to describe the range of galaxy forms during the early-mid 20th century, towards modern efforts of linking the spatial distribution of a galaxy's stars to its formation history. In the move away from visual classifications, the goal has been to quantify galaxy structure/morphology in a way such that structures can be measured automatically and in a reliable way. There are two broad approaches to the automated classification of galaxies: the parametric and the non-parametric methods. In the parametric approach, the light profiles of galaxies, as seen in the plane of the sky, are compared with predetermined analytic functions. For example, single Sersic profiles are fit to an entire galaxy's light profile. Similarly, bulge-to-disc ($B/D$) light ratios are computed by fitting two component profiles to a galaxy. These methods are however unable to parameterise directly any star formation or merging activity which produces random or asymmetric structures within galaxies. The ``non-parametric'' structural approaches have no such presumed light profiles implicit in their analyses (e.g., Schade et al. 1995; Abraham et al. 1996; Conselice 1997, 2003; Lotz et al. 2004). This is a more natural approach towards measuring galaxy structures, as the majority of galaxies in the distant Universe have irregular structures that are not well fit by parameterised forms (e.g. Conselice \etal 2005; Conselice, Rajgor \& Myers 2008). Perhaps the most successful and straightforward of the non-parametric systems is the $CAS$ method, which uses a combination of concentration ($C$), asymmetry ($A$) and clumpiness ($S$) values to separate galaxy types (Conselice \etal 2000; Bershady \etal 2000; Conselice 2003). Besides overall structure, the measurement of the size evolution of galaxies, as measured through half-light radii, also has important implications for their formation histories. For example, many studies such as Trujillo \etal (2007) and Buitrago et al. (2008) have measured galaxy sizes for systems at $z > 1$ and have found a population of compact spheroid galaxies with number densities two orders of magnitude higher than what we find in the local universe.
The absence of a significant number of these small-sized (as measured by half-light radii), high-mass galaxies in the local Universe suggests that this population has evolved and has perhaps merged with other galaxies. However, these studies are based on the light originating from these galaxies, and it is not clear whether sizes would change when measured using the distribution of stellar mass rather than light. All of these methods for measuring the resolved structures of high redshift galaxies depend on measurements made in one or more photometric bands. As such, quantitative structures are influenced by a combination of effects, including: regions of enhanced star formation, irregular dust distributions, differing ages and metallicities of stellar populations, and minor/major mergers. The morphology and structure of a galaxy also depend strongly upon the rest-frame wavelength probed (e.g., Windhorst et al. 2002; Taylor-Mager et al. 2007). Young stars, such as OB stars, can dominate the appearances of galaxies at blue wavelengths, thereby giving a biased view of the underlying mass within the galaxy. Longer wavelength observations improve the situation, but at every wavelength the light emitted is from a mixture of stars at a variety of ages and is affected by the dust content. More fundamental is the structure of the stellar mass distribution within a galaxy, as it is a more direct tracer of the underlying potential. Although galaxy structure has been measured on rest-frame near-infrared and $I$-band imaging, which traces stellar mass to first order, we directly investigate the stellar mass images within this paper. We use the method outlined in Lanyon-Foster, Conselice \& Merrifield (2007; LCM07) to reconstruct stellar mass maps of galaxies within the GOODS fields at $z < 1$. We then use this to investigate the evolution of the distribution of galaxy stellar mass during the last half of the universe's history. We use these stellar mass maps to directly measure $CAS$ parameters, and the sizes of these galaxies in stellar mass over this epoch. We test the assumption that the $CAS$ parameters and sizes can be reliably measured in optical light by comparing the parameters for these galaxies measured in the $B_{435}$, $V_{606}$, $i_{775}$, and $z_{850}$ bands and in stellar mass. We finally investigate how these various wavelengths and stellar mass maps can be used to classify galaxies by their formation modes, revealing how these systems are assembling. This paper is organised as follows: in \S~\ref{sec:data} and \S~\ref{sec:method} we describe the data, sample selection and method, including explanations of the K-correction code we use and the $CAS$ analysis. In \S~\ref{sec:results} we present our results, discussing the stellar mass maps themselves, and the comparison between galaxy size and the $CAS$ parameters in $z_{850}$ and mass. We then explore the relations between the structural parameters in $B_{435}$, $V_{606}$, $i_{775}$, $z_{850}$ and stellar mass. Finally, in \S~\ref{sec:conclusions}, we discuss our conclusions and comment on future applications. Throughout we assume a standard cosmology of {\it H}$_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, and $\Omega_{\rm m} = 1 - \Omega_{\Lambda}$ = 0.3.
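As a minimal illustration of the kind of measurement carried through this paper, the rotational asymmetry of the $CAS$ system can be sketched in a few lines; this simplified version omits the minimization over the rotation centre and the blank-sky background correction of the full Conselice (2003) definition, and the function and toy image are ours.
\begin{verbatim}
import numpy as np

def asymmetry(image):
    """Rotational asymmetry A = sum|I - I_180| / sum|I|, where
    I_180 is the image rotated by 180 degrees about its centre.
    The same estimator is applied to a z-band cutout or to a
    stellar mass map."""
    rotated = np.rot90(image, 2)  # 180-degree rotation
    return np.abs(image - rotated).sum() / np.abs(image).sum()

# A symmetric toy 'galaxy' gives A = 0; a one-sided clump raises A.
y, x = np.mgrid[-32:33, -32:33]
galaxy = np.exp(-(x**2 + y**2) / 100.0)
print(asymmetry(galaxy))   # 0.0
galaxy[45, 45] += 0.5      # off-centre clump
print(asymmetry(galaxy))   # > 0
\end{verbatim}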
|
\label{sec:conclusions} We conduct a pixel by pixel study of the structures of 560 galaxies found in the GOODS-N field at $z < 1$ within stellar mass maps and Hubble Space Telescope ACS $BViz$ wavebands. We measure stellar masses for each pixel of each galaxy image from our $BViz$ images by fitting to stellar population models. We use these values to construct stellar mass maps for our sample and compare morphologies in $BViz$ and stellar mass. Our major findings and results include: I. We construct stellar mass maps and mass-to-light ratio maps for each galaxy and we present examples of each morphological type, in the $z_{850}$ band, stellar mass maps, and $(M/L)_B$, in Figures~\ref{fig:massmap_cE} to~\ref{fig:massmap_M}. We find that some compact elliptical (cE) galaxies have blue cores, and more complicated internal structures than either $z_{850}$ or stellar mass would reveal on its own. Many of the peculiar ellipticals (pE) also have blue cores and we assert that these objects may have merged in their recent histories. The early and late-type spirals display more varied patterns. We find that for some of these galaxies, structures seen in light (e.g. spiral arms) are smoothed out in stellar mass. However, this effect does not hold true for all galaxies, nor for all features seen in light. Stellar mass and ($M/L$) maps of the `peculiar' galaxies are complicated and vary across the sample. Although it is more difficult to make out features in stellar mass for galaxies that have very blue colours, structures not seen in light are revealed. II. We compare the half-light radius, $R_{\rm e}$, in $z_{850}$ with half-mass radii for our sample as a function of morphological type. We find no systematic tendency for any particular morphological type to have larger $R_{\rm e}$ in stellar mass than in the $z_{850}$ band, and thus conclude that there is no bias introduced by measuring galaxy sizes in $z_{850}$. III. We find a clear tendency for many galaxies to be more asymmetric in stellar mass than in $z_{850}$. We also find that morphology correlates with asymmetries, with early-types having low $A(M_{*})$, and late-types higher values of $A(M_{*})$. We find a relation between colour and $A(M_{*})$ such that bluer galaxies show larger asymmetry differences between optical light and stellar mass maps. The late-type spiral galaxies in the sample have higher $A(M_{*})$ than would be expected from their asymmetries in $z_{850}$. We discuss possible causes of this effect, including regions of enhanced star formation also possessing higher stellar masses, and the evolution of spirals. We note that these highly asymmetric spirals resemble the clumpy disks of Elmegreen \etal (2007) and Conselice et al. (2004) and are experiencing either minor merging activity or bulge formation through accretion of disk material. IV. We find that the Gini index in stellar mass is higher than in the $z$-band ($G(M_{*}) > G(z_{850})$) for all galaxies except the early-types, indicating that most of the stellar mass in later morphological types is contained within fewer pixels. Late-type and peculiar morphologies show a trend for M$_{20}(M_{*})$ $>$ M$_{20}(z_{850})$, suggesting that the pixels containing the brightest $20$ percent of the light are not necessarily those containing the greatest $20$ percent of the stellar mass. V. We investigate the relations between several combinations of the $CAS$ parameters in $BViz$ and stellar mass, and compare our results to previous morphological studies of this type (e.g.
Conselice, Rajgor \& Myers 2008; Lotz \etal 2004; 2008). We find that $z_{850}$ is the most appropriate photometric band to utilise for a $z < 1$ sample of galaxies with all morphologies, although stellar mass maps are better at distinguishing active galaxies from passive ones and is more a physical measure of structure. We furthermore compare our sample classifications in $G$-M$_{20}$ to those of Lotz \etal (2008) and find that the Lotz \etal criteria do not best describe our sample. We revise the Lotz \etal (2008) criteria to best fit our sample in the $z_{850}$-band and within the stellar mass maps, and find that early-types, late-types and mergers can be separated. However, the edge-on disk galaxies remain problematic and cannot be distinguished from mergers in $G(z_{850})$-M$_{20}(z_{850})$. We find that $G(M_{*})$-M$_{20}(M_{*})$ can be used to broadly separate early from late-type galaxies, but the criteria cannot distinguish between late-type/edge-on disks from peculiar/merger systems. VI. We find a relationship between $R_{\rm e}$ and $C$ (see appendix A) for early-type galaxies in both $z_{850}$ and stellar mass. In each case we find that galaxies with higher concentrations have larger radii, and this relation is steeper in $z_{850}$ than stellar mass. We also investigate asymmetry versus M$_{20}$ and find that these parameters display a similar relation to $A-C$ in $i_{775}$ and $z_{850}$. In stellar mass, $A(M_{*})$-M$_{20}(M_{*})$ shows a tighter relation than $A(M_{*})-C(M_{*})$, with a clear separation between early and late-type systems. Thus, we conclude that, for all parameters, late-type spiral galaxies overlap with with Pec/pM/M systems which may be ultimately taken from the same subset of galaxies. Structural studies in stellar mass do however track both minor and major merging events, and can be used to find galaxies in active galaxy evolution modes. We thank the GOODS team for making their data public, and STFC for a studentship towards supporting this work and the Leverhulme Trust for support.
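For concreteness, the non-parametric statistics used above can be written down compactly. The sketch below is ours, not the pipeline used in this work: it assumes a background-subtracted image with a pre-determined galaxy centre (in the full analysis the centre is normally chosen to minimise the total second-order moment), and follows the Gini and M$_{20}$ definitions of Lotz \etal (2004).
\begin{verbatim}
import numpy as np

def gini(image):
    # Gini index of the pixel distribution (Lotz et al. 2004).
    # Absolute values keep noisy, slightly negative
    # background-subtracted pixels from breaking the statistic.
    x = np.sort(np.abs(image.ravel()))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

def m20(image, xc, yc):
    # M20: log10 of the second-order moment of the brightest
    # 20 per cent of the flux, normalised by the total moment.
    y, x = np.indices(image.shape)
    r2 = ((x - xc) ** 2 + (y - yc) ** 2).ravel()
    f = image.ravel()
    order = np.argsort(f)[::-1]          # brightest pixels first
    f_sorted = f[order]
    m_sorted = (f * r2)[order]
    k = np.searchsorted(np.cumsum(f_sorted), 0.2 * f.sum())
    return np.log10(m_sorted[:k + 1].sum() / (f * r2).sum())
\end{verbatim}
Applied to a stellar mass map instead of a $z_{850}$ image, the same two functions yield $G(M_{*})$ and M$_{20}(M_{*})$ directly.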
| 12
| 6
|
1206.3808
|
1206
|
1206.2103_arXiv.txt
|
As part of the Transit Ephemeris Refinement and Monitoring Survey (TERMS), we present new radial velocities and photometry of the HD 192263 system. Our analysis of the already available Keck-HIRES and CORALIE radial velocity measurements, together with the five new Keck measurements we report in this paper, results in improved orbital parameters for the system. We derive constraints on the size and phase location of the transit window for HD 192263b, a Jupiter-mass planet with a period of 24.3587 $\pm$ 0.0022 days. We use 10 years of Automated Photoelectric Telescope (APT) photometry to analyze the stellar variability and search for planetary transits. We find continuing evidence of spot activity with periods near 23.4 days. The shape of the corresponding photometric variations changes over time, giving rise to not one but several Fourier peaks near this value. However, none of these frequencies coincides with the planet's orbital period, and thus we find no evidence of star-planet interactions in the system. We attribute the $\sim$23-day variability to stellar rotation. There are also indications of spot variations on longer (8-year) timescales. Finally, we use the photometric data to exclude transits for a planet with the predicted radius of 1.09 $R_{J}$, and for planets as small as 0.79 $R_{J}$.
|
\label{introduction} Detecting exoplanets via the radial velocity (RV) method and subsequently monitoring their transit windows is the most fruitful strategy for exoplanets orbiting bright stars, which represent the best candidates for atmospheric studies. Ground- and space-based transit surveys have revealed nearly 200 transiting exoplanets, but most of those orbit faint stars, while the RV technique is best suited for brighter stars. Moreover, searching for transits of known RV planets allows the selective monitoring of intermediate- and long-period planets, of which only a few are known to transit so far. The Transit Ephemeris Refinement and Monitoring Survey (TERMS; \citealt{Kan09}) aims to improve the orbital parameters of RV exoplanets and monitor them photometrically during the resulting constrained transit windows. In \cite{Dra11}, we presented new transits and improved parameters for several known transiting exoplanets, originally discovered by the SuperWASP survey \citep{Poll06}. We demonstrated that the photometric precision required to detect and characterize transits of giant planets is easily attainable by modest-sized, ground-based facilities such as the Cerro Tololo Inter-American Observatory (CTIO) 1.0 m telescope. Through TERMS, the ephemerides of HD 156846b, HD 114762b, HD 63454b and HD 168443b have been refined and transit searches conducted in each case (see \citealt{Kan11a}, \citealt{Kan11b}, \citealt{Kan11c} and \citealt{Pil11}, respectively). Exoplanets discovered using the RV method can sometimes be controversial, and HD 192263b is one such case. It was first published in 2000 \citep{San00} as a planet with a period of 24.13 days and $m \sin i$ = 0.73 M$_{J}$. These results arose from an analysis of RV measurements obtained using the CORALIE spectrograph. A paper by \cite{Vog00} followed, reporting a similar solution based on their Keck measurements, but noting that the chromospheric activity appeared to vary with a period close to that of the suspected planet. Two years later, \cite{Hen02} attributed the RV signal at least partly to stellar variability, as indicated by their photometric and spectrophotometric data. Indeed, a modulation with a period of $\sim$24 days is clearly visible in the light curves, and the power spectrum of the Ca II H and K spectrophotometric observations exhibits a significant peak at the same period. In the end, unlike other planet-like RV signals that were caused by stellar activity (\citealt{Que01}; \citealt{Des04}; \citealt{Pau04}), HD 192263b made a convincing comeback in \cite{San03}. New CORALIE measurements demonstrated that the RV variation remained coherent in amplitude and phase for over three years, while new photometric observations from La Palma revealed significant changes over time \citep{San03}. In this paper, we present new Keck RV observations which we use to refine the orbital parameters (section 2) and the transit ephemeris of the revived HD 192263b (section 3). We introduce new photometry of the host star obtained between 2002 and 2011 in section 4, and report on the stellar variability during this period in section 5. Finally, in section 6 we analyze the photometric measurements acquired during the predicted transit window and exclude transits with the predicted depth of 2.5$\%$ with high confidence. We conclude in section 7.
|
The planet orbiting HD 192263 is an example of the challenges posed by the detection of companions around active stars. In this paper, we presented five new RV measurements which we used to refine the orbital parameters of HD 192263b. The new measurements provide continuing support for the existence of the planet, in agreement with the conclusion of \cite{San03}. We have also shown new photometry of the system. We perform a Fourier analysis of the photometry and find evidence for variability with periods near 23 days, which agrees with previous reports of stellar variability and which we attribute to stellar rotation. A detailed examination of the Fourier spectrum near this value reveals a multiplet of peaks with periods ranging between 22 and 27 days. The dominant signal in this cluster has a period of 23.3932 $\pm$ 0.0046 days. The identified surrounding peaks arise from the evolving nature of spots on the stellar surface, which affect the shape and amplitude of the light curve on the timescale of the stellar rotation period. Nevertheless, neither the dominant period nor any of the five other peaks matches the orbital period of the planet ($P=24.3587 \pm 0.0022$ days), so we find no evidence of star-planet interactions. We also observe a longer-term trend which may be an $\sim$8-year activity cycle (if the cycle repeats), or may just be part of a longer interval of random variations in the star's spottedness. Continued long-term monitoring of the star should more convincingly discriminate between the two scenarios. It is noteworthy that we do not see a long-term variation in the RV measurements in phase with this long-term photometric trend. As our photometric dataset spans approximately a decade, we have good coverage of the 3$\sigma$ transit window when the data are phased to the orbital period of the planet. Thus we are able to thoroughly exclude transits of the predicted depth ($2.53 \%$, corresponding to a planet with a radius of 1.09 $R_{J}$) for a planet with a mass of 0.733 $M_{J}$. The absence of a detectable ($> 12.0$ m s$^{-1}$) Rossiter-McLaughlin effect is consistent with this result. We also exclude transit depths as low as $1.3\%$ (corresponding to a planetary radius of 0.79 $R_{J}$). In the case of a non-edge-on orbital configuration, the cadence of the data allows us to rule out transits with impact parameter $<$ 0.86.
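The quoted depths map onto planet radii through the usual $(R_{p}/R_{\star})^{2}$ relation for a dark planet crossing a uniform stellar disk. The sketch below is illustrative only: the stellar radius of $\sim$0.70 $R_{\odot}$ is our assumed value for the K dwarf HD 192263, chosen to be consistent with the depths quoted above, and is not a number taken from this paper.
\begin{verbatim}
R_SUN_KM = 695700.0   # solar radius in km
R_JUP_KM = 71492.0    # Jupiter equatorial radius in km

def transit_depth(rp_rjup, rstar_rsun):
    # Central-transit depth (Rp/R*)^2 for a dark planet.
    return (rp_rjup * R_JUP_KM / (rstar_rsun * R_SUN_KM)) ** 2

R_STAR = 0.70   # assumed stellar radius (Rsun); illustrative value
print(transit_depth(1.09, R_STAR))  # ~0.026, i.e. ~2.5% for 1.09 R_J
print(transit_depth(0.79, R_STAR))  # ~0.013, i.e. ~1.3% for 0.79 R_J
\end{verbatim}
Within the rounding of the assumed radii, these reproduce the 2.53\% and 1.3\% depths quoted above.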
| 12
| 6
|
1206.2103
|
1206
|
1206.2759_arXiv.txt
|
According to standard stellar evolution, lithium is destroyed throughout most of the evolution of low- to intermediate-mass stars. However, a number of evolved stars on the red giant branch (RGB) and the asymptotic giant branch (AGB) are known to contain a considerable amount of Li, whose origin is not always well understood. Here we present the latest developments on the observational side towards a better understanding of Li-rich K giants (RGB), moderately Li-rich low-mass stars on the AGB, as well as very Li-rich intermediate-mass AGB stars possibly undergoing the standard hot bottom burning phase. These last stars probably also enrich the interstellar medium with freshly produced Li.
|
Lithium (Li) is not only important for testing big bang nucleosynthesis predictions and for studying diffusion processes in atmospheres of dwarf stars \citep[see e.g.][and contributions in this issue]{Korn06}, it is also an important diagnostic tool for stellar evolution. Its abundance strongly depends on the ambient conditions because it is quickly destroyed at $T>3\times10^6$\,K, so that it diminishes if the stellar surface is brought into contact with hot layers by mixing processes. A nice illustration of the evolution of the Li abundance in low-mass, low-metallicity stars during their ascent on the RGB can be found in Fig.~5 of \citet{Lind09}, who investigated the globular cluster \object{NGC 6397}. The least evolved main sequence and turn-off stars define a plateau of Li abundances that is in agreement with the well-known Spite plateau \citep{Spite82}. A sharp drop in Li abundance occurs once the stars undergo the first dredge-up (FDU) on the sub-giant branch: the abundance drops asymptotically by more than an order of magnitude towards a final value of $\log\epsilon({\rm Li})\approx1.1$, consistent with the maximum value of 1.5 predicted by stellar models excluding atomic diffusion and rotation. This is also observed in most G-K giants in the field \citep{Lam80,Bro89,Mis06}, with some of these stars showing Li abundances far below expectations \citep{Mal99}. During FDU, the convective envelope is diluted by H-processed material that is devoid of Li. When the outward advancing H-burning shell reaches the discontinuity in mean molecular weight ($\mu$ barrier) left behind by FDU, the star is at the bump in the luminosity function (RGB bump), and the Li abundance on the star's surface is further diminished. This happens because the radiative layer between the H-burning shell and the convective envelope is now chemically homogeneous, so that any instability in the layer can lead to slow mixing processes \citep[e.g.\ thermohaline mixing;][]{CZ07} that bring the remaining Li to hot layers where it is burned. In the cluster investigated by \citet{Lind09}, the Li abundance in some stars evolved beyond the RGB bump drops to below the detection threshold at $\log\epsilon({\rm Li})\sim0.2$. For more massive stars, the H-burning shell does not cross the $\mu$ barrier before the star reaches the early AGB, when again slow mixing processes can further diminish the surface Li abundance. However, about 1 -- 2\,\% of K giants have a Li abundance higher than this classical limit of $\log\epsilon({\rm Li})\approx1.5$ and are therefore called {\em Li-rich} \citep{WS82,Bro89,dlR97}. Because these stars are known to have undergone normal FDU, it is commonly believed that they cannot have retained their original Li abundance, but rather must have replenished it somehow in situ. Li destruction can turn into production if the overturn time scale for mixing between the H-burning shell and the convective envelope becomes shorter than the decay time of the parent nucleus $^7$Be. This is known as the Cameron-Fowler mechanism \citep[$^3$He($\alpha$,$\gamma$)$^7$Be(e$^-$,$\nu$)$^7$Li;][]{CF71}. In this contribution we review our observational results on Li destruction and production in red giant stars of various masses and in various evolutionary stages.
We focus on three different topics here: first we present our results from a recent spectroscopic survey of Li in RGB stars in the Galactic bulge \citep{Leb12}, second we review the correlation between Li and the third dredge-up indicator technetium (Tc) in low-mass, oxygen-rich AGB stars \citep{Utt07,UL10,Utt11}, and finally we report on the large abundance of Li detected in long-period Miras that probably undergo a phase of hot bottom burning.
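For reference, the logarithmic abundance notation used throughout is the standard bracket-free scale; this definition is our addition, not part of the original text:
\begin{equation}
\log\epsilon({\rm Li}) \equiv \log_{10}\!\left(\frac{n_{\rm Li}}{n_{\rm H}}\right) + 12 ,
\end{equation}
so that, assuming the canonical Spite-plateau value of $\log\epsilon({\rm Li})\approx2.2$, the post-FDU value of $\approx1.1$ quoted above corresponds to a dilution of the surface Li number fraction by a factor of $10^{1.1}\approx13$, i.e.\ the ``more than an order of magnitude'' drop stated above.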
|
The phenomenon of the Li-rich K giants is still mysterious. Although more and more such stars are being found in diverse stellar systems, no satisfactory explanation for them can yet be given. A step forward would be to determine precise luminosities, for which precise distances are required. Parallax measurements of many such stars will become available in the Gaia era. Furthermore, it would be of great help to know whether these stars are preferentially H-shell burning RGB or He-burning early AGB stars. The tools of asteroseismology can distinguish between these evolutionary stages \citep{Bed11}. The Li-rich long-period Miras should be studied in more detail in the future. Only a few observational constraints on the evolution of intermediate-mass stars on the AGB are available at the moment. However, they potentially play an important role in the evolution of Li in the Galaxy and in star clusters because of their very high Li abundance and high mass-loss rate. They might be responsible for polluting the intra-cluster gas of globular clusters \citep{Lind09}, and are thus of importance on larger scales.
| 12
| 6
|
1206.2759
|
1206
|
1206.3569_arXiv.txt
|
LAMOST (Large sky Area Multi-Object fiber Spectroscopic Telescope) is a Chinese national scientific research facility operated by the National Astronomical Observatories, Chinese Academy of Sciences (NAOC). After two years of commissioning beginning in 2009, the telescope, instruments, software systems and operations are nearly ready to begin the main science survey. Through a spectral survey of millions of objects over much of the northern sky, LAMOST will enable research in a number of contemporary cutting-edge topics in astrophysics, such as the discovery of first-generation stars in the Galaxy, pinning down the formation and evolution history of galaxies, especially the Milky Way and its central massive black hole, and searching for signatures of the dark matter distribution and possible substructures in the Milky Way halo. To maximize the scientific potential of the facility, wide national participation and international collaboration have been emphasized. The survey has two major components: the LAMOST ExtraGAlactic Survey (LEGAS), and the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE). Until LAMOST reaches its full capability, the LEGUE portion of the survey will use the available observing time, starting in 2012. An overview of the LAMOST project and the survey that will be carried out in the next five to six years is presented in this paper. The science plan for the whole LEGUE survey, the instrumental specifications, the site conditions, and descriptions of the current ongoing pilot survey, including its footprint and target selection algorithms, will be presented in separate papers in this volume.
|
\label{sect:intro} The Chinese national scientific research facility, LAMOST (Large sky Area Multi-Object fiber Spectroscopic Telescope, also named the Guo Shou Jing Telescope (GSJT); for the original project plan see Wang et al. 1996, see also Cui et al. 2010 and Cui, Zhao \& Chu et al. 2012 for more details on the optical system) will begin carrying out its scientific survey of over 10 million stars and galaxies in October 2012. The survey contains two main parts: the LAMOST ExtraGAlactic Survey (LEGAS), and the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE) survey of Milky Way stellar structure. Because of the special horizontal reflecting Schmidt design of the optics, the Guoshoujing telescope has a field of view as large as 20 square degrees, and at the same time a large effective aperture that varies from 3.6 to 4.9 meters in diameter (depending on the direction it is pointing). The unique design of LAMOST enables it to take 4000 spectra in a single exposure to a limiting magnitude as faint as $r=19$ at resolution $R=1800$, which is equivalent to the design aim of $r=20$ at resolution $R=500$. The telescope therefore has great potential to efficiently survey a large volume of space for stars and galaxies. Much progress has been made in the past two years in tuning the performance of the LAMOST systems, including fiber positioning, dome seeing control, and optical alignment of the spectrographs on the hardware side, and calibrations, data pipelines and data archiving on the software side. As demonstrated by two years of technical commissioning, the system has not yet reached its original design performance. However, LAMOST is already producing useful spectra of bright objects, and can successfully observe targets as faint as $r=20$ in the very best cases (and even fainter for emission-line objects). LAMOST commissioning data have already produced a number of scientific results, including a search for metal-poor stars (Li et al. 2010), the discovery of new quasars (He et al. 2010, Wu et al. 2010a,b) and planetary nebulae (Yuan et al. 2010), and mapping of the 2D stellar population pattern of the M31 disk (Zou et al. 2011). A summary of statistics of abstracts from ADS (publications and conference proceedings) on LAMOST scientific work and technical reports is shown in Fig.~\ref{lamost-ads}. Since 1996, over 250 abstracts related to LAMOST have been contributed to the literature, according to the ADS search engine. Most of the abstracts are presentations and papers on instrumentation (65\%) and data reduction algorithms (14\%). Science planning started later (mostly after 2006), once the detailed hardware design of the LAMOST project had been determined. There are also scientific research presentations and papers that use LAMOST data (3\%) or are otherwise related to LAMOST (8\%). All these publications demonstrate the complex technical design of LAMOST and its scientific potential. More recent research papers show that LAMOST has not only achieved high efficiency in spectral data acquisition, but is also capable of producing science-quality data. \begin{figure} \centerline{\includegraphics[width=10cm]{lamost-ads-abs.eps}} \caption{A statistical survey of all the abstracts that are linked to LAMOST from ADS, as of March 2012. }\label{lamost-ads} \end{figure}
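As a rough consistency check on the ``over 10 million stars and galaxies'' target, the sketch below combines the 4000-fiber plate capacity quoted above with assumed observing rates; the plates-per-night and usable-nights figures are our illustrative assumptions, not LAMOST specifications.
\begin{verbatim}
FIBERS_PER_PLATE = 4000       # spectra per exposure (quoted above)
PLATES_PER_NIGHT = 3          # assumption: ~3 pointings per clear night
USABLE_NIGHTS_PER_YEAR = 170  # assumption: usable nights at the site
YEARS = 5.5                   # "next five to six years"

n_spectra = (FIBERS_PER_PLATE * PLATES_PER_NIGHT
             * USABLE_NIGHTS_PER_YEAR * YEARS)
print(f"~{n_spectra / 1e6:.0f} million spectra")  # ~11 million
\end{verbatim}
Under these assumptions the survey comfortably exceeds ten million spectra, consistent with the stated goal.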
| 12
| 6
|
1206.3569
|
|
1206
|
1206.3872_arXiv.txt
|
The recently discovered subdwarf B (sdB) pulsator \target\ is one of the 16 pulsating sdB stars detected in the \kep\ field. It features a rich $g$-mode frequency spectrum, with a few low-amplitude $p$-modes at short periods. This makes it a promising target for a seismic study aiming to constrain the internal structure of this star, and of sdB stars in general. We have obtained ground-based spectroscopic radial-velocity measurements of \target\ based on low-resolution spectra in the Balmer-line region, spanning the 2010 and 2011 observing seasons. From these data we have discovered that \target\ is a binary with period $P$=10.05\,d, and that the radial-velocity amplitude of the sdB star is 58\,km\,s$^{-1}$. Consequently the companion of the sdB star has a minimum mass of 0.63\,$M_{\sun}$, and is therefore most likely an unseen white dwarf. We analyse the near-continuous 2010--2011 \kep\ light curve to reveal the orbital Doppler-beaming effect, giving rise to light variations at the 238\,ppm level, which is consistent with the observed spectroscopic orbital radial-velocity amplitude of the subdwarf. We use the strongest 70 pulsation frequencies in the \kep\ light curve of the subdwarf as clocks to derive a third consistent measurement of the orbital radial-velocity amplitude, from the orbital light-travel delay. The orbital radius $a_{\rm sdB}\sin i$\,=\,11.5\,$R_{\sun}$ gives rise to a light-travel time delay of 53.6\,s, which causes aliasing and lowers the amplitudes of the shortest pulsation frequencies, unless the effect is corrected for. We use our high signal-to-noise average spectra to study the atmospheric parameters of the sdB star, deriving \teff\,=\,27\,910\,K and \logg\,=\,5.41\,dex, and find that carbon, nitrogen and oxygen are underabundant relative to the solar mixture. Furthermore, we analyse the \kep\ light curve for its pulsational content and extract more than 160 significant frequencies. We investigate the pulsation frequencies for expected period spacings and rotational splittings. We find period-spacing sequences of spherical-harmonic degrees $\ell$=1 and $\ell$=2, and we associate a large fraction of the $g$-modes in \target\ with these sequences. From frequency splittings we conclude that the subdwarf is rotating subsynchronously with respect to the orbit.
|
The hot subdwarf\,B (sdB) stars populate an extension of the horizontal branch where the hydrogen envelope is of too low a mass to sustain hydrogen burning. These core helium burning stars must have suffered extensive mass loss close to the tip of the red giant branch in order to reach this core/envelope configuration. Binary interactions, either through stable Roche lobe overflow or common envelope ejection, are likely to be responsible for the majority of the sdB population \citep[see][for a detailed review]{heber09}. \begin{figure}[t] \centerline{\psfig{figure=means.ps,width=6.5cm,angle=-90}} \caption[]{Mean spectra from KP4m, NOT, WHT (bottom to top), offset in flux for clarity. The top panel is a zoom-in of the WHT spectrum, demonstrating some of the stronger lines of heavy elements, such as MgII 4481\,\AA\ and SiIII 4552,\,4567\,\AA. } \label{fig:meanSpectra} \end{figure} Several extensive radial-velocity surveys have targeted the sdB stars, with the most recent large sample explored by \citet{copperwheat11}. They find that $\sim$50\%\ of all sdB stars reside in short-period binary systems, with the majority of companions being white dwarf (WD) stars. A recent compilation of such short-period systems can be found in Appendix A of \citet{geier11a}, which lists a total of 89 systems. Adding 18 new systems from \citet{copperwheat11} brings the total well above a hundred sdBs with periods ranging from 0.07 to 27.8\,d. These systems are all characterised by being single-lined binaries, {\em i.e.} only the sdB stars contribute to the optical flux, which directly constrains the companion to be either an M-dwarf or a compact stellar-mass object. Binaries with companions of type earlier than M are double-lined and also readily identifiable from a combination of optical and infrared photometry. \citet{reed04} find that $\sim$50\%\ of all sdB stars have IR excess and must have a companion no later than M2. Radial velocity studies targeting these double-lined stars have struggled to detect orbital periods, indicating that the periods must be exceedingly long. A recent breakthrough was made by \citet{ostensen12a}, who used high-resolution spectroscopy of a sample of eight bright subdwarf + main-sequence (MS) binaries to detect orbital periods spanning a range from $\sim$500 to 1200\,d, with velocity amplitudes between 2 and 8\,km\,s$^{-1}$. The period distributions of these different types of binary systems are important in that they can be used to constrain a number of poorly determined parameters used in binary population synthesis models, including the common envelope ejection efficiency, the mass and angular momentum loss during stable mass transfer, the minimum core mass for helium ignition, etc. The seminal binary population study of \citet{han02,han03} successfully predicts many aspects of the sdB star population, but the key parameters have a wide range of possible values. A recent population synthesis study by \citet{clausen12} explores the possible populations of sdB+MS stars and demonstrates how the entire population can change with different parameter sets, but does not deal with sdB+WD binaries. A theoretical prediction of the existence of pulsations in sdB stars, due to an opacity bump associated with iron ionisation in subphotospheric layers, was made by \citet{charpinet97}.
Since both $p$- and $g$-mode pulsations were discovered in sdB stars \citep{kilkenny97,green03}, there has been a focus on the possibilities to derive the internal structure and to put constraints on the lesser known stages of the evolution by means of asteroseismology. Currently the immediate aims of asteroseismology of sdB stars are to derive the mass of the helium-burning core and the mass of the thin hydrogen envelope around the core \citep[e.g.][]{randall06}, the rotational frequency and internal rotation profile \citep{charpinet08}, the radius, and the composition of the core \citep[e.g.][]{VanGrootel10,charpinet11b}. Recent observational success has been achieved from splendid light curves obtained by the CoRoT and \kep\ spacecraft, delivering largely uninterrupted time series with unprecedented accuracy for sdB stars. Overviews of the \kep\ survey stage results for sdB stars were given by \citet{ostensen10b,ostensen11b}, and case studies revealing dense pulsational frequency spectra are presented by \citet{reed10a} and \citet{baran11b}. From \kep\ data it has become clear that the $g$-modes in sdB stars can be identified from period spacings \citep{reed11c}. Earth-sized planets, possibly the stripped cores of giant planets, have been found around the pulsating sdB star KIC\,05807616 \citep[KOI-55, KPD\,1943+4058,][]{charpinet11b}, which was also the subject of the first seismic study of an sdB star in the \kep\ field \citep{VanGrootel10}. \kep\ sdB+dM binaries with pulsating subdwarf components have been presented by \citet{kawaler10b}, \citet{2m1938}, and by \citet{pablo11}. White-dwarf companions in close \kep\ binaries are presented by e.g.\ \citet{bloemen11,bloemen12} and \citet{silvotti12}. \begin{figure*}[ht] \centerline{\psfig{figure=kic11558725-rv.ps,height=14cm,angle=-90}} \caption[]{Radial-velocity curve from observations with the KP4m Mayall, the Nordic Optical, and the William Herschel telescopes. } \label{fig:radialVelocityCurve} \end{figure*} Our target, \target\ or J19265+4930, is one of the 16 pulsating sdB stars detected in the \kep\ field. The \kep\ magnitude of \target\ is 14.95, and the B-band magnitude is about 14.6, making it the third brightest in the sample. A first description of the spectroscopic properties, and of the pulsational frequency spectrum as found from the 26-day \kep\ survey dataset, was given by \citet{ostensen11b}, with the source showing frequencies in the range of 78--391\,$\mu$Hz. Already from this relatively short data set 36 frequencies were identified, demonstrating the potential of this star for a seismic study. Subsequently, \citet{baran11b} derived 53 frequencies in total from the \kep\ survey data, and the frequencies were identified in terms of spherical-harmonic degrees by \citet{reed11c}. As a consequence, the star was observed by \kep\ from Q6 onwards. At the time of writing, we have analysed data from the five quarters Q6 to Q10. We present the full frequency spectrum resulting from these 15 months of short-cadence \kep\ observations. In this paper we present our discovery of the binary nature of \target\ based on low-resolution spectroscopy. This object was sampled as part of a spectroscopic observing campaign to study the binary nature of the sdB population in the \kep\ field, for which some preliminary results have already been presented by \citet{telting12a}.
Part of the data from this campaign were presented in case studies of the close sdB+WD binary KIC\,06614501 \citep{silvotti12}, and of the $p$-mode pulsating sdB KIC\,10139564 (Baran et al. 2012). From our new spectra of \target\ we solve for the orbital radial-velocity amplitude, and derive a lower limit on the mass of the companion of the sdB star, which is most likely an unseen white dwarf (Sect.\ 2). We use the average spectrum to study the atmospheric parameters in detail. We show that the orbital \kep\ light curve reveals strong evidence for Doppler beaming that results in light variations at the 238\,ppm level, consistent with theoretical predictions, again allowing us to make an independent measurement of the orbital radial-velocity amplitude (Sect.\ 3). We extract 166 pulsational frequencies from the \kep\ light curve (Sect.\ 4), and show that the orbit has an appreciable effect, through the light-travel time, on the observed phases and frequencies of these pulsations, which in fact allows us to make a third independent measurement of the orbital radial-velocity amplitude (Sect.\ 5). In the final sections of this paper we discuss pulsational period spacings and frequency splittings, aiming to identify the spherical-harmonic degree of the modes and to uncover the rotation period of the subdwarf in \target. \begin{table*}[ht] \begin{center} \caption[]{Log of the low-resolution spectroscopy of \target } \label{tbl:obslog} \begin{tabular}{ccrr@{~~}rll} \hline\hline \noalign{\smallskip} Mid-exposure Date & {Barycentric JD} & S/N & RV~~~ & RV$_{\rm err}$~ & Telescope & Observer/PI \\ & --2455000 & & km\,s$^{-1}$ & km\,s$^{-1}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} 2010-07-28 00:24:52.0 & 405.519273 & 113.9 & $-$26.6 & 8.6 & WHT & JHT/CA \\ 2010-07-28 05:16:04.8 & 405.721505 & 94.6 & $-$30.6 & 9.0 & WHT & JHT/CA \\ 2010-08-12 10:05:19.3 & 420.922376 & 57.6 & $-$93.5 & 10.0 & KP4m & MDR,LF \\ 2010-08-12 10:16:03.9 & 420.929837 & 60.6 & $-$91.1 & 9.8 & KP4m & MDR,LF \\ 2010-08-13 03:41:29.7 & 421.655828 & 68.3 & $-$69.3 & 7.4 & KP4m & MDR,LF \\ 2010-08-13 03:52:04.9 & 421.663178 & 74.1 & $-$66.4 & 8.1 & KP4m & MDR,LF \\ 2010-08-14 08:24:41.8 & 422.852489 & 59.1 & $-$31.1 & 8.0 & KP4m & MDR,LF \\ 2010-08-14 08:35:42.3 & 422.860134 & 59.4 & $-$25.9 & 8.5 & KP4m & MDR,LF \\ 2010-08-14 08:46:01.9 & 422.867305 & 60.1 & $-$34.2 & 6.9 & KP4m & MDR,LF \\ 2010-08-25 22:54:52.3 & 434.456682 & 91.3 & $-$4.6 & 2.1 & WHT & RH\O/CA \\ 2010-08-25 23:09:46.0 & 434.467026 & 106.2 & $-$17.4 & 5.2 & WHT & RH\O/CA \\ 2010-08-27 00:36:26.9 & 435.527210 & 117.9 & $-$25.8 & 9.2 & WHT & RH\O/CA \\ 2010-08-27 00:46:36.6 & 435.534266 & 112.3 & $-$31.2 & 7.3 & WHT & RH\O/CA \\ 2010-08-27 21:56:54.7 & 436.416408 & 93.6 & $-$39.0 & 8.2 & WHT & RH\O/CA \\ 2010-08-27 22:08:29.4 & 436.424449 & 90.5 & $-$63.8 & 4.4 & WHT & RH\O/CA \\ 2010-08-28 23:18:34.2 & 437.473103 & 74.7 & $-$83.9 & 8.3 & WHT & RH\O/CA \\ 2010-08-28 23:28:43.8 & 437.480158 & 71.5 & $-$90.5 & 6.1 & WHT & RH\O/CA \\ 2010-08-29 21:53:35.3 & 438.414074 & 105.3 & $-$111.8 & 9.1 & WHT & RH\O/CA \\ 2010-08-29 22:03:44.8 & 438.421129 & 107.7 & $-$128.2 & 10.9 & WHT & RH\O/CA \\ 2010-08-31 01:31:47.4 & 439.565587 & 119.6 & $-$116.6 & 6.0 & WHT & RH\O/CA \\ 2010-08-31 01:41:56.9 & 439.572642 & 109.0 & $-$117.0 & 6.4 & WHT & RH\O/CA \\ 2011-05-31 01:56:05.9 & 712.581494 & 49.4 & $-$85.7 & 18.4 & NOT & JHT \\ 2011-05-31 04:48:22.9 & 712.701140 & 79.6 & $-$88.5 & 14.7 & NOT & JHT \\ 2011-06-01 02:35:14.4 & 713.608708 & 57.0 & $-$41.4 & 17.1 & NOT & JHT \\ 2011-06-07
03:46:43.9 & 719.658535 & 67.2 & $-$123.2 & 9.2 & NOT & JHT \\ 2011-06-09 03:34:59.9 & 721.650444 & 49.6 & $-$99.0 & 14.2 & NOT & JHT \\ 2011-06-10 04:58:52.1 & 722.708717 & 43.9 & $-$103.3 & 10.0 & NOT & JHT \\ 2011-06-20 00:20:10.7 & 732.515438 & 53.1 & $-$78.3 & 14.9 & NOT & JHT \\ 2011-06-21 01:27:07.4 & 733.561953 & 36.3 & $-$69.3 & 14.7 & NOT & JHT \\ 2011-06-27 21:53:09.0 & 740.413514 & 56.4 & $-$148.6 & 19.1 & NOT & JHT \\ 2011-07-22 23:30:41.0 & 765.481614 & 38.1 & $-$22.1 & 14.4 & NOT & JHT \\ 2011-07-23 22:43:51.2 & 766.449100 & 49.2 & $-$28.4 & 16.1 & NOT & JHT \\ 2011-08-29 00:17:40.9 & 802.514159 & 42.1 & $-$110.0 & 16.6 & NOT & RO \\ 2011-08-30 00:38:59.5 & 803.528944 & 48.1 & $-$81.2 & 16.6 & NOT & RO \\ 2011-08-31 00:25:05.0 & 804.519272 & 38.2 & $-$50.1 & 13.9 & NOT & RO \\ \noalign{\smallskip} \hline \end{tabular} \end{center} \end{table*}
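The three independent routes to the orbital amplitude outlined above follow from textbook relations, and the headline numbers can be reproduced with a few lines of code. The sketch below is ours: the canonical sdB mass of 0.48\,$M_{\sun}$ and the Doppler-beaming factor of $\approx$1.23 are assumed values for illustration, not quantities taken from this paper.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G, MSUN, RSUN, C = 6.674e-11, 1.989e30, 6.957e8, 2.998e8  # SI units
K = 58e3                 # sdB radial-velocity amplitude (m/s)
P = 10.05 * 86400.0      # orbital period (s)
M1 = 0.48                # assumed canonical sdB mass (Msun)

# Minimum companion mass from the binary mass function (sin i = 1):
fm = K**3 * P / (2 * np.pi * G) / MSUN
m2 = brentq(lambda m: m**3 / (M1 + m)**2 - fm, 0.01, 10.0)
print(f"M2,min = {m2:.2f} Msun")                 # ~0.63 Msun

# Doppler-beaming amplitude, for an assumed beaming factor <B>:
B = 1.23
print(f"beaming ~ {B * K / C * 1e6:.0f} ppm")    # ~238 ppm

# Roemer (light-travel) delay across the orbit:
a_sini = 11.5 * RSUN
print(f"delay ~ {2 * a_sini / C:.1f} s")         # ~53.4 s peak-to-peak
\end{verbatim}
All three numbers reproduce the values quoted in the abstract to within rounding, which is the internal consistency that Sects.\ 2, 3 and 5 establish in detail.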
| 12
| 6
|
1206.3872
|
|
1206
|
1206.6516_arXiv.txt
|
Several lines of evidence, from isotopic analyses of meteorites to studies of the Sun's elemental and isotopic composition, indicate that the solar system was contaminated early in its evolution by ejecta from a nearby supernova. Previous models have invoked supernova material being injected into an extant protoplanetary disk, or isotropically expanding ejecta sweeping over a distant ($> 10$ pc) cloud core, simultaneously enriching it and triggering its collapse. Here we consider a new astrophysical setting: the injection of clumpy supernova ejecta, as observed in the Cassiopeia A supernova remnant, into the molecular gas at the periphery of an H {\sc ii} region created by the supernova's progenitor star. To track these interactions we have conducted a suite of high-resolution ($1500^3$ effective) three-dimensional numerical hydrodynamic simulations that follow the evolution of individual clumps as they move into molecular gas. Even at these high resolutions, our simulations do not quite achieve numerical convergence, due to the challenge of properly resolving the small-scale mixing of ejecta and molecular gas, although they do allow some robust conclusions to be drawn. Isotropically exploding ejecta do not penetrate into the molecular cloud or mix with it, but, if cooling is properly accounted for, clumpy ejecta penetrate to distances $\sim 10^{18} \, {\rm cm}$ and mix effectively with large regions of star-forming molecular gas. In fact, the $\sim 2 \, M_{\odot}$ of high-metallicity ejecta from a single core-collapse supernova is likely to mix with $\sim 2 \times 10^{4} \, M_{\odot}$ of molecular gas material as it is collapsing. Thus all stars forming late ($\approx 5 \, {\rm Myr}$) in the evolution of an H {\sc ii} region may be contaminated by supernova ejecta at the level $\sim 10^{-4}$. This level of contamination is consistent with the abundances of short-lived radionuclides and possibly some stable isotopic shifts in the early solar system, and is potentially consistent with the observed variability in stellar elemental abundances. Supernova contamination of forming planetary systems may be a common, universal process.
|
\subsection{Solar System Contamination by Supernova Material} Many lines of evidence indicate that our solar system was contaminated during its formation by material from a nearby core-collapse supernova. Isotopic analyses of meteorites reveal both evidence for the one-time presence of short-lived radionuclides (SLRs) as well as stable element isotopic anomalies suggestive of supernova ejecta. Furthermore, the Sun's elemental and even its isotopic composition point to contamination from a supernova. Traditionally, the strongest arguments for supernova contamination come from isotopic analyses of the decay products of radioactive isotopes in meteorites. By observing correlations between excesses of the daughter isotope and the elemental abundance of the parent, it is inferred that the solar nebula contained several SLRs with half-lives $< 10 \, {\rm Myr}$, including ${}^{36}{\rm Cl}$, ${}^{10}{\rm Be}$, and most importantly, ${}^{26}{\rm Al}$ and ${}^{60}{\rm Fe}$ (Wadhwa et al.\ 2007). Even before it was discovered, Cameron (1962) suggested that the presence of ${}^{26}{\rm Al}$ in the early solar system would imply injection from a nearby supernova. Since its discovery (Lee et al.\ 1976), alternative sources of ${}^{26}{\rm Al}$ have been suggested, including production by irradiation by energetic particles within the solar nebula (Lee et al.\ 1998; Gounelle et al.\ 2001, 2006). These models encounter a number of difficulties, however (Desch et al.\ 2010), and an external nucleosynthetic source is usually invoked for this isotope (Huss et al.\ 2009; Wadhwa et al.\ 2007; Makide et al.\ 2011; Boss 2012). More recently, the existence of ${}^{60}{\rm Fe}$ in the solar nebula at a level ${}^{60}{\rm Fe} / {}^{56}{\rm Fe} \sim 3 \times 10^{-7}$ was reported by Tachibana \& Huss (2003). This would definitively indicate injection of material from a nearby supernova into the Sun's molecular cloud or protoplanetary disk, as no other plausible sources exist for this neutron-rich isotope (Leya et al.\ 2003; Wadhwa et al.\ 2007). On the other hand, the widespread existence of ${}^{60}{\rm Fe}$ in the solar nebula at these levels has been called into question, although its existence at lower levels, ${}^{60}{\rm Fe} / {}^{56}{\rm Fe} \sim 1 \times 10^{-8}$, appears to be robust (Telus et al.\ 2012; Quitte et al.\ 2010; Spivak-Birndorf et al.\ 2011). Even at ${}^{60}{\rm Fe} / {}^{56}{\rm Fe} \sim 1 \times 10^{-8}$, the existence of ${}^{60}{\rm Fe}$ probably demands a late input from a supernova (Jacobsen 2005; Huss et al.\ 2009). Thus, while the evidence from meteoritic SLRs is not quite as clear-cut as previously thought, the consensus view remains that ${}^{60}{\rm Fe}$, ${}^{26}{\rm Al}$, and other SLRs were injected by a supernova. Furthermore, the SLR measurements in meteorites also suggest this contamination occurred early in the solar system's evolution (Wadhwa et al.\ 2007; Huss et al.\ 2009). This is because high levels of ${}^{26}{\rm Al}$ (at an initial abundance ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ $\approx 5 \times 10^{-5}$) are commonly inferred for calcium-rich, aluminum-rich inclusions (CAIs) in meteorites at the time they formed (MacPherson et al.\ 1995). CAIs are composed of minerals that condense from a solar-composition gas at very high temperatures, $> 1700 \, {\rm K}$ (Ebel \& Grossman 2000), meaning that they formed in a hot solar nebula.
Such temperatures require high mass accretion rates through the protoplanetary disk, $\dot{M} > 10^{-6} \, M_{\odot} \, {\rm yr}^{-1}$, that cannot be maintained for more than $\sim 10^{5} \, {\rm yr}$ (e.g., Lesniak \& Desch 2011). This timeframe is consistent with the finding by Larsen et al.\ (2011) that the initial ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ ratio in CAIs is uniform and suggestive of ${}^{26}{\rm Al}$-bearing CAIs forming from a homogenized reservoir all within $< 3 \times 10^{5} \, {\rm yr}$ of each other (Makide et al.\ 2011). In fact, this timescale is nearly as short as the expected free-fall timescale on which material is believed to collapse from the molecular cloud, and it appears quite likely that ${}^{26}{\rm Al}$ was injected at some point {\it during} the collapse process (Thrane et al.\ 2008; Makide et al.\ 2011). Injection and incomplete homogenization would also explain the existence of rare CAIs called FUN inclusions (Fractionation and Unknown Nuclear effects), for which strong upper limits on initial ${}^{26}{\rm Al}$ exist, as low as ${}^{26}{\rm Al} / {}^{27}{\rm Al} < 10^{-8}$ (Fahey et al.\ 1987), at least for some of these objects. Presumably these CAIs formed early, from material not yet contaminated by mixing of injected supernova material (Sahijpal \& Goswami 1998). The weight of evidence is that injection of ${}^{26}{\rm Al}$-bearing supernova material happened very early in the solar system's evolution, probably in the first 1 Myr. Strong meteoritic evidence for supernova injection is also provided by stable isotope anomalies. Variations in ${}^{54}{\rm Cr}$ among planetary materials argue strongly for a heterogeneous distribution of this isotope within the solar nebula (Podosek et al.\ 1997; Rotaru et al.\ 1992; Trinquier et al.\ 2007). The carrier of this anomaly has recently been discovered to be small ($\sim 100 \, {\rm nm}$) spinel (${\rm MgAl}_{2}{\rm O}_{4}$) presolar grains with ${}^{54}{\rm Cr} / {}^{52}{\rm Cr}$ ratios at least 3 times the solar value (Dauphas et al.\ 2010), and possibly higher (Qin et al.\ 2011; Nittler et al.\ 2012). Qin et al.\ (2011) argue these formed from material from the O/Ne and O/C burning zones of a type II supernova. Other stable isotope anomalies appear to correlate with ${}^{54}{\rm Cr}$, including ${}^{62}{\rm Ni}$ (Regelous et al.\ 2008) and ${}^{46}{\rm Ti}$ and ${}^{50}{\rm Ti}$ (Trinquier et al.\ 2009), which Qin et al.\ (2011) argue are also consistent with an origin in the O/Ne or O/C zones of a type II supernova. Interestingly, Larsen et al.\ (2011) have presented evidence for heterogeneous ${}^{26}{\rm Mg}$ anomalies (from decay of ${}^{26}{\rm Al}$) that correlate with the ${}^{54}{\rm Cr}$ anomalies, which would strongly imply that the source of ${}^{26}{\rm Al}$ in the solar nebula was associated with the nanospinels that introduced the ${}^{54}{\rm Cr}$. In addition, Ranen \& Jacobsen (2006) inferred late contributions from a nucleosynthetic source from variations in Ba isotopes, and Dauphas et al.\ (2002) inferred the same from variations in Mo isotopes. These stable isotope anomalies, manifested as differences in isotopic ratios between different planetary materials, represent (late) additions of material that did not mix well in the solar nebula. There are also stable isotopes which appear well mixed but manifest themselves as differences in isotopic ratios between planetary materials and the predictions of Galactic chemical evolution.
As emphasized by Clayton (2003), the isotopic ratios of Si in meteorites and planetary materials in the solar system are difficult to reconcile with the isotopic ratios in ``mainstream'' SiC presolar grains. These grains seem to show greater contributions from the secondary isotopes (${}^{29}{\rm Si}$ and ${}^{30}{\rm Si}$), relative to the primary isotope ${}^{28}{\rm Si}$, than solar system materials, despite the fact that they predate the solar system and sample material that has seen less Galactic chemical evolution (Clayton \& Timmes 1997; Alexander \& Nittler 1999; Zinner 1998). Contamination of the solar system by ${}^{28}{\rm Si}$-rich supernova material has been invoked to explain this discrepancy (Alexander \& Nittler 1999). In a similar way, Young et al.\ (2011) have considered the oxygen isotopic composition of the solar system in a Galactic context, comparing it to gas around protostars. They infer that the solar system was enriched in ${}^{18}{\rm O}$ (and / or depleted in ${}^{17}{\rm O}$), relative to ${}^{16}{\rm O}$, by about 30\%. They also argue for mixing of material with ejecta from a core-collapse supernova. Going beyond the strong evidence for supernova contamination of meteorites and planetary materials, there is growing evidence for contamination of the Sun itself. Recent {\it Genesis} measurements of isotopic ratios in the solar wind appear to confirm that the Sun's oxygen isotopic ratio matches that of CAIs in meteorites (McKeegan et al.\ 2011), meaning that if the meteorites differ isotopically from the Galactic average, then so does the Sun. Also, it has long been recognized that the Sun's metallicity is anomalously high compared to G dwarfs formed at the same time and galactocentric distance (Edvardsson et al.\ 1993), and it has even been suggested that the Sun formed at 6.6 kpc, in order to explain its elevated [Fe/H] (Wielen et al.\ 1996). An alternative explanation is that stars forming at the same place and time may receive considerably different contributions of supernova material (Reeves 1978). The Sun's [Fe/H] might appear anomalously high if it received a significant amount of supernova material. A prediction of this scenario is that stars would exhibit variations in [Fe/H] and other elemental ratios, because of the presumably stochastic nature of supernova contamination. Observational support for elemental variations was sought by Cunha \& Lambert (1994) and Cunha et al.\ (1998), who found variations of up to a factor of 2 in the elemental ratios of O and Si, but not of Fe, C and N, among newly formed B, F and G stars of the same age and subgroup in the Orion star-forming region. The variability of O and Si, which are primary products of core-collapse supernovae, but not of C and N, which come predominantly from sources other than core-collapse supernovae, was taken as strong evidence for contamination from nearby supernovae. Unfortunately, subsequent work has not confirmed such high degrees of variability among Orion stars (D'Orazi et al.\ 2009; Takeda et al.\ 2010; Sim\'{o}n-Di\'{a}z 2010; Nieva \& Sim\'{o}n-Di\'{a}z 2011). Intriguingly, though, among stars known by radial velocity surveys to host planets, the ratios of abundant elements like C, O, Si and Fe appear to vary by factors of 2 in their stellar atmospheres (Bond et al.\ 2008; Pagano et al.\ 2010). Supernova injection into the molecular cloud from which protostars are forming remains a plausible mechanism for these variations, and may contribute to the abundances observed in planet-hosting stars.
In summary, the preponderance of the evidence from studies of SLRs and stable isotope anomalies in meteorites, comparisons of stable isotopic ratios in the solar system with those in presolar grains and interstellar gas, and measurements of the elemental variations of planet-hosting stars all point to a single scenario. Supernova ejecta contaminated the Sun, likely very early in the solar system's evolution, and similar contamination is likely to be a common occurrence in the formation of Sun-like stars. \subsection{Sources of Supernova Contamination} Various models have been proposed for how a newly forming solar system could be contaminated with supernova material, either during the early stages of collapse or soon after the protoplanetary disk has formed. Cameron \& Truran (1977) suggested that the Sun's molecular cloud core was both contaminated by supernova material {\it and} simultaneously triggered by the supernova shock to collapse. Increasingly sophisticated numerical models have simulated the interaction of supernova ejecta with a marginally stable molecular cloud core, showing that the ejecta simultaneously can trigger collapse of the cloud core and inject supernova material into the collapsing gas, provided the ejecta have been slowed to speeds $5 - 70 \, {\rm km} \, {\rm s}^{-1}$ (Boss 1995; Foster \& Boss 1996, 1997; Boss \& Foster 1998; Vanhala \& Cameron 1998; Boss \& Vanhala 2000; Vanhala \& Boss 2002; Boss et al.\ 2008, 2010; Boss \& Keiser 2010). This last point is crucial, since higher speeds tend to shred apart the cloud core rather than initiate its collapse. The need to slow the ejecta from initial velocities $> 2000 \, {\rm km} \, {\rm s}^{-1}$ demands that several parsecs of gas lie between the supernova and the cloud core. Such molecular gas is observed to lie at the periphery of H {\sc ii} regions in which massive stars evolve and go supernova, but the rest of the scenario is difficult to test observationally, because the cloud cores would be embedded several parsecs deep within the molecular clouds. Supernova injection into molecular clouds was explored in a different context by Gounelle et al.\ (2009). In their model, multiple supernovae in a stellar cluster sequentially condense the ambient low-density interstellar gas into molecular clouds, and the ejecta material is assumed to mix into the molecular gas simultaneously. As a result of this sequential enrichment, stars of the next generation forming from these molecular clouds would contain the products of multiple supernovae. Injection of supernova material into a protoplanetary disk was considered analytically by Chevalier (2000), and numerically by Ouellette et al.\ (2005, 2007, 2009, 2010). These authors noted that protoplanetary disks are commonly found in high-mass star-forming regions, near massive stars that will quickly evolve off the main sequence and explode as supernovae. Ouellette et al.\ (2010) found that injection of supernova material into a protoplanetary disk, at levels high enough to explain the abundances of SLRs like ${}^{26}{\rm Al}$, was possible, provided these species resided in large (radii $> 0.1 \, \mu{\rm m}$) dust grains, to avoid flowing around the disk. Gounelle \& Meibom (2009) and Williams \& Gaidos (2009) noted that the disk is very likely to have already evolved several Myr, or to be many parsecs away, at the time of the explosion, raising doubts that injection into a protoplanetary disk can explain the abundances of SLRs.
Ouellette et al.\ (2010) countered that a combination of triggered formation and clumpy supernova ejecta may yet satisfy the constraints, and work on this model is ongoing. An important feature of this model that may distinguish it from alternatives is that significant amounts of supernova ejecta do not enter the star, just the disk material. Here we study a third alternative, based on the observation that star formation occurs in the molecular gas at the edges of H {\sc ii} regions, quite probably triggered by the ionization fronts and associated shocks driven by the massive stars at the center of the H {\sc ii} region (Hester et al.\ 2004; Hester \& Desch 2005; Snider et al.\ 2009). The explosion of a massive star will generally occur at the center of an H {\sc ii} region, and its ejecta will generally contaminate this peripheral gas. If supernova ejecta could emplace themselves in the molecular gas as it collapses due to compression either from a D-type ionization front or a supernova shock, then supernova contamination of protostars would indeed be a common process. Thus we are motivated to study a mechanism by which supernova material may be deposited directly into forming solar systems: the injection of dense clumps of innermost supernova material such as those observed in SN 1987A and the Cassiopeia A (Cas A) supernova remnant. By means of high-resolution 3D numerical simulations, we consider how such highly-enriched dense knots enter and mix with the gas of a nearby molecular cloud, at the periphery of the H {\sc ii} region in which the supernova progenitor resided. Many interesting numerical studies of the interaction of cold, overdense clumps moving through a hot, lower density medium have been undertaken in other astrophysical contexts. Klein et al.\ (1994) studied the evolution of nonradiative clouds propagating through the general interstellar medium (ISM), showing that if the cloud velocity is much greater than its sound speed, it will be disrupted on a ``cloud crushing'' timescale given by the time for the shock to cross the cloud interior. Subsequent ISM-scale studies showed that magnetic fields (e.g.\ Mac Low et al.\ 1994) and radiative cooling that operates above the initial cloud temperature (Fragile et al.\ 2005) were only able to delay this disruption by 1--2 cloud crushing times. However, if shock interactions are able to efficiently catalyze coolants that radiate efficiently below the initial cloud temperature, the cloud will collapse, a process that may lead to triggered star formation on galactic scales (e.g., Fragile et al. 2004; Gray \& Scannapieco 2010). Even if cooling is efficient only above the initial cloud temperature, clumps can be maintained for long timescales if they move through the medium faster than the exterior sound speed, because of a bow shock that forms in front of them. Analogous cases that have been modeled include comet Shoemaker-Levy 9 plunging through Jupiter's atmosphere (Mac Low \& Zahnle 1994), clouds interacting with galaxy outflows (Cooper et al.\ 2009), high-velocity clouds orbiting the Milky Way (Kwak et al.\ 2011), and ``bullets'' of ejecta from stellar outflows (Poludnenko et al.\ 2004), proto-planetary nebulae (Dennis et al.\ 2008), and supernovae (Raga et al.\ 2007) moving through the ionized ISM. The simulations presented in this paper also lie in this supersonic regime, but invoke a very different set of parameters from these previous studies.
Our simulations are the first to study the interaction of supernova bullets with the molecular gas at the periphery of an H {\sc ii} region. The structure of this paper is as follows. In \S 2 we discuss the astrophysical context in which supernovae often take place, in an H {\sc ii} region. We also discuss the evidence that a substantial portion of supernova ejecta is ejected in the form of dense clumps. In \S 3 we outline the numerical methods by which we model the interaction of these clumps with the surrounding molecular cloud. In \S 4 we present the results of a parameter study designed to test the extent to which supernova material can penetrate into a molecular cloud and mix with molecular gas, as a function of clump velocity, mass, density, and other parameters. In \S 5 we discuss the implications of these results for the abundances of short-lived radionuclides and stable isotope anomalies in the early solar system, elemental variations among stars formed in the same cluster, and galactic enrichment in general.
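To set the scales involved, a short sketch follows of the classic adiabatic ``cloud crushing'' estimate of Klein et al.\ (1994), applied to fiducial clump parameters of the kind adopted later in this paper. The formula is standard, but treating the ejecta clump as the ``cloud'' is our illustrative simplification, and the result should be read as the no-cooling limit only.
\begin{verbatim}
import math

rho_clump = 3.8e-19   # g cm^-3, ejecta clump density (fiducial)
rho_mc    = 4.7e-20   # g cm^-3, molecular cloud, n_H2 ~ 1e4 cm^-3
R_clump   = 5e15      # cm, clump radius (fiducial)
v_ej      = 2000e5    # cm s^-1, fiducial ejecta velocity

chi = rho_clump / rho_mc                 # density contrast, ~8
t_cc = math.sqrt(chi) * R_clump / v_ej   # cloud-crushing time
print(f"chi = {chi:.1f}, t_cc = {t_cc / 3.15e7:.1f} yr")   # ~2 yr

# Without cooling, disruption within a few t_cc limits penetration
# to roughly a column-matching depth ~ chi * R_clump:
print(f"naive stopping depth ~ {chi * R_clump:.1e} cm")    # ~4e16 cm
\end{verbatim}
The simulations presented below find penetration depths of order $10^{18}$ cm when cooling is efficient, i.e., more than an order of magnitude beyond this adiabatic estimate, which is precisely why the treatment of cooling dominates the outcome.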
|
\subsection{Summary} The simulations described above represent the first numerical study of clumpy supernova ejecta interacting with molecular gas at the periphery of an H {\sc ii} region. We assumed typical distances from the supernova of about 2 pc, similar to the distances of ejecta from the explosion center in Cas A, and comparable to the distance an ionization front is inferred to propagate before the supernova occurs. Guided by the approximate models of Ouellette et al.\ (2010) and by observations of the Cas A supernova remnant, we assume radii $\approx 5 \times 10^{15} \, {\rm cm}$, masses $\approx 1 \times 10^{-4} \, M_{\odot}$, and densities $\approx 3.8 \times 10^{-19} \, {\rm g} \, {\rm cm}^{-3}$. Furthermore, we adopted a fiducial ejecta velocity of $2000 \, {\rm km} \, {\rm s}^{-1}$, and the molecular gas density was fixed at $n_{\rm H2} \approx 10^{4} \, {\rm cm}^{-3}$, with a cooling function appropriate to shocked, optically thin gas. With these parameters, numerical resolution is a real concern. Metrics like the mean distance traveled by ejecta after a stopping time (i.e., $d_{\rm ej}$ at 30,000 years) are sensitive to physical conditions in the first 1000 years of the interaction and vary non-monotonically as the numerical resolution increases. Convergence is difficult to achieve because of the very large span of length scales in the problem: clumps travel $\sim 10^3$ times their own diameter, and fragment by KH and possibly cooling instabilities into even smaller scales. Unfortunately, higher numerical resolution is infeasible for this study, as each run consumed several hundred thousand CPU-hours. Turbulence, which is not included in these runs, may ameliorate these problems somewhat by introducing a lower limit to the size of coherent fragments. Despite the lack of numerical convergence, certain trends in the data appear to be robust. Under the right conditions, $\gtsimeq 80$--$90\%$ of the clump material is injected to mean depths $\approx 0.3 \, {\rm pc}$ and remains in the molecular cloud. The conditions under which ejecta remain in the cloud appear to hinge entirely on the cooling timescale. If cooling is not sufficiently rapid, the post-shock pressure builds to the point that the bulk of the ejecta is expelled from the molecular cloud. Efficient injection requires a cooling timescale not much greater than the dynamical timescale, $\sim 100 \, {\rm yr}$. The cooling timescale decreases in inverse proportion to the post-shock density, and a threshold density exists for injection of material, which is very roughly 6 times the density of the molecular gas. The cooling timescale also increases sensitively with post-shock temperature, and therefore shock speed. An optimal ejecta velocity $V \approx 1000 - 2000 \, {\rm km} \, {\rm s}^{-1}$ exists for injection, and the depth of delivery, $d_{\rm ej},$ and the fraction injected, $f_{\rm inj},$ decrease with increasing $V$ above this range. Another robust trend is that if cooling is effective, $d_{\rm ej}$ appears to correlate with the clump's momentum per unit area\footnotemark\footnotetext{The momentum per unit area has been found to be a crucial quantity in a different context by Foster and Boss (1996). In a study of the interaction of stellar ejecta with molecular cloud cores, they showed that the incident momentum of the ejecta plays an important role in determining whether the interaction leads to collapse or destruction of the cloud core.}.
Another robust trend is that, if cooling is effective, $d_{\rm ej}$ appears to correlate with the clump's momentum per unit area\footnotemark\footnotetext{The momentum per unit area has been found to be a crucial quantity in a different context by Foster and Boss (1996). In a study of the interaction of stellar ejecta with molecular cloud cores, they showed that the incident momentum of the ejecta plays an important role in determining whether the interaction leads to collapse or destruction of the cloud core.}. Clumps with higher $\rho_{\rm ej} R$ generally travel the farthest, but $d_{\rm ej}$ does not monotonically increase with this quantity because instabilities can cause the clump to spread laterally, decreasing its momentum per area. These instabilities manifest themselves more prominently when the radius of the clump is increased and is therefore better resolved. In all cases where the clump penetrates beyond $\gtsimeq 10^{17} \, {\rm cm}$, $\gtsimeq 80\%$ of the clump material remains in the molecular cloud at late times, and $f_{\rm inj}$ increases with $d_{\rm ej}$. Our ability to numerically model the full range of parameters relevant to the supernova injection problem is incomplete. For example, clumps can be smaller (below the imaging resolution of the {\it Hubble Space Telescope}) and denser than we accounted for, and many clumps in the Cas A supernova remnant are likely to be traveling at speeds $V \approx 6000 \, {\rm km} \, {\rm s}^{-1}$ (Fesen et al.\ 2001). Smaller clumps and faster shock speeds are very difficult to compute numerically; because of the limitations of the CFL condition, even a factor of 2 increase in resolution requires an order of magnitude more computing time. In addition, we did not vary the density of gas in the molecular cloud. Gas at the tops of the pillars in M16 has been shocked by the advancing D-type ionization front, and is characterized by densities $\gtsimeq 10^5 \, {\rm cm}^{-3}$, an order of magnitude higher than the densities we assumed. Density inhomogeneities such as star-forming cores within the molecular cloud would affect the propagation of ejecta material as well, and turbulent motions, which must be present in the molecular cloud, would affect the diffusion of material and the minimum lengthscales of coherent structures. Despite these limitations, we have gained enough insight from our parameter studies to predict how injection would proceed under different scenarios. One relevant set of parameters might be high-velocity ($6000 \, {\rm km} \, {\rm s}^{-1}$) clumps encountering denser ($n_{\rm H2} = 10^5 \, {\rm cm}^{-3}$) molecular gas. A shock speed that is a factor of 3 higher than our canonical value suggests a post-shock temperature an order of magnitude higher ($T \propto V^2$), and a cooling timescale about a factor of 3 longer (see Fig.~1). On the other hand, the higher density of gas in the molecular cloud leads to post-shock densities an order of magnitude greater, and a cooling timescale a factor of 10 smaller. This means cooling timescales are likely to be sufficiently short to allow efficient injection of clumpy supernova material into molecular gas that has already been shocked by a D-type ionization front. While much work remains to be done, our initial investigations clearly indicate that supernova clumps can be injected efficiently into molecular material in many cases. When the ejecta finally come to rest, a large fraction of the clump mass will remain in the molecular cloud, mixing into material that is in the midst of collapsing to form new Sun-like stars. \subsection{Impact on Solar System Isotopic Anomalies} To determine the degree to which a forming solar system is contaminated by supernova material, we assume there are $N \sim 10^4$ clumps of mass $M \sim 10^{-4} \, M_{\odot}$, so that $N M = M_{\rm ej}$, the total mass of ejecta that is not ejected isotropically.
Implicitly neglecting density variations within the molecular cloud, we assume that the periphery of the H {\sc ii} region traces a sphere of radius $r$ centered on the supernova, consistent with the assumption that the supernova progenitor was the dominant source of the ionizing photons that carved out the H {\sc ii} region. As discussed in \S 2.1, based on a main-sequence lifetime for the progenitor of 5 Myr and an ionization front that advances at $0.4 \, {\rm km} \, {\rm s}^{-1}$, we adopt a value $r \approx 2 \, {\rm pc}$. On average, the cross-sectional area of the molecular cloud that is associated with each clump is $4\pi r^2 / N \sim (2 \times 10^{17} \, {\rm cm})^2$ or $\sim (0.06 \, {\rm pc})^2$ for our adopted parameters, such that clumps will be separated by $\sim 4 \times 10^{17} \, {\rm cm}$. The separation between clumps is surprisingly close to the width of the channel that is carved out in our fiducial runs, $\approx 2 \times 10^{17} \, {\rm cm}$, which is not significantly wider than the distribution of ejecta (Fig.~3). This suggests that lateral mixing may be rapid enough to contaminate the gas between channels. We do not explicitly model the turbulence that would effect this mixing, but we can estimate the mixing timescale by assuming that turbulence operates at least as effectively as in the pre-shocked molecular cloud. The turbulent mixing timescale at a lengthscale $l$ scales as $l / \delta v(l)$, where $\delta v(l)$ is the amplitude of the velocity fluctuations on the scale $l$ (Pan \& Scannapieco 2010). On the scale of the channel separation, $\sim 0.1 \, {\rm pc}$, we estimate $\delta v(l) \approx 1 \, (l / {\rm pc})^{0.4} \, {\rm km} \, {\rm s}^{-1}$ using the Larson scaling law (Larson 1981). Thus the mixing timescale across 0.1 pc would be $2 \times 10^{5} \, {\rm yr}$. This should be viewed as an upper limit on the mixing timescale, because of several factors that would increase the turbulence and $\delta v(l)$. For example, the interaction of the underlying turbulent gas with the shocks created by the clump may increase the turbulent velocity fluctuations (e.g., Lee et al.\ 1997). The mixing timescale of $\sim 10^5 \, {\rm yr}$ is slightly longer than the duration of our simulations, but interestingly is comparable to the free-fall timescale ($\sim 3 \times 10^5 \, {\rm yr}$) on which this molecular gas will form protostars. Mixing across multiple channel separations (i.e., 0.3 pc), however, takes significantly longer. Thus, for the purposes of estimating the magnitude of isotopic anomalies, we assume that the gas between channels is contaminated effectively by the ejecta deposited in the nearest few channels. Assuming effective mixing, the volume of molecular gas that is associated with a single clump is $4\pi r^2 \, d_{\rm sh} / N$. If the ejecta mix evenly throughout this entire volume, the fraction of the mass that comes from the supernova would then be
\begin{equation}
f_{\rm cont} \approx \frac{ M }{ \rho_{\rm MC} \, 4\pi r^2 \, d_{\rm sh} / N} = 1 \times 10^{-4} \, \left( \frac{M_{\rm ej}}{2 \, M_{\odot}} \right) \left( \frac{ r }{ 2 \, {\rm pc} } \right)^{-2} \left( \frac{ d_{\rm sh} }{0.5 \, {\rm pc}} \right)^{-1} \left( \frac{ \rho_{\rm MC} }{ 4.7 \times 10^{-20} \, {\rm g} \, {\rm cm}^{-3} } \right)^{-1}.
\end{equation}
Here we must interpret $M_{\rm ej}$ as the total mass of clumpy supernova material, as isotropic ejecta do not inject efficiently.
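As a sanity check of this scaling and of the mixing-time estimate above, the numbers can be reproduced with a short back-of-the-envelope script (a hypothetical stand-alone calculation using only the fiducial parameters quoted in the text, not part of our simulation pipeline):
\begin{verbatim}
import numpy as np

# cgs conversions
pc, yr, Msun = 3.086e18, 3.156e7, 1.989e33

# Eq. (1): average contamination fraction for the fiducial parameters
M_ej   = 2.0 * Msun      # total mass of clumpy ejecta
r      = 2.0 * pc        # radius of the H II region
d_sh   = 0.5 * pc        # depth of the contaminated layer
rho_MC = 4.7e-20         # molecular cloud density, g cm^-3
f_cont = M_ej / (rho_MC * 4.0 * np.pi * r**2 * d_sh)
print("f_cont ~ %.1e" % f_cont)          # ~1e-4

# Larson-law mixing time across the channel separation
l  = 0.1 * pc
dv = 1.0e5 * (l / pc)**0.4               # delta v(l) in cm/s
print("t_mix ~ %.1e yr" % (l / dv / yr)) # ~2e5 yr
\end{verbatim}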
This is the average concentration of supernova material in the molecular gas, up to a depth of about 0.5 pc. Clearly, the physical parameters of individual ejecta clumps affect $f_{\rm cont}$ only through $d_{\rm sh}$, whose dependence on these parameters has been studied in \S 4.4. The fraction $f_{\rm cont}$ is likely insensitive to $\rho_{\rm MC}$. This is because the penetration of clumps into the molecular cloud, when successful, is limited by the momentum of the clumps, and thus the product of $d_{\rm sh}$ and $\rho_{\rm MC}$ is expected to be roughly constant. Note that this is an {\it average} concentration of all the ejecta lying within 0.5 pc of the ionization front, and higher concentrations are possible in smaller fractions of the volume. Note also that the average concentration of supernova material is lower at greater distances, but still substantial. If the ionization front were at 4 pc instead of 2 pc, for example, the concentration would be reduced by less than a factor of 4, because the clump would have expanded, lowering its column density and reducing $d_{\rm sh}$. The point is that even at a different distance the molecular gas still would be robustly contaminated by the supernova at a significant level. We note again that ongoing star formation is observed in the molecular gas at the edges of H {\sc ii} regions, probably triggered by the shocks driven a few tenths of a pc in advance of the ionization front (e.g., Snider et al.\ 2009 and references therein; see also \S 2.1). There is the additional intriguing possibility that the shocks propagating through the molecular cloud, driven by the clumps themselves, could trigger new star formation. This star formation is expected to take $10^{4} - 10^{5} \, {\rm yr}$, based on the evolutionary state of protostars that are uncovered by ionization fronts (Hester et al.\ 2004; Hester \& Desch 2005; Snider et al.\ 2009). This is comparable to the mixing timescale derived above, suggesting that each protostar is likely to acquire material from just one or a few clumps, and that incorporation of this material is likely to take place shortly before or during star formation. Thus, provided the Sun formed at the periphery of an H {\sc ii} region, it is likely to have incorporated supernova material from a single small region, or a mixture of a few small regions, of the nearby supernova. Adopting a mixing ratio $\approx 10^{-4}$, the abundance of an element or isotope in the solar nebula can be determined if the composition of the contaminating clump can be constrained. Obviously it is possible to pick any number of small regions within a ``prompt'' supernova of any arbitrary mass $\gtsimeq 40 \, M_{\odot}$, and a full exploration of the problem is not possible here, although we can make general estimates. For example, the total amount of ${}^{26}{\rm Al}$ that is produced in a $40 \, M_{\odot}$ progenitor is $\approx 1.5 \times 10^{-5} \, M_{\odot}$ (Ellinger et al.\ 2010). This implies that after mixing, each $1 \, M_{\odot}$ of gas will contain about $1.5 \times 10^{-9} \, M_{\odot}$ of ${}^{26}{\rm Al}$. This is to be compared to the mass of ${}^{27}{\rm Al}$ in the solar system, which is about $6.7 \times 10^{-5} \, M_{\odot}$ using the abundances of Lodders (2003). This estimate immediately suggests that within the contaminated portions of the molecular cloud, ${}^{26}{\rm Al} / {}^{27}{\rm Al} \sim 2 \times 10^{-5}$, {\it on average}.
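Spelled out, with the mixing ratio $\approx 10^{-4}$ applied to the total ${}^{26}{\rm Al}$ yield and the factors of 26 and 27 converting mass ratios to number ratios, the estimate is simply
\[
\frac{{}^{26}{\rm Al}}{{}^{27}{\rm Al}} \approx \frac{ \left( 10^{-4} \right) \times \left( 1.5 \times 10^{-5} \, M_{\odot} \right) / 26 }{ \left( 6.7 \times 10^{-5} \, M_{\odot} \right) / 27 } \approx 2 \times 10^{-5} .
\]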
This is remarkably comparable to the initial ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ ratio $\approx 5 \times 10^{-5}$ inferred for the solar nebula (MacPherson et al.\ 1995). There are several factors that could cause the ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ ratio to deviate significantly from this approximate average value. First, it is not certain that mixing within the molecular cloud following the injection can proceed to completion, so that regions near the channels carved out by the ejecta might be over-enriched with respect to the surrounding gas. Second, the solar system could have been contaminated by material from any small region within the supernova, and the ${}^{26}{\rm Al}$ content of such regions varies by an order of magnitude. For example, in the 1D calculations by Ellinger et al.\ (2010), the sub-explosive C-burning regions produced $\sim 10^{-5} \, M_{\odot}$ of ${}^{26}{\rm Al}$, despite making up a small fraction of the supernova mass. And in the 3D simulations by Ellinger et al.\ (2010), some smoothed-particle-hydrodynamics (SPH) particles, with masses $\sim 2 \times 10^{-5} \, M_{\odot}$, contained $4.8 \times 10^{-10} \, M_{\odot}$ of ${}^{26}{\rm Al}$, yielding an even higher mass fraction of ${}^{26}{\rm Al}$. In fact, again assuming a mixing ratio $\sim 10^{-4}$, if only one of these ${}^{26}{\rm Al}$-rich clumps contaminated the solar nebula, the initial ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ ratio would be $7 \times 10^{-4}$, over ten times the observed value. Currently, it is not possible to predict the initial abundance of ${}^{26}{\rm Al}$ any better than this, but it is clear that if the solar nebula formed from molecular gas contaminated by an ${}^{26}{\rm Al}$-rich clump, then its initial ratio would be comparable to the value observed in meteorites. Using the same example dataset of Ellinger et al.\ (2010) we can also estimate the shifts in elemental and isotopic abundances of oxygen. In their 1D model of a $40 \, M_{\odot}$ progenitor, the total ejected mass of O (almost all ${}^{16}{\rm O}$) is $3.29 \, M_{\odot}$. Assuming a mixing ratio of $7 \times 10^{-5}$ implies that $2.3 \times 10^{-4} \, M_{\odot}$ of oxygen is injected, on average, into every $1 \, M_{\odot}$ of molecular gas that will form a solar system. This is to be compared to the mass of oxygen in the solar system, $6.7 \times 10^{-3} \, M_{\odot}$ (Lodders 2003), which implies that on average the late-forming stars at the edge of the H {\sc ii} region will see increases in their oxygen content by 3\%, although again some clumps will be significantly more oxygen-rich than average. The sub-explosive C-burning zones in the 1D models themselves produced $2.47 \, M_{\odot}$ of oxygen (again, almost all ${}^{16}{\rm O}$), despite their lower mass overall, suggesting that larger shifts in oxygen abundance, potentially several tens of percent, are not unreasonable for some stars. The isotopic shifts in O associated with injection of supernova material were also considered by Ellinger et al.\ (2010). Assuming that sufficient ${}^{26}{\rm Al}$ is injected into the forming solar system to explain the meteoritic abundance ${}^{26}{\rm Al} / {}^{27}{\rm Al} = 5 \times 10^{-5}$, they found that the isotopic shifts in oxygen could span a wide range of values.
For the 1D models, a clump from the sub-explosive C-burning zones of a $40 \, M_{\odot}$ progenitor tends to inject nearly pure ${}^{16}{\rm O}$, dropping both the ${}^{17}{\rm O} / {}^{16}{\rm O}$ and ${}^{18}{\rm O} / {}^{16}{\rm O}$ ratios in the solar system, equivalent to a decrease in $\delta^{17}{\rm O}$ by roughly 35 per mil in the cosmochemical notation. For this case there is little change in the ${}^{18}{\rm O} / {}^{17}{\rm O}$ ratio, but in 3D simulations, they found that many ${}^{26}{\rm Al}$-producing regions were significantly enhanced in ${}^{18}{\rm O}$ relative to ${}^{17}{\rm O}$, an effect that was especially strong in anisotropically exploding supernovae (Ellinger et al.\ 2011). Thus, injection of enough ${}^{26}{\rm Al}$ to explain the meteoritic evidence could shift the ${}^{18}{\rm O} / {}^{17}{\rm O}$ ratio by a factor of 2, by increasing $\delta^{18}{\rm O}$ by $> 1000$ per mil with little change in $\delta^{17}{\rm O}$. This injection of ${}^{18}{\rm O}$ into the solar system as it formed would decrease $\Delta^{17}{\rm O}$ by a shift comfortably larger than the 300 per mil shift inferred by Young et al.\ (2011). In summary, the exact shifts in elemental and isotopic abundances will depend on where within the supernova the one or few clumps that contaminated the solar system came from, so it is premature to try to predict the exact shifts. Nevertheless, supernova contamination of molecular gas appears able to qualitatively explain the abundance of ${}^{26}{\rm Al}$ and the shifts in oxygen abundance and isotopic composition inferred for our solar system. \subsection{Statistics of Supernova Contamination} Injection of supernova material into an already-formed protoplanetary disk has been critically examined by Gounelle \& Meibom (2007) and Williams \& Gaidos (2007). The general point raised by these authors is that {\it recently formed} disks are overwhelmingly likely to be several parsecs from the supernova progenitor, necessarily forming in the molecular gas at the periphery of the H {\sc ii} region. Looney et al.\ (2006) made similar points. Disks forming at $> 2 \, {\rm pc}$ from the supernova will intercept relatively little ejecta, so that insufficient ${}^{26}{\rm Al}$ could be intercepted to explain the meteoritic ratio. The meteoritic ratio ${}^{26}{\rm Al} / {}^{27}{\rm Al} = 5 \times 10^{-5}$, if the disk intercepts isotropically exploding ejecta, requires the disk to lie only $\sim 0.1 \, {\rm pc}$ from the supernova (Ouellette et al.\ 2005, 2007, 2010). On the other hand, Ouellette et al.\ (2010) showed that a protoplanetary disk intercepting clumpy ejecta can receive much more material than a disk intercepting isotropic ejecta at the same distance. For example, considering the model discussed above, $10^4$ clumps of mass $2 \times 10^{-4} \, M_{\odot}$ each, with radii $d / 300$ (where $d$ is the distance from the supernova), will have a volume filling fraction $3.7 \times 10^{-4}$ and will be 2700 times denser than isotropically expanding ejecta. Such a clump could intercept a protoplanetary disk at a distance of 2 pc (at which point its radius would be $\sim 10^3 \, {\rm AU}$), and the disk would receive as much supernova material as if it were exposed to isotropically expanding ejecta at a distance of 0.04 pc. If the clump samples a ${}^{26}{\rm Al}$-rich region within the supernova, the disk could intercept even more ${}^{26}{\rm Al}$.
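The geometric factors quoted here follow directly:
\[
f_{V} \simeq N \left( \frac{R_{\rm c}}{d} \right)^{3} = 10^{4} \times \left( \frac{1}{300} \right)^{3} \approx 3.7 \times 10^{-4} , \qquad \frac{ \rho_{\rm clump} }{ \rho_{\rm iso} } \simeq f_{V}^{-1} \approx 2700 ,
\]
where $R_{\rm c} = d/300$ is the clump radius and the density contrast assumes that essentially all of the clumpy ejecta mass would otherwise fill the sphere of radius $d$ uniformly.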
On the other hand, the areal filling fraction of the ejecta clumps, at the boundary of the H {\sc ii} region, will be only 2.8\%, so only one in 36 disks at the time of the supernova would encounter such clumps. Multiplying this fraction by the fraction of Sun-like stars with disks late in the evolution of an H {\sc ii} region, we estimate that $\approx 0.1 - 1\%$ of Sun-like stars will encounter significant amounts of supernova material during the protoplanetary disk stage. This scenario, suggested by Chevalier (2000) and Ouellette et al.\ (2005, 2007, 2010), remains a viable, but unlikely, explanation for short-lived radionuclides (SLRs) in the solar nebula. In contrast, injection of supernova material into molecular gas, just as stars and planetary systems are forming, appears to be robust. Essentially {\it all} stars forming late in the evolution of the star-forming region will be contaminated by some type of clumpy supernova material. The probability that a solar system would be contaminated is essentially the fraction of stars forming in a cluster rich enough to have a star $\gtsimeq 40 \, M_{\odot}$ in mass (so that it explodes in $< 5 \, {\rm Myr}$), times the fraction of stars that form in such a cluster after 5 Myr of evolution. As outlined in \S 2.1, the first fraction is probably $\approx 75\%$. The second fraction depends on the rate of star formation and is related to the question of whether star formation is triggered. Multiple observations show that star formation is ongoing in H {\sc ii} regions, even at ages of 2-3 Myr (Palla \& Stahler 2000; Hester et al.\ 1996, 2004; Healy et al.\ 2004; Sugitani et al.\ 2002; Snider 2008; Snider et al.\ 2009; Snider-Finkelstein 2009; Getman et al.\ 2007; Reach et al.\ 2009; Choudhury et al.\ 2010; Billot et al.\ 2010; Bik et al.\ 2010; Zavagno et al.\ 2010; Beerer et al.\ 2010; Comer\'{o}n \& Schneider 2011). Snider (2009) examined the ages of recently formed stars in several H {\sc ii} regions using combined {\it Spitzer Space Telescope} and {\it Hubble Space Telescope} data. The analysis of NGC 2467 in particular was presented by Snider et al.\ (2009), who found that 30-45\% of the Sun-like stars in this H {\sc ii} region were triggered to form after the initial formation of the cluster and the most massive stars, 2 Myr ago. This implies that if the rate of triggered star formation is constant in time, and extends until the supernova explodes, at about 5 Myr of age, then $\sim 10\%$ of Sun-like stars in this cluster would form in the last 1 Myr of star formation. If the rate of triggered star formation scales as the area swept out by the ionization front, and therefore as the square of the age of the cluster, $t^2$, then the fraction of stars forming in the last 1 Myr before the supernova could be as high as $\sim 40\%$. If approximately 75\% of all Sun-like stars form in a rich cluster with a star that will go supernova within 5 Myr, and 10-40\% of those stars form in the 1 Myr before the supernova, then the likelihood of a Sun-like star forming from gas contaminated by ejecta from a recently exploded supernova is on the order of 7-30\%. This is a considerably higher probability than the $0.1 - 1\%$ probability of injection of an ejecta clump into a protoplanetary disk (Ouellette et al.\ 2010). More importantly, it suggests strongly that supernova contamination may be a common and universal process.
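In compact form, this estimate is simply the product
\[
P \approx 0.75 \times (0.1\mbox{--}0.4) \approx 0.07\mbox{--}0.3 ,
\]
with the first factor the fraction of Sun-like stars born in clusters hosting a $\gtsimeq 40 \, M_{\odot}$ star and the second the fraction of those stars that form in the final Myr before the supernova.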
\subsection{Elemental Variability of Sun-Like Stars} If supernova contamination is a common process, one would expect to see variations in elemental abundances in spectra of Sun-like stars. In fact, there is ample evidence from stellar spectra of planet-hosting stars for variability in elemental abundances. Fischer \& Valenti (2005) surveyed 850 FGK-type stars that have Doppler observations sufficient to uniformly detect all planets with radial velocity semiamplitudes $K > 30 \, {\rm m} \, {\rm s}^{-1}$ and orbital periods shorter than 4 yr. Among this sample they found variations of up to a factor of two in [Na/Fe], [Si/Fe], [Ti/Fe], and [Ni/Fe] over the range $-0.5 <$ [Fe/H] $< 0.5$, and no correlations between metallicity and orbital period or eccentricity. They concluded that host stars do not have an accretion signature that distinguishes them from non-host stars, and that host stars are simply born in higher-metallicity molecular clouds. Bond et al.\ (2008) analyzed elemental abundances for eight elements, including five heavy elements produced by the r- and s-processes, in 28 planetary host dwarf stars and 90 non-host dwarf stars. They found that the elemental abundances of planetary host stars are only slightly different from solar values, while host stars are enriched over non-host stars in all elements studied, varying by up to a factor of two but with enrichments of 14\% (for O) and 29\% around the mean. Pagano et al.\ (2010) examined elemental abundances for 13 elements in 52 dwarf stars in the solar neighborhood, and found the variations in C, O, Na, Al, Mg, Ca, and Ti to be about a factor of two around the mean at the 3$\sigma$ level. Supernova injection into the molecular cloud from which protostars are forming remains a plausible mechanism for these variations, and may contribute to the abundances observed in planet-hosting stars. As discussed in \S 5.2, if clumps from different regions of a supernova could be injected into a $1 \, M_{\odot}$ cloud core as it was collapsing, at a mixing ratio $\approx 10^{-4}$ (see Eq.~1), one could get variations of the order observed; for example, an oxygen-rich clump could raise the oxygen abundance by up to 30\%. It is worth noting that if the variability in stellar elemental abundances can be attributed to injection of supernova material, then observations like those mentioned above could be used to assess whether an exoplanetary system was contaminated by species that cannot be observed directly. These could include short-lived radionuclides like ${}^{26}{\rm Al}$, which are long extinct in any system older than a few Myr, as well as P, which is difficult to observe because it lacks optical transition lines. Massive stars produce a number of isotopes within a given mass shell, and these isotopic abundance ratios may be conserved within an impinging clump if mixing within the supernova explosion is not large. For example, Young et al.\ (2009) considered the co-spatial production of elements in supernova explosions, in order to find observationally detectable proxies for enhancement of $^{26}$Al. Using several massive progenitor stars and explosion models, they found that the most reliable indicator of $^{26}$Al in an unmixed clump is a low S/Si ratio of $\sim$0.05. A clump formed from material within the O-Ne burning shell should be enriched in both $^{26}$Al and $^{60}$Fe (Timmes et al.\ 1995; Limongi \& Chieffi 2006), and the biologically important element P is produced at its highest abundance in the same regions (Young et al.\ 2009).
Even if these specific elemental ratios are not found, the supernova injection model broadly predicts that species co-produced within supernovae will tend to show correlated excesses in stellar spectra. Observations of elemental abundances from stellar spectra can be used to test this hypothesis. \subsection{Mixing on Galactic Scales} Finally, our results have implications for how the metals ejected by supernovae are released into the multiphase ISM, a question of key importance in understanding their turbulent mixing on lengthscales $\approx 10 - 500 \, {\rm pc}$ (Roy \& Kunth 1995; Scalo \& Elmegreen 2004). Tenorio-Tagle (1996), for example, argued using simple estimates that metals are likely to be released directly into hot, thermalized superbubbles, which blow out of the galactic disk, only to cool and rain down later as metal-rich ``droplets'' that are then broken apart by the Rayleigh-Taylor instability. In this case, there would be a significant delay between metal production and enrichment, but after this delay metals would be deposited over large regions. A more detailed numerical study was carried out by de Avillez \& Mac Low (2002), who examined turbulent mixing in a multiphase ISM that was seeded with a scalar concentration field that varied on a fixed spatial scale that was uncorrelated with the locations of supernovae. They found that at early times the variance of the concentration decreased on a timescale that was proportional to the lengthscale of the initial fluctuations, and they argued that the late-time evolution was largely independent of this lengthscale. At early times, these results can be understood as being controlled by the mixing of metals in hot low-density environments, which occurs in any single-temperature medium on a timescale set by the initial lengthscale of the fluctuations divided by the turbulent velocity (Pan \& Scannapieco 2010). On the other hand, at late times the results might depend on the much slower process of mixing between hot and cold regions, which is set by the size of the cold clouds and their density contrast with the hot medium (e.g.\ Klein, McKee, \& Colella 1994; Fragile et al.\ 2004). More recently, Ntormousi \& Burkert (2012) have emphasized the difficulty of mixing metals from the hot gas into the colder ISM out of which new molecular clouds form, arguing that the enrichment of the cold ISM will be delayed by at least a cooling time of the hot diffuse gas. The mixing of clumpy supernova ejecta directly into molecular clouds, seen in our simulations, would completely circumvent this limiting step in galactic chemical evolution. While a fraction of the elements deposited by this mechanism would be locked into Sun-like stars formed in the wake of the D-type ionization front and supernova shock, at least as much enriched molecular material would be subsequently ionized and launched into the low-density ($\approx 0.1 \, {\rm cm}^{-3}$), warm, ionized ($\approx 10^4 \, {\rm K}$) medium (e.g. Matzner \& McKee 2000). The higher densities of this gas lead to shorter cooling times and higher density contrasts, by orders of magnitude, greatly accelerating mixing as compared to superbubbles. This process of warm-phase galactic enrichment merits further theoretical study and may be important in explaining the relative homogeneity of the Milky Way ISM on $\sim 100 \, {\rm pc}$ scales (Meyer et al.\ 1998; Cartledge et al.\ 2006), as well as the low dispersions seen in massive stars in nearby galaxies (e.g.\ Kobulnicky \& Skillman 1996, 1997).
\subsection{Final Word} Supernovae have long been invoked to explain stable isotope anomalies and the abundances of short-lived radionuclides in the early solar system. As the Sun's elemental and isotopic abundances have become better constrained and compared to abundances in meteoritic material, presolar grains, and interstellar gas, it has also become increasingly apparent that the Sun itself might have been contaminated by supernova material. Surprisingly, large variations in elemental abundances among planet-hosting stars point to a similar stochastic contamination process by individual nearby supernovae. The traditional environment articulated for this contamination has been either ejecta sweeping over a distant ($\gtsimeq 10 \, {\rm pc}$) molecular cloud core, injecting material as it prompts its collapse, or ejecta sweeping over a nearby ($\sim 0.1-1 \, {\rm pc}$) protoplanetary disk. Here we consider for the first time enrichment in the H {\sc ii} region environment in which a core-collapse supernova is likely to take place. The explosion of a massive ($\gtsimeq 40 \, M_{\odot}$) progenitor will occur within only 5 Myr, before the ionization fronts launched by the progenitor can advance more than a few parsecs. At these times the material ejected by the supernova will interact with the molecular gas at the edge of the H {\sc ii} region. Supernovae do not, in general, explode isotropically. Instead, both numerical calculations and observations of SN1987A and the Cas A supernova remnant indicate that clumpiness is a common feature. This clumpiness plays a crucial role in enrichment, as our numerical simulations find that isotropically exploding ejecta are too diffuse to penetrate into a molecular cloud. On the other hand, clumps with properties consistent with those in Cas A deposit their material $\approx 0.5$ pc into the molecular cloud, but only if cooling is significant, such that the cooling timescale is $\ltsimeq 10^2 \, {\rm yr}$. Our simulations are limited by numerical resolution and were not able to span the entire set of relevant parameters, but these results appear robust. The gas at the edge of an H {\sc ii} region is widely recognized to be the site of active star formation. It is likely that this star formation is triggered by the advance of the ionization fronts into the molecular gas, but the mechanism need not be identified to assert that the supernova material injected into the molecular gas at the edge of the H {\sc ii} region will be taken up by forming solar systems. All of this star-forming material will be contaminated at an average mixing ratio $\sim 10^{-4}$. Both this mixing ratio and the compositions of small regions within modeled core-collapse supernovae are consistent with the quantities of ${}^{26}{\rm Al}$ injected into the early solar system as well as the elemental and isotopic shifts inferred in oxygen. Injection of ${}^{28}{\rm Si}$-rich material could possibly also explain the difference in Si isotopes between the Sun and presolar grains. Future work will examine whether specific regions within promptly exploding supernovae match the isotopic shifts inferred from meteorites and other observations. Injection of clumpy supernova material into molecular gas at the edge of an H {\sc ii} region can occur under very common conditions, and all of the stars forming late in the evolution of an H {\sc ii} region are likely to be contaminated by this process.
Depending on the specific trigger for star formation and the overall rate of star formation, we estimate between 7 and 30\% of {\it all} Sun-like stars are likely to be contaminated by a single, nearby supernova. The injection process that we infer gave the solar system its inventory of ${}^{26}{\rm Al}$ and other isotopic anomalies may be a common, universal mechanism.
| 12
| 6
|
1206.6516
|
1206
|
1206.6502_arXiv.txt
|
{We present \texttt{THC}: a new high-order flux-vector-splitting code for Newtonian and special-relativistic hydrodynamics designed for direct numerical simulations of turbulent flows. Our code implements a variety of different reconstruction algorithms, such as the popular weighted essentially non-oscillatory and monotonicity-preserving schemes, or the more specialised bandwidth-optimised WENO scheme that has been specifically designed for the study of compressible turbulence. We show the first systematic comparison of these schemes in Newtonian physics as well as for special-relativistic flows. In particular, we present the results obtained in simulations of grid-aligned and oblique shock waves and nonlinear, large-amplitude, smooth adiabatic waves. We also discuss the results obtained in classical benchmarks such as the double-Mach shock reflection test in Newtonian physics or the linear and nonlinear development of the relativistic Kelvin-Helmholtz instability in two and three dimensions. Finally, we study the turbulent flow induced by the Kelvin-Helmholtz instability and we show that our code is able to obtain well-converged velocity spectra, from which we benchmark the effective resolution of the different schemes.}
|
Numerical relativistic hydrodynamics has come a long way since the pioneering works by \cite{May66} and \cite{Wilson72} and it is now playing a central role in the modelling of systems involving strong gravity and/or flows with high Lorentz factors. Examples of applications are relativistic jets, core-collapse supernovae, mergers of compact binaries and the study of gamma-ray bursts [see \cite{Marti03} and \cite{Font08} for a complete overview]. In all of these areas the progress has been continuous over the past few years, to the point that relativistic computational fluid dynamics is starting to provide a realistic description of many relativistic-astrophysics scenarios [see, \eg \cite{Rezzolla:2011}]. Key factors in this progress have been the switch to more advanced and accurate numerical schemes, and in particular the adoption of high-resolution shock-capturing (HRSC) schemes \citep{Marti91, Schneider90, Banyuls97, Donat98, Aloy99a, Baiotti03a}, and the progressive inclusion of more ``physics'' for a more accurate description of the different scenarios. Examples of the latter are the inclusion of magnetic fields \citep{Koide99, DelZanna2003, Gammie03, Komissarov04, Duez05MHD0, Neilsen2005, Anton06, Giacomazzo:2007ti}, the use of realistic tabulated equations of state [see, \eg \cite{Sekiguchi2011}], and the description of radiative processes \citep{Farris08, Sekiguchi2010, Zanotti2011}. We expect that both improved physical models and better numerical techniques will be key elements in the future generation of codes for relativistic astrophysics. On the one hand, it is necessary to take into account many physical phenomena that are currently oversimplified and, on the other hand, higher accuracy is necessary to make quantitative predictions even in the case where simplified models are used to describe the objects of study. For example, in the case of inspiralling binary neutron stars, waveforms that are sufficiently accurate for gravitational-wave templates are just now becoming available and only in the simple case of polytropic stars \citep{Baiotti:2009gk,Baiotti:2010,Bernuzzi2011}. Clearly, even higher accuracy will be required as more realistic equations of state are considered or better characterisations of the tidal effects are explored \citep{Baiotti2011,Bernuzzi2012}. For this reason the development of more accurate numerical tools for relativistic hydrodynamics is an active and lively field of research. Most of the effort has been directed towards the development of high-order finite-volume \citep{tchekhovskoy_2007_wham, Duffell2011} and finite-difference \citep{Zhang2006, DelZanna2007} schemes, but many alternative approaches have also been proposed, including finite-element methods \citep{Mann1985, meier_1999_mas}, spectral methods \citep{gourgoulhon_1991_seg}, smoothed-particle hydrodynamics \citep{Siegler00, rosswog_2010_csr} and discontinuous Galerkin methods \citep{Dumbser2009, Radice2011}. The use of flux-conservative finite-difference HRSC schemes is probably the easiest way of increasing the (formal) order of accuracy of the current generation of numerical codes: finite-difference schemes are much cheaper than high-order finite-volume codes, since they do not require the solution of multiple Riemann problems at the interfaces between different regions \citep{Shu99, Shu01}, and they are free from the complicated averaging and de-averaging procedures of high-order finite-volume codes [see, \eg \cite{tchekhovskoy_2007_wham}].
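To recall the basic construction on which such schemes rest, consider the simplest (global Lax--Friedrichs) flux-vector splitting for a conservation law $\partial_t u + \partial_x f(u) = 0$ -- an illustrative special case, not the Roe-type split with entropy fix that is actually adopted below:
\[
f^{\pm}(u) = \frac{1}{2} \left[ f(u) \pm \alpha\, u \right] , \qquad \alpha = \max_{u} \left| \lambda(u) \right| ,
\]
where $\lambda(u)$ are the characteristic speeds. The split fluxes $f^{+}$ and $f^{-}$ have eigenvalues of definite sign, so they can be reconstructed at each cell interface with upwind- and downwind-biased stencils, respectively, and their sum gives the numerical flux.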
Here we present a new code, the Templated-Hydrodynamics Code (\texttt{THC}), developed using the Cactus framework \citep{Goodale02a}, that follows this approach. \texttt{THC} employs a state-of-the-art flux-vector-splitting scheme: it uses up to seventh-order reconstruction in characteristic fields and the Roe flux split with a novel entropy-fix prescription. The ``templated'' aspect reflects the fact that the code design is based on a modern C$++$ paradigm called template metaprogramming, in which part of the code is generated at compile time. Using this particular programming technique it is possible to construct object-oriented, highly modular codes without the extra computational costs associated with classical polymorphism because, in the templated case, polymorphism is resolved at compile time, allowing the compiler to inline all the relevant function calls [see, \eg \citet{yang_2000_oon}]. Among the different reconstruction schemes that we implemented are the classical monotonicity-preserving (MP) MP5 scheme \citep{suresh_1997_amp, mignone_2010_hoc}, the weighted essentially non-oscillatory (WENO) schemes WENO5 and WENO7 \citep{Liu1994, Jiang1996, Shu97} and two bandwidth-optimized WENO schemes: WENO3B and WENO4B \citep{martin_2006_bow, taylor_2007_one}, designed for direct simulations of compressible turbulence (we recall that the number associated with the different methods indicates the putative order of accuracy). By far the largest motivation behind the development of \texttt{THC} is the study of the statistical properties of relativistic turbulence and the determination of possible new and non-classical features. In a recent paper (Radice \& Rezzolla, in prep.), we have presented the results of direct numerical simulations of driven turbulence in an ultrarelativistic hot plasma obtained using \texttt{THC}. More specifically, we have studied the statistical properties of flows with average Mach number ranging from $\sim 0.4$ to $\sim 1.7$ and with average Lorentz factors up to $\sim 1.7$, finding that flow quantities, such as the energy density or the local Lorentz factor, show large spatial variance even in the subsonic case, as compressibility is enhanced by relativistic effects. We have also found that the velocity field is highly intermittent, although its power spectrum is in good agreement with the predictions of the classical theory of Kolmogorov. Overall, the results presented in Radice \& Rezzolla, in prep.,\ indicate that relativistic effects are able to enhance significantly the intermittency of the flow and affect the high-order statistics of the velocity field, while leaving unchanged the low-order statistics, which appear to be universal and in good agreement with the classical Kolmogorov theory. In this paper we give the details of the algorithms used in \texttt{THC} and employed in Radice \& Rezzolla in prep., presenting a systematic comparison between the results obtained using the above-mentioned reconstruction schemes, with emphasis on the application of these schemes to direct simulations of relativistic turbulence. To our knowledge this is the first time that such a comparison has been done in the relativistic case.
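To make the type of reconstruction operator involved concrete, the following is a minimal Python transcription of the classical fifth-order WENO scheme of Jiang \& Shu; it is an illustrative sketch only, and not the templated C$++$ implementation (with reconstruction in characteristic fields) actually used in \texttt{THC}:
\begin{verbatim}
import numpy as np

def weno5(u):
    # Classical WENO5 (Jiang & Shu 1996) reconstruction of the interface
    # value u_{i+1/2} from the five cell averages
    # u = (u[i-2], u[i-1], u[i], u[i+1], u[i+2]).
    eps = 1e-6
    # third-order candidate reconstructions on the three sub-stencils
    p0 = (2*u[0] - 7*u[1] + 11*u[2]) / 6.0
    p1 = ( -u[1] + 5*u[2] +  2*u[3]) / 6.0
    p2 = (2*u[2] + 5*u[3] -    u[4]) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(u[0] - 2*u[1] + u[2])**2 + 1/4*(u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12*(u[1] - 2*u[2] + u[3])**2 + 1/4*(u[1] - u[3])**2
    b2 = 13/12*(u[2] - 2*u[3] + u[4])**2 + 1/4*(3*u[2] - 4*u[3] + u[4])**2
    # nonlinear weights built from the optimal linear weights (1, 6, 3)/10
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# smooth data recover the high-order interface value
x = np.linspace(0.0, 0.4, 5)
print(weno5(np.sin(x)))
\end{verbatim}
Near a discontinuity the smoothness indicators suppress the stencils that cross it, which is what allows sharp shock capturing without spurious oscillations.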
The rest of this paper is organised as follows. In Section \ref{sec:code} we present the \texttt{THC} code in more detail: we discuss the numerical algorithms it uses and recall the equations of Newtonian and special-relativistic hydrodynamics. The results obtained with our code in a representative set of test cases for Newtonian and special-relativistic hydrodynamics are presented in Section \ref{sec:tests}. In Section \ref{sec:kh3d} we present the application of our code to the study of the linear and nonlinear development of the relativistic Kelvin-Helmholtz instability (KHI) in three dimensions, as a nontrivial application and a stringent test of its accuracy. Finally, Section \ref{sec:conclusions} is dedicated to the summary and the conclusions.
|
\label{sec:conclusions} We have presented \texttt{THC}, a new multi-dimensional, finite-difference, high-resolution shock-capturing code for classical and special-relativistic hydrodynamics. \texttt{THC} employs up to seventh-order accurate reconstruction of the fluxes in local characteristic variables and the Roe flux-vector-splitting algorithm with a novel entropy-fix prescription. The multi-dimensional case is treated in a dimensionally unsplit fashion and the time integration is done with a third-order strongly-stability-preserving Runge-Kutta scheme. We have carried out a systematic comparison of the results obtained with our code using five different reconstruction operators: the classical WENO5, WENO7 and MP5 schemes, as well as two specialised bandwidth-optimized WENO schemes: WENO3B and WENO4B. For all schemes, we have checked their ability to sharply capture grid-aligned, diagonal or spherical shock waves, and have carried out a rigorous assessment of their accuracy in the case of smooth solutions. Finally, we have contrasted the performance of the different methods in the resolution of small-scale structures in turbulent flows. To the best of our knowledge, this is the first time that such a comparison has been carried out in the special-relativistic case. Among the different tests studied, some are highly nontrivial, such as those involving the linear and the nonlinear phases of the development of the Kelvin-Helmholtz instability for a relativistic fluid in two and three dimensions. In particular, we have shown the importance of using schemes that are able to properly capture the initial contact discontinuity in order to obtain the correct growth rate of the RMS transverse velocity in the linear-growth phase of the instability at low resolution, confirming the findings by \cite{Mignone2009} and \cite{Beckwith2011}. When studying the Kelvin-Helmholtz instability in two dimensions, we have investigated the nature of the secondary vortices that appear during the initial stages of the instability when using some of the numerical schemes considered. We have then clarified that these vortices are not genuine features of the solution of the equations, but rather numerical artefacts that converge away with resolution. When studying the Kelvin-Helmholtz instability in three dimensions, we have instead investigated the ``mixing timescale'' of the instability and the subsequent turbulent flow, showing that we are able to obtain a converged measure of the velocity power spectrum using the WENO5 scheme. Our data offer a clear indication that the Kolmogorov phenomenology \citep{Kolmogorov1991a} also holds for turbulence in a subsonic, relativistically warm fluid. Using the Kolmogorov $-5/3$ scaling as a reference, we have estimated the effective inertial range of the different schemes, highlighting the importance of using high-order schemes to study turbulent flows. The code has also been used in a systematic investigation of direct numerical simulations of driven turbulence in an ultrarelativistic hot plasma, whose results will be presented in a companion paper (Radice \& Rezzolla, in prep.). Finally, \texttt{THC} represents the first step towards the implementation of new and high-order methods for the accurate study of general-relativistic problems, such as the inspiral and merger of binary neutron stars~\citep{Baiotti2011} and their relation with gamma-ray bursts~\citep{Rezzolla:2011}.
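For reference, the third-order strongly-stability-preserving Runge-Kutta integrator mentioned above is the standard Shu-Osher scheme, which for a semi-discrete system $\dot{u} = L(u)$ reads
\[
u^{(1)} = u^{n} + \Delta t\, L(u^{n}) , \qquad
u^{(2)} = \frac{3}{4}\, u^{n} + \frac{1}{4} \left[ u^{(1)} + \Delta t\, L(u^{(1)}) \right] , \qquad
u^{n+1} = \frac{1}{3}\, u^{n} + \frac{2}{3} \left[ u^{(2)} + \Delta t\, L(u^{(2)}) \right] ,
\]
i.e., a convex combination of forward-Euler steps, which preserves the stability properties of the underlying spatial discretisation.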
More broadly, we are convinced that the transition towards higher-order methods is now a necessary and inevitable step for a more realistic description of the complex phenomenology associated with these relativistic-astrophysics processes.
| 12
| 6
|
1206.6502
|
1206
|
1206.1322_arXiv.txt
|
Galaxy clusters are one of the most promising candidate sites for dark matter annihilation. We focus on dark matter ($\chi$) with mass in the range $(10\,\rm{GeV}-100\,\rm{TeV})$, annihilating through the channels $\chi\chi\rightarrow\mu^+\mu^-$, $\chi\chi\rightarrow\nu\bar{\nu}$, $\chi\chi\rightarrow t \overline{t} $, or $\chi\chi\rightarrow\nu\bar{\nu}\nu \bar{\nu}$, and forecast the expected sensitivity to the annihilation cross section into these channels by observing galaxy clusters at IceCube/KM3NeT. Optimistically, the presence of dark matter substructures in galaxy clusters is predicted to enhance the signal by $(2-3)$ orders of magnitude over the contribution from the smooth component of the dark matter distribution. Optimizing for the angular size of the region of interest for galaxy clusters, the sensitivity to the annihilation cross section, $\langle \sigma v \rangle$, of heavy DM with mass in the range ($300\,{\rm GeV}-100\,{\rm TeV}$) will be $\mathcal{O}$($10^{-24}$\,cm$^3$s$^{-1}$) for a full IceCube/KM3NeT live time of 10 years, which is about one order of magnitude better than the best limit that can be obtained by observing the Milky Way halo. We find that neutrinos from cosmic ray interactions in the galaxy cluster, in addition to the atmospheric neutrinos, are a source of background. We show that significant improvement in the experimental sensitivity can be achieved for lower DM masses in the range ($10\,{\rm GeV}-300\,{\rm GeV}$) if neutrino-induced cascades can be reconstructed to $\approx 5^\circ$ accuracy, as may be possible in KM3NeT. We therefore propose that a low-energy extension ``KM3NeT-Core'', similar to DeepCore in IceCube, be considered for an extended reach at low DM masses.
|
\label{sec:Intro} There is overwhelming evidence for invisible mass in our Universe that remains unexplained~\cite{Zwicky:1933gu, Rubin:1970zz, Komatsu:2010fb, Clowe:2006eq}. Particles in the standard model of particle physics cannot account for the major fraction of this excess mass, but a new particle with weak-scale annihilation cross sections to standard model particles, as predicted in several extensions of the standard model, would naturally explain its observed abundance~\cite{Jungman:1995df, Feng:2010gw, Bertone:2004pz, Bergstrom:2012np, Bertone:2010at}. This has motivated a comprehensive search for the particle identity of this ``dark matter'' (DM) using (i) direct production of DM at colliders~\cite{CMS, ATLAS}, (ii) direct detection of DM via elastic scattering~\cite{Bernabei:2008yi, Kim:2012rz, Aalseth:2010vx, Angloher:2011uu, Ahmed:2009zw, Ahmed:2010wy, Angle:2011th, Aprile:2011hi, Felizardo:2010mi, Behnke:2012ys, Armengaud:2011cy, Archambault:2012pm} and (iii) indirect detection of DM via its annihilation or decay~\cite{Gunn:1978gr,Stecker:1978du,Zeldovich:1980st,Ackermann:2011wa, IceCube:2011ae, Abdo:2010nc, Ajello:2011dq, Abdo:2010dk, Tanaka:2011uf, Abramowski:2010aa, Aleksic:2009ir, collaboration:2011sm, Aleksic:2011jx, Abbasi:2011eq, Adriani:2008zr, Adriani:2010rc, Ackermann:2010ij, FermiLAT:2011ab, Ackermann:2012qk, Aharonian:2008aa, Aharonian:2009ah}. This three-pronged approach to DM detection is necessary because a single experiment cannot probe all the properties of DM. For example, collider experiments mainly probe the production of DM particles, whereas direct detection only probes the interaction between the DM particle and the particular detector material~\cite{Pato:2010zk}. Analogously, in an indirect detection experiment, we learn about the final states of DM annihilation or decay. Indirect detection experiments are also sensitive to the DM density distribution on cosmological scales in this Universe, unlike direct detection experiments, which are only sensitive to the local DM distribution in the Milky Way. If the LHC detects a DM candidate, then indirect detection experiments are also useful to determine whether that particular DM candidate makes up most of the DM in the Universe~\cite{Bertone:2011pq}. Indirect detection experiments, looking for products of DM annihilation in astrophysical sources, only detect a handful of the final states, \eg, photons, electrons, protons, neutrinos, and their antiparticles. If these experiments detect a signal that requires a cross section larger than the thermal relic annihilation cross section, it would challenge the simple thermal WIMP paradigm of DM, and thus provide a crucial test of the WIMP paradigm~\cite{Steigman:2012nb}. On the other hand, there is no guarantee that a signal must be found if we can probe cross sections comparable to, or smaller than, the thermal relic annihilation cross section -- annihilations could proceed to undetected channels. In that case, however, one sets an upper bound on the partial annihilation cross sections into these observed channels, constraining particle physics models of DM. Several astrophysical targets, \eg, the Sun, the Milky Way, dwarf galaxies, and galaxy clusters, may be observed by indirect detection experiments. A careful estimate of the signal and the background for each of these source classes is needed to determine which of these targets provides the best signal-to-noise ratio for a given DM model.
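Schematically, for self-conjugate DM the annihilation flux from any such target factorizes into a particle-physics piece and an astrophysical ``$J$-factor'', and for a search dominated by a roughly isotropic background the figure of merit is $J/\sqrt{\Delta\Omega}$ (standard expressions; $\Delta\Omega$ is the solid angle of the search region):
\[
\frac{d\Phi_{\nu}}{dE_{\nu}} = \frac{\langle \sigma v \rangle}{8\pi\, m_{\chi}^{2}}\, \frac{dN_{\nu}}{dE_{\nu}}\, J(\Delta\Omega) , \qquad J(\Delta\Omega) = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho_{\chi}^{2}\, dl , \qquad \frac{S}{N} \propto \frac{J(\Delta\Omega)}{\sqrt{\Delta\Omega}} .
\]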
The Sun accumulates DM particles while moving through the DM halo of the Milky Way. Due to the high density at the core of the Sun, for DM mass\,$\gsim$\,300 GeV annihilation products are absorbed and the sensitivity of DM annihilation searches weakens considerably, making this target inefficient for probing high-mass DM~\cite{IceCube:2011aj, Rott:2011fh, Bell:2011sn, Taoso:2010tg, Lim:2009jy}. The Milky Way is dominated by DM in its central regions, but unknown astrophysical backgrounds make it difficult to disentangle the signal~\cite{Bertone:2004ag, Bertone:2002je, Weniger:2012tx, Bringmann:2012vr, Hooper:2011ti, Hooper:2010mq}, whereas the diffuse component of the Milky Way DM halo~\cite{Abdo:2010dk, Abbasi:2011eq} leads to a significantly reduced signal. Dwarf galaxies have a high mass-to-light ratio and are one of the ideal targets for detecting DM in gamma-ray experiments with subdegree angular resolution~\cite{IceCube:2011ae, PalomaresRuiz:2010pn, Ackermann:2011wa, GeringerSameth:2011iw, Belikov:2011pu, Buckley:2010vg, Aharonian:2007km, SanchezConde:2011ap, Sandick:2009bi, Moulin:2007ca}. Galaxy clusters have the largest amount of DM amongst all known classes of gravitationally bound objects in the Universe. Although the background due to other astrophysical sources is also large therein, the contribution of DM substructures can enhance the DM annihilation signal from the smooth component, typically modeled using a Navarro, Frenk, and White (NFW) profile~\cite{Navarro:1996gj}. This enhancement depends on the abundance of DM substructures. State-of-the-art galaxy cluster simulations do not have the resolution to directly calculate the contribution due to the theoretically expected least massive substructures. However, using theoretically well-motivated values for the mass of the smallest substructure and extrapolating the abundance of substructures to these lowest masses, high-resolution computer simulations predict that galaxy clusters provide the best signal-to-noise ratio for a DM annihilation signal~\cite{Gao:2011rf}. Note that even a moderate enhancement due to DM substructure, as advocated in~\cite{SanchezConde:2011ap} following the works in~\cite{Kamionkowski:2008vw,Kamionkowski:2010mi}, predicts that galaxy clusters give the best signal-to-noise ratio for analyses where the field of view is greater than or equal to 1$^{\circ}$. This strongly motivates observations of galaxy clusters to search for DM annihilation signals\mbox{~\cite{Huang:2011xr, Abramowski:2012au, Dugger:2010ys, Profumo:2008fy, Ackermann:2010rg, Ando:2012vu, Pinzke:2009cp, Pinzke:2011ek, Jeltema:2008vu, Ackermann:2010rg}}. Neutrino searches, among other indirect searches for DM, have distinct advantages. Being electrically neutral and weakly interacting, neutrinos travel undeflected and unattenuated from their sources. Neutrinos can therefore provide information about dense sources, possibly at cosmological distances, from which no other standard model particle can reach us. Another crucial motivation to look for neutrinos is that many standard model particles eventually decay to produce neutrinos and gamma rays as final states. Detecting neutrinos is therefore complementary to gamma-ray searches from DM annihilation, which have become very exciting in recent times~\cite{Ackermann:2011wa, Abdo:2010nc, Abramowski:2010aa, Abdo:2010dk, Aleksic:2011jx}.
For very heavy DM, the gamma rays produced in the DM annihilations cascade, and the constraints on DM annihilation cross sections become weaker than those obtained using neutrinos. Also, for hadronic explanations of any gamma-ray and cosmic-ray excesses, detecting neutrinos will be a smoking-gun signature. Finally, direct annihilation to neutrinos is impossible to detect using any other detection channel, with electroweak bremsstrahlung being a notable exception~\cite{Bell:2011eu}, although the limits obtained in that case turn out to be weaker than those obtained by direct observation of neutrinos~\cite{Kachelriess:2007aj}. In fact, neutrinos, being the least detectable, define a conservative upper bound on the DM annihilation cross section to standard model particles~\cite{Beacom:2006tt, Yuksel:2007ac}. Limits obtained by gamma-ray telescopes are typically stronger than those obtained using neutrino telescopes, but the larger angular resolution of a neutrino telescope, compared to a gamma-ray telescope, means that the results obtained with a neutrino telescope are less dependent on the central part of the DM density profile (which gives the strongest signal in a gamma-ray telescope), where the uncertainty in DM simulations is the largest. Neutrino telescopes are also able to view a target source for a longer time compared to a gamma-ray telescope, though this advantage is mitigated by the smaller cross section of neutrino detection. Another advantage of neutrino telescopes is that they are able to view a large number of sources simultaneously and can be used to find dark matter in a region which is dark in the electromagnetic spectrum. These arguments and the availability of large neutrino telescopes strongly motivate a search for DM annihilation using neutrinos. Although dwarf galaxies are known to be the best targets for dark matter searches in gamma-ray experiments, they are not the best targets for neutrino experiments. The reason for this is the limited angular resolution of a neutrino telescope, which is $\gsim$ 1$^{\circ}$. Dwarf galaxies have an angular size of $<$ 1$^{\circ}$, and thus when a neutrino telescope takes data from a dwarf galaxy, even with the minimum angular resolution, the size of the dwarf galaxy is smaller than the data-taking region, which implies a worse signal-to-noise ratio. However, galaxy clusters have a typical size of a few degrees, and hence even when a neutrino telescope takes data over a region larger than its minimum angular resolution, the size of the galaxy cluster fills up the entire data-taking region. This ensures that, unlike in the case of dwarf galaxies, there is no position in the data-taking region from which there is no potential signal candidate, and thus provides a better signal-to-noise ratio. Neutrinos from galaxy clusters have been considered previously by Yuan \etal\,\cite{Yuan:2010gn}. In that paper, the DM halo for a galaxy cluster was obtained from extrapolation of the DM halo obtained from the simulation of a Milky Way-like galaxy~\cite{Springel:2008by, Springel:2008cc}. Using the Fermi-LAT limits from galaxy clusters, Yuan \etal\,constrained the minimum DM substructure mass, and analyzed muon tracks in IceCube to obtain a constraint on the DM annihilation cross section. In this paper, we investigate neutrinos from galaxy clusters using the latest DM density profiles, as given in Gao \etal\,\cite{Gao:2011rf}. This gives us updated inputs for both the smooth and the substructure components of DM in galaxy clusters.
For comparison, we also calculate our results by taking the smooth and the substructure components of the DM profile from the work by Sanchez-Conde \etal\,\cite{SanchezConde:2011ap}, and find that, due to the smaller boost factors (about a factor of 20 smaller than those in~\cite{Gao:2011rf}), the sensitivity of the neutrino telescope for this parametrization of the DM profile is about a factor of 20 worse than what is obtained when using the DM substructure modeling of~\cite{Gao:2011rf}. We also take into account neutrinos produced by cosmic ray interactions in the galaxy cluster, which were ignored in previous studies. With these updated inputs, we analyze the expected signals and backgrounds at IceCube and KM3NeT for both track and cascade events. While quantitative improvement in the detection prospects is found for track searches, a qualitative improvement in sensitivity and reach at low DM masses is expected if KM3NeT deploys a low-energy extension, which we call KM3NeT-Core, and is able to reconstruct cascades with a pointing accuracy down to 5$^\circ$, as claimed by Auer~\cite{Auer:2009zz}. The remainder of the paper is arranged as follows. In Sec.\,\ref{sec:Cluster}, we discuss the neutrino flux from DM annihilation, using the DM density profile of a typical galaxy cluster. In Sec.\,\ref{sec:NeuDet}, we discuss neutrino detection and relevant backgrounds at a neutrino telescope. In Sec.\,\ref{sec:Results} we discuss the results, showing our forecasted sensitivity to $\sigv$ for the considered annihilation channels, and conclude in Sec.\,\ref{sec:Conclusion}.
|
\label{sec:Conclusion} In this paper, we have considered the observation of galaxy clusters by neutrino telescopes and discussed the improvements that can be made over the existing limits. Recent high resolution computer simulations of galaxy clusters predict a large enhancement in the annihilation flux due to DM substructures. We take the substructure contribution into account and predict the neutrino flux from a typical galaxy cluster. We find that the sensitivity that can be obtained using galaxy clusters should improve the existing constraints by more than an order of magnitude. Our results should therefore encourage the IceCube collaboration to look at galaxy clusters, as an extension of their work on dwarf galaxies~\cite{IceCube:2011ae}. Due to the extended nature of the DM substructure profile (see Fig.\,\ref{fig:J vs psi}), nearby galaxy clusters like Virgo should appear as extended sources at neutrino telescopes. We find that the optimal angular window around a galaxy cluster that maximizes the signal-to-noise ratio has a radius $\approx2^\circ$ (see Fig.\,\ref{fig:J over square root delta omega}). An order of magnitude improvement over the IceCube sensitivity is expected if KM3NeT deploys a low energy extension (like DeepCore in IceCube) in their telescope, which would allow for a full-sky observation with good pointing using cascades. This has the potential to open the $(10\,{\rm GeV}-100\,{\rm GeV})$ DM mass range to neutrino astronomy, and to improve existing constraints by an order of magnitude. We hope that these promising results will encourage the KM3NeT collaboration to investigate the possibility of deploying a low energy extension to their telescope and to improve the reconstruction of cascades (see right panels in Fig.\,\ref{fig:sensitivity}). We have looked at the $\chi \chi \,\rightarrow \,\mu^+\mu^-$ annihilation channel and predicted an order of magnitude improvement over the current constraints (see left panel in Fig.\,\ref{fig:comparison}). Although this bound turns out to be weaker than the bound on the annihilation cross section given by Fermi-LAT observations of dwarf spheroidal galaxies, we emphasize that the large angular resolution of neutrino telescopes makes the result more model-independent than that obtained by Fermi-LAT. We have predicted that the improvement in sensitivity to the annihilation cross section in this channel will allow us to probe cross sections $\sigv\gsim(10^{-24}-10^{-22}){\rm cm^3s^{-1}}$ for DM masses in the range $(1\,{\rm GeV}-10\,{\rm TeV})$ for 10 years of observation by a km$^3$ neutrino telescope. We have also looked at the $\chi \chi \,\rightarrow \, \nu\overline{\nu}$ channel and predicted that the observation of galaxy clusters will improve the constraint on the annihilation cross section in this channel by an order of magnitude over the existing limit obtained by IceCube from the Milky Way Galactic halo (see right panel in Fig.\,\ref{fig:comparison}). This annihilation channel is unique as it has no signal in any other DM indirect detection experiment. The improvement in sensitivity in this channel will likewise allow us to probe cross sections $\sigv\gsim(10^{-24}-10^{-22}){\rm cm^3s^{-1}}$ for DM masses in the range $(1\,{\rm GeV}-10\,{\rm TeV})$ for 10 years of observation by a km$^3$ neutrino telescope. We have also considered the $\chi \chi \,\rightarrow \, t\overline{t}$ annihilation channel, which is expected to be very important for a heavy fermionic DM particle.
We have predicted that the improvement in sensitivity to the annihilation cross section in this channel will allow us to probe cross sections $\sigv\gsim10^{-22}{\rm cm^3s^{-1}}$ for 10 years of observation by a km$^3$ neutrino telescope. We finally considered the $\chi \chi \,\rightarrow \, \nu\overline{\nu}\nu\overline{\nu}$ channel and predict that the sensitivity that can be obtained using neutrino telescopes may be able to probe the annihilation cross sections required in models which aim to solve various small-scale problems in $\Lambda$CDM. Although we have performed our calculations for the Virgo galaxy cluster, we expect that neutrino telescope observation of a properly chosen galaxy cluster (after taking into consideration backgrounds and various detector systematics in more detail) will improve the limits on the annihilation cross section by an order of magnitude in almost all annihilation channels. We must emphasize that the biggest uncertainty in this result comes from the $\sim$11 orders of magnitude extrapolation in the minimum DM substructure mass that is used to calculate the DM substructure profile. As a consequence of this extrapolation of the minimum substructure mass, the boost factor that can be obtained in a galaxy cluster due to the presence of substructures can vary by a factor of $\sim$20. Unless simulations improve their resolution dramatically, this will remain an inherent assumption in any DM indirect detection experiment observing galaxy clusters. All things considered, we hope to have conveyed the usefulness of observing galaxy clusters at neutrino telescopes for studying DM, in particular how good reconstruction of cascades can lead to significant improvements in sensitivity. We hope that the IceCube and KM3NeT collaborations will consider our results and make the required improvements in their analyses and detectors to make this possible.
| 12
| 6
|
1206.1322
|
1206
|
1206.1052_arXiv.txt
|
It has long been thought that there is a connection between ultraluminous infrared galaxies (ULIRGs), quasars, and major mergers. Indeed, simulations show that major mergers are capable of triggering massive starbursts and quasars. However, observations by the {\em Herschel Space Observatory} suggest that, at least at high redshift, there may not always be a simple causal connection between ULIRGs and mergers. Here, we combine an evolving merger-triggered AGN luminosity function with a merger-triggered starburst model to calculate the maximum contribution of major mergers to the ULIRG population. We find that major mergers can account for the entire local population of ULIRGs hosting AGN and $\sim$25$\%$ of the total local ULIRG luminosity density. By $z\sim1$, major mergers can no longer account for the luminosity density of ULIRGs hosting AGN and contribute $\lesssim$12$\%$ of the total ULIRG luminosity density. This drop is likely due to high redshift galaxies being more gas rich and therefore able to achieve high star formation rates through secular evolution. Additionally, we find that major mergers can account for the local population of warm ULIRGs. This suggests that selecting high redshift warm ULIRGs will allow for the identification of high redshift merger-triggered ULIRGs. As major mergers are likely to trigger very highly obscured AGN, a significant fraction of the high redshift warm ULIRG population may host Compton thick AGN.
|
\label{sect:intro} In the 1980s astronomers discovered a new class of infrared selected galaxies known as ultraluminous infrared galaxies (ULIRGs) and characterized by $L_{IR}>10^{12}$ L$_{\odot}$, where $L_{IR}$ is the 8--1000 $\mu$m luminosity \citep[e.g.,][]{H84,S84a}. Another important topic during this time period was the study of the evolution of quasars \citep[e.g.,][]{SG83}, a class of active galactic nuclei (AGN) where accretion onto the supermassive black hole at the center of a massive galaxy gives rise to $L_X>10^{44}$ erg s$^{-1}$, where $L_X$ is the 2--10 keV luminosity. ULIRGs and quasars have similar bolometric luminosities (10$^{45}$--10$^{46}$ erg s$^{-1}$), and optical observations suggest many ULIRGs have nuclear sources of non-thermal ionizing radiation and disturbed morphologies \citep{S88}. Thus, \citet{S88} suggested that when two gas rich galaxies merge, gas and dust will fall into the nucleus of the resulting galaxy, triggering a massive starburst and a quasar.

Thirty years later, the connection between ULIRGs, AGN, and major mergers is still an area of active research. Recent simulations show that, indeed, gas rich major mergers are capable of triggering large starbursts and bright AGN \citep[e.g.,][]{H06, Y09, H10, N10}. By looking for ULIRGs with strong X-ray emission or power-law spectra in the {\em Spitzer Space Telescope} IRAC bands, studies have shown that AGN are common in ULIRGs \citep[e.g.,][]{H06, A07, Y09, D10, H10, N10}. Furthermore, both the fraction of ULIRGs that host AGN and the fraction of ULIRGs whose bolometric luminosities are dominated by AGN emission appear to increase strongly with luminosity \citep[e.g.,][]{V02, P05, G05, B06, D10, Nard10}. Morphological studies have shown that a significant fraction of ULIRGs have disturbed morphologies, suggesting the galaxy has recently undergone a merger or interaction \citep[e.g.][]{D10, K10, Nard10}. Additionally, \citet{V02} showed that the fraction of ULIRGs triggered by major mergers increases with luminosity. Thus, it is expected that a significant fraction of ULIRGs host AGN and were triggered by gas rich major mergers. However, these studies tend to focus on ULIRGs with $z\lesssim1$.

Far-infrared observations by the {\em Herschel Space Observatory} have opened a new window on the $z\gtrsim1$ ULIRG population. Interestingly, {\em Herschel} observations show that at high redshift major mergers are not necessary to trigger ULIRGs \citep{S10}. Analyzing {\em Herschel} observations of the Bo\"otes field, \citet{M12} point out that $\lesssim$30$\%$ of optically-faint $z\sim2$ ULIRGs show obvious signs of a recent merger. Deep {\em Herschel} observations of the Great Observatories Origins Deep Survey (GOODS) and the Cosmological Evolution Survey (COSMOS) fields find that most $z\gtrsim1$ ULIRGs are not in a starburst mode of star formation \citep{E11,R11}. Instead, the increased gas fraction in high redshift galaxies allows normal secular star formation to power ULIRGs \citep[e.g.,][]{D08,E11,R11,M12}. Recent cosmological simulations confirm these observational results, finding that more than half of high redshift ULIRGs can be accounted for through mechanisms other than major mergers \citep{N12}. These results indicate that at high redshift the AGN-ULIRG connection may be quite different from the connection observed locally.
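As a quick consistency check of the statement that ULIRGs and quasars have comparable bolometric luminosities, the short sketch below converts the ULIRG threshold into cgs units (only the standard solar luminosity value is assumed):

\begin{verbatim}
# Express the ULIRG infrared-luminosity threshold in erg/s and
# compare with the quasar bolometric range quoted above.
L_sun = 3.846e33            # solar luminosity [erg/s]
L_IR_ulirg = 1e12 * L_sun   # ULIRG threshold, L_IR > 10^12 L_sun
print("L_IR(ULIRG) > %.1e erg/s" % L_IR_ulirg)   # ~3.8e45 erg/s
# This lies within the 1e45--1e46 erg/s bolometric range quoted
# for quasars, as noted in the text.
\end{verbatim}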
\citet{DB12} computed an AGN population model that constrains the space density and Eddington ratio evolution of AGN triggered by major mergers by considering the hard X-ray luminosity function (HXLF), X-ray AGN number counts, the X-ray background, and the local mass density of supermassive black holes. Thus, a model of the major merger population can be computed by combining this description of the evolving luminosity function of merger-triggered AGN with the \citet{H10} model for the time evolution of merger-triggered starbursts. Similarly, a model merger spectral energy distribution (SED) can be calculated by combining AGN infrared spectra computed with the photoionization code {\sc Cloudy} \citep{F98} with the \citet{R09} star formation templates. This method is used here to determine the maximum contribution of mergers to the ULIRG population at $z\lesssim1.5$. A $\Lambda$CDM cosmology is assumed with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\Lambda}=1.0-\Omega_M=0.7$.
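Schematically, the luminosity densities $\Psi$ compared throughout this work are integrals of a luminosity function over the ULIRG regime, $\Psi = \int \phi(L)\,L\,dL$. A minimal sketch, with a hypothetical double power-law $\phi(L)$ standing in for the \citet{DB12} merger-triggered AGN model:

\begin{verbatim}
# Luminosity density Psi = integral of phi(L)*L over the ULIRG range,
# here with a toy double-power-law phi(L); the real calculation uses
# the evolving merger-triggered AGN luminosity function of DB12.
import numpy as np
from scipy.integrate import quad

def phi(L, L_star=1e12, phi_star=1e-7, a=0.6, b=2.2):
    # Toy luminosity function [Mpc^-3 dex^-1], L in L_sun (assumed).
    x = L / L_star
    return phi_star / (x**a + x**b)

# Integrate over log L from the ULIRG threshold upward.
psi, _ = quad(lambda logL: phi(10**logL) * 10**logL, 12.0, 14.0)
print("toy ULIRG luminosity density: %.2e L_sun Mpc^-3" % psi)
\end{verbatim}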
|
\label{sect:disc} Major mergers can account for at most a quarter of the local ULIRG $\Psi_{total}$. This suggests that a large fraction of local ULIRGs are triggered by mechanisms other than the coalescence of two massive gas rich galaxies, such as minor mergers, interactions, and secular processes. By $z\sim1$, the ULIRG $\Psi_{merger}$/$\Psi_{AGN}<1$ and by $z\sim1.25$, the ULIRG $\Psi_{merger}$/$\Psi_{AGN}\lesssim0.5$. At $z\sim1$, the ULIRG $\Psi_{merger}$/$\Psi_{total}\lesssim0.12$. Indeed, simulations by \citet{H10} predict that at $z=1$, the ULIRG $\Psi_{merger}$/$\Psi_{total}\approx0.08$, in good agreement with the findings of this study. As major mergers are much more common at $z>1$ than $z<1$, this suggests that secular processes are even more important for triggering ULIRGs at high redshift.

{\em Herschel} has provided observational evidence that major mergers are not necessary at $z\gtrsim1$ to trigger ULIRGs \citep{S10}; a finding that has been confirmed by simulations \citep{N12}. At $z\gtrsim1$ the majority of ULIRGs appear to be scaled up versions of local normal star forming galaxies. These high redshift ULIRGs tend to have cooler far-infrared dust temperatures \citep{M12} and stronger PAH emission \citep{E11} than local ULIRGs. Furthermore, by comparing the 8 $\mu$m flux to the total infrared flux of high redshift ULIRGs, \citet{E11} find that most $z\gtrsim1$ ULIRGs lie on the infrared main sequence and are in a normal star forming mode of evolution. Similarly, \citet{B12} and \citet{K12} find that at $z\sim0.7$ and $z\sim2$, AGN tend to be hosted by galaxies on the main sequence of star formation \citep[see][]{E11}. Observations and simulations both indicate that high redshift galaxies are more gas rich than local galaxies \citep[e.g.,][]{T10a, Di12, N12}. It is likely that galaxies with large reservoirs of gas and dust are capable of fueling ULIRGs without being triggered by a major merger.

\citet{E11} do find that some high redshift ULIRGs are in a starburst mode, possibly triggered by a merger. Moreover, \citet{E11} find evidence for a population of very highly obscured AGN embedded in these compact dusty starbursts. Other observational studies also find Compton thick (CT) AGN ($N_H>10^{24}$ cm$^{-2}$) candidates in high redshift dusty starburst galaxies \citep[e.g.,][]{D10,M12}. \citet{F99} explains that the gas and dust funneled into the central regions of the merger remnant galaxy will fuel a burst of star formation and rapid black hole accretion, and will obscure the resulting AGN. Thus, a significant fraction of CT AGN are expected to be recently triggered, likely by a major merger, and accreting very rapidly \citep[e.g.,][]{DB10}. The results of this study are consistent with the merger-triggered CT AGN scenario, but, as $\Psi_{merger}$/$\Psi_{total}\lesssim0.12$ for ULIRGs at $z\sim1$, the contribution of these merger-triggered CT AGN to the $z\gtrsim1$ ULIRG population must be fairly small. The fraction of AGN that are CT is hard to constrain observationally due to the severe obscuration that defines CT AGN, and AGN population models predict a wide range for the CT fraction \citep[see][]{B11}. Increasing the fraction of AGN that are CT in this model by a factor of 1.5 increases the $z\sim1$ ULIRG $\Psi_{merger}$/$\Psi_{total}$ by $\sim$0.01.
Local ULIRG SEDs are often divided into two groups, warm and cool, where warm ULIRG SEDs are characterized by $f_{25\mu m}/f_{60\mu m}>0.2$ \citep[e.g.,][]{AH06}, where $f_{25\mu m}$ is the 25 $\mu$m flux and $f_{60\mu m}$ is the 60 $\mu$m flux. According to \citet{AH06}, warm ULIRGs make up 15--30$\%$ of the Bright Galaxy Survey sources \citep{S88b}. The merger model used here does produce warm ULIRG SEDs with $f_{25\mu m}/f_{60\mu m}\gtrsim0.5$ at all redshifts; thus mergers can account for the local population of warm ULIRGs. \citet{E11} found that galaxies on the main sequence of normal star formation, including galaxies hosting AGN, tend to have cooler dust temperatures than star-bursting galaxies. Because starbursts triggered by major mergers are expected to be more compact than secular star formation, merger-triggered starbursts will be characterized by higher dust temperatures than normal star formation \citep[e.g.,][]{E11, M12}. Indeed, observations of local ULIRGs show that compact ULIRGs are more likely to host an AGN than less compact ULIRGs \citep{Nard10}. Analyzing simulations of major mergers, \citet{Y09} find that warm ULIRGs are likely to be galaxies evolving from the star formation dominated merger phase to the AGN-starburst post-merger phase. Thus, by combining {\em Herschel} and {\em Spitzer} or {\em Wide-field Infrared Survey Explorer} (WISE) observations to select warm ULIRGs, the population of ULIRGs hosting merger-triggered AGN can be identified. As discussed above, the population of ULIRGs hosting AGN triggered by major mergers is likely to include a significant fraction of CT AGN. Thus, seeking out high redshift ULIRGs with warm SEDs will also lead to the identification of high redshift CT AGN.

By combining the evolving luminosity function of AGN triggered by major mergers calculated by \citet{DB12} with the merger-triggered starburst model of \citet{H10}, we computed an upper limit for the major merger contribution to the ULIRG population. Locally, major mergers can account for the observed population of ULIRGs hosting AGN and ULIRGs with warm SEDs, but the local ULIRG $\Psi_{merger}$/$\Psi_{total}\lesssim0.26$. By $z\sim1$, major merger-triggered ULIRGs hosting AGN can no longer account for the population of ULIRGs observed to have AGN signatures. Indeed, the ULIRG $\Psi_{merger}$/$\Psi_{AGN}\lesssim0.50$ by $z\sim1.25$. Furthermore, at $z\sim1$, the ULIRG $\Psi_{merger}$/$\Psi_{total}\lesssim0.12$. Combining observations by {\em Herschel} and {\em Spitzer} or WISE to identify high redshift ULIRGs with warm SEDs is a good tool for identifying the population of ULIRGs hosting merger-triggered AGN, a large fraction of which are expected to be CT.
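The warm/cool selection described above reduces to a simple flux-ratio cut; a minimal sketch (the fluxes are hypothetical placeholder values, not measurements):

\begin{verbatim}
# Classify ULIRG SEDs as warm or cool with the color cut
# f25/f60 > 0.2 quoted in the text.
def classify(f25, f60):
    return "warm" if f25 / f60 > 0.2 else "cool"

# Placeholder fluxes in Jy for two hypothetical sources:
for name, f25, f60 in [("src A", 1.2, 4.0), ("src B", 0.3, 5.1)]:
    print(name, classify(f25, f60))
\end{verbatim}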
| 12
| 6
|
1206.1052
|
1206
|
1206.6358_arXiv.txt
|
We present {\it Herschel} PACS photometry of seventeen B- to M-type stars in the 30 Myr-old Tucana-Horologium Association. This work is part of the {\it Herschel} Open Time Key Programme ``Gas in Protoplanetary Systems'' (GASPS). Six of the seventeen targets were found to have infrared excesses significantly greater than the expected stellar IR fluxes, including a previously unknown disk around HD30051. These six debris disks were fitted with single-temperature blackbody models to estimate the temperatures and abundances of the dust in the systems. For the five stars that show excess emission in the {\it Herschel} PACS photometry and also have {\it Spitzer} IRS spectra, we fit the data with models of optically thin debris disks with realistic grain properties in order to better estimate the disk parameters. The model is determined by a set of six parameters: surface density index, grain size distribution index, minimum and maximum grain sizes, and the inner and outer radii of the disk. The best fitting parameters give us constraints on the geometry of the dust in these systems, as well as lower limits to the total dust masses. The HD105 disk was further constrained by fitting marginally resolved PACS 70$\,\mu$m imaging.
|
Debris disks are the last stage of circumstellar disk evolution, in which the gas from the protoplanetary and transitional disk phases has been dissipated and the dust seen comes from collisions between planetesimals. In the youngest debris disks, $\lesssim100$ Myrs old, terrestrial planets may still be forming \citep{Kenyon06}. Giant planets must form before the gas dissipates; their gravitational interactions with planetesimals and dust can leave signatures in debris disks. Cold debris disks may be Kuiper-belt analogs, signaling the location and properties of planetesimals remaining in the disk. To be suitable for life, terrestrial planets in habitable zones must have volatiles such as water brought to their surfaces from beyond the ice line. Planetesimals in Kuiper Belt-like debris disks may provide this reservoir of volatiles \citep{Lebreton12}. We do not yet have the capability to detect the planetesimals in these disks, but we can detect the smaller dust grains. These dust grains are believed to be produced through the collisions of the larger planetesimals, and therefore are likely to have similar compositions to the larger undetected bodies. The properties and locations of the dust grains in Kuiper Belt analogs can provide clues to the properties of the hidden planetesimals. Additionally, dust structures in the disk may point to unseen exoplanets \citep[e.g.\ $\beta$ Pic b;][]{Lagrange10}.

To measure the cold dust in the outer regions of debris disks, we need great sensitivity at far-infrared wavelengths where the thermal emission from the cold dust grains peaks ($\ge 70\,\mu$m). The {\it Herschel Space Observatory} \citep{Pilbratt10} provides a unique opportunity for sensitive debris disk surveys. {\it Herschel}'s PACS instrument \citep{Poglitsch10} is sensitive to the cold dust with a wavelength range of $55-210\,\mu$m. Additionally, {\it Herschel}'s spatial resolution is almost 4 times better than {\it Spitzer}'s at similar wavelengths, reducing confusion with background galaxies and interstellar cirrus, and making it easier for {\it Herschel} to detect faint, cold debris disks.

In this paper, we present results of a sensitive {\it Herschel} debris disk survey in the 30-Myr-old Tucana-Horologium Association. This work is part of the {\it Herschel} Open Time Key Programme ``Gas in Protoplanetary Systems'' \citep[GASPS; Dent et al.\ in prep,][]{Mathews10}. The GASPS survey targets young, nearby star clusters with well determined ages, ranging from 1-30 Myrs. This range of ages covers the stages of planet formation from giant planet formation \citep[$\sim 1$ Myr;][]{Alibert05} to the late stages of terrestrial planet formation \citep[10-100 Myrs;][]{Kenyon06}. The targets in each group were chosen to span a range of spectral types from B to M.
\begin{table*}[ht]
\centering
\caption{Stellar Properties in the 30 Myr-old Tucana-Horologium Association\label{tab:stars}}
\begin{threeparttable}
\begin{tabular}{l c c c c c}
\hline\hline
Star & Right Ascension & Declination & Spectral\tnote{a} & T$_{\ast}$\tnote{b} & Distance\tnote{c} \\
 & (J2000) & (J2000) & Type & (Kelvin) & (pc) \\
\hline
HD2884\tnote{d} & 00:31:32.67 & -62:57:29.58 & B9V & 11250 & $43\pm1$ \\
HD16978 & 02:39:35.36 & -68:16:01.00 & B9V & 10500 & $47\pm1$ \\
HD3003\tnote{d} & 00:32:43.91 & -63:01:53.39 & A0V & 9800 & $46\pm1$ \\
HD224392 & 23:57:35.08 & -64:17:53.64 & A1V & 9400 & $49\pm1$ \\
HD2885\tnote{d} & 00:31:33.47 & -62:57:56.02 & A2V & 8600 & $53\pm10$ \\
HD30051 & 04:43:17.20 & -23:37:42.06 & F2/3IV/V & 6600 & $58\pm4$ \\
HD53842 & 06:46:13.54 & -83:59:29.51 & F5V & 6600 & $57\pm2$ \\
HD1466 & 00:18:26.12 & -63:28:38.98 & F9V & 6200 & $41\pm1$ \\
HD105 & 00:05:52.54 & -41:45:11.04 & G0V & 6000 & $40\pm1$ \\
HD12039 & 01:57:48.98 & -21:54:05.35 & G3/5V & 5600 & $42\pm2$ \\
HD202917 & 21:20:49.96 & -53:02:03.14 & G5V & 5400 & $46\pm2$ \\
HD44627\tnote{d} & 06:19:12.91 & -58:03:15.52 & K2V & 5200 & $46\pm2$ \\
HD55279 & 07:00:30.49 & -79:41:45.98 & K3V & 4800 & $64\pm4$ \\
HD3221 & 00:34:51.20 & -61:54:58.14 & K5V & 4400 & $46\pm2$ \\
HIP107345 & 21:44:30.12 & -60:58:38.88 & M1 & 3700 & $42\pm5$ \\
HIP3556 & 00:45:28.15 & -51:37:33.93 & M1.5 & 3500 & $39\pm4$ \\
GSC8056-482 & 02:36:51.71 & -52:03:03.70 & M3Ve & 3400 & 25 \\
\hline
\end{tabular}
\begin{tablenotes}
\item[a]{Spectral types listed are from the SIMBAD Astronomical Database}
\item[b]{Calculated from stellar modeling. See Section 3}
\item[c]{Distances are taken from the {\it Hipparcos} Catalog \citep{Perryman97}}
\item[d]{Binary or multiple star system}
\end{tablenotes}
\end{threeparttable}
\end{table*}
The 30 Myr-old Tucana-Horologium Association is the oldest in the GASPS survey. The Tucana-Horologium Association, discovered independently by \citet{Zuckerman00} and \citet{Torres00}, is a group of $\sim60$ stars with common proper motion and an average distance of 46 pc \citep{Zuckerman04}. About 1/3 of the targets have debris disk systems known from previous {\it Spitzer} surveys \citep{Hillenbrand08, Smith06}. We obtained {\it Herschel} PACS photometry of the seventeen GASPS targets in the Tucana-Horologium Association. We also obtained PACS spectra for two of the targets. Previously unpublished {\it Spitzer} IRS spectra for three targets are presented. In Section 2, we present our methods and results of data reduction and aperture photometry. In Section 3, we fit blackbody and modified blackbody models to the detections and upper limits to determine dust temperatures and fractional luminosities. We further analyze some of the disks with our optically thin dust disk model in Section 4. Additionally, we discuss the detection of a marginally resolved disk in our sample in Section 5 and present conclusions in Section 6.
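The single-temperature fits of Section 3 amount to adding a scaled Planck function to the stellar photosphere. A minimal sketch of such a model evaluation (the temperatures and normalizations below are illustrative free parameters, not fitted values from this survey):

\begin{verbatim}
# Photosphere + single-temperature blackbody model for a debris-disk
# SED. Parameter values are illustrative; Section 3 fits them to the
# PACS photometry.
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck_nu(nu, T):
    # Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]
    return 2*h*nu**3/c**2 / np.expm1(h*nu/(k_B*T))

def model_flux(lam_um, T_star, T_dust, A_star, A_dust):
    # Total model flux density at wavelength lam_um [micron];
    # A_star, A_dust are solid-angle-like normalizations (assumed).
    nu = c / (lam_um * 1e-4)
    return A_star*planck_nu(nu, T_star) + A_dust*planck_nu(nu, T_dust)

lam = np.array([70.0, 100.0, 160.0])  # PACS bands [micron]
print(model_flux(lam, T_star=6000.0, T_dust=50.0,
                 A_star=1e-16, A_dust=1e-13))
\end{verbatim}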
|
We observed seventeen stars in the Tucana-Horologium Association with the PACS instrument on the {\it Herschel Space Observatory}. We detected six debris disks, including one previously unknown disk, and put sensitive upper limits on those not detected. We modeled the disks with a thermal dust disk model and were able to place tighter constraints on several disk parameters, such as the inner disk radius, minimum grain size, and grain size distribution. Additionally, we marginally resolved one disk and were able to put a lower limit on its outer radius. Future work will include {\it Herschel} SPIRE observations to better populate the sub-mm portion of the SEDs, and resolved imaging with ALMA to break degeneracies by determining the disk geometry. These data will also be combined with other targets of different ages to examine the statistical properties of the entire GASPS sample.
| 12
| 6
|
1206.6358
|
1206
|
1206.0748_arXiv.txt
|
The optical light curve of some supernovae (SNe) may be powered by the outward diffusion of the energy deposited by the explosion shock (so-called shock breakout) in optically thick ($\tau\gtorder 30$) circumstellar matter (CSM). Recently, it was shown that the radiation-mediated and -dominated shock in an optically thick wind must transform into a collisionless shock and can produce hard X-rays. The X-rays are expected to peak at late times, relative to maximum visible light. Here we report on a search, using {\it Swift}-XRT and {\it Chandra}, for X-ray emission from 28 SNe that belong to classes whose progenitors are suspected to be embedded in dense CSM. Our sample includes 19 type-IIn SNe, one type-Ibn SN, and eight hydrogen-poor super-luminous SNe (SLSN-I; SN\,2005ap like). Two SNe (SN\,2006jc and SN\,2010jl) have X-ray properties that are roughly consistent with the expectation for X-rays from a collisionless shock in optically thick CSM. Therefore, we suggest that their optical light curves are powered by shock breakout in CSM. We show that two other events (SN\,2010al and SN\,2011ht) were too X-ray bright during the SN maximum optical light to be explained by the shock breakout model. We conclude that the light curves of some, but not all, type-IIn/Ibn SNe are powered by shock breakout in CSM. For the rest of the SNe in our sample, including all the SLSN-I events, our X-ray limits are not deep enough and were typically obtained at too early times (i.e., near the SN maximum light) to draw conclusions about their nature. Late time X-ray observations are required in order to further test whether these SNe are indeed embedded in dense CSM. We review the conditions required for a shock breakout in a wind profile. We argue that the time scale, relative to maximum light, for the SN to peak in X-rays is a probe of the column density and the density profile above the shock region. The optical light curves of SNe for which the X-ray emission peaks at late times are likely powered by the diffusion of shock energy from a dense CSM. We note that if the CSM density profile falls faster than a constant-rate wind density profile, then X-rays may escape at earlier times than estimated for the wind profile case. Furthermore, if the CSM has a region in which the density profile is very steep, relative to a steady wind density profile, or the CSM is neutral, then the radio free-free absorption may be low enough, and radio emission may be detected.
|
\label{sec:Introduction} Circumstellar Matter (CSM) around supernova (SN) progenitors may play an important role in the emission and propagation of energy from SN explosions. The interaction of the SN radiation with optically thin CSM shells may generate emission lines, with widths that are representative of the shell velocity (i.e., type-IIn SNe; Schlegel 1990; Kiewe et al.\ 2012). The interaction of SN ejecta with the CSM can power the light curves of SNe by transformation of the SN kinetic energy into photons. In cases where a considerable amount of optically thin (and ionized) material is present around the exploding star, synchrotron and free-free radiation can emerge, and inverse Compton scattering can generate X-ray photons (e.g., Chevalier \& Fransson 1994; Horesh et al.\ 2012; Krauss et al.\ 2012).

For the type-IIn SN PTF\,09uj, Ofek et al.\ (2010) suggested that a shock breakout can take place in an optically thick wind (see also Grassberg, Imshennik, \& Nadyozhin 1971; Falk \& Arnett 1977; Chevalier \& Irwin 2011; Balberg \& Loeb 2011). This will happen if the Thomson optical depth within the wind profile is $\gtorder c/v_{{\rm sh}}$, where $c$ is the speed of light, and $v_{{\rm sh}}$ is the shock speed. Ofek et al.\ (2010) showed that shock breakout in wind environments produces optical displays that are brighter and have longer time scales than those from surfaces of red supergiants (e.g., Colgate 1974; Matzner \& McKee 1999; Nakar \& Sari 2010; Rabinak \& Waxman 2011; Couch et al.\ 2011).

Chevalier \& Irwin (2011) extended this picture. Specifically, they discussed CSM with a wind profile in which the wind has a cutoff at a distance $R_{{\rm w}}$. If the optical depth at $R_{{\rm w}}$ is $\ltorder c/v_{{\rm s}}$ then the light curve of the supernova will have a slow decay (e.g., SN\,2006gy; Ofek et al.\ 2007; Smith et al.\ 2007). If the optical depth at $R_{{\rm w}}$ is $\gtorder c/v_{{\rm s}}$, then it will have a faster decay (e.g., SN\,2010gx; Pastorello et al.\ 2010a; Quimby et al.\ 2011a). Moriya \& Tominaga (2012) investigated shock breakouts in general wind density profiles of the form $\rho\propto r^{-w}$. They suggested that, depending on the power-law index $w$, shock breakouts in wind environments can produce bright SNe without narrow emission lines (e.g., SN\,2008es; Gezari et al.\ 2009; Miller et al.\ 2009).

Recently, Katz et al.\ (2011) and Murase et al.\ (2011) showed that if the progenitor is surrounded by optically thick CSM then a collisionless shock is necessarily formed during the shock breakout. Moreover, they argued that the energy emitted from the collisionless shock in the form of high-energy photons and particles is comparable to the shock breakout energy. Furthermore, this process may generate high-energy ($\gtorder 1$\,TeV) neutrinos. Although Katz et al.\ (2011) predicted that the photons are generated with energy typically above 60\,keV, it is reasonable to assume that some photons will be emitted with lower energy. Chevalier \& Irwin (2012) showed that Comptonization and inverse Compton scattering of the high-energy photons are likely to play an important role, and that the high-energy photons will be absorbed. Svirski, Nakar \& Sari (2012) discuss the X-ray emission from collisionless shocks. They show that at early times the X-rays will be processed into the optical regime by the Compton process. Therefore, at early times, the optical emission will be about $10^{4}$ times stronger than the high-energy emission.
With time, the X-ray emission will become stronger, while the optical emission will decay. They conclude that for a CSM with a steady wind profile ($w=2$), X-ray emission may peak only at late times, roughly 10--50 times the shock breakout time scale. The shock breakout time scale, $t_{{\rm br}}$, is roughly given by the diffusion time scale at the time of shock breakout. This time scale is also equivalent to the radius at which the shock breaks out ($r_{{\rm br}}$) divided by the shock velocity ($v_{{\rm s}}$; Weaver 1976). If the main source of optical photons is the diffusion of the shock breakout energy, the SN optical light rise time, $t_{{\rm rise}}$, will be equivalent to the shock breakout time scale. Therefore, X-ray flux measurements and spectra of SNe embedded in dense CSM, starting from the explosion until months or years after maximum light, can probe the properties of the CSM around the SN progenitors and the progenitor mass-loss history. This unique probe into the final stages of massive star evolution has been only partially exploited, at best.

Herein, we analyze the X-ray data for 28 SNe with light curves that may be powered by a shock breakout from dense CSM, and for which {\em Swift}-XRT (Gehrels et al.\ 2004) observations exist. We use this sample to search for X-ray signatures of collisionless shocks -- emission at late times (months to years after peak optical luminosity). We suggest that these signals were observed in several cases, most notably in SN\,2006jc (Immler et al.\ 2008) and SN\,2010jl (Chandra et al.\ 2012a). Finally, we review the conditions for a shock breakout in CSM with a wind profile and discuss the importance of bound-free absorption and the possibility of detecting radio emission from such SNe. The structure of this paper is as follows. In \S\ref{sec:Sample} we present the SN sample, while \S\ref{sec:Obs} presents the X-ray observations. We review and discuss the model in \S\ref{sec:Disc}, and discuss the observations in the context of the model in \S\ref{sec:obsmoel}. Finally, we conclude in \S\ref{Conc}.
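To make the breakout condition quantitative: for a steady wind $\rho = K/r^2$ with $K = \dot{M}/(4\pi v_{\rm w})$, the optical depth is $\tau(r) = \kappa K/r$, so the condition $\tau = c/v_{\rm s}$ gives $r_{\rm br} = \kappa K v_{\rm s}/c$. The sketch below evaluates this for illustrative (assumed) values of the mass-loss rate, wind speed, and shock speed:

\begin{verbatim}
# Shock breakout radius in a steady wind rho = K/r^2, with
# K = Mdot/(4 pi v_w). Since tau(r) = kappa*K/r, breakout
# (tau = c/v_s) occurs at r_br = kappa*K*v_s/c.
# Input values are illustrative, not fits to any SN in the sample.
import math

kappa = 0.34              # Thomson opacity [cm^2/g], ionized gas
Msun_yr = 6.3e25          # 1 Msun/yr in g/s
Mdot = 0.1 * Msun_yr      # mass-loss rate [g/s] (assumed)
v_w = 1.0e7               # wind speed, 100 km/s [cm/s] (assumed)
v_s = 1.0e9               # shock speed, 10^4 km/s [cm/s] (assumed)
c = 3.0e10                # speed of light [cm/s]

K = Mdot / (4.0 * math.pi * v_w)
r_br = kappa * K * v_s / c
t_br = r_br / v_s         # breakout (diffusion) time scale
print("r_br = %.2e cm, t_br = %.1f days" % (r_br, t_br / 86400.0))
\end{verbatim}

For these inputs the breakout radius is a few$\,\times10^{14}$ cm and $t_{\rm br}$ is of order a week, broadly in line with the rise times of the interacting SNe discussed here.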
|
\label{Conc} We present a search for X-ray emission from 28 SNe in which it is possible that the shock breakout took place within a dense CSM. Most SNe have been observed with X-ray telescopes only around maximum optical light. The SNe in our sample that do have late time observations were either detected also at early times or were observed serendipitously at very late times. In that respect, our first conclusion is that a search for X-ray emission from SNe at both early and late times is essential to constrain the properties of the CSM around their progenitors. Our analysis suggests that some type-IIn/Ibn SNe, most notably SN\,2006jc and SN\,2010jl, have optical light curves that are likely powered by a shock breakout in CSM, while some other type-IIn SNe do~not. However, for most of the SNe in our sample, including all the SLSN-I events, the observations are not conclusive. Specifically, the lack of X-ray detections of SLSN-I events cannot rule out the interaction model suggested in Quimby et al.\ (2011a; see also Ginzburg \& Balberg 2012). We conclude that deeper observations at later times are required in order to further test this model. Given the limits found in this paper and our current understanding of these events, it will be worthwhile to monitor type-IIn SNe (as well as other classes of potentially interacting SNe; e.g., type-IIL) with X-ray and radio instruments at time scales $\gtorder10$ times the rise time of the SN.

We argue that in some cases bound-free absorption will play an important role at early and late times. Therefore, observations with the soon-to-be-launched Nuclear Spectroscopic Telescope Array (NuSTAR; Harrison et al.\ 2010) in the 6--80\,keV band may be extremely useful to test the theory and to study the physics of these collisionless shocks. Moreover, in the cases where bound-free absorption is important (e.g., $v_{{\rm s}}\ltorder10^{4}$\,km\,s$^{-1}$; Chevalier \& Irwin 2012), the spectral X-ray evolution as a function of time can be used to probe the column density above the shock at any given time, and to deduce the density profile outside the shocked regions. We also argue that in some cases, if the CSM has steep density profiles (e.g., SN\,2010jl), it may be possible to detect radio emission. Finally, we note that Katz et al.\ (2011) and Murase et al.\ (2011) predict that the collisionless shocks will generate TeV neutrinos. These particles will be able to escape when the collisionless shock begins. The detection of such neutrinos using IceCube (Karle et al.\ 2003) will be a powerful tool to test this theory and explore the physics of collisionless shocks.
| 12
| 6
|
1206.0748
|
1206
|
1206.2784_arXiv.txt
|
Based on astrophysical constraints derived from Chandrasekhar's mass limit for white dwarfs, we study the bounds imposed on the parameters of unparticle-inspired gravity for energy scales $\Lambda_U > 1 \; {\rm TeV}$ and scaling dimensions $d_U \approx 1$.
|
Many proposals for explaining the apparent shortcomings of the Standard Model have been advanced. The Unparticle Model proposed by Georgi \cite{bib:georgi_2007} aimed to include in the Standard Model massive but scale-invariant particles, sharing the same physics as their scale-dependent counterparts. These objects, called `unparticles', could play an important role in low-energy physics \cite{bib:PhysRevLett.100.031803}, since the model implies that unparticles can be exchanged between massive particles, leading to a new force called `ungravity'. This ``fifth'' force would add a perturbation term to the newtonian gravitational potential, although the exact potential cannot be obtained because the distance at which the perturbed potential matches the newtonian expression needs to be known. In order to bypass this limitation, the perturbed potential has been assumed to be of the form \cite{bib:PhysRevLett.100.031803} \begin{equation} V(r) = -\frac{GM}{2r}\left[ 1 + \left( \frac{R_G}{r} \right)^{2d_U-2} \right], \label{eq:newtonian_perturbed_potential} \end{equation} where $d_U$ (the scaling dimension of the unparticle operator $O_U$) is $\approx 1$, as a reasonable approximation, and $R_G$ is the characteristic length scale of ungravity, given by \begin{eqnarray} R_G & = & \frac{1}{\pi \Lambda_U} \left( \frac{M_{Pl}}{M_*} \right)^{1/(d_U - 1)} \times \nonumber\\ & \times & \left[ \frac{2(2-\alpha)}{\pi} \frac{\Gamma(d_U + 1/2)\Gamma(d_U - 1/2)}{\Gamma(2 d_U)} \right]^{1/(d_U - 1)}, \label{eq:R_G_definition} \end{eqnarray} where $\Lambda_U$ is the energy scale of the unparticle interactions, $M_{Pl} = 2.4 \times 10^{18}\,{\rm GeV}$ is the Planck mass and $\alpha$ is a constant dependent on the type of propagator considered. The problem addressed in this work is to determine the bounds on the mass of the interaction (un)particle $M_{*}$ with $d_U \simeq 1$. For this purpose, a suitable quasi-newtonian gravitational system needs to be studied and compared with the pure newtonian results.

A first study of this regime by Bertolami, P\'aramos and Santos \cite{bib:bertolami_2009} addressed the stellar equilibrium problem, deriving a perturbed Lane-Emden equation further applied to the Sun. They explored the well-known similarity of the full stellar structure to an $n = 3$ polytropic model, and derived limits from the maximum allowed uncertainty in the central temperature $\Delta T_{c}/T_{c} = 0.06$. In spite of the successful derivation of meaningful limits on the unparticle parameters, it is known that the detailed structure of the Sun is actually quite complicated, and many physical factors have to be considered beyond Chandrasekhar's simplest polytropic model \cite{bib:chandrasekhar_1967}. Therefore, it is worth considering another very well-known system of which the Chandrasekhar theory gives an even better representation: the white dwarf sequence. We shall show below that an important feature of these sequences (the maximum mass) is sensitive to the unparticle quantities and allows us to impose strong limits on them.
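Equation (\ref{eq:R_G_definition}) can be evaluated numerically in a few lines; a minimal sketch for an illustrative parameter point (the values of $\Lambda_U$, $M_*$, $d_U$ and $\alpha$ below are placeholders, not results of this work):

\begin{verbatim}
# Direct evaluation of the ungravity length scale R_G of Eq. (2).
# Parameter values (Lambda_U, M_star, d_U, alpha) are illustrative
# placeholders, not results of this paper.
from math import gamma, pi

M_Pl = 2.4e18          # Planck mass [GeV]
Lambda_U = 1.0e3       # unparticle energy scale [GeV] (1 TeV)
M_star = M_Pl          # interaction mass scale (assumed)
d_U = 1.1              # scaling dimension (must differ from 1 here)
alpha = 1.0            # propagator-dependent constant (placeholder)

bracket = (2.0*(2.0 - alpha)/pi) * gamma(d_U + 0.5) \
          * gamma(d_U - 0.5) / gamma(2.0*d_U)
R_G = (1.0/(pi*Lambda_U)) * (M_Pl/M_star)**(1.0/(d_U - 1.0)) \
      * bracket**(1.0/(d_U - 1.0))
print("R_G = %.3e GeV^-1" % R_G)  # multiply by hbar*c for meters
\end{verbatim}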
|
We have shown in this work that quite strong limits on the unparticle parameters can be obtained by using a simple form of the polytropic theory of Chandrasekhar, adding a perturbation to the Lane-Emden equation, as first obtained by Bertolami, P\'aramos and Santos \cite{bib:bertolami_2009}, and applying it to the white dwarf sequences. The key point elaborated here is that a change in the unparticle parameters would affect the maximum mass allowed for white dwarfs, and we therefore explored this characteristic in order to limit the values of such parameters. The requirement that the maximum mass can be neither too small (because it would conflict with a few massive stars \cite{bib:2007MNRAS.375.1315K}) nor too big (because it would lead to unobserved supermassive white dwarfs) limits the values of $M_{*}$ to a confidence range of $0.1 M_{Pl} < M_{*} < 1.6 M_{Pl}$ from this analysis alone for the case $d_U \gtrsim 1$. For the case $d_U \lesssim 1$, the mass-radius relation gives only masses bigger than the canonical value. Considering that to date there is no observation of white dwarfs with such high masses, this analysis may be interpreted to mean that values of $d_U < 1$ are not allowed.

Following a different approach, based on a cosmological scenario, Bertolami and Santos \cite{bib:bertolami_santos_2009} considered the variation of the gravitational coupling at the time of big bang nucleosynthesis, tensor exchange and the scaling dimension $d_U = 1.1$, and found $M_*$ to be $> 0.05 M_{Pl}$, which is very close to the bounds found for $1 < d_U < 1.06$. Other works studying complementary bounds \cite{bib:bertolami_2009, bib:PhysRevLett.99.141301, bib:1126-6708-2007-12-033, bib:Deshpande2008888, bib:JR2008561, bib:1475-7516-2009-03-019} could be combined to address the viability of a general unparticle model, unless one can manage to evade the bounds altogether. Even if so, a general argument to constrain the admissible perturbations to the newtonian potential can be made {\it via} the perturbed Lane-Emden equation, resorting to the observed massive white dwarfs.
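For reference, the unperturbed $n = 3$ Lane-Emden equation underlying the Chandrasekhar mass can be integrated numerically in a few lines. The sketch below treats only the standard (unperturbed) case, of which the ungravity version used in this work is a perturbation:

\begin{verbatim}
# Integrate the unperturbed n=3 Lane-Emden equation
#   theta'' + (2/xi) theta' = -theta^3,
# whose first zero xi_1 and the value -xi_1^2 theta'(xi_1) fix the
# Chandrasekhar mass. The ungravity term is omitted here.
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(xi, y):
    theta, dtheta = y
    return [dtheta, -theta**3 - 2.0*dtheta/xi]

def hit_zero(xi, y):
    return y[0]          # stop when theta crosses zero
hit_zero.terminal = True

sol = solve_ivp(lane_emden, [1e-6, 20.0], [1.0, 0.0],
                events=hit_zero, rtol=1e-10, atol=1e-12)
xi1 = sol.t_events[0][0]
dtheta1 = sol.y_events[0][0][1]
print("xi_1 = %.4f, -xi1^2 theta' = %.4f" % (xi1, -xi1**2*dtheta1))
# Standard n=3 values: xi_1 ~ 6.897, -xi1^2 theta'(xi1) ~ 2.018.
\end{verbatim}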
| 12
| 6
|
1206.2784
|
1206
|
1206.5671_arXiv.txt
|
The effects of acoustic wave absorption, mode conversion and transmission by a sunspot on helioseismic inferences are widely discussed, but accounting for them has proved difficult for lack of a consistent framework within helioseismic modelling. Here, following a discussion of the problems that near-surface magnetohydrodynamics poses through a complex interplay of radiative transfer, measurement issues, and MHD wave processes, I present some possibilities entirely from observational analyses based on imaging spectropolarimetry. In particular, I present some results on wave evolution as a function of observation height and inclination of the magnetic field to the vertical, derived from a high-cadence imaging spectropolarimetric observation of a sunspot and its surroundings using the instrument IBIS (NSO/Sac Peak, USA). These observations were made in magnetically sensitive (Fe~I 6173~\AA) and insensitive (Fe~I 7090~\AA) upper photospheric absorption lines. Wave travel time contributions from within the photospheric layers of a sunspot estimated here would then need to be removed from the inversion modelling procedure, which does not have the provision to account for them.
|
Developments in sunspot seismology trace back to the original suggestion by \citet{1982Natur.297..485T} that interactions between sunspots and helioseismic p modes could be used to probe the sub-surface structure of sunspots. The analyses that followed \citet{1982Natur.297..485T} focussed mainly on changes in the frequency - wavenumber spectrum ($\nu $ - k) and in the modal power distribution. These studies led to the discovery of 'absorption' of p modes by sunspots \citep{1987ApJ...319L..27B}: about 50\% of the flux of acoustic wave energy impinging on a sunspot is not observed to return to the quiet Sun. Development of several local helioseismic techniques, viz. the ring diagram analysis \citep{1988ApJ...333..996H}, helioseismic holography \citep{1990SoPh..126..101L} and time-distance helioseismology \citep{1993Natur.362..430D}, has since brought in new ways of probing the subsurface structure and dynamics of sunspots. However, the question of whether a sunspot is formed, in the sub-surface layers, of a monolithic flux tube or of a cluster of flux tubes still remains to be answered. An answer to this question would also address the dynamics of heat and material flow in and around sunspots, and hence would have far reaching implications for the magnetohydrodynamics of solar and stellar magnetism.

Applications of time-distance helioseismology appeared as a promising avenue with its 3-dimensional tomographic images of flow and sound speed structures beneath sunspots \citep{2000SoPh..192..159K,2001ApJ...557..384Z}. These early results showed an increased sound speed region extending from about 4 Mm down to about 18 Mm with a maximum change of about 1 - 2 \%, while the near surface layers in the 1 - 3 Mm depth range show a decrease in sound speed of similar magnitude. The flow pattern \citep{2001ApJ...557..384Z} consists of a shallow (1.5 - 3.0 Mm) converging flow that feeds a strong downflow beneath the sunspot \citep{1996Natur.379..235D} up to depths of about 5 Mm. Though these results have features indicative of the cluster model \citep{1979ApJ...230..905P}, new developments and improvements in several different fronts in local helioseismology have served to emphasise the inadequacy of such analyses \citep{gizonetal10}.

In contrast to results from time-distance helioseismology, studies based on phase sensitive holography \citep{2000SoPh..192..307B} have shown phase shifts of waves consistent with a faster propagation in the near surface layers, in direct correlation with the surface magnetic proxies (e.g. LOS magnetogram signals), and which decrease monotonically with depth, becoming undetectable at layers deeper than about 5 Mm. Recent new ways of travel time measurements and inversions \citep{2010SoPh..267....1M,2011A&A...530A.148S} show that the moat outflows around sunspots extend much deeper (up to about 4 - 5 Mm). These new developments have brought to the fore the dominant direct interactions between acoustic waves and magnetic fields, which leave too large a signal in measurements to be treated with the conventional methods of seismic inversions that club such effects into thermal perturbations.
The early contentions that p mode absorption by sunspots could be used to probe them have thus come full circle to the realization, through theoretical attempts at explaining the above surface effects \citep{2003MNRAS.346..381C,2005SoPh..227....1C, 2006ASPC..354..244S}, that these are the very processes that need to be accounted for before we proceed further in the application of the later developed local helioseismic techniques.
|
Almost all time-distance helioseismic analyses proceed under the working assumption that wave signals at observation heights are evanescent, and hence that oppositely directed wave paths involving photospheric reflections at two separated points are of identical path length. This assumption is basic to the inferences on flows and wave speed from travel time differences and mean travel times, respectively. In an early theoretical study, accompanied by attempts to model the helioseismic observations of \citet{braun97}, \citet{1998ApJ...492..379B} showed the influences of both the $p$-mode forcing of, and spontaneous emissions by, sunspots on acoustic wave travel times. Our analyses here have yielded transparent observational proofs for both effects, for the first time, with important new perspectives: (1) the process of transformation of incident acoustic waves into propagating (magneto)-acoustic waves up through the magnetic field happens in a coherent manner allowing a smooth evolution of time-distance correlations and, in agreement with several recent theoretical and numerical studies \citep{cally05,2005SoPh..227....1C,schunkeretal06}, this process depends on the inclination angle ($\gamma$) of the magnetic field to the vertical, and (2) outgoing waves from acoustic sources located just beneath the sunspot photosphere add important additional contributions to both mean travel times and travel time differences. Our results have also shown observational prospects for consistently accounting for the above effects in sunspot seismology, viz. the indispensability of imaging spectroscopy to extract wave fields so as to be able to correctly account for the wave evolution within the directly observable layers of the sunspot atmosphere.
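At its core, the travel-time measurement referred to above is a lag estimate from the temporal cross-correlation of wave signals at two points. A minimal synthetic sketch (the broadband signals below are artificial stand-ins for observed Doppler time series, not IBIS data):

\begin{verbatim}
# Minimal time-distance measurement: estimate the wave travel time
# between two points as the lag maximizing their cross-correlation.
import numpy as np

dt = 45.0                  # cadence [s] (assumed)
n = 640                    # 8-hour series
rng = np.random.default_rng(0)
# Broadband signal (boxcar-smoothed noise) common to both points:
base = np.convolve(rng.normal(size=n), np.ones(5)/5.0, mode="same")
lag_true = 4               # 4 samples = 180 s (assumed)
sig1 = base + 0.3*rng.normal(size=n)
sig2 = np.roll(base, lag_true) + 0.3*rng.normal(size=n)

cc = np.correlate(sig2 - sig2.mean(), sig1 - sig1.mean(), "full")
lags = np.arange(-n + 1, n)
print("measured travel time: %+.0f s" % (lags[np.argmax(cc)] * dt))
\end{verbatim}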
| 12
| 6
|
1206.5671
|
1206
|
1206.2926_arXiv.txt
|
Large Scale Structure Surveys have the potential of becoming the leading cosmological observable in the next decade. They contain a tremendous amount of cosmological information. If we were able to extract information from all the modes that go from the horizon scale $\sim 10^4$~Mpc to the non-linear scale $\sim 10$ Mpc, we would obtain about \be \left(\frac{10^4}{10}\right)^3\sim 10^9 \ee independent modes. The Planck satellite in comparison has about $(2 \times 10^3)^2\sim 10^6$ modes. Of course, accessing all this information is much harder than for the CMB, due to the short scale non-linearities. There are several aspects to this problem. The first problem is related to our currently limited understanding of the evolution of dark matter on large scales. Non-linear corrections are very important even on scales larger than $10$ Mpc, because modes of different wavelengths couple to each other. Understanding these corrections is a problem that affects all large scale structure observables. There are then two additional issues that affect most, but not all, observables. One is the fact that most dark matter is clumped in very non-linear structures (dark matter halos); and the other is the fact that what we often observe are galaxies, and not just dark matter halos, and not even dark matter long wavelength perturbations. The solution to these two last problems requires the correct understanding of the so-called halo and galaxy biases. These two problems, while important and deeply interesting in their own right, are very astrophysical in nature, and we do not address them here. Instead, here we try to address in a rigorous way the first problem, that is, the prediction of the dark matter distribution on scales larger than the non-linear scale. The fact that the universe is characterized by two well separated scales, the Hubble scale, over which perturbations are linear, and the non-linear scale, which indeed characterizes the scale over which gravitational collapse overtakes the expansion of the universe, makes the problem amenable to an Effective Field Theory (EFT) treatment. An effective theory is a description of a system that captures all the relevant degrees of freedom and describes all the relevant physics at a macroscopic scale of interest. The short-distance (so-called ultraviolet or `UV') physics is integrated out and affects the effective field theory only through various couplings in a perturbative expansion in the ratio of the microphysical UV scale(s) to the macroscopic scale being probed. This technique has been systematically used in particle physics and condensed matter physics for many years, but has not been fully used in astrophysics and cosmology. An important early (and recent) application of these techniques in cosmology is the so-called Effective Field Theory of Inflation~\cite{Cheung:2007st}. In a similar vein, understanding the large scale properties of the universe is very important, and is ready for a careful analysis. Indeed, the situation in the universe is very similar to what happens in the chiral Lagrangian that describes pion interactions in Particle Physics. At very low energies, pions are weakly interacting. These interactions and the size of the fluctuations grow with energy until we hit the Quantum ChromoDynamics (QCD) scale, $\sim4\pi F_\pi$, at which the pions become strongly coupled.
The Chiral Lagrangian~\cite{Weinberg:1996kr} offers the correct effective theory allowing arbitrarily precise predictions, up to non-perturbative effects, at energies $E\ll 4\pi F_\pi$. In our universe, matter fluctuations are small at large distances and become larger and larger as we move up to the non-linear scale. Since the size of the non-linear terms, which are nothing but interactions, grows with the size of the fluctuations, we see that at long distances the universe should be described by some weakly coupled degree of freedom, which becomes more and more interacting as we move closer to the non-linear scale, at which point the fluctuations become strongly coupled. The coupling constant should indeed be represented by the ratio of the considered wavenumber $k$ over the wavenumber at the non-linear scale $k_{NL}$: $k/k_{NL}$. Notice that indeed the size of the density perturbations $\delta\rho/\rho$ on a scale $k$ scales as $(k/k_{NL})^2$. This scaling suggests the existence of an effective field theory that should allow us to describe with arbitrary precision the universe on scales $k\ll k_{NL}$, very much as the Chiral Lagrangian represents the right effective field theory to describe pion dynamics to arbitrary precision.

Such an effective theory would have particularly relevant observational implications. Already now, large scale structure surveys such as BOSS or DES are measuring the galaxy-galaxy correlation function, the so-called Baryon Acoustic Oscillations (BAO), at scales of order 100 Mpc. Next generation experiments such as LSST will measure this quantity at about percent precision. These observations contain a huge amount of information on Dark Energy and on Inflation, through for example the non-Gaussianity of the primordial perturbations. The BAO scale is about one order of magnitude longer than the non-linear scale, where $\delta\rho/\rho\sim 10^{-2}$, and therefore physics at this scale {\it must} be describable by rigorous perturbative methods. The alternative is to rely either on time-consuming numerical simulations, or on analytical approaches that however are limited by some irreducible mistake that is hard to quantify precisely. In an ideal situation, numerical $N$-body simulations should be quickly done only at {\it small} scales, to describe phenomena affected by gravitational collapse, rather than running large simulations to describe weakly coupled physics. This has indeed been recently elucidated in the context of the bias, where it was shown that in order to derive the bias on large scales one needs to run very small simulations in a curved universe~\cite{Baldauf:2011bh}. This line of reasoning is indeed very similar to what happens in QCD, where we perform lattice simulations to measure quantities relevant at energies above around one GeV, while we use the chiral Lagrangian for predictions at smaller energies.

The effective field theory (EFT) of the long distance universe was initially developed by some of us in~\cite{Baumann:2010tm}. It was noticed that by concentrating on length scales longer than the non-linear scale, the universe is described by a fluid with small perturbations. The equations of motion of this fluid are organized in a derivative expansion in the ratio of the considered wavenumber over the wavenumber associated to the non-linear scale $k_{NL}\sim 1/10$ Mpc$^{-1}$.
At leading order in derivatives, the fluid has the stress tensor of an ordinary imperfect fluid, characterized by a speed of sound for the fluctuations, a bulk and a shear viscosity, plus a stochastic pressure component. This makes our approach different from the `standard' approaches at both a quantitative and a qualitative level. The purpose of this paper is to further develop this effective theory and be able to make observational predictions. The parameters that characterize the fluid, the speed of sound, the bulk viscosity, etc., are determined by the microphysics at the non-linear scale, which we call UV, and cannot be derived from within the effective theory. They have to be either fit to observations, or measured in {\it small} $N$-body simulations. At this point, the EFT becomes predictive. Again, this is very similar to what happens in QCD, where one can measure the pion coupling constant~$F_\pi$ in lattice simulations, after which the Chiral Lagrangian becomes predictive. Our basic method and key results are summarized as follows: \begin{itemize} \item By smoothing the collisionless Boltzmann equation for non-relativistic matter in an expanding FRW background on a length scale $\Lambda^{-1}$, we establish the continuity and Euler equations for an effective fluid. The Euler equation includes an effective stress-tensor $[\tau^{ij}]_\Lambda$ that is sourced by the short modes $\delta_s$. \item By taking correlation functions of the stress tensor in the presence of long wavelength fluctuations, we define an effective stress-tensor that is only a function of the long wavelength fluctuations. It takes the form \bea [\tau^{ij}]_\Lambda\amp=\amp \delta^{ij}p_b+\rho_b\Bigg{[} c_s^2\,\delta^{ij}\delta_l-{c_{bv}^2\over Ha}\delta^{ij}\,\partial_k v_l^k\nonumber\\\amp-\amp{3\over4}{c_{sv}^2\over Ha}\left(\partial^jv_l^i+\partial^iv_l^j-{2\over3}\delta^{ij}\,\partial_kv_l^k\right) \Bigg{]}+\ldots\ , \eea where the various parameters $c_s^2,c_{bv}^2$ etc. are defined by proper correlation functions of short wavelength and long wavelength fluctuations. \item By directly evaluating the stress tensor from the microphysical theory, i.e., from $N$-body simulations, and computing the appropriate correlation functions, we calculate the value of the fluid parameters. For a $\Lambda$CDM universe with standard cosmological parameters at redshift $z=0$ and smoothing scale $\Lambda=1/3 h$ Mpc$^{-1}$, we find \bea c_{\rm comb}^2{(\Lambda=1/3)} &=(0.96 \pm 0.1) \times 10^{-6}\ c^2\, , \eea where $c^2_{\rm comb}$ is the combination of $c_s^2, c_{bv}^2$ and $c_{sv}^2$ that is relevant for the leading non-linear correction to the power spectrum (one loop in perturbation theory), and $c$ is the speed of light. \item Alternatively, by directly matching the couplings of the effective fluid to the measured power spectrum, we obtain $c_{\rm comb}^2{(\Lambda=1/3)}\simeq 0.9\times 10^{-6}c^2$ in remarkable agreement with the direct measurement from $N$-body simulations. \item The fluid parameters carry $\Lambda$ dependence (as does any `bare' parameter in an interacting field theory). This cutoff dependence is taken to cancel against the cutoff dependence of the loop integral. As usual in effective field theories, we `renormalize' the theory by sending the cutoff $\Lambda\to\infty$ and carefully changing the fluid parameters so that predictions at low wavenumbers are not changed in the process.
The finite values of the fluid parameters such as $c_{\rm comb}^2$ in the $\Lambda\to\infty$ limit are a direct measure of the irreducible error made by standard approaches that approximate the dark matter on large scales as a pressureless ideal fluid. This error is not removed even by solving the equations for a pressureless ideal fluid non-linearly, as the various perturbative approaches attempt to do, simply because the equations they solve are not the correct ones. Our approach, in contrast, should reach arbitrary precision, at least in principle, up to non-perturbative corrections. \item The pressure and viscosity dampen the power spectrum by acting in opposition to gravity, which makes sense intuitively. This helps explain the observed shape of the baryon acoustic oscillations in the power spectrum relative to standard perturbation theory (SPT). \item More precisely, at one loop the density-density power spectrum receives a correction $\delta P$ from the fluid parameters, which we find to be \beq \delta P(k)\sim - c_{\rm comb}^2 \frac{k^2}{H^2} P_{11,l}(k) \label{Pcorr}\eeq where $P_{11,l}(k)$ is the linear power spectrum. Since this is negative and grows as a function of $k$, the power spectrum is reduced compared to SPT at high $k$'s, improving the agreement with the full non-linear spectrum. \item We find that already at one loop, the computed power spectrum agrees at the percent level with the non-linear one up to $k\sim 0.24\, h$ Mpc$^{-1}$. This suggests that in large scale structure surveys we should be able to extract primordial information all the way to at least such a high wavenumber, greatly improving, with respect to the CMB, our knowledge of the origin of the universe. \end{itemize} Over the years there has been a large body of relevant work on understanding perturbatively the large scale clustering of dark matter. An incomplete sample of these works is given by~\cite{Bernardeau:2001qr,Jain:1993jh,Shoji:2009gg,Jeong:2006xd,Crocce:2005xy,Crocce:2007dt,Matsubara:2007wj,McDonald:2006hf,Taruya:2007xy,Izumi:2007su,Matarrese:2007aj,Matarrese:2007wc,Nishimichi:2007xt,Takahashi:2008yk,Carlson:2009it,Fitzpatrick:2009ci,Enqvist:2010ex,Pietroni:2011iz, Tassev:2011ac,Tassev:2012cq,Tassev:2012hu}.
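As a rough numerical illustration of Eq.~(\ref{Pcorr}), the following sketch applies the quoted counterterm scaling to a linear power spectrum. This is a minimal sketch under stated assumptions, not the authors' pipeline: the order-one prefactor and the exact $k^2/H^2$ convention are assumptions here, $c_{\rm comb}^2$ is set to the value measured above, and a toy power law stands in for the output of a Boltzmann code such as CAMB.
\begin{verbatim}
import numpy as np

# Minimal sketch: apply the leading EFT counterterm correction
#   delta_P(k) ~ -c_comb^2 * (k c / aH)^2 * P_lin(k)
# to a linear power spectrum.  Order-one prefactors are omitted, and the
# toy P_lin below is a placeholder for a Boltzmann-code output.

c_comb2 = 0.96e-6        # measured combination, in units of c^2
c_over_aH = 2998.0       # c/(a_0 H_0) in h^-1 Mpc at z = 0

def eft_correction(k, P_lin):
    """Leading counterterm correction; k in h/Mpc, P_lin in (Mpc/h)^3."""
    return -c_comb2 * (k * c_over_aH) ** 2 * P_lin

k = np.logspace(-2, np.log10(0.24), 50)   # h/Mpc, up to the quoted reach
P_lin = 2.0e4 * (k / 0.05) ** (-1.5)      # toy spectrum in (Mpc/h)^3
P_eft = P_lin + eft_correction(k, P_lin)
print(P_eft[-1] / P_lin[-1])              # suppression near k ~ 0.24 h/Mpc
\end{verbatim}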
|
Large scale structure surveys have the potential of becoming the next leading observational window on the physics of the early universe, greatly improving on what we are already learning from the CMB. Large scale structure physics is however much more complicated than the CMB due to the presence of large matter clustering at small scales. Since at the non-linear level different scales are coupled, these non-linearities affect even large scale perturbations that are mildly non-linear and so potentially treatable in a perturbative manner. In this paper we have developed the effective field theory of cosmological large scale structures in order to achieve reliable predictability. Calculations in the effective theory are performed as an expansion in $k/k_{NL}$. The effective field theory is a cosmological fluid description for cold dark matter, and by extension all matter, including baryons, which trace the dark matter. The microphysical description is in terms of a classical gas of point particles, which we have smoothed at the level of the Boltzmann equation. We have exhibited and computed the various couplings that appear in the effective field theory, namely the pressure and viscosity, by matching to $N$-body simulations, finding $c_s^2\sim 10^{-6}\,c^2$, etc. We have developed the perturbative expansion for the power spectrum, which we have carried out to ${\cal O}(\delta_l^4)$. The fluid parameters arise from UV modes and alter standard perturbation theory. We have found that the corrections lead to a power spectrum in percent-level agreement with the full non-linear spectrum as obtained by CAMB up to $k\simeq 0.24\, h$~Mpc$^{-1}$. It is a peculiar coincidence that $k\simeq0.24\, h$ Mpc$^{-1}$ is also the maximum $k$ at which the popular technique of Renormalized Perturbation Theory (RPT)~\cite{Crocce:2005xy} works. While RPT is a very nice technique to compute non-linear corrections to the power spectrum, we stress that our approach is different at both a qualitative and a quantitative level. At a qualitative level, RPT tries to solve, as exactly as possible, the non-linear equations for a pressureless ideal fluid. In our approach, instead, we solve the non-linear equations for a different fluid. This has quantitative effects, as shown by the fact that as $\Lambda\to\infty$ our effective parameters such as $c_{\rm comb}^2$ do not vanish. What is most important about our EFT is that it can be improved. By performing higher order computations and by adding suitable counterterms, arbitrary precision in reconstructing the power spectrum, or indeed any dark matter observable, can in principle be achieved by going to a sufficiently high order in perturbation theory, on scales $k\lesssim k_{NL}$. Techniques such as RPT or the renormalization group approach~\cite{Matarrese:2007aj}, for example, remain very nice techniques to perturbatively solve non-linear equations, resumming many diagrams. It would be interesting to apply those techniques to solve the equations of our EFT. We leave this to future work. The effective field theory approach to large scale structure formation is complementary to $N$-body simulations, providing an elegant fluid description. This provides intuition for various non-linear effects, as well as computational efficiency, since the numerics required to measure the fluid parameters are expected to be computationally less expensive than a full scale simulation. 
Indeed we were able to achieve excellent agreement with the small down-sample of the full simulation we examined here. Of course, since the couplings are UV sensitive, some form of $N$-body simulation is still required to fix the physical parameters, either by matching to the stress-tensor directly or to observables. But this matching is only for a small number of physical parameters at some scale, after which the constructed field theory is predictive at other scales. There are several possible extensions of this work. A first extension is to go beyond the one-loop order to two loops, or higher. This will require the measurement of several new parameters that enter the effective stress-tensor at higher order, including its stochastic terms. Another extension is to compute the velocity fields and to include the small but finite contributions from vorticity, or to compute higher order $N$-point functions, which can probe non-Gaussianity. Finally, another extension is to consider different cosmologies; in this work we have presented results for dark energy in the form of a cosmological constant, but one could equally consider other models of dark energy. This would presumably alter the values of the fluid parameters in a way that could be either measured in new simulations or determined from observations. Hopefully, our effective field theory for large scale structures will help us use large scale structure surveys to uncover the physics of the beginning of the universe.
| 12
| 6
|
1206.2926
|
|
1206
|
1206.4057_arXiv.txt
|
We obtain Keck HIRES spectroscopy of HVS5, one of the fastest unbound stars in the Milky Way halo. We show that HVS5 is a $3.62\pm0.11$~\msun\ main sequence B star at a distance of $50\pm5$~kpc. The difference between its age and its flight time from the Galactic center is $105\pm18$(stat)$\pm$30(sys)~Myr; flight times from locations elsewhere in the Galactic disk are similar. This $10^8$~yr `arrival time' between formation and ejection is difficult to reconcile with any ejection scenario involving massive stars that live for only $10^7$~yr. For comparison, we derive arrival times of $10^7$~yr for two unbound runaway B stars, consistent with a disk origin in which ejection results from a supernova in a binary system or dynamical interactions between massive stars in a dense star cluster. For HVS5, ejection during the first $10^7$~yr of its lifetime is ruled out at the 3-$\sigma$ level. Together with the $10^8$~yr arrival times inferred for three other well-studied hypervelocity stars, these results are consistent with a Galactic center origin for the HVSs. If the HVSs were indeed ejected by the central black hole, then the Galactic center was forming stars $\simeq$200~Myr ago, and the progenitors of the HVSs took $\simeq$100~Myr to enter the black hole's loss cone.
|
\citet{hills88} first predicted unbound ``hypervelocity'' stars (HVSs) as the inevitable consequence of 3-body interactions close to the tidal radius of a massive black hole. There is overwhelming evidence for a $4\times10^6$~\msun\ central black hole in the Milky Way \citep{ghez08, gillessen09}. Theorists expect that the black hole ejects $\sim$$10^{-4}$ HVSs yr$^{-1}$ \citep[e.g.][]{perets07}, which means there are thousands of HVSs in the outer halo. \citet{brown05} discovered the first HVS, a luminous B-type star traveling at twice the Galactic escape velocity at a distance of $\simeq$100 kpc, and \citet{brown12b} have subsequently discovered 15 more unbound B-type stars in their targeted HVS survey. Establishing the evolutionary state of the HVSs is important for determining their ages, distances, and flight times. We define the difference between a HVS's age and its flight time as the `arrival time' (\tarr), the time between its formation and ejection. In this paper we derive \tarr\ for both HVSs and unbound runaway stars. The arrival time provides a useful discriminant between proposed ejection mechanisms. If HVSs are ejected in three-body interactions with the Milky Way's central black hole \citep{hills88}, then the arrival times reflect the timescale for HVSs to achieve orbits that interact with the central black hole. For HVSs formed in the central region of the Galaxy, we expect \tarr$=0.1$--1~Gyr \citep{merritt04, wang04}. On the other hand, in both mechanisms for ejecting runaway stars from the Galactic disk -- a supernova in a binary system or a dynamical interaction among massive stars in a dense star cluster -- a maximum \tarr$\approx10$~Myr is set by the main sequence lifetime of $\gtrsim$10~\msun\ stars. Thus, measuring \tarr\ for an ensemble of HVSs should distinguish between a Galactic center and a Galactic disk origin. The evolutionary state of most known HVSs \citep{brown12b} is ambiguous because their effective temperatures and surface gravities are consistent with both old, evolved stars (blue horizontal branch stars) and short-lived main sequence stars. Thus we must turn to other measures to establish their nature. Metallicity is one possibility; we expect that recently formed stars should have solar or super-solar metallicities. Metallicity is inconclusive, however, given the observed metallicity distribution function of stars in the Milky Way. Projected stellar rotation \vsini\ is a better discriminant between evolved stars and main sequence stars. Blue horizontal branch stars have evolved through the giant branch phase and have median \vsini\ $=9$~\kms; the most extreme blue horizontal branch star rotates at 40~\kms\ \citep{behr03}. Late B-type main sequence stars, on the other hand, have median \vsini\ $=150$~\kms; the most extreme objects rotate at $\ge$350~\kms\ \citep{abt02, huang06a}. \citet{lockmann08} argue that HVSs may be spun-up by a binary black hole ejection, but there is presently no evidence for a binary black hole in the Galactic center. Close binaries that produce HVSs and runaways in the Milky Way may exhibit slower stellar rotation because of tidal synchronization; \citet{hansen07} predicts that late B-type HVSs ejected by the Hills mechanism should have \vsini\ $=70-90$ \kms. In any case, fast rotation is the signature of a main sequence star. Of the B-type HVSs discovered to date, only HVS3, HVS7, and HVS8 have been studied with high-resolution spectroscopy. 
In all cases they are main sequence B stars with 55$<$\vsini$<$260 \kms\ \citep{edelmann05, przybilla08, bonanos08, lopezmorales08, przybilla08b}. Moderate-dispersion spectroscopy of HVS1 suggests it has \vsini\ $=190$~\kms\ \citep{heber08b}, making it another short-lived B star. Here, we describe high resolution spectroscopy of HVS5, a $g=17.9$ mag star located at declination +67\arcdeg\ that is accessible only with Keck HIRES. HVS5 is a rapidly rotating 3.6~\msun\ main sequence B star. The difference between its age and its flight time from the Milky Way is $105\pm18$(stat)$\pm$30(sys)~Myr, inconsistent with ejection models involving massive stars. In Section 2 we describe the observations and stellar atmosphere analysis. In Section 3 we discuss the arrival times for the HVSs and unbound disk runaways. We conclude in Section 4.
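Since the arrival time is simply the difference between the stellar age and the flight time, with statistical errors combined in quadrature, the bookkeeping can be illustrated in a few lines. This is a back-of-the-envelope sketch, not the paper's analysis code: the age follows the value quoted for HVS5, while the flight time and its error are hypothetical values chosen only to reproduce the quoted $105\pm18$~Myr.
\begin{verbatim}
import numpy as np

# Back-of-the-envelope sketch: t_arr = t_age - t_flight, with statistical
# errors added in quadrature.  The flight-time numbers are hypothetical,
# chosen to be consistent with the quoted results.

t_age, sig_age = 170.0, 17.0       # Myr, from stellar evolution tracks
t_flight, sig_flight = 65.0, 6.0   # Myr, hypothetical illustrative values

t_arr = t_age - t_flight
sig_stat = np.hypot(sig_age, sig_flight)
print(f"t_arr = {t_arr:.0f} +/- {sig_stat:.0f} (stat) Myr")  # ~105 +/- 18
\end{verbatim}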
|
We describe Keck HIRES spectroscopy of HVS5, one of the fastest known HVSs, with a minimum Galactic rest frame velocity of $+663\pm3$~\kms. The observations reveal that HVS5 has a projected rotation of \vsini$=133\pm7$~\kms\ and is thus a main sequence B star. Comparing the measured \teff\ and \logg\ with stellar evolution tracks indicates that HVS5 is a $3.62\pm0.11$~\msun, $170\pm17$~Myr old star. Given its present distance and radial velocity, we calculate that HVS5's arrival time, the time between its formation and subsequent ejection, is \tarr$=105\pm18$(stat)$\pm$30(sys)~Myr. This timescale provides an interesting new constraint on the origin of unbound runaways and HVSs. Runaway B stars near the disk have \tarr$=0$--30~Myr, consistent with disk ejection scenarios involving a supernova in a binary system or a dynamical event among several massive stars. The B-type HVSs with known evolutionary states, on the other hand, have \tarr$=50$--100~Myr. This timescale is difficult to reconcile with any ejection mechanism that requires a massive star to produce the unbound ejection. The central black hole ejection scenario, however, allows for any \tarr. Thus, the derived arrival times for HVSs support the black hole ejection model. Future progress requires high resolution observations of other HVSs to constrain their ages and distances. The age distribution of HVSs has important implications for the epochs of star formation and the growth of the central black hole \citep{bromley12}. Combined with future proper motion measurements, these data will allow us to directly constrain the full space velocities and places of origin of the HVSs.
| 12
| 6
|
1206.4057
|
1206
|
1206.1444_arXiv.txt
|
Molecular clouds (MCs) are the typical regions of star formation in galaxies. Recent high-resolution observational studies in the Milky Way reveal that MCs exhibit an extremely complex, clumpy and often filamentary structure (e.g. Andr\'e et al. 2010, Men'shchikov et al. 2010), with column and spatial densities varying by many orders of magnitude. The detected large non-thermal linewidths, which scale with the size of the cloud or of its larger substructures (e.g. Larson 1981, Solomon et al. 1987, Bolatto et al. 2008), have been interpreted as indicators of the presence of supersonic turbulence. Numerous works in the last two decades have demonstrated that this supersonic turbulence is among the primary physical agents regulating the birth of stars. It creates a complex network of interacting shocks, where dense cores form at the stagnation points of convergent flows. Thus, although at large scales turbulence can support MCs against contraction, at small scales it can provoke local collapse of the emerging prestellar cores. Hence, the timescale and efficiency of protostar formation depend strongly on the wavelength and strength of the turbulent driving source (Klessen, Heitsch \& Mac Low 2000, Krumholz \& McKee 2005). An important structural parameter in analytical and semi-analytical models of star formation in MCs (e.g. Padoan \& Nordlund 2002, Hennebelle \& Chabrier 2009, Veltchev, Klessen \& Clark 2011) is the probability density function ($\rho$-PDF), which gives the probability of measuring a given density $\rho$ in a cloud volume $dV$. As demonstrated by many numerical simulations, its shape is approximately lognormal in isothermal, turbulent media that are not significantly affected by self-gravity (e.g. V\'azquez-Semadeni 1994, Padoan, Nordlund \& Jones 1997, Ostriker, Gammie \& Stone 1999, Federrath, Klessen \& Schmidt 2008). This lognormality should correspond to the same feature in the observed probability distributions of the {\it column} density $N$ ($N$-PDFs) in MCs, due to the correlation between the local values of $\rho$ along a single line of sight (V\'azquez-Semadeni \& Garc\'ia, 2001). On the other hand, it has been argued that the PDF displays scale-dependent features and/or that its shape evolves significantly in time (Federrath, Klessen \& Schmidt 2008, Pineda et al. 2010). Lognormality is typical in the low-density, predominantly turbulent regime, whereas at higher column densities a power-law tail emerges. Such a high-density power-law (PL) tail is a characteristic feature of $N$-PDFs in evolved MCs where star formation processes already occur or are just starting (Kainulainen et al. 2009, Froebrich \& Rowles 2010). This is confirmed as well by analysis of numerical simulations of clouds dominated by gravity (Ballesteros-Paredes et al. 2011). A consistent theory of cloud structure must take into account the characteristics sketched above, and probes of the $N$- and $\rho$-PDFs can be used to constrain analytical star formation theories. In this work a statistical approach is suggested to derive the mass function of high-density (prestellar) cores that are possible progenitors of stars. Our starting point is a description of the PL tail of the $\rho$-PDF that is representative of the MC regions populated by these objects.
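To make the PDF description above concrete, the sketch below evaluates a lognormal $\rho$-PDF joined to an exponential (i.e.~power-law in $\rho$) tail above a transition log density. The functional forms are standard, but the specific width, transition point and matching conditions are assumptions chosen for illustration, not the model developed in this work.
\begin{verbatim}
import numpy as np

# Illustrative sketch (assumed parameters, not the model of this work):
# a lognormal PDF in s = ln(rho/rho_0) for the turbulent regime, joined
# continuously (in value and slope) to a power-law tail p ~ exp(-q s)
# above a transition log density s_t.

sigma_s = 1.0   # lognormal width, set by the turbulent Mach number
s_t = 2.0       # assumed transition to the gravity-dominated tail

def lognormal(s, sigma):
    s0 = -0.5 * sigma ** 2   # mean shift preserving <rho> = rho_0
    return np.exp(-(s - s0) ** 2 / (2 * sigma ** 2)) \
        / np.sqrt(2 * np.pi * sigma ** 2)

def pdf(s):
    q = (s_t + 0.5 * sigma_s ** 2) / sigma_s ** 2   # slope match at s_t
    p_t = lognormal(s_t, sigma_s)
    return np.where(s < s_t, lognormal(s, sigma_s),
                    p_t * np.exp(-q * (s - s_t)))

s = np.linspace(-4.0, 8.0, 400)
ds = s[1] - s[0]
print(pdf(s).sum() * ds)   # close to 1 (normalization not re-enforced)
\end{verbatim}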
| 12
| 6
|
1206.1444
|
||
1206
|
1206.1391_arXiv.txt
|
\setcounter{footnote}{6} HATSouth is the world's first network of automated and homogeneous telescopes that is capable of year-round 24-hour monitoring of positions over an entire hemisphere of the sky. The primary scientific goal of the network is to discover and characterize a large number of transiting extrasolar planets, reaching out to long periods and down to small planetary radii. HATSouth achieves this by monitoring extended areas on the sky, deriving high precision \lcs\ for a large number of stars, searching for the signature of planetary transits, and confirming planetary candidates with larger telescopes. HATSouth employs six telescope units spread over three prime locations with large longitude separation in the southern hemisphere (Las Campanas Observatory, Chile; HESS site, Namibia; Siding Spring Observatory, Australia). Each of the HATSouth units holds four 0.18\,m diameter f/2.8 focal ratio telescope tubes on a common mount producing an $8.2\arcdeg\times8.2\arcdeg$ field-of-view on the sky, imaged using four \ccdsize{4K} CCD cameras and Sloan $r$ filters, to give a pixel scale of 3.7\pxs. The HATSouth network is capable of continuously monitoring 128 square arc-degrees at celestial positions moderately close to the anti-solar direction. We present the technical details of the network, summarize operations, and present detailed weather statistics for the three sites. Robust operations have meant that on average each of the six HATSouth units has conducted observations on $\sim 500$ nights over a two-year time period, yielding a total of more than 1 million science frames at four minute integration time, and observing $\sim10.65$\,hours per day on average. We describe the scheme of our data transfer and reduction from raw pixel images to trend-filtered \lcs\ and transiting planet candidates. Photometric precision reaches $\sim 6$\,mmag at 4 minute cadence for the brightest non-saturated stars at $r\approx10.5$. We present detailed transit recovery simulations to determine the expected yield of transiting planets from HATSouth. We highlight the advantages of networked operations, namely, a threefold increase in the expected number of detected planets, as compared to all telescopes operating from the same site. \setcounter{footnote}{0}
|
\label{sec:introduction} Robotic telescopes first appeared about 40 years ago. The primary motivations for their development included cost efficiency, achieving consistently good data quality, and diverting valuable human time from monotonous operation into research. The first automated and computer-controlled telescope was the 0.2\,m reflector of Washburn Observatory \citep{mcnall:1968}. Another noteworthy development was the Automated Photometric Telescope \citep[APT;][]{boyd:1984} project, which achieved a level of automation that enabled more than two decades of unmanned operations. As computer technology, microelectronics, software, programming languages, and interconnectivity (Internet) have developed, remotely-operated or fully-automated (often referred to as autonomous) telescopes have become widespread \citep[see][for a review]{ct:2010}. A few prime examples are: the 0.75\,m Katzman Automatic Imaging Telescope \citep[KAIT;][]{filippenko:2001} finding a large number of supernovae; the Robotic Optical Transient Search Experiment-I (ROTSE-I) instrument containing four 0.11\,m diameter lenses, which, for example, detected the spectacular $V=8.9$\,mag optical afterglow of a gamma ray burst at a redshift of $z\approx1$ \citep{akerlof:1999}; the Lincoln Near Earth Asteroid Research \citep[LINEAR;][]{stokes:1998} and Near Earth Asteroid Tracking \citep[NEAT;][]{pravdo:1999} projects using 1\,m-class telescopes and discovering over a hundred thousand asteroids to date; the All Sky Automated Survey \citep[ASAS;][]{pojmanski:2002} employing a 0.1\,m telescope to scan the entire sky and discover $\sim 50000$ new variables; the Palomar Transient Factory \citep[PTF;][]{rau:2009} exploring the optical transient sky, finding on average one transient every 20 minutes, and discovering $\sim 1500$ supernovae so far; the Super Wide Angle Search for Planets \citep[SuperWASP;][]{pollacco:2006} and Hungarian-made Automated Telescope Network \citep[HATNet;][]{bakos:2004} projects employing 0.1\,m telescopes and altogether discovering $\gtrsim 100$ transiting extrasolar planets. To improve the phase coverage of time-variable phenomena, networks of telescopes distributed in longitude were developed. We give a few examples below. One such early effort was the Smithsonian Astrophysical Observatory's satellite tracker project \citep{whipple:1956,henize:1958}, using almost identical hardware (Baker-Nunn cameras) at 12 stations around the globe, including Cura\c{c}ao and Ethiopia. This network was manually operated. Another example is the Global Oscillation Network Group project \citep[GONG;][]{harvey:1988}, providing Doppler oscillation measurements for the Sun, using 6 stations with excellent phase coverage for solar observations ($|\delta|\!<23.5\arcdeg$). The Whole Earth Telescope \citep[WET;][]{nather:1990} uses existing (but quite inhomogeneous) 1\,m-class telescopes at multiple locations in organized campaigns to monitor variable phenomena \citep{provencal:2012}. The PLANET collaboration \citep{albrow:1998} employed existing 1\,m-class telescopes to establish a round-the-world network, leading to the discovery of several planets via microlensing anomalies. Similarly, RoboNet \citep{tsapras:2009} used 2\,m telescopes in Hawaii, Australia, and La Palma to run a fully automated network to detect planets via microlensing anomalies. 
ROTSE-III \citep{akerlof:2003} has been operating an automated network of 0.5\,m telescopes for the detection of optical transients, with stations in Australia, Namibia, Turkey and the USA. The study of transiting extrasolar planets (TEPs) has greatly benefited from the development of automated telescopes and networks. \citet{mayor:2009} and \citet{howard:2010,howard:2011} concluded that $\sim 1.7\%$ of dwarf stars harbor planets with radii between $3\rearth$ and $32\rearth$ and periods less than $20$\,d; such planets could be detected by ground-based surveys such as ours.\footnote{The choice of these limits is somewhat arbitrary, but does not change the overall conclusions. See \refsec{perf} for more details.} When coupled with the geometric probability that these planets transit their host stars as seen from the Earth, only $\sim 0.11\%$ of dwarf stars have TEPs with the above parameters. Further, in a brightness limited sample with e.g.~$r\!<\!12$\,mag, only $\sim 40\%$ of the stars are A5 to M5 dwarfs (enabling spectroscopic confirmation and planetary mass measurement), thus fewer than 1 in 2000 of the $r\!<\!12$\,mag stars will have a moderately large radius ($>3\,\rearth$), short period ($P<20$\,d) TEP. Consequently, monitoring of tens of thousands of stars at high duty cycle and with homogeneously optimal data quality is required to achieve a reasonable TEP detection yield. \begin{figure*}[!ht] \plotone{img/hatsouth2_lowres.eps} \caption{ Engineering model of the \hsfour\ unit, depicting the dome, telescope mount, optical tubes and CCDs. The asymmetric clamshell dome can open/close with the telescope in any position. The equatorial fork mount holds a large frame that supports the four astrographs and CCDs, tilted $\sim 4\arcdeg$ with respect to each other, and therefore capable of producing a mosaic image of $8\arcdeg\times8\arcdeg$. \label{fig:eng}} \end{figure*} To date approximately 140 TEPs have been confirmed, characterized with RVs to measure the planetary mass, and published.\footnote{ See http://exoplanets.org \citep{wright:2011} for the list of published planets, and www.exoplanet.eu \citep{schneider:2011} for a compilation including unpublished results. In this discussion we refer to the published planets, focusing only on those for which the RV variation of the star due to the planet has been measured. } These have been found primarily by photometric transit surveys employing automated telescopes (and networks in several cases) such as WASP \citep{pollacco:2006}, HATNet \citep{bakos:2004}, CoRoT \citep{baglin:2006}, OGLE \citep{udalski:2002}, {\em Kepler} \citep{borucki:2010}, XO \citep{mccullough:2005}, and TrES \citep{alonso:2004}. In addition, {\em Kepler} has found over 2000 strong planetary candidates, which have been instrumental in determining the distribution of planetary radii. Many ($\sim 40$) of these planetary systems have been confirmed or ``validated'' \citep[][and references therein]{batalha:2012}, although not necessarily by radial velocity measurements. While the sample of $\sim 140$ fully confirmed planets with accurate mass measurements is large enough to reveal tantalizing correlations among various planetary (mass, radius, equilibrium temperature, etc.) and stellar (metallicity, age) properties, given the apparent diversity of planets, it is still insufficient to provide a deep understanding of planetary systems. 
For only the brightest systems is it currently possible to study extrasolar planetary atmospheres via emission or transmission spectroscopy; the faintest system for which a successful atmosphere study has been performed is WASP-12, which has $V \approx 11.6$\,mag \citep{madhusudhan:2011}. Similarly, it is only for the brightest systems that one can obtain a high S/N spectrum in an exposure time short enough to resolve the Rossiter-McLaughlin effect \citep{holt:1893,schlesinger:1910,rossiter:1924,mclaughlin:1924}, and thereby measure the projected angle between the planetary orbital axis and the stellar spin axis. The existing sample of ground-based detections of TEPs around bright stars is highly biased toward Jupiter-size planets with periods shorter than 5 days. Only 13 of the $\sim 140$ RV-confirmed TEPs have masses below $0.1\,\mjup$, and only 12 have periods longer than 10 days. The bias towards short periods is due not only to the higher geometric probability of short-period transits, and the relative ease of their confirmation with spectroscopic (radial-velocity) observations, but also to the low duty cycle of single-longitude surveys. Although the transiting hot Jupiters provide an opportunity to study the properties of planets in an extreme environment, they are not representative of the vast majority of planetary-mass objects in the Universe, which are likely to be of lower mass, and on longer period orbits. While other planet-detection methods, such as microlensing, have proven to be efficient at discovering long-period and low-mass planets \citep{gould:2010,dong:2009}, these methods are primarily useful for studying the statistical distributions of periods and masses of planets, and cannot be used to study the other physical properties of individual planets, which can only be done for TEPs. In this paper we describe HATSouth, a set of new ground-based telescopes which form a global and automated network with identical hardware at each site, and with true 24-hour coverage all year round (for any celestial object in the southern hemisphere, and ``away'' from the Sun in a given season). HATSouth is the first such network, although many more are planned. The Las Cumbres Observatory Global Telescope \citep[LCOGT;][]{brown:2010}, SOLARIS \citep{konacki:2011}, and the KMTNet \citep{kim:2010} will all form global, homogeneous and automated networks when they are completed. The HATSouth survey, in operation since late 2009, has the northern hemisphere HATNet survey \citep{bakos:2004} as its heritage. HATSouth, however, has two important distinctions from HATNet, and from all other ground-based transit surveys. The first and most important is its complete longitudinal coverage. The network consists of six robotic instruments distributed across three sites on three continents in the southern hemisphere: Las Campanas Observatory (LCO) in Chile, the High Energy Stereoscopic System (HESS) site in Namibia, and Siding Spring Observatory (SSO) in Australia. The geographical coordinates of these sites are given in \reftab{specs} below. The longitude distribution of these observatories enables round-the-clock monitoring of selected fields on the sky. This greatly increases the detectability of TEPs, particularly those with periods in excess of a few days. This gives HATSouth an order of magnitude higher sensitivity than HATNet to planets with periods longer than 10 days, and its sensitivity towards $P\approx15-20$\,d planets is better than HATNet's sensitivity at $P\approx8$\,d. 
This is encouraging given that HATNet has demonstrated sensitivity in this regime with the discoveries of HAT-P-15b \citep{kovacs:2010} and HAT-P-17b \citep{howard:2012} at $P>10$\,d. Note that for mid- to late-M dwarf parent stars, planets with $\sim15$\,d periods lie in the habitable zone. The second difference between HATSouth and HATNet is that each HATSouth astrograph has a larger aperture than a HATNet telephoto lens (0.18\,m vs.~0.11\,m), plus a slower focal ratio and lower sky background (per pixel, and under the point spread function of a star), which allows HATSouth to monitor fainter stars than HATNet. Compared to HATNet, this increases the overall number of dwarf stars observed at 1\% photometry over a year by a factor of $\sim 3$; more specifically, the number of K and M dwarf stars monitored effectively is increased by factors of 3.1 and 3.6, respectively (these numbers take into account the much larger surface density of dwarf stars and the somewhat smaller field-of-view of HATSouth, along with slight differences in the observing tactics). This increases the expected yield of small-size planets, and opens up the possibility of reaching into the super-Earth range. Furthermore, the ratio of dwarf stars to giant stars that are monitored at 1\% photometric precision in the HATSouth sample at $|b|\approx20\arcdeg$ is about twice\footnote{This ratio is much higher closer to the galactic plane, and is close to unity at the galactic pole.} that of HATNet, yielding a lower false alarm rate. Furthermore, despite greater stellar number densities, stellar crowding is less than with HATNet due to HATSouth's three times finer spatial (linear) resolution. Note that while the stellar population monitored is generally fainter than that of HATNet, it is still within the reach of follow-up facilities. The layout of the paper is as follows. In \refsecl{hw} we describe the HATSouth hardware in detail, including the telescope units (\refsec{hs4}), weather sensing devices (\refsec{wth}), and the computer systems (\refsec{compsys}). In \refsecl{csw} we detail the instrument control software. The HATSouth sites and operations are laid out in \refsecl{sitop}. We give details on the site specifics (\refsec{sites}), the scheme of nightly operations (\refsec{oper}), and present observing statistics for two years (\refsec{stat}). Data flow and analysis are described in \refsecl{dr}, and the expected planet yield is calculated using detailed simulations in \refsecl{perf}.
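The yield arithmetic quoted earlier in this section (the $\sim0.11\%$ of dwarfs with TEPs and the fewer-than-1-in-2000 bright stars) can be reproduced in a few lines. This is a quick sketch with rounded inputs, not the HATSouth simulation machinery; in particular, the mean geometric transit probability is an assumed average implied by the quoted numbers.
\begin{verbatim}
# Quick arithmetic sketch (rounded inputs, not the HATSouth pipeline):
# combine the occurrence rate, the mean geometric transit probability,
# and the dwarf fraction of a bright magnitude-limited sample.

occurrence = 0.017      # dwarfs hosting 3-32 R_earth, P < 20 d planets
p_transit = 0.065       # assumed mean geometric transit probability
dwarf_fraction = 0.40   # A5-M5 dwarfs among r < 12 mag stars

per_dwarf = occurrence * p_transit      # ~0.11% of dwarfs show transits
per_star = per_dwarf * dwarf_fraction   # fewer than 1 in 2000 stars
print(f"{per_dwarf:.4%} of dwarfs, i.e. 1 in {1 / per_star:.0f} stars")
\end{verbatim}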
|
\label{sec:conc} HATSouth is the world's first global network of identical and automated telescopes capable of 24-hour observations all year round. The telescopes are placed at three southern hemisphere sites with outstanding observing conditions (LCO in Chile, the HESS site in Namibia, SSO in Australia). Long stretches of continuous observations are often achieved. \reffig{stretches} shows the contiguous blocks of clear weather as a function of Julian Date for the past two years. The longest uninterrupted clear period, based on the detailed weather logs for each of the three sites, is 130 hours long. Relatively long stretches, exceeding 24\,hours, are quite frequent. HATSouth builds on the successful northern hemisphere HATNet project \citep{bakos:2004}. However, it implements numerous changes with respect to HATNet. Broadly speaking, we reach into a fainter stellar population, having many more dwarf stars per square degree, thus increasing our overall sample, and also having more dwarfs relative to giant stars (which dilute the sample). The fraction of K and M dwarfs is also significantly higher, facilitating more efficient detection of smaller planets, such as super-Earths. Each of the three sites hosts two HATSouth instruments, called \hsfour\ units. Each \hsfour\ unit holds four 0.18\,m, fast focal ratio hyperbolic astrographs, tilted $\sim 4\arcdeg$\ with respect to each other to produce a mosaic image spanning $8.2\arcdeg\times8.2\arcdeg$ on the sky, imaged onto four \ccdsize{4K} CCD cameras, at a resolution of 3.7\pxs. The photometric zero-point is $r\approx18.9$ (i.e.~a flux of 1\,ADU/s), and the 5-$\sigma$ detection threshold for the routinely taken 240\,s images is $r\approx18.5$. Stars become saturated at $r\approx10.5\pm0.5$ in the 240\,s exposures, depending on the focus (and thus the width of the stellar profile) and the degree of vignetting at the position of the star. We also monitor stars as bright as $r\approx 8.25$ using shorter exposure (30\,s) images. Meteorological conditions are monitored by a weather station (wind, humidity, temperature, precipitation), a cloud detector (primarily cloud cover), a lightning detector (forecasting lightning storms), and an all-sky fisheye camera, all installed on our HATSouth control building. In addition, the individual \hsfour\ units are aided by a hardwired rain detector and a photosensor (to avoid daytime opening of the dome). Each \hsfour\ unit is controlled by a single rack-mounted computer running Linux and Xenomai. We have developed a dedicated software environment for operating the telescopes, including all hardware components, such as the dome, telescope mount and CCDs. Virtual Observer (\vo) is the intelligent software managing all aspects of running the observatory, such as preparing devices, scheduling observations, monitoring the weather, handling exceptions, communicating with the outside world, and logging events and observations. The network monitors selected fields on the southern sky for about 2\,months per field. Fields are selected such that all dark time during the night is used. A significant effort is made to optimize the data quality by re-focusing the optics every $\sim15$\,minutes, running real-time astrometry on the frames, and adjusting the pointing of the mount. \begin{figure*}[!ht] \plotone{img/goodwth_stretches.eps} \caption{ Contiguous good weather stretches as a function of Julian Date. 
The height (and width) of the boxes represent the number of dark hours that were clear at any of the LCO, HESS or SSO sites {\em without an interruption}. Any bad weather longer than 10\,minutes in duration is considered an interruption. The longest clear stretch was about 130\,hours (5.4\,days) long. \label{fig:stretches}} \end{figure*} This combination of precise weather monitoring, the use of a very stable operating system, and a dedicated software environment has resulted in very robust operations. Indeed, the HATSouth network has taken over 1 million \ccdsize{4K} science frames during its initial 2 years of operations. The six \hsfour\ units have each opened on an average of $\sim500$ nights so far, without a single case of opening when weather conditions were not suitable. We have developed a scheme that reduces the amount of data transferred from the remote sites to the HSDC in Princeton. The reduction pipeline keeps track of the large number of individual hardware components (24 OTAs, 24 CCDs, etc.), and maintains version control for these, since hardware may change due to routine maintenance or instrument repairs. The current reduction procedure, after application of the Trend Filtering Algorithm \citep[TFA;][]{kovacs:2005}, yields \lcs\ reaching 6\,mmag r.m.s.~at 240\,s cadence around the saturation limit. The \lcs\ are searched for transit candidates using a well-established methodology that has been developed for HATNet, and relies on the BLS \citep[][]{kovacs:2002} algorithm and post-processing. The network is producing high quality planetary transit candidates, and general variability data (\reffig{examplelcs}). Follow-up observations for these candidates are being performed as an intensive team effort, and will be described in subsequent publications. We have run detailed, realistic simulations of the expected yield of transiting planets from the HATSouth network. The simulations take into account the noise characteristics of the instruments, the weather pattern, the observing windows, the stellar population, the expected planetary population based on recent {\em Kepler} results, and our search methodology for transits. We compared two basic scenarios: all \hsfour\ telescopes at a single site observing the same field at higher S/N, or the telescopes spread out in the current setup, observing the same field at lower S/N (per unit time) but at a much higher fill-factor. The simulations were performed both with uncorrelated and correlated ``red''-noise components in the \lcs. The results clearly favor the networked setup, predicting a three-fold increase in the number of detected transiting planets, as compared to a single-site setup. The long stretches of observations (\reffig{stretches}), and the uncorrelated noise between the stations, are clearly fundamental to this increased yield. Notably, the fraction of planets recovered at $P\approx10$\,d period is about 10 times that of a single-site installation (\reffig{yieldsims}, top left panel), and the planetary {\em yield} after taking into account the {\em Kepler} distribution of planets and the geometric transit probability is also $\sim10$ times higher than for the single-site installation (\reffig{yieldsims}, bottom left panel). The peak sensitivity occurs at $P\approx 6$\,days (marginalized over all planetary radii), and exhibits a much slower decline towards long periods than in the single-site setup. 
Similarly, there is a significant increase (factors of 3 to 10) in the detection efficiency as a function of planetary radius, especially at small radii. The peak sensitivity, however, occurs at $\sim 1.5\,\rjup$ for both the networked and single-site setups. Altogether, we expect that HATSouth is capable of detecting $\sim30$ transiting extrasolar planets per year, pending follow-up confirmation of these candidates. Note that while the HATSouth stellar sample is somewhat fainter than that of HATNet, the candidates are still within the reach of follow-up resources available in the southern hemisphere. Examples of such spectroscopic follow-up resources (including those with reconnaissance or high precision radial velocity capabilities) are the Wide Field Spectrograph \citep[WiFeS;][]{dopita:2007} on the ANU~2.3\,m telescope, FEROS on the MPG/ESO 2.2\,m telescope, CORALIE on the Euler 1.2\,m telescope, HARPS on the ESO 3.6\,m telescope (all at La Silla, Chile), the Echelle spectrograph on the 2.5\,m du Pont telescope at LCO, and the UCLES spectrograph on the 3.9\,m AAT at SSO. Global networks of telescopes present a powerful way of studying time-variable astronomical phenomena. By coupling this with telescopes that are identical and fully automated, it is possible to undertake large, long duration surveys that would have been completely unfeasible with manually operated or single-site facilities. HATSouth is the first of many projects that will utilize the combination of these two concepts over the next decade and, we hope, make many exciting discoveries in the process.
| 12
| 6
|
1206.1391
|
1206
|
1206.1336.txt
|
This paper presents the design of a multi-spacecraft system for the deflection of asteroids. Each spacecraft is equipped with a fibre laser and a solar concentrator. The laser induces the sublimation of a portion of the surface of the asteroid, and the resultant jet of gas and debris thrusts the asteroid off its natural course. The main idea is to have a formation of spacecraft flying in the proximity of the asteroid, with all the spacecraft beaming to the same location to achieve the required deflection thrust. The paper presents the design of the formation orbits and the multi-objective optimisation of the formation in order to minimise the total mass in space and maximise the deflection of the asteroid. The paper demonstrates how significant deflections can be obtained with relatively small-sized, easy-to-control spacecraft.
|
Near Earth Objects (NEOs), the majority of which are asteroids, are defined as minor celestial objects with a perihelion less than 1.3~AU and an aphelion greater than 0.983~AU. A subclass of these, deemed Potentially Hazardous Asteroids (PHAs), are defined as those with a Minimum Orbital Intersection Distance (MOID) from the Earth's orbit less than or equal to 0.05 AU and a diameter larger than 150~m (equivalent to an absolute magnitude of 22.0 or less). As of March 2012, 8758 NEOs have been detected \citep{mpc}; of those, 840 are estimated to have an effective diameter larger than 1~km\footnote{An asteroid with an effective diameter equal to or greater than 1 km is defined here to be any NEA with an absolute brightness or magnitude $H \le 17.75$, as per \citet{Stuart2003}.}, and 1298 are categorised as potentially hazardous. Impacts from asteroids over 1 km in diameter are expected to release over $10^5$ megatons of energy, with global consequences for our planet \citep{stokes2003}, while those with an average diameter of 100~m are expected to release over $10^2$ megatons of energy, potentially causing significant tsunamis and/or the destruction of a large city \citep{Toon1997}. It is estimated that there are between 30,000 and 300,000 NEOs with diameters around 100~m, meaning that a large number of NEOs are still undetected. A quantitative comparison of the various options for NEO deflection was conducted by \citet{Colombo2006,Sanchez2007}. Examining the results of the comparison, one of the more interesting methods employed solar sublimation to actively deviate the orbit of the asteroid. The original concept, initially evaluated by \citet{Melosh1994} and later assessed by \citet{Kahle2006}, envisioned a single large reflector; this idea was expanded to a formation of spacecraft orbiting in the vicinity of the NEO, each equipped with a smaller concentrator assembly capable of focusing the solar power at a distance of around 1 km and greater \citep{Maddock2007}. This concept addressed the proper placement of the concentrators in close proximity to the asteroid while avoiding plume impingement, and provided a redundant and scalable solution. However, the contamination of the optics still posed a significant limitation, as demonstrated by \citet{vasile2010}. In the same paper, the authors demonstrated that the combined effect of solar pressure and the enhanced Yarkovsky effect could lead to miss (or deflection) distances of a few hundred to a thousand kilometres over eight years of deflection time. However, this deflection is orders of magnitude lower than the one achievable with a prolonged sublimation of the surface. A possible solution is to use a collimating device that would allow for larger operational distances and protection of the optics. This paper presents an asteroid deflection method based on a formation of spacecraft, each equipped with solar pumped lasers. The use of lasers has already been proposed by several authors, although always in conjunction with a nuclear power source \citep{Phipps1992,Phipps1997,Park2005}. Extensive studies on the dynamics of the deflection with high power lasers were presented by \citet{Park2005}, envisaging a single spacecraft with a MW laser. This paper proposes a different solution, with a formation of smaller spacecraft, each supporting a kW laser system indirectly pumped by the Sun. The paper starts with a simple sublimation model that is used to compute the deflection force. 
The orbits of the spacecraft formation are then designed by solving a multi-objective optimisation problem that yields an optimal compromise between the distance from the target and the impingement with the plume of debris. A Lyapunov controller is proposed to maintain the spacecraft in formation along the desired proximal orbit. A second multi-objective optimisation problem is then solved to compute a different type of controlled formation orbit, in which the shape of the orbit is predefined. Finally, the number and size of the spacecraft are optimised to yield the maximum possible deflection.
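As an order-of-magnitude illustration of how a sublimation model turns beamed power into thrust, consider the toy estimate below. Every number in it is an assumption chosen for illustration (a kW-class laser, a rocky-material enthalpy of sublimation, a mean ejecta speed, a plume scattering factor); conduction and radiation losses, which substantially reduce the usable power, are lumped into a single efficiency, so this is a sketch and not the sublimation model developed in the paper.
\begin{verbatim}
# Toy estimate (all values assumed, not the paper's sublimation model):
# absorbed laser power drives a mass flow through the enthalpy of
# sublimation, and the ejecta carry momentum away at a mean speed v_gas.

P_laser = 860.0   # W, output of one kW-class laser (assumed)
eta = 0.3         # fraction of power going into sublimation (assumed;
                  # conduction and radiation losses are lumped in here)
E_subl = 1.97e6   # J/kg, approx. enthalpy of sublimation of forsterite
v_gas = 1000.0    # m/s, mean ejecta velocity (assumed)
lam = 0.5         # scattering factor for a hemispherical plume

m_dot = eta * P_laser / E_subl   # kg/s of sublimated material
F_one = lam * m_dot * v_gas      # N of thrust from one spacecraft

n_sc = 4
print(f"mdot = {m_dot * 1e6:.0f} mg/s, "
      f"total thrust = {n_sc * F_one * 1e3:.0f} mN")
\end{verbatim}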
|
This paper presented the multidisciplinary design of a formation of spacecraft equipped with solar pumped lasers for the deflection of asteroids. The paper demonstrated that the use of multiple spacecraft is an optimal solution to maximise the deflection while minimising the mass of the overall system. In fact, as the diameter of the primary mirror increases, the radiator and laser mass increase up to the point at which the mass of a single spacecraft exceeds the total mass of two or more spacecraft of smaller size. This is a very important point in favour of the use of a formation instead of a single large spacecraft. A formation, or fractionated system, has the further advantage of increasing redundancy and scalability, as for a bigger asteroid the solution is simply to increase the number of spacecraft. The sizing of the spacecraft was based on a simple model in which the mass of the main bus is considered constant and the propellant mass is not optimised. These are two limiting assumptions that cause an overestimation of the mass for small systems. At the same time, the deployment and thermal control systems are assumed to be scalable within the range of variability of the design parameters. Looking at present technology, this assumption can correspond to an underestimation of the mass for large systems. The efficiencies of the laser and solar cells are at the upper limit of what is currently achievable in a lab environment. Although this is an optimistic assumption, current developments are progressing towards those limits independently of the deflection of asteroids. It is therefore reasonable to expect the system efficiencies presented in this paper in the near future. The paper also analysed the control of the spacecraft in the vicinity of the asteroid and showed that with minimal control and propellant consumption the spacecraft can be maintained in their desired formation orbits. Finally, it was demonstrated that the laser ablation concept based on solar power is applicable also to highly eccentric orbits (deep crossers), with even better performance with respect to the shallow crosser case. In fact, for deep crossers the deflection action is maximal where it is most effective, i.e., around the perihelion, and the steep intersection between the orbit of the Earth and that of the asteroid amplifies the deflection effect.
| 12
| 6
|
1206.1336
|
1206
|
1206.0732_arXiv.txt
|
\noindent Forthcoming radio continuum surveys will cover large volumes of the observable Universe and will reach to high redshifts, making them potentially powerful probes of dark energy, modified gravity and non-Gaussianity. We consider the continuum surveys with LOFAR, WSRT and ASKAP, and examples of continuum surveys with the SKA. We extend recent work on these surveys by including redshift space distortions and lensing convergence in the radio source auto-correlation. In addition we compute the general relativistic (GR) corrections to the angular power spectrum. These GR corrections to the standard Newtonian analysis of the power spectrum become significant on scales near and beyond the Hubble scale at each redshift. We find that the GR corrections are at most percent-level in LOFAR, WODAN and EMU surveys, but they can produce $O(10\%)$ changes for high enough sensitivity SKA continuum surveys. The signal is however dominated by cosmic variance, and multiple-tracer techniques will be needed to overcome this problem. The GR corrections are suppressed in continuum surveys because of the integration over redshift -- we expect that GR corrections will be enhanced for future SKA HI surveys in which the source redshifts will be known. We also provide predictions for the angular power spectra in the case where the primordial perturbations have local non-Gaussianity. We find that non-Gaussianity dominates over GR corrections, and rises above cosmic variance when $f_{\rm NL}\gtrsim5$ for SKA continuum surveys.
|
Radio continuum surveys for cosmology are entering a new phase, given the imminent surveys with the LOFAR (the LOw Frequency ARray for radio astronomy \cite{rottgering03}), WSRT (Westerbork Synthesis Radio Telescope \cite{oosterloo10}) and ASKAP (Australian SKA Pathfinder \cite{Johnston08}) telescopes, and the prospect of the Square Kilometre Array (SKA) in the coming decade. Increased sensitivity, very wide sky coverage, and deep redshift reach will facilitate cosmological observations with significant accuracy. This has only recently been explored in \cite{Raccanelli:2011pu}, which analyzed what can be achieved by surveys with LOFAR \citep{rottgering10}, WSRT (WODAN, the Westerbork Observations of the Deep Apertif Northern sky survey \citep{rottgering11}) and ASKAP (EMU, the Evolutionary Map of the Universe \cite{Norris11}), via three experiments: the auto-correlation of radio sources, the cross-correlation of radio sources with the Cosmic Microwave Background (the late Integrated Sachs-Wolfe effect), and the cross-correlation of radio sources with foreground objects (cosmic magnification). The auto-correlation function has been further investigated by \cite{camera}, which examines the impact of cross-identification of radio sources with optical redshift surveys. The huge volumes covered by forthcoming radio surveys, and their deep redshift reach in comparison to current and future optical surveys, mean that correlations on scales above the Hubble horizon $H^{-1}(z)$ will be measured. On these scales, the standard analysis of the power spectrum is inadequate, because this analysis is Newtonian, i.e. it is based on the assumption of sub-Hubble scales. The Newtonian analysis must be replaced by the correct general relativistic (GR) analysis in order to consistently incorporate super-Hubble scales. On small scales, the Newtonian analysis is a very good approximation, but on larger and larger scales the GR corrections become more significant. On these larger scales, the signature of any primordial non-Gaussianity in the matter distribution also grows larger. This probe of non-Gaussianity is expected to become competitive with the CMB for large-volume surveys such as those in the radio. Thus it is important to perform a GR analysis in order to correctly identify the non-Gaussian signal. Unfortunately, cosmic variance also becomes more and more of a problem on the larger scales covered by radio surveys. However, it is possible to beat down cosmic variance by using different tracers of the underlying matter distribution. In this paper, we revisit the Newtonian analysis of radio continuum surveys, and include for the first time the terms that were not considered in the auto-correlations computed by \cite{Raccanelli:2011pu,camera} -- i.e. redshift space distortions, lensing convergence and the GR corrections (potential and velocity terms). We do this first in the Gaussian case, and then when there is primordial non-Gaussianity of the local type.
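For orientation, the sketch below evaluates the standard Newtonian formula for the scale-dependent halo bias induced by local non-Gaussianity, which is the origin of the large-scale power enhancement discussed above. This is an illustrative sketch of the well-known $\Delta b \propto f_{\rm NL}/k^2$ behaviour, not the relativistic angular-power-spectrum computation performed in this paper; the cosmological parameters and the example transfer-function and growth values are assumptions.
\begin{verbatim}
import numpy as np

# Sketch of the standard local-type scale-dependent bias,
#   Delta_b(k) = 3 f_NL (b-1) delta_c Om H0^2 / [c^2 k^2 T(k) D(z)],
# illustrative only; the paper computes full relativistic angular spectra.

Om = 0.27
H0_c = 1.0 / 2998.0   # H0/c in h/Mpc
delta_c = 1.686       # spherical-collapse threshold

def delta_b(k, fnl, b_gauss, T_k, D_z):
    """k in h/Mpc; T_k normalized to 1 at low k; D_z the growth factor."""
    return (3.0 * fnl * (b_gauss - 1.0) * delta_c * Om * H0_c ** 2
            / (k ** 2 * T_k * D_z))

# Example: k = 0.005 h/Mpc, b = 2, T ~ 1, D ~ 0.5 (assumed), f_NL = 5
print(delta_b(0.005, 5.0, 2.0, 1.0, 0.5))   # ~0.06, growing as k^-2
\end{verbatim}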
|
We have presented an analysis of GR corrections to the angular power spectrum of radio sources that will be measured with forthcoming radio continuum surveys such as LOFAR, EMU and WODAN, along with predictions for SKA-like surveys. These surveys will be well suited to probing GR corrections to the standard Newtonian analysis of the spectra since, being very deep and wide, they will observe super-Hubble scales not yet surveyed. We have computed for the first time the contributions from redshift distortions and lensing convergence to the angular power spectrum for radio continuum surveys, thus generalizing previous results that incorporated only the overdensity contribution \cite{Raccanelli:2011pu}. We have then included the further GR corrections, in the form of velocity and potential terms. We have shown how all these contributions will, individually and in combination, affect measurements of the angular power spectrum for the different surveys considered. The GR corrections to the standard Newtonian analysis are most significant on the largest scales, reaching $O(10\%)$ for the SKA. However, precisely because they grow with scale, they are dominated by cosmic variance for the near-future SKA Pathfinder generation of surveys, while they could in principle be observable by a sufficiently sensitive SKA-like survey. With complementary information from other tracers (e.g. via the Euclid optical/IR survey), cosmic variance can be overcome. A large-scale increase in power can also be due to primordial non-Gaussianity, which means that a GR analysis is essential for a correct calculation of the non-Gaussian signal. We have computed the corrections arising from the local form of non-Gaussianity for the different surveys. Comparing the non-Gaussian effect to that of GR corrections without non-Gaussianity, we find that non-Gaussian corrections to the power spectrum will dominate over GR corrections for continuum surveys. The non-Gaussian signal rises above cosmic variance on large enough scales as follows: for WODAN when $f_{\rm NL}\gtrsim80$; for EMU when $f_{\rm NL}\gtrsim50$; for LOFAR when $f_{\rm NL}\gtrsim20$; and for SKA when $f_{\rm NL}\gtrsim5$. Continuum radio surveys do not provide redshift information, so GR corrections can be degenerate with a change in the distribution of matter, given by the product of the radial source distributions with the bias. We expect that an SKA HI galaxy survey will show stronger and more clearly defined GR corrections, because spectroscopic information will break this degeneracy. \vspace{0.5in} \noindent {\bf Acknowledgments:}\\ We thank Matt Jarvis for helpful discussions. RM was supported by the South African Square Kilometre Array Project and the National Research Foundation. RM, GZ, DB and KK were supported by the UK Science \& Technology Facilities Council (grant no. ST/H002774/1) and by a Royal Society (UK)/National Research Foundation (SA) exchange grant. KK was also supported by the European Research Council and the Leverhulme Trust. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
| 12
| 6
|
1206.0732
|
1206
|
1206.5007_arXiv.txt
|
The intergalactic medium was reionized before redshift $z \sim 6$, most likely by starlight which escaped from early galaxies. The very first stars formed when hydrogen molecules (H$_2$) cooled gas inside the smallest galaxies, \emph{minihalos} of mass between $10^5$ and $10^8 \,M_\odot$. Although the very first stars began forming inside these minihalos before redshift $z \sim 40$, their contribution has, to date, been ignored in large-scale simulations of this cosmic reionization. Here we report results from the first reionization simulations to include these first stars and the radiative feedback that limited their formation, in a volume large enough to follow the crucial spatial variations that influenced the process and its observability. We show that, while minihalo stars stopped far short of fully ionizing the universe, reionization began much earlier \emph{with} minihalo sources than without, and was greatly extended, which boosts the intergalactic electron-scattering optical depth and the large-angle polarization fluctuations of the cosmic microwave background significantly. Although within current \emph{WMAP} uncertainties, this boost should be readily detectable by \emph{Planck}. If reionization ended as late as $z_{\rm ov} \lesssim 7$, as suggested by other observations, \emph{Planck} will thereby see the signature of the first stars at high redshift, currently undetectable by other probes.
|
\label{sec:intro} The theory of reionization has not yet advanced to the point of establishing unambiguously its timing and the relative contributions to it from galaxies of different masses. In a Cold Dark Matter (``CDM'') universe, these early galactic sources can be categorized by their host halo mass into minihalos (``MHs'') and ``atomic-cooling'' halos (``ACHs''). MHs have masses $M \sim 10^5 - 10^8$ $M_\odot$ and virial temperatures $T_{\rm vir}\lesssim 10^4$ K, and thus molecular hydrogen (H$_2$) was necessary to cool the gas below this virial temperature to begin star formation. ACHs have $M\gtrsim 10^8$ $M_\odot$ and $T_{\rm vir}\gtrsim 10^4$ K, for which H-atom radiative line cooling alone was sufficient to support star formation. The ACHs can be split further into low-mass atomic-cooling halos (``LMACHs''; $M\sim 10^8 - 10^9$ $M_\odot$), for which the gas pressure of the photoionization-heated intergalactic medium (``IGM'') in an ionized patch prevented the halo from capturing the gas it needed to form stars, and high-mass atomic-cooling halos (``HMACHs''; $M \gtrsim 10^9 M_\odot$), for which gravity was strong enough to overcome this ``Jeans-mass filter'' and form stars even in the ionized patches. Once starlight escaped from galactic halos into the IGM to reionize it, the ionized patches (``H II regions'') of the IGM became places in which star formation was suppressed in both MHs and LMACHs. At the same time, UV starlight at energies in the range 11.2 -- 13.6 eV also escaped from the halos, capable of destroying the H$_2$ molecules inside MHs through Lyman-Werner band (``LW'') dissociation, even in the neutral zones of the IGM. This dissociation eventually prevented further star formation in some of the MHs where the background intensity was high enough. Early estimates, in fact, suggested that this would have made the MH contribution to reionization small \citep{Haiman2000}, and, until now, large-scale simulations of reionization have neglected MHs altogether. In this letter we report the first radiative transfer (RT) simulations of reionization to include \emph{all three} of the mass categories of reionization source halos, along with their radiative suppression, in a simulation volume large enough to capture both the global mean ionization history and the observable consequences of its evolving patchiness in a statistically meaningful way\footnote{First results of these simulations were briefly summarized in \citet{Ahn2010a}.}. We overcame the limitations of previous large-volume simulations by applying a newly developed sub-grid treatment to include MH sources (section~\ref{sec:methods}), and by calculating the transfer of LW-band radiation self-consistently with the source population using the scheme of \citet{Ahn2009}.
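To make the mass and temperature boundaries above concrete, the short sketch below classifies halos using the familiar virial-temperature scaling $T_{\rm vir}\propto M^{2/3}(1+z)$ (e.g. Barkana \& Loeb 2001); the numerical prefactor is the standard one with the order-unity overdensity factor set to one, and the classification thresholds are the indicative values quoted in the text, not the exact criteria used in our simulations:
\begin{verbatim}
import numpy as np

def t_vir(m_halo, z, mu=0.6, h=0.7):
    """Approximate virial temperature [K] of a halo of mass m_halo [Msun]
    at redshift z (Barkana & Loeb 2001, overdensity factor set to ~1)."""
    return 1.98e4 * (mu / 0.6) * (m_halo * h / 1e8) ** (2.0 / 3.0) * (1.0 + z) / 10.0

def classify(m_halo, z):
    """Indicative classification into MH / LMACH / HMACH."""
    if t_vir(m_halo, z) < 1e4:   # H-atom line cooling ineffective
        return "MH (needs H2 cooling; LW-suppressible)"
    if m_halo < 1e9:             # Jeans-mass filtered in ionized patches
        return "LMACH (suppressed in H II regions)"
    return "HMACH (forms stars even in H II regions)"

for m in (1e6, 1e8, 3e9):
    print(f"M = {m:.0e} Msun, z = 10: T_vir = {t_vir(m, 10):.1e} K -> {classify(m, 10)}")
\end{verbatim}
For a $10^8\,M_\odot$ halo at $z=10$ this gives $T_{\rm vir}\approx1.7\times10^4$ K, just above the atomic-cooling threshold, illustrating why the MH/ACH boundary sits near $10^8\,M_\odot$ at these redshifts.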
| 12
| 6
|
1206.5007
|
|
1206
|
1206.2411_arXiv.txt
|
In this paper we continue our theoretical studies of the possible consequences of magnetic helicity accumulation in the solar corona. Our previous studies suggest that coronal mass ejections (CMEs) are natural products of coronal evolution as a consequence of magnetic helicity accumulation, and that the triggering of CMEs by surface processes such as flux emergence also has its origin in magnetic helicity accumulation. Here we use the same mathematical approach to study the magnetic helicity of axisymmetric power-law force-free fields, but focus on a family whose surface flux distributions are defined by self-similar force-free fields. The semi-analytical solutions of the axisymmetric self-similar force-free fields enable us to discuss the properties of force-free fields possessing a huge amount of accumulated magnetic helicity. Our study suggests that there may be an absolute upper bound on the total magnetic helicity of all bipolar axisymmetric force-free fields. As the accumulated magnetic helicity increases, the force-free field approaches a fully opened-up state, with Parker-spiral-like structures present around a current-sheet layer as evidence of magnetic helicity in interplanetary space. It is also found that, among the axisymmetric force-free fields having the same boundary flux distribution, the self-similar field is the one possessing the maximum amount of total magnetic helicity. This possibly gives a physical reason why self-similar fields are often found in astrophysical bodies, where magnetic helicity accumulation is presumably also taking place.
|
Magnetic helicity is a physical quantity that describes field topology, quantifying the twist (that is, self-helicity) and linkage (that is, mutual-helicity) of magnetic flux lines. One important property of magnetic helicity is that the total magnetic helicity is conserved in the corona even in the presence of fast magnetic reconnection (Berger 1984). This indicates that once magnetic helicity is transported into the corona, it cannot be annihilated by solar flares. Over the past two decades, observations have shown that magnetic fields, created by dynamo processes in the solar interior, are emerging at the solar photosphere and then into the corona, with a preferred helicity sign in each hemisphere, namely, positive helicity in the southern hemisphere and negative helicity in the northern hemisphere (Pevtsov et al. 1995, Bao \& Zhang 1998, Hagino \& Sakurai 2004, Zhang 2006, Wang \& Zhang 2010, Hao \& Zhang 2011). Taking this observed hemispheric rule of helicity sign, together with the approximate law of helicity conservation in turbulent magnetic reconnection (Berger 1984), we can then infer that the total magnetic helicity is accumulating in the corona in each hemisphere (Zhang \& Low 2005). An interesting question to address is then: what are the consequences of this magnetic helicity accumulation in the corona? In a previous paper (Zhang et al. 2006), we showed that in an open atmosphere such as the solar corona, for a given boundary flux distribution on the solar surface, there is an upper bound on the magnitude of the total magnetic helicity of all axisymmetric power-law force-free fields. The accumulation of magnetic helicity in excess of this upper bound would initiate a non-equilibrium situation, resulting in solar eruptions such as coronal mass ejections (CMEs); such eruptions are thus natural products of coronal evolution driven by magnetic helicity accumulation in the corona. In a follow-up paper (Zhang \& Flyer 2008), we studied the dependence of the helicity bound on the boundary condition. We found that the magnitude of the helicity upper bound of force-free fields depends non-trivially on the boundary flux distribution. Fields with a multipolar boundary condition can have a helicity upper bound ten times smaller than those with a dipolar boundary condition. This suggests that a coronal magnetic field may erupt into a CME when the applicable helicity bound falls below the already accumulated helicity as the result of a slowly changing boundary condition. This gives insights into the observed associations of CMEs with magnetic variations on the surface. For example, it can explain why CME occurrences are often observed to be associated with flux emergence whereas not every flux emergence triggers a CME (e.g. observations in Zhang et al. 2008). Here the role of flux emergence is modeled as a change in boundary condition. This change is not always a trigger of a CME, a role it can play only if the coronal magnetic field has accumulated a critical amount of magnetic helicity. In this paper, we continue our study in this direction by investigating the magnetic helicity of a family of axisymmetric self-similar force-free fields. The semi-analytical solutions of this family enable us to discuss the properties of force-free fields possessing a huge amount of accumulated magnetic helicity, something that would be impossible with the conventional numerical methods used in our previous studies.
We are interested in addressing the following two questions: 1) Is there an absolute helicity upper bound for all bipolar axisymmetric force-free fields? 2) When a huge amount of magnetic helicity is finally released into interplanetary space, what does the magnetic field look like? We organize our paper as follows. We describe our theoretical model in Section 2 and give our results and analysis in Section 3. Conclusions and discussion are given in Section 4.
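For orientation, the two standard relations underlying what follows are the force-free condition and the gauge-invariant relative magnetic helicity (e.g. Berger \& Field 1984; Finn \& Antonsen 1985), where $\mathbf{B}_p$ is the potential field with the same boundary flux distribution and $\mathbf{A}$, $\mathbf{A}_p$ are vector potentials of $\mathbf{B}$ and $\mathbf{B}_p$:
\begin{equation}
\nabla\times\mathbf{B}=\alpha\mathbf{B}, \qquad \mathbf{B}\cdot\nabla\alpha=0, \qquad H_{R}=\int_{V}\left(\mathbf{A}+\mathbf{A}_{p}\right)\cdot\left(\mathbf{B}-\mathbf{B}_{p}\right)\,dV\,.
\end{equation}
The total helicities quoted below, including the conjectured absolute bound of $0.5F_p^2$, are values of $H_R$ expressed in units of the square of the boundary poloidal flux $F_p$.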
|
In our previous papers, we studied series of axisymmetric power-law force-free fields and found that there may be an upper bound on the total magnetic helicity that force-free fields with the same boundary flux distribution can contain (Zhang et al. 2006), and that this upper bound depends non-trivially on the boundary flux distribution (Zhang \& Flyer 2008). These studies cast solar eruptions such as CMEs as natural products of coronal evolution and give a physical understanding of why surface variations such as flux emergence can trigger CMEs. In this paper, we continue our studies in this direction with a further analysis of the series of solutions to Eq.~(6). We make use of a family of semi-analytical self-similar force-free fields constructed in Low and Lou (1990) to extend the force-free fields we studied beyond what our current numerical methods can obtain. The results we find are summarized below. 1. Within the same family of axisymmetric power-law force-free fields that have the same boundary flux distribution and the same power-law index $n$, the self-similar force-free field is always the end-state that possesses the maximum total magnetic helicity. This means that self-similar force-free fields are good magnetic helicity containers. It also gives an indication of why self-similar fields are often found in astrophysical systems, where magnetic helicity accumulation is presumably also taking place. 2. With the increase of the index $n$, the total magnetic helicity of self-similar force-free fields levels off to an asymptotic value. This number, $0.5F_p^2$, possibly defines the absolute upper bound on the total magnetic helicity that all bipolar axisymmetric force-free fields can contain, independent of the boundary flux distribution. 3. With the increase of the index $n$, the magnetic field fully opens up, forming a current sheet at the equator, and reaches the Aly-limit energy as Wolfson (1995) described. However, in contrast to his conclusions, our study shows that even though the field geometry in the $r-\theta$ plane of this limiting case of self-similar force-free fields looks almost identical to the potential fully-open Aly field, the two fields are not topologically the same in 3D space. The limiting case of the self-similar force-free field possesses Parker-spiral structures and contains a huge amount of magnetic helicity. This suggests that Parker-spiral-like structures are where the significant amount of magnetic helicity released from the low solar corona resides in interplanetary space. Our finding that the limiting case of self-similar force-free fields is not the fully-open Aly field removes the concerns in Wolfson (1995). In fact, rather than weakening the main result of his paper, it strengthens it: the corona can become fully opened up by shearing the footpoints of its magnetic field. It is also interesting to mention that this process of opening up by shearing the footpoints in a particular manner is another possible way to open up the magnetic field without magnetic reconnection, a theoretical phenomenon described in Rachmeler et al. (2009).
| 12
| 6
|
1206.2411
|
1206
|
1206.0764_arXiv.txt
|
Times of arrival of high energy neutrinos encode information about their sources. We demonstrate that the energy dependence of the onset time of neutrino emission in advancing relativistic jets can be used to extract important information about the supernova/gamma-ray burst progenitor structure. We examine this energy and time dependence for different supernova and gamma-ray burst progenitors, including red and blue supergiants, helium cores, Wolf-Rayet stars, and chemically homogeneous stars, with a variety of masses and metallicities. For choked jets, we calculate the cutoff of observable neutrino energies as a function of the radius at which the jet stalls. Further, we show how such energy and time dependence may be used, under favorable conditions, to identify and differentiate between progenitors with as few as one or two observed events.
|
There is growing observational evidence and theoretical foundation for the connection between core-collapse supernovae (CCSNe) and long gamma-ray bursts (GRBs)~\cite{1998Natur.395..670G, bloom:02, 2003ApJ...591L..17S}. The location of GRBs in blue-luminosity regions of their host galaxies, where massive stars form and die, and CCSN signatures in the afterglows of nearby GRBs have provided strong evidence for the CCSN-GRB connection (see summaries in~\cite{wb:06,2011arXiv1104.2274H,modjaz:11}). While many details of the CCSN-GRB relationship are still uncertain, current theoretical models suggest that a canonical GRB has a relativistic jet from a central~engine~\cite{Cavallo:1978zz, Goodman:1986az, Paczynski:1986px, Piran:1999kx, Zhang:2003uk, Piran:2004ba, 2006RPPh...69.2259M}. Gamma rays are emitted by high energy electrons in the relativistic jet~\citep{Fireball}. If the relativistic jet is able to escape from the star, the gamma rays can be observed and the GRB is termed ``successful.'' If the jet stalls inside the star, however, the gamma-ray signal is unobservable and the GRB is termed ``choked''~\citep{2001PhRvL..87q1102M}. While only $\sim 10^{-3}$ of CCSNe have extremely relativistic jets with $\Gamma\gtrsim 100$, which lead to successful GRBs, a much larger subset of non-GRB CCSNe appears to be accompanied by collimated mildly relativistic jets with $\Gamma\sim 10$~\cite{FailedGRBandorphan2002MNRAS.332..735H, soderberg:04, 2006Natur.442.1011P, modjaz:11, 2012MNRAS.420.1135S, 2010Natur.463..513S}. These CCSNe with mildly relativistic jets may make up a few percent of all CCSNe \cite{berger:03, soderberg:04, vanPutten:2004dh, HeEnvelopeBreakout}. Similarly, the jet Lorentz factor for low-luminosity GRBs may also be well below that of high-luminosity GRBs~\mbox{\citep{2006ApJ...651L...5M,2007APh....27..386G,2007PhRvD..76h3009W, Baret20111}}. GRB progenitors may also produce high energy neutrinos (HENs) \cite{2010RScI...81h1101H}. The relativistic jet responsible for the gamma rays has also been argued to shock-accelerate protons to ultrarelativistic energies, leading to nonthermal HENs produced in photomeson interactions of the accelerated protons. Recent upper limits from the IceCube detector \citep{2004APh....20..507A} disfavor GRB fireball models with strong HEN emission associated with cosmic ray acceleration. However, milder HEN fluxes or alternative acceleration scenarios are not ruled out \cite{2012Natur.484..351A}. Moreover, the constraints weaken substantially when uncertainties in GRB astrophysics and inaccuracies in older calculations are taken into account, and the standard fireball picture remains viable \cite{PhysRevLett.108.231101,2012ApJ...752...29H}. Proton acceleration may occur in external shocks \cite{Fireball, Murase:2007yt}, both forward and reverse, as well as in internal shocks associated with the jet~\cite{Narayan:1992iy, Rees:1994nw, waxmanbachall, Rachen:1998fd, Waxman:1999ai, Li:2002dw, Dermer:2003zv}. HENs, unlike gamma rays, can thus be emitted while the jet is still inside the star. Therefore, while gamma rays are expected to be emitted only if the GRB is successful, HENs are expected for both successful and choked GRBs. Thus, not only does the expected rate increase to a larger fraction of the CCSN rate of $(2-3)\,{\rm yr}^{-1}$ in the nearest $10\,{\rm Mpc}$ \cite{Ando:2005ka, Horiuchi:2011zz, Botticella:2011nd}, but these mildly relativistic jets also present a more baryon-rich environment conducive to neutrino emission.
Emission of HENs from internal shocks of mildly relativistic jets has therefore been considered to be an important contribution to the overall observable neutrino flux \cite{HEN2005PhysRevLett.93.181101, HENAndoPhysRevLett.95.061103, chokedfromreverseshockPhysRevD.77.063007}. As only neutrinos and gravitational waves can escape from the inner regions of a star, they are a unique tool for studying the internal structure of GRB progenitors. Whether and when HENs escape the star depend on the star's optical depth to neutrinos, which in turn depends on where neutrinos are emitted inside the star, as well as on the neutrino energy -- HENs can only escape after the relativistic jet reaches a radius where densities are low enough. This finite neutrino optical depth can delay and modify the spectrum of HEN emission from GRBs. Detecting HENs with time and energy information, at present and upcoming neutrino telescopes, therefore presents an unprecedented opportunity to probe GRB and CCSN progenitors, which may shed light on the GRB mechanism and the CCSN-GRB connection. Probing the interior structure of GRB/SN progenitors via HENs was first suggested by Razzaque, M\'esz\'aros, and Waxman \cite{PhysRevD.68.083001}. They showed, for two specific stages of jet propagation, that the observable neutrino spectrum is affected by the stellar envelope above the jet head, which can in turn be used to examine this envelope via the detected neutrinos. Horiuchi and Ando \citep{chokedfromreverseshockPhysRevD.77.063007} also discuss HEN interactions for jets within the stellar envelope (see also Section \ref{section:interaction}). This article examines the question: What do the times of arrival of detected high energy neutrinos tell us about the properties of their source? We investigate the role of the opacity of CCSN/GRB progenitors in the properties and distribution of observed HENs, and how these observed HEN properties can be used to probe the progenitors' structure. Studying the optical depth of the progenitor to HENs, we find a progenitor- and energy-dependent temporal structure of the high energy neutrino emission and jet breakout. Observations of HEN signatures of CCSNe or GRBs at neutrino telescopes, even with one or two events, could provide crucial information for differentiating between progenitors and characterizing their properties. Such information would advance our understanding of CCSNe, GRBs, and their relationship to each other. The paper is organized as follows: In Sec.~\ref{sec:progenitors}, we review CCSN and GRB models. In Sec.~\ref{sec:HENS}, we briefly discuss HEN production in GRBs, propagation in the stellar material and flavor oscillations thereafter, and detection at Earth. In Sec.~\ref{sec:optical}, we describe our calculations of the neutrino interaction length and optical depth inside the stellar envelope, and present our results on the energy-dependent radius from which neutrinos can escape. In Sec.~\ref{sec:temporal}, we discuss the temporal structure of energy-dependent neutrino emission from advancing and stalled jets for different stellar progenitors. This is followed in Sec.~\ref{sec:onsets} by our results for the energy-dependent onset and emission duration of HENs. In Sec.~\ref{sec:results}, we present our interpretations for the energy-dependent onset of HEN emission and discuss how it probes the progenitors, particularly with a few detected neutrinos. We summarize our results in Sec.~\ref{sec:summary}.
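As a schematic illustration of the escape-radius argument developed below, the sketch integrates the neutrino optical depth outward through a toy power-law envelope and bisects for the radius where $\tau=1$. The density normalization, the power-law index, the stellar radius, and the simple linear charged-current cross-section $\sigma_{\nu N}\simeq0.7\times10^{-38}\,(E_\nu/{\rm GeV})\;{\rm cm}^2$ (a rough approximation that degrades above $\sim$TeV energies) are all illustrative assumptions, not the stellar models or cross-sections used in this paper:
\begin{verbatim}
import numpy as np

M_P = 1.67e-24  # proton mass [g]

def sigma_nu(e_gev):
    """Very rough nu-N charged-current cross-section [cm^2];
    linear scaling, adequate only up to ~TeV energies."""
    return 0.7e-38 * e_gev

def tau(r, e_gev, rho0=1e4, r0=1e9, beta=2.5, r_star=1e13):
    """Optical depth from radius r [cm] out to the surface r_star, for a
    toy density profile rho(r) = rho0 * (r/r0)^(-beta) [g/cm^3]."""
    rr = np.logspace(np.log10(r), np.log10(r_star), 4000)
    n = rho0 * (rr / r0) ** (-beta) / M_P           # nucleon number density
    f = n * sigma_nu(e_gev)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(rr))  # trapezoid rule

def escape_radius(e_gev):
    """Smallest radius with tau <= 1, found by bisection in log10(r)."""
    lo, hi = 9.0, 13.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tau(10.0**mid, e_gev) > 1.0 else (lo, mid)
    return 10.0**hi

for e in (1e2, 1e4, 1e6):  # neutrino energy in GeV
    print(f"E_nu = {e:.0e} GeV -> escape radius ~ {escape_radius(e):.2e} cm")
\end{verbatim}
Even this toy model reproduces the qualitative behaviour exploited in Secs.~\ref{sec:temporal}--\ref{sec:onsets}: higher-energy neutrinos can only escape once the jet head has advanced to substantially larger radii, so the onset of detectable emission occurs later at higher energies.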
|
\label{sec:summary} We investigated the opacity of massive pre-supernova stars to high energy neutrinos (HENs) in order to address the question: What do the times of arrival of detected high energy neutrinos tell us about the properties of their source? We examined the effect of opacity on the observable HEN emission from both successful and choked jets, considering zero-age main-sequence (ZAMS) stellar masses in the range (12-75)\,M$_\odot$ for low-metallicity non-rotating and solar-metallicity rotating cases. For the considered progenitors, we presented the energy-dependent critical radius interior to which HENs cannot escape. We found that the presence of the stellar hydrogen envelope has a negligible effect on the optical depth for neutrino energies of $\epsilon_\nu\lesssim10^5$ GeV, i.e. the most relevant energy range for HEN detection. The critical radius, however, varies strongly with $\epsilon_\nu$ within the helium core, with relevant consequences for observations. The neutrino emission spectrum changes as the relativistic jet advances in the star. For mildly relativistic jets ($\Gamma_j\approx10$) with HEN production in internal shocks, the energy dependence of the onset of neutrino emission is shown in Figure\,\ref{figure:emissiononset}. Such time dependence can provide important information on the stellar structure. For instance, with the observation of multiple HENs, the detected neutrino energies and times provide constraints on the stellar density at different depths in the star. The energy of a detected precursor HEN can also provide information on the maximum time frame in which the relativistic jet will break out of the star and become observable. We examined how neutrino interaction with dense stellar matter can be used to probe stellar progenitors. We investigated both the strong- and weak-signal limits, i.e. when many or only a few neutrinos are detected, respectively. We demonstrated that under favorable conditions one can use the time difference between a precursor neutrino and the jet breakout to exclude some progenitor models. The relative times of arrival for multiple neutrinos may also be sufficient to exclude progenitor models, even with no observed electromagnetic emission. Additionally, the detection of HENs with energies $\gtrsim 10^3$~GeV from choked GRBs makes it likely that the progenitor possessed a hydrogen envelope prior to explosion. Also, while the detection of HENs and electromagnetic signals can provide information on the stellar region at and above the shock radius, the coincident detection of HENs with gravitational waves \cite{PhysRevLett.107.251101} may be informative with respect to jet development and propagation below the shock radius. We proposed the use of extremely high energy neutrinos ($\epsilon_\nu\gtrsim100\,$TeV) detected by cubic-kilometer neutrino detectors such as IceCube for searches for coincident \emph{downgoing} neutrinos. These extremely high energy events probably arrive after other, lower-energy events that can be used in a coincidence analysis. Downgoing neutrinos with lower energies are typically not used in searches due to the large atmospheric muon background from these directions and energies. A future extension of this work will be the calculation of neutrino fluxes from different radii, similar to the calculations of Razzaque \emph{et al.} \cite{PhysRevD.68.083001}, who estimated the flux for two different radii, but without timing information.
Adding flux information to the temporal structure of neutrino energies would provide a more detailed picture, not only of the information neutrinos carry about the progenitor, but also of the likelihood of detecting neutrinos with a given information content. We note here that the uncertainty in the reconstructed energy and timing of HENs introduces uncertainty in the measurement of the onset of HEN emission at the energies of the detected neutrinos. These uncertainties need to be taken into account when comparing emission models to observations. Additionally, there could be other factors, e.g., physics related to jet propagation, that we have treated schematically in this work but that could have a similar impact on the temporal structure of HEN events. However, we have pointed out some generic features which should motivate future work investigating their experimental detectability. The results presented above aim to describe the interpretation of a set of detected high energy neutrinos from a collapsar event. One of the advantages of such an interpretation is that it can be done independently of the emitted high energy neutrino spectrum inside the source, which greatly simplifies the understanding of observations. An important direction for extending this work will be the calculation of the emitted neutrino spectrum inside the source. Obtaining this time-dependent spectrum would let us understand the probabilities of obtaining a set of neutrinos with specific energies and arrival times, which can be used to further refine the differentiation between progenitor scenarios. The calculation of emission spectra can be done similarly to the work of Razzaque \emph{et al.} \cite{PhysRevD.68.083001}, who calculated the observable spectrum for two specific radii for one stellar model. For accurate flux estimates, one must also take into account neutrino flavor oscillations in the stellar envelope, which are non-negligible for the relevant energy range \cite{flavoroscillation2009arXiv0912.4028R}. For precisely assessing the capability of differentiating between various stellar models, one will further need to compare the estimated source flux with the atmospheric neutrino background of neutrino detectors (see, e.g., \cite{2004PhRvD..70b3006B,1996PhRvD..53.1314A,PhysRevD.83.012001}).
| 12
| 6
|
1206.0764
|
1206
|
1206.5807.txt
|
The {\it Zurich Environmental Study (ZENS)} is based on a sample of $\sim1500$ galaxy members of 141 groups in the mass range $\sim10^{12.5-14.5} M_\odot$ within the narrow redshift range $0.05<z<0.0585$. \emph{ZENS} adopts novel approaches, here described, to quantify four different galactic environments, namely: $(1)$ the mass of the host group halo; $(2)$ the projected halo-centric distance; $(3)$ the rank of galaxies as centrals or satellites within their group halos; and $(4)$ the filamentary large-scale structure (LSS) density. No self-consistent identification of a central galaxy is found in $\sim40\%$ of $<10^{13.5} M_\odot$ groups, from which we estimate that $\sim15\%$ of groups at these masses are dynamically unrelaxed systems. Central galaxies in relaxed and unrelaxed groups have in general similar properties, suggesting that centrals are regulated by their mass and not by their environment. Centrals in relaxed groups have, however, $\sim$30\% larger sizes than in unrelaxed groups, possibly due to accretion of small satellites in virialized group halos. At $M>10^{10} M_\odot$, satellite galaxies in relaxed and unrelaxed groups have similar size, color and (specific) star formation rate distributions; at lower galaxy masses, satellites are marginally redder in relaxed relative to unrelaxed groups, suggesting quenching of star formation in low-mass satellites by physical processes active in relaxed halos. Finally, relaxed and unrelaxed groups show similar stellar mass conversion efficiencies, peaking at halo masses around $10^{12.5} M_\odot$. In the enclosed \emph{ZENS} catalogue we publish all environmental diagnostics as well as the galaxy structural and photometric measurements described in companion \emph{ZENS} papers II and III.
|
\label{sec:introduction} The study of the effect of the environment on the evolution of galaxies is beset by a number of difficulties that have made it hard to define a single coherent picture and to isolate the main physical processes. It has been clear for many years that both the mass and the environment of a galaxy affect its evolution and its appearance today. Since the pioneering work of e.g., \citet{Oemler_1974}, \citet{Dressler_1980}, \citet{Postman_Geller_1984}, many studies have highlighted clear trends between different observational diagnostics of evolution such as stellar absorption line strengths, color or morphology and either galactic mass or environment or both \citep[e.g.][]{Carollo_et_al_1993,Balogh_et_al_1999,Goto_et_al_2003,Blanton_et_al_2005,Zehavi_et_al_2002, Weinmann_et_al_2006,Weinmann_et_al_2009,Croton_et_al_2005, Park_et_al_2007,Kovac_et_al_2010,Peng_et_al_2010,Cooper_et_al_2010,Peng_et_al_2012,Wetzel_et_al_2012,Calvi_et_al_2012,Woo_et_al_2013}, but the detailed phenomenology, as well as physical understanding, remain unclear. This can be traced to several complicating factors or difficulties. First, there are a number of galactic properties that are relevant for defining a galaxy's evolutionary state. Galaxy evolution may be traced by changes in the star-formation rates (SFRs) of galaxies \citep[e.g.][]{Lilly_et_al_1996, Madau_et_al_1996, Chary_Elbaz_2001,Rodighiero_et_al_2010,Wetzel_et_al_2012,Woo_et_al_2013}, leading to differences in the integrated stellar populations, and therefore in spectral properties and colors of galaxies (e.g. \citealt{Carollo_Danziger_1994,Carollo_et_al_1997,Masters_et_al_2010,Bundy_et_al_2010}; perhaps modified by the effects of dust; e.g. \citealt{Labbe_et_al_2005,Williams_et_al_2009,Wolf_et_al_2009}). Galaxy evolution may also be manifested by changes in the morphologies of galaxies, both in terms of the overall structural morphology of bulge-to-disk ratios and the structural properties of each component \citep[][among others]{Carollo_et_al_1998,Carollo_1999,Brinchmann_Ellis_2000,Carollo_et_al_2007,Kovac_et_al_2010,Oesch_et_al_2010,Feldmann_et_al_2011,Skibba_et_al_2012,Calvi_et_al_2012,Cooper_et_al_2012,Raichoor_et_al_2012}, and also in the appearance of features such as spiral arms or bars. Color and morphology clearly broadly correlate within the nearby galaxy population, but with a significant and poorly understood scatter \citep{Strateva_et_al_2001}. Morphology and color may reflect different aspects of a single evolutionary sequence, or may reflect the outcome of quite different physical processes that may conceivably occur either synchronously or asynchronously. Many previous studies have focused on just one or the other side of this color-morphology duality. A comprehensive picture is likely to require the simultaneous treatment of all such physically relevant properties. Second, with both mass and environment, it is not clear exactly \emph{which} mass or environment is likely to be the most relevant for ``centrals'', i.e., galaxies which appear to dominate their halos, and ``satellites'', i.e. galaxies which orbit another more massive galaxy within a single dark matter halo \citep[e.g.][]{Cooper_et_al_2005,DeLucia_et_al_2012,Haas_et_al_2012,Muldrew_et_al_2012}.
Observationally, the existing stellar mass of a galaxy is the most easily accessible, but the physical driver of the evolution could be the mass of the dark matter halo of a galaxy, or, in the case of satellite galaxies, the mass of the dark matter halo in which the galaxy resides, leading to an environment-like measure of mass. Similarly, the environment that could influence the evolution of a galaxy could reflect either very local effects, e.g. the location of a galaxy in a dark matter halo, or the interaction with nearby neighbors through the mass of the dark matter halo (as above), or the broader environment beyond the halo, as defined by the cosmic web of filaments and voids. Clearly some of the definitions of environment are closely linked to the mass of a galaxy, especially for galaxies which dominate their dark matter halos. Even for galaxy stellar mass we could imagine some direct crosstalk between it and environment if the stellar mass function of galaxies were itself dependent on environment (\citealt{Bundy_et_al_2006}; \citealt{Baldry_et_al_2006}; \citealt{Bolzonella_et_al_2010}; \citealt{Kovac_et_al_2010}), necessitating the careful isolation of these two variables. A recent analysis in the Sloan Digital Sky Survey (SDSS; \cite{York_et_al_2000}) of the three-way relationships between color, stellar mass and environment, the latter defined simply in terms of a 5th-nearest galaxy-neighbor density, reveals some interesting simplicities within the galaxy population \citep{Peng_et_al_2010}. Not least, the effects of environment and stellar mass on the fraction of galaxies that are observed to be red (the ``red fraction'') are straightforwardly separable, in the sense that the chance that a given galaxy is red is the product of two functions, one of mass independent of environment, and the other of environment independent of mass. This led Peng et al.\ to identify two separate physical processes, termed ``mass-quenching'' and ``environment-quenching''. A conclusion of this analysis was that for galaxy stellar masses below $\sim10^{10}M_\odot$ the effects of the environment dominate, while above $\sim10^{11}M_\odot$ the galaxy population is dominated by the effects of merging, which again is environmentally determined. The differential effects of galactic stellar mass and environment can be most clearly seen in the $\sim10^{10-11}M_\odot$ galaxy population. \citet{Peng_et_al_2012} extended their original formalism to the central-satellite dichotomy of galaxies, using a large group catalogue \citep{Yang_et_al_2005, Yang_et_al_2007}. Although the characteristics of mass- and environment-quenching were identified, their physical origin remains uncertain. It also remains unclear whether morphological transformations are causally connected with, and whether they anticipate or lag behind, the spectrophotometric transformations which shift blue, star forming galaxies onto the red sequence of bulge-dominated systems \citep[e.g.][]{Arnouts_et_al_2007,Faber_et_al_2007,Pozzetti_et_al_2010,Feldmann_et_al_2010,Feldmann_et_al_2011}. Many processes can lead to the disruption of disks and quenching of star formation, e.g., galaxy mergers or tidal interactions \citep[e.g.][and references therein]{Park_et_al_2007}, ram pressure stripping of cold gas \citep[][but see also Rasmussen et al. 2008]{Gunn_Gott_1972,Feldmann_et_al_2011}, or ``strangulation'' of the galactic system by removal of the hot and warm gas necessary to fuel star formation \citep{Larson_et_al_1980,Balogh_et_al_2000,Font_et_al_2008,Rasmussen_et_al_2012}. In a hierarchical picture, a gaseous disk can be re-accreted around pre-made spheroids at relatively late epochs. This evolutionary path is observed in high-resolution cosmological hydrodynamical simulations \citep{Springel_Hernquist_2005,Feldmann_et_al_2010}. The intermediate-mass scales of galaxy groups, which are the most common environments of $\sim L^*$ galaxies in the local Universe (\citealt{Eke_et_al_2004}), have a reputation for being the place where environmental drivers of galaxy evolution should be at their peak efficiency. With an in-spiral timescale of dynamical friction that varies in proportion to $\sigma^3/\rho$, with $\sigma$ and $\rho$ the dark matter halo velocity dispersion and density, respectively, galaxy tidal interactions and mergers should take place on a cosmologically short timescale in group potentials with relatively low velocity dispersion, unlike the most massive galaxy clusters where the velocity dispersion is much higher. Also, with ram pressure efficiency varying as $\rho_{igm}v^2$, with $\rho_{igm}$ and $v$ respectively the density of the intergalactic/intragroup medium (IGM) and the relative velocity of the galaxy with respect to the IGM, galaxies may well begin to lose their gas already at the intermediate environmental densities typical of galaxy groups \citep{Rasmussen_et_al_2006,Rasmussen_et_al_2008}. Resulting internal dynamical instabilities may also contribute to galaxy evolution, e.g., by fuelling star formation and supermassive black holes in the centers of galaxies (see e.g. \citealt{DiMatteo_et_al_2007,Hopkins_et_al_2008} for a theoretical perspective, and \citealt{Genzel_et_al_1998,Kewley_et_al_2006,Smith_et_al_2007,Silverman_et_al_2011} for some observational evidence) and by establishing feedback loops that affect whole galaxies \citep{Croton_et_al_2006}. These considerations, with the competing timescales illustrated in the sketch below, motivate the present study, termed \emph{ZENS} (the Zurich Environmental Study), where we use a statistically complete sample of 1627 galaxies brighter than $b_J=19.45$, known to be members of 141 nearby groups spanning the mass range between $\sim10^{12.5} M_\odot$ and $\sim10^{14.5} M_\odot$. The \emph{ZENS} sample is complete at stellar masses above $10^{10}\Msol$ for passively evolving galaxies with old stellar populations, and above $10^{9.2}\Msol$ for star forming galaxies. In \emph{ZENS} we aim at simultaneously $(1)$ characterizing the present evolutionary state of galaxies in as broad a way as possible, using both diagnostics based on stellar populations and structural morphology, and $(2)$ studying as broad a range of environments as possible and characterizing the environments in a number of ways that sample different physical scales, and include a careful distinction between central and satellite galaxies. Specifically, in our study we directly compare, at fixed galaxy stellar mass, the dependence of key galaxy population diagnostics on the large-scale environmental (over)density ($\delta_{LSS}$), on the mass of the host group halo ($M_{GROUP}$), and on the location of galaxies within their group halos (the latter expressed in terms of projected distance from the halo center, $R/R_{200}$, with $R_{200}$ the characteristic size of the group) -- maintaining a central-satellite distinction when possible and relevant.
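The competition just described can be made concrete with order-of-magnitude numbers. In the toy comparison below, the velocity dispersions and IGM densities are purely indicative values chosen for illustration (they are not measurements from our sample), and only the ratios between environments matter:
\begin{verbatim}
# Order-of-magnitude comparison of the two scalings quoted above:
#   dynamical-friction in-spiral time  t_df  ~ sigma^3 / rho_dm
#   ram pressure                       P_ram ~ rho_igm * v^2
RHO_DM = 2e-27                                   # ~200 x critical density [g/cm^3]
sigma = {"group": 300e5, "cluster": 1000e5}      # velocity dispersion [cm/s]
rho_igm = {"group": 1e-28, "cluster": 1e-27}     # illustrative IGM densities [g/cm^3]

for env in ("group", "cluster"):
    t_df = sigma[env] ** 3 / RHO_DM              # arbitrary normalization
    p_ram = rho_igm[env] * sigma[env] ** 2
    print(f"{env:8s}: t_df ~ {t_df:.2e} (arb. units), P_ram ~ {p_ram:.2e} dyn cm^-2")
\end{verbatim}
With these numbers, mergers proceed $\sim(1000/300)^3\approx40$ times faster in groups than in clusters, while ram pressure is $\sim100$ times weaker there -- the sense of the argument made above.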
The \emph{ZENS} sample is extracted from the 2-degree Field Galaxy Redshift Survey (2dFGRS; \citealt{Colless_et_al_2001}), which contains nearly 225,000 redshifts for galaxies with $14 < b_J < 19.45$ and a median redshift $z\sim0.11$, with a redshift completeness of $85\pm 5\%$. In combination with a dynamic range of five magnitudes at each redshift, the 2dFGRS is the ideal basis for constructing a homogeneous catalogue of nearby galaxies in a wide range of environments. We have followed up the \emph{ZENS} sample with deep $B$ and $I$ WFI imaging at the ESO/2.2m to derive, for all galaxies in the sample, detailed properties of substructure such as bulges, disks, bars and tidal tails. The wealth of data on the \emph{ZENS} groups enables us to define very carefully the nature of each group, including its likely dynamical state (relaxed or unrelaxed), to do a careful group-by-group identification of the most likely dominant member, and to derive accurate photometric and structural measurements for galactic subcomponents (disks, bulges and bars) -- all analyses unaffected by distance, size, magnitude, mass, type and other biases, which often complicate the interpretation of comparisons of independent studies published in the literature. In this first paper in the \emph{ZENS} series: $(i)$ We describe the \emph{ZENS} design and database (Section \ref{design}); $(ii)$ We present our definitions and calculations of the four environmental parameters $\delta_{LSS}$, $M_{GROUP}$, $R/R_{200}$ plus the central-satellite distinction (Section \ref{sec:environs}). Specifically, in this section we detail the approaches that we adopt to identify central and satellite galaxies, and thus the centers of the groups, and to measure a large-scale structure (over)density proxy which, at relatively low group masses and in contrast with the often-used Nth-nearest-galaxy estimators, is independent of the richness and mass of the host group halos. We furthermore quantify how random and systematic errors in the computation of each environmental parameter affect the studied trends of galaxy properties with environment; $(iii)$ We publish the \emph{ZENS} catalogue (Section \ref{sec:ZENS_catalogue}) which lists, for every galaxy in the sample, the environmental parameters derived in this paper, as well as structural (from \citealt{Cibinel_el_al_2013a}, Paper II) and spectrophotometric measurements (from \citealt{Cibinel_el_al_2013b}, Paper III). The structural measurements are corrected for magnitude-, size-, concentration-, ellipticity-, and PSF-dependent biases; $(iv)$ We discuss our classification of groups into dynamically `relaxed' and `unrelaxed' systems (Section \ref{sec:relaxedUnrelaxed}), and briefly investigate whether their galaxy members, both centrals and satellites, differ in fundamental structural (size), star formation (specific star formation rate, sSFR, and surface density of star formation rate, $\Sigma_{SFR}$) and optical $(B-I)$ properties (see also Appendix \ref{newtable}). Finally, $(v)$ we summarize our main points in Section \ref{sec:endfinally}.
In Appendices \ref{App:2dFGRS_limits}, \ref{App:missedgals}, \ref{App:densityTests} and \ref{App:readmecat} we present details on, respectively, $(a)$ the impact on our study of the 2dFGRS magnitude limits in the \emph{ZENS} fields, $(b)$ the impact of `missed' galaxies, either by the 2dFGRS, or by the new $B$ and $I$ ESO 2.2m/WFI imaging for the \emph{ZENS} sample, $(c)$ 2PIGG incompleteness in group membership, $(d)$ additional tests on the robustness of our fiducial LSS density estimates and the comparison with traditional Nth-nearest-galaxy estimators, and, finally, $(e)$ the \emph{Readme} file of the published \emph{ZENS} catalogue. For the relevant cosmological parameters we assume the following values: $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$ and $h=0.7$. Unless otherwise stated, group masses and luminosities are given in units of $\Msol$ and $\Lsol$, i.e., we incorporate the value $h=0.7$ in the presentation of our results. All magnitudes are in the AB system. These choices are also adopted in \citet{Cibinel_el_al_2013a,Cibinel_el_al_2013b}, which present respectively the structural and photometric measurements included in the catalogue associated with this paper.
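For illustration, here is a minimal sketch of the kind of projected Nth-nearest-neighbour (over)density estimate discussed above, using group centers as tracers so that, as in our fiducial estimator, the measurement does not scale with the richness of the host group; the choice $N=5$, the uniform random normalization, and the toy geometry are simplifying assumptions, not our actual pipeline:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def sigma_n(tree, points, n, self_counted=False):
    """Projected surface density Sigma_N = N / (pi d_N^2), with d_N the
    distance to the Nth nearest tracer; if the points are themselves
    tracers, the trivial zero-distance self-match is skipped."""
    k = n + 1 if self_counted else n
    d, _ = tree.query(points, k=k)
    return n / (np.pi * d[:, -1] ** 2)

def delta_lss(tracers_xy, rng=None, n=5, n_random=100000):
    """delta = Sigma_N / <Sigma_N> - 1, where <Sigma_N> is estimated at
    random positions within the tracer bounding box."""
    rng = np.random.default_rng(rng)
    tree = cKDTree(tracers_xy)
    lo, hi = tracers_xy.min(axis=0), tracers_xy.max(axis=0)
    randoms = rng.uniform(lo, hi, size=(n_random, 2))
    mean_sigma = sigma_n(tree, randoms, n).mean()
    return sigma_n(tree, tracers_xy, n, self_counted=True) / mean_sigma - 1.0

# toy example: 500 'group centers' in a 100 x 100 Mpc projected box
centers = np.random.default_rng(42).uniform(0.0, 100.0, size=(500, 2))
print("median delta_LSS of a random field:", np.median(delta_lss(centers, rng=1)))
\end{verbatim}
Because the tracers here are group centers rather than individual galaxies, a rich group and a poor group at the same location receive the same $\delta_{LSS}$, which is the decoupling between LSS density and group mass/richness emphasized in the text.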
|
\label{sec:endfinally} Motivated by the picture that both the mass of a galaxy and its immediate and distant environment may impact how the galaxy evolves and its redshift-zero properties, and by the uncertainty as to which mass and which environment are the relevant ones to galactic life, we undertake the \emph{ZENS} project, which uses new and archival multi-wavelength data for a statistically complete sample of 1627 galaxies brighter than $b_J=19.45$ that are members of 141 $\sim10^{12.5-14.5} M_\odot$, $0.05<z<0.0585$ groups. The emphasis of \emph{ZENS} is to explore the dependence of key galaxy population diagnostics on the large-scale environment, on the mass of the host group halo, on the location of galaxies within their group halos, and on the central/satellite rank of a galaxy within its host group halo. The \emph{ZENS} sample is extracted from the 2PIGG catalogue of the 2dFGRS. We publish the \emph{ZENS} catalogue, which combines the environmental diagnostics computed in this article with the structural and spectrophotometric galactic measurements described in \citealt{Cibinel_el_al_2013a,Cibinel_el_al_2013b}. In this first paper, introducing the project, we have described the improved algorithms adopted to define the group centers, to rank galaxies as centrals or satellites in their host groups, and to separate the effects on galaxies of group mass and LSS density. Specifically: $(i)$ We have introduced a three-faceted self-consistency criterion for identifying central galaxies. These must, simultaneously, be the most massive galaxies in the group within the error bars estimated for the galaxy stellar masses, and must be consistent with being the spatial and dynamical centers of the host groups. $(ii)$ We have adopted an Nth-nearest {\it group}-neighbors computation to estimate the LSS density underlying the groups, which, especially at group masses $M_{GROUP}<10^{\sim13.5} M_\odot$ and in contrast with the commonly used Nth-nearest {\it galaxy}-neighbors approach, is independent of group mass/richness and enables us to study separately the effects of these two distinct environments on galaxy properties. Furthermore, we have used simulations, also based on semi-analytic models of galaxy evolution, to quantify the intrinsic uncertainties in the trends of galaxy properties with the environmental parameters that are propagated from the random and systematic errors in these parameters. We have found that at least $\sim$60$\%$ of groups are dynamically-relaxed systems with a well-identifiable central galaxy that satisfies the stringent criterion above -- and thus a well-defined center of the group. These groups enable a robust investigation of galaxy properties with group-centric distance down to the smallest group masses sampled in \emph{ZENS}. In the remaining $\sim40\%$ of groups there is no galaxy which satisfies the required criteria to be a central galaxy -- and thus the center of the group potential well. We estimate that a non-negligible fraction of these -- up to of order $\sim10-15\%$ of groups in the total \emph{ZENS} sample -- are likely genuinely dynamically young, possibly merging groups. At constant stellar mass, central galaxies in relaxed and unrelaxed groups have similar color and star formation properties, although they show larger sizes in relaxed versus unrelaxed groups. Centrals in unrelaxed groups have sizes comparable to satellite galaxies of similar masses.
These results may partly arise from the misclassification of satellite galaxies as centrals in groups labelled as unrelaxed in our analysis, for which the identification of the dynamical state and of the central galaxy might be hampered by observational errors. We estimate that in about two-thirds of nominally `unrelaxed' groups, the lack of identification of a self-consistent central galaxy has its roots in the incomplete spectroscopic and photometric coverage of the 2PIGG and 2dFGRS surveys, respectively. Therefore, our use of the term `unrelaxed' should be read as highlighting the important fact that the alleged central galaxies, and thus the centers in these groups, should be handled with care. Partly, however, the lack of dependence of central-galaxy properties on the nominal dynamical state of the group may be evidence that the properties of central galaxies are shaped by their own mass content and not by their group environment, with the exception of a growth in size in dynamically-relaxed halos due to secular accretion of smaller satellites. Over the whole galaxy mass range of our study, satellites have indistinguishable physical properties (in terms of sizes, optical colors, sSFRs and $\Sigma_{SFR}$), independent of whether they are hosted by relaxed or unrelaxed groups. Furthermore, relaxed and unrelaxed groups appear to have similar gas-to-star conversion efficiencies, which, as found in other studies, peak around the $10^{12.5} M_\odot$ halo mass; this suggests that the efficiency of conversion of gas into stars within halos may be largely independent of the dynamical state of the group. A more detailed investigation of this important issue is postponed to a future dedicated paper. The only possible difference between relaxed and unrelaxed potentials is a very modest shift towards redder $(B-I)$ colors for $<10^{10} M_\odot$ satellites in relaxed relative to unrelaxed groups. A possible explanation is that, at the higher masses, satellites are either unaffected by the group environment, or they reach their final state as they first enter the potential of a relatively small group, with subsequent group-group mergers having no further impact on their properties (see also \cite{DeLucia_et_al_2012} for theoretical support of this scenario). The marginally redder color of low-mass satellites in relaxed relative to unrelaxed groups may be due to satellites having orbited for longer times within the former than within the latter, or to quenching of star formation in these systems by processes that are active, or at least most efficient, in relaxed group potentials. Independent studies also point to quenching of low-mass satellites by physical processes acting within virialized halos. In future \emph{ZENS} analyses we will investigate whether, and how, including or excluding the unrelaxed groups from any given specific diagnostic impacts our main conclusions, and will explicitly comment on this where relevant.
| 12
| 6
|
1206.5807
|
1206
|
1206.6374_arXiv.txt
|
One of the greatest problems of primordial inflation is that the inflationary space-time is past-incomplete, mainly because Einstein's GR suffers from a space-like Big Bang singularity. It has recently been shown that ghost-free, non-local higher-derivative ultra-violet modifications of Einstein's gravity may be able to resolve the cosmological Big Bang singularity via a non-singular bounce. Within the framework of such non-local cosmological models, we study both sub- and super-Hubble perturbations around an inflationary trajectory which is preceded by the Big Bounce, and demonstrate that the inflationary trajectory has an ultra-violet completion and that the perturbations do not suffer from any pathologies.
|
Primordial inflation is one of the most compelling theoretical explanations for creating the seed perturbations for the large scale structure of the universe and the temperature anisotropy in the cosmic microwave background radiation~\cite{WMAP}. Since inflation dilutes everything, the end of inflation is also responsible for creating the relevant matter required for Big Bang Nucleosynthesis, besides matching the perturbations observed in the CMB; see for instance~\cite{Allahverdi:2006iq,Allahverdi:2011aj}, where inflation has been embedded completely within a supersymmetric Standard Model gauge theory which satisfies all the observed criteria. For a recent review of various models of inflation, see Ref.~\cite{Mazumdar:2010sa}. In spite of the great successes of inflation, it still requires an ultra-violet (UV) completion. As it stands, within Einstein's gravity inflation cannot alleviate the Big Bang singularity problem; rather, it pushes the singularity backwards in time~\cite{Borde,Linde}. In a finite time in the past the particle trajectories in an inflationary space-time abruptly end, and therefore the inflationary trajectory cannot be made past-eternal~\cite{Guth}. Within the framework of Friedmann--Lema\^{i}tre--Robertson--Walker (FLRW) cosmology, this can be addressed by ensuring that the inflationary phase is preceded either by an ``emergent'' phase where the space-time asymptotes to a Minkowski space-time in the infinite past~\cite{emerge}, or by a non-singular bounce where a prior contraction phase gives way to expansion. Unfortunately, within the context of General Relativity (GR), as long as the average expansion rate in the past is greater than zero, i.e. $H_{av}>0$, the standard {\it singularity theorems} due to Hawking and Penrose hold, and the fluctuations grow as the universe approaches the singularity in the past, which inevitably leads to a collapse of the space-time~\cite{Hawking}, commonly referred to as the Big Bang (see also~\cite{Guth}). Thus, in order to provide a geodesic completion to inflation one needs to modify GR.\footnote{For general reviews of modified gravity cosmological models see for example~\cite{Koivisto-rev}.} There has been some recent progress in ameliorating the UV properties of gravity in the context of a class of ``mildly'' non-local~\footnote{By non-local we only mean that the Lagrangian contains an infinite series of derivatives and hence can only be determined if one knows the function around a finite region surrounding the space-time point in question. This property also manifests in the way one is often able to replace the infinite differential equation of motion with an integral equation~\cite{Zwiebach}. The theories however are only mildly non-local in the sense that the physical variables and observables are local, unlike some other theories of quantum gravity/field theory whose fundamental variables are themselves non-local, being defined via integrals.} higher derivative theories of gravity~\cite{BMS,BKM,BGKM}, involving terms that are quadratic in the Riemann curvature but contain all orders in higher derivatives. It was found that such an extension of Einstein's gravity renders gravity {\it ghost-free} and {\it asymptotically free} in four-dimensional Minkowski space-time.\footnote{For attempts along these lines with a finite set of higher derivative terms, see~\cite{Stelle,Zwiebach85,deser}.
However, in four dimensions they were not successful in avoiding ghosts and obtaining asymptotic freedom simultaneously, due essentially to the Ostrogradski constraint~\cite{ostrogradski}.} As a result the ordinary massless graviton sees a modified interaction and propagation at higher energies, but at low energies in the infra-red (IR) the theory exactly reproduces the properties of Einstein's gravity. It has been demonstrated that such a modification of Einstein's gravity ameliorates the UV behaviour of the Newtonian potential, and also permits symmetric non-singular oscillating solutions around the Minkowski space-time~\cite{BGKM}, signalling that the action can resolve cosmological singularities; see also~\cite{BMS,BKM,linearized}. Similar non-local higher derivative theories of gravity have also been considered in connection with black holes~\cite{warren}, inflationary cosmology~\cite{inflation}, string-gas cosmology~\cite{sgc} and in efforts to understand the quantum nature of gravity~\cite{nlgravity}.\footnote{For investigations along these lines in the broader class of non-local gravity theories which include non-analytic operators such as $1/\Box$, see~\cite{One_over_Box,oobBI} and references therein.} There was also a proposal to solve the cosmological constant problem by a non-local modification of gravity~\cite{ArkaniHamed:2002fu} (see also~\cite{Barvinsky2003}). What is rather intriguing is that non-local actions with an infinite series of higher derivative terms, organized as a series expansion in $\alpha'$ (the inverse string tension), have also appeared in the string literature. One mostly finds them in the open string sector, notably in toy models such as p-adic strings~\cite{padic_st}, zeta-strings~\cite{ZS}, strings on a random lattice~\cite{douglas,marc}, and in the open string field theory action for the tachyon~\cite{sft}. For more general aspects of string field theory see for instance~\cite{sft_review}. There has also been progress in understanding the quantum properties of such theories, \ie how to obtain a loop expansion in the string coupling constant $g_s$, which has led to some surprising insights into stringy phenomena such as thermal duality~\cite{BJK}, phase transitions~\cite{abe} and Regge behavior~\cite{marc}. Deriving such an action in the {\it closed string} sector involving gravity remains, for now, wishful thinking.\footnote{We note that significant progress has been made in studying non-local stringy scalar field models coupled minimally and non-minimally to the Einstein gravity \cite{Non-local_scalar}-\cite{KV}.} In Ref.~\cite{BMS}, by means of an exact solution, it was shown how a subset of the actions proposed more recently in Ref.~\cite{BGKM} can ameliorate the Big Bang cosmological singularity of the FLRW metric. The action discussed in~\cite{BMS} contained only the Ricci scalar and its derivatives up to arbitrary orders, and yielded a non-singular bouncing solution of the {\it hyperbolic cosine} type. However, it was not clear how generic these solutions were, whether they were stable under small perturbations, and whether these theories contain other, singular solutions. Some of these issues, in particular concerning only {\it time dependent} fluctuations around a bouncing solution, were addressed in Ref.~\cite{BKM}, where the authors analysed the stability of the background trajectory for very long wavelength (super-Hubble) perturbations, either when the cosmological constant is positive, i.e. $\Lambda >0$, or negative, i.e.
$\Lambda <0$.\footnote{Our preliminary investigations indicate that when $\La=0$, the higher derivative modifications allow one to realize the emergent universe scenario.} The latter provided a way to construct cyclic cosmologies, and in particular the cyclic inflationary scenarios that have been studied in~\cite{CI1}, and in \cite{CI2} where cosmological imprints were also discussed. In this paper we focus on the case $\Lambda >0$. This case is particularly interesting from the point of view of inflation, as it paves the way for a {\it geodesically} complete paradigm of inflation in the past and future. It was found that there exists a de Sitter solution as an attractor at large enough times before and after the bounce. The weakening of gravity in the UV regime facilitates the bounce for the FLRW metric. Our main goal will be to study what happens to the non-singular background solution under arbitrary classical and quantum fluctuations (super- and sub-Hubble type) during the bounce period. We have a two-fold aim. First, we want to verify that at late times there are no fluctuations that grow during inflation; such growth would clearly signal an instability in the system, which would jeopardize the inflationary mechanism of creating super-Hubble fluctuations that become constant at late times. Second, we want to check that the bounce mechanism itself is robust, \ie that the perturbations do not grow unboundedly near the bounce. In the process of our investigations, we develop techniques to deal with the non-local perturbation equations that one encounters in these theories, which we believe is a major achievement of the paper. We hope that this will help in analyzing other cosmological issues in the future. Finally, let us comment on a rather interesting feature we noted. It turns out that the requirement of the bounce automatically introduces an extra scalar degree of freedom, and if this scalar is light, it may be able to play the role of a curvaton \cite{Mazumdar:2010sa}. We leave a more detailed phenomenological investigation of such a scenario for future work. The paper is organized as follows: In section~\ref{sec:background}, we review some of the results for non-local gravity models and generalize some of the techniques previously used to find exact solutions. In section~\ref{sec:perturbations}, we derive the non-local perturbation equations for the Bardeen potentials around a large class of cosmological background solutions. Next, in section~\ref{sec:late-time} we proceed to solve these equations around the ``inflationary'' de Sitter late-time attractor solutions that exist in these models. In particular, we obtain the criterion under which the perturbations freeze out or decay after crossing the Hubble radius, which allows one to maintain the inflationary mechanism of producing scale-invariant CMB fluctuations. Thereafter, in section~\ref{sec:bounce} we examine the stability of the fluctuations around the bounce. Finally, in section~\ref{sec:conclusion}, we conclude by summarizing our results and discussing their implications for inflationary cosmology.
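For orientation, the {\it hyperbolic cosine} bounce referred to above can be summarized schematically as (a minimal sketch; the precise relation between the bounce scale $\lambda$ and the parameters of the non-local action is model dependent and is not fixed here)
\[
a(t) = a_0 \cosh(\lambda t)\,, \qquad H(t) \equiv \frac{\dot{a}}{a} = \lambda \tanh(\lambda t)\,,
\]
so that $H(0)=0$ with $\dot{H}(0)=\lambda^2>0$ at the bounce, while $H \to \pm\lambda$ as $t \to \pm\infty$, i.e. the solution interpolates between contracting and expanding de Sitter phases, consistent with the late-time attractor behaviour discussed above.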
|
\label{sec:conclusion} In this paper we studied the perturbations of a special class of string-motivated higher derivative extensions of Einstein's gravity, which are well known to ameliorate the Big Bang cosmological singularity of the FLRW metric. More specifically, our action contains the Ricci scalar but is non-local in the sense that it contains higher derivative terms up to all orders. It can yield non-singular bouncing solutions similar to the {\it hyperbolic cosine} bounce first found in~\cite{BMS}. The higher derivatives lead to a weakening of gravity in the UV regime, which facilitates the bounce for the FLRW metric in the presence of non-ghost-like radiation and the cosmological constant. For the first time, we analyzed the scalar perturbation equations for such non-local higher derivative theories of gravity in complete generality. We provided the general non-local equations that the two Bardeen potentials satisfy, and studied the evolution of generic fluctuations in the relativistic fluid and the scalar potentials during the bounce as well as at late times corresponding to the de Sitter attractor solution. Although we have not included the dynamics and perturbations of the inflaton field, our analysis assessed the robustness of the inflationary scenario in terms of perturbations. We were able to arrive at a simple criterion under which the modes (small and large $k$) pass through the bounce phase smoothly, and consequently decay during the de Sitter phase. This means that, at least at the classical level, no pathologies are present during the evolution. We also noted the possibility of realizing a curvaton scenario, provided the extra scalar degree of freedom present in these models is light. One issue we have not discussed in this paper is anisotropic deviations from the bouncing background. This requires looking at homogeneous (time dependent) but anisotropic Bianchi~I type models. (The isotropic homogeneous mode was already considered in~\cite{BKM}, where issues of stability were discussed.) This is an important issue for determining the robustness of the bouncing mechanism, as bouncing/cyclic models are notoriously plagued by Mixmaster-type chaotic behavior related to the growth of anisotropies in a contracting universe. Generalizing the ansatz (\ref{ansatz2}) applied in this paper to arbitrary metrics (potentially inhomogeneous and anisotropic) should help in addressing this issue, but we leave this for future considerations. For stability analyses related to anisotropies in other non-local gravity and string-inspired models, see \cite{BI,oobBI} and references therein. It is probably worth pointing out, though, that anisotropies are expected to be largest at the bounce point, and thus if there is a ``smooth-enough'' bouncing patch, anisotropies are only going to decrease at large times in the past or future. For simplicity, some of our analysis was done for the {\it hyperbolic cosine} bounce, but we discussed how the results should generalize to other bouncing solutions. This strongly suggests that the higher derivative actions of gravity considered in this paper do indeed yield a non-singular bounce without pathologies, and in doing so can provide a model of inflation that is geodesically complete, both in the past and the future.
| 12
| 6
|
1206.6374
|
1206
|
1206.4692_arXiv.txt
|
We present IRAM Plateau de Bure Interferometer observations of the $^{12}$CO\,(3--2) emission from two far-infrared luminous QSOs at $z\sim $\,2.5 selected from the {\it Herschel}-ATLAS survey. These far-infrared bright QSOs were selected to have supermassive black holes (SMBHs) with masses similar to those thought to reside in sub-millimetre galaxies (SMGs) at $z\sim$\,2.5; making them ideal candidates as systems in transition from an ultraluminous infrared galaxy phase to a sub-mm faint, unobscured, QSO. We detect $^{12}$CO\,(3--2) emission from both QSOs and we compare their baryonic, dynamical and SMBH masses to those of SMGs at the same epoch. We find that these far-infrared bright QSOs have similar dynamical but lower gas masses than SMGs. In particular we find that far-infrared bright QSOs have $\sim$\,50\,$\pm$\,23\% less warm/dense gas than SMGs, which combined with previous results showing the QSOs lack the extended, cool reservoir of gas seen in SMGs, suggests that they are at a different evolutionary stage. This is consistent with the hypothesis that far-infrared bright QSOs represent a short ($\sim 1$\,Myr) but ubiquitous phase in the transformation of dust obscured, gas-rich, starburst-dominated SMGs into unobscured, gas-poor, QSOs.
|
\label{sec:intro} An evolutionary link between local ultraluminous infrared galaxies (ULIRGs, with far-infrared [FIR] luminosities of L$_{\rm FIR}\geq 10^{12}$\,L$_\odot$) and QSOs\footnote{QSO, quasi-stellar object, defined as having M$_{\rm B } \leq -22$ and one or more broad emission lines with a width $\geq 1000$\,km\,s$^{-1}$} was first suggested by \citet{Sanders88}, and considerable effort has been expended to test this connection in the local Universe (e.g.\ \citealt{Tacconi02,Veilleux09}). This hypothesis has been strengthened by the discovery of a relation between the mass of supermassive black holes (SMBHs) and the mass of their host spheroids (e.g.\ \citealt{Magorrian98,Gebhardt00,Ferrarese00}), which suggests a physical connection between the growth of spheroids and their SMBHs. As shown by theoretical simulations, such a link can be formed through the suppression of star formation in a host galaxy by winds and outflows from an active galactic nucleus (AGN) at its centre \citep{DiMatteo05,Hopkins05}. Both the ULIRG and QSO populations evolve very rapidly with redshift, and although subject to selection biases, intriguingly both populations appear to reach a broad peak in their activity at $z\sim 2.5$ \citep{Shaver88, Chapman05, Wardlow11}. The high-redshift ULIRGs, which are typically bright at sub-millimetre wavelengths, and hence are called sub-millimetre galaxies (SMGs), are thought to represent the formation of massive stellar systems through an intense burst of star formation in the early Universe, triggered by mergers of gas-rich progenitors~\citep{Frayer98,Blain02,Swinbank06b,Swinbank10,Tacconi06,Tacconi08, Engel10}. The similarity in the redshift distributions of the SMGs and QSOs may indicate that the evolutionary link between these populations, which has been postulated locally, also holds at $z\sim 2$, when both populations were much more numerous and significant in terms of stellar mass assembly. Indeed, recent work on the clustering of $z\sim 2$ QSOs and SMGs has shown that both populations reside in parent halos of a similar mass, M$_{\rm halo} \sim 10^{13}$\,M$_\odot$ \citep{Hickox12}, adding another circumstantial connection between them. Moreover, this characteristic M$_{\rm halo}$ is similar to the mass at which galaxy populations transition from star forming to passive systems (e.g.\ \citealt{Brown08,Coil08,Hartley10}), which may be indirect evidence for the influence of QSO feedback on star formation in galaxies~\citep{Granato01,Lapi06}. One other piece of circumstantial evidence for a link between SMGs and QSOs comes from estimates of the masses of SMBHs in SMGs. Due to the dusty nature of SMGs these studies are inherently challenging, yielding results with more scatter than seen for optical quasars. However, they suggest that the SMBH masses are $\sim10^8$\,M$_\odot$, significantly lower than seen in comparably massive galaxies at the present day \citep{Alexander08}. This indicates the need for a subsequent phase of SMBH growth, which could be associated with a QSO phase. One puzzling issue with the proposed evolutionary connection between high-redshift QSOs and SMGs is that only a small fraction of sub-millimetre-detected galaxies (S$_{850} \gs 5$\,mJy) are identified as optically luminous, M$_{\rm B} \leq -22$, $z\sim 1$--3 QSOs ($\sim 2$\%, \citealt{Chapman05,Wardlow11}), indicating that, if they are related, then the QSO and ULIRG phases do not overlap significantly. 
Taken together, these various results provide support for an evolutionary link between high-redshift ULIRGs (SMGs) and QSOs, with a fast transition ($\sim 1$\,Myr) from the star-formation-dominated SMG-phase to the AGN-dominated QSO-phase (e.g.\ \citealt{Page12}), where rapid SMBH growth can subsequently account for the present-day relation between spheroid and SMBH masses. In this scenario, when a QSO {\it is} detected in the far-infrared/sub-millimetre it will be in the transition phase from an SMG to an unobscured QSO, making its physical properties (e.g.\ gas mass, dynamics) a powerful probe of the proposed evolutionary cycle (e.g.\ \citealt{Page04,Page11,Stevens05}). In this paper we present a study with the IRAM Plateau de Bure interferometer (PdBI) of the cold molecular gas in far-infrared-selected ``transition'' candidate QSOs from the {\it Herschel}\,\footnote{Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA} H-ATLAS survey \citep{Eales10}. We detect $^{12}$CO\,(3--2) in both far-infrared bright QSOs studied, and compare the gas and kinematic properties of these to other $^{12}$CO-detected QSOs and SMGs to relate their evolution. Throughout we adopt cosmological parameters from \citet{Spergel03} of: $\Omega_m =0.27$, $\Omega_\Lambda = 0.73$ and H$_0=71$\,km\,s$^{-1}$\,Mpc$^{-1}$. \begin{figure} \centerline{ \psfig{figure=full_sdss.ps,width=0.46\textwidth}} \caption{The 2SLAQ UV rest frame spectra for both QSOs in our sample from \citet{Croom09}, with the 2QZ composite QSO spectrum overlaid (red; \citealt{Croom02}). From left-to-right, a dashed line indicates the rest wavelength of Ly$\alpha$, N{\sc v}, Si{\sc iv}, C{\sc iv} and C{\sc iii}] emission. Both QSOs, but especially J0911$+$0027, have relatively weak Ly$\alpha$\,1215 emission when compared to C{\sc iv}\,1549, suggesting the presence of significant quantities of neutral gas in their vicinity (see also \citealt{Omont96}). For each QSO we estimate the SMBH mass from the FWHM of the C{\sc iv}\,1549 line and the rest frame 1350\AA\ luminosity \citep{Vestergaard06}. The FWHM of the C{\sc iv} line is derived from the best fitting Gaussian and continuum model (shown in blue), which provides an adequate fit to the emission line in both cases: $\chi^2_{\rm{r}}=1.4$ for J0908$-$0034 and $\chi^2_{\rm{r}}=1.5$ for J0911$+$0027. The rest wavelength scale of the plots is based on the systemic redshifts derived from $^{12}$CO\,(3--2), see section \S~\ref{sec:analysis}.} \label{fig:2slaq} \end{figure}
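As an aside, the single-epoch C{\sc iv} mass estimate referenced in the figure caption above can be sketched as follows (a minimal illustration of the \citet{Vestergaard06} calibration; the input FWHM and continuum luminosity below are hypothetical placeholders, not measurements from this paper):
\begin{verbatim}
def mbh_civ(fwhm_kms, lamL1350_cgs):
    """Single-epoch SMBH mass in Msun from the C IV line, following
    Vestergaard & Peterson (2006):
    log M_BH = 6.66 + log[(FWHM/10^3 km/s)^2
                          (lambda L_1350 / 10^44 erg/s)^0.53]
    """
    return 10**6.66 * (fwhm_kms / 1e3)**2 * (lamL1350_cgs / 1e44)**0.53

# Hypothetical inputs chosen to give M_BH ~ 2e8 Msun, comparable to the
# M_BH <~ 3e8 Msun selection discussed in the text:
print("M_BH ~ %.1e Msun" % mbh_civ(3500.0, 1e45))
\end{verbatim}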
|
The main conclusions from our study are: \begin{itemize} \item We have used IRAM PdBI to search for redshifted $^{12}$CO\,(3--2) emission from two far-infrared bright QSOs at $z\sim 2.5$ selected from the H-ATLAS survey. These QSOs were selected to have SMBH masses of M$_{\rm BH}\leq 3\times 10^8$\,M$_\odot$, which are more comparable to typical SMGs than those in samples of high-redshift far-infrared luminous QSOs previously detected in $^{12}$CO. Our observations detect $^{12}$CO\,(3--2) emission from both QSOs and we derive line luminosities of L$'_{\rm CO(3-2)} = (0.77\pm 0.15) \times 10^{10}$\,K\,km\,s$^{-1}$\,pc$^2$ and L$'_{\rm CO(3-2)} = (1.87\pm 0.33)\times10^{10}$\,K\,km\,s$^{-1}$\,pc$^2$ for J0908$-$0034 and J0911$+$0027 respectively (Table~2). \item Comparing our FIR-bright QSOs (and similar systems from the literature) with SMGs, we find that the QSOs have similar values of L$'_{\rm CO(3-2)} \sin^2 i / {\rm FWHM}^2$, our proxy for gas mass fraction, to SMGs. However, this is subject to a number of assumptions. If we consider that QSOs have a biased inclination angle of 13$\degree$, as has been suggested~\citep{Carilli06}, then the QSOs have a $\sim$85\% lower gas mass fraction, at a significance of 2.0--$\sigma$. In the absence of gas replenishment this is consistent with QSOs being more evolved systems. Furthermore, adopting the appropriate line brightness ratios for each population roughly doubles the gas mass in SMGs, increasing this difference further, and to a significance of 3.6--$\sigma$. So while we see no evidence for evolution in the gas fraction, we require spatially-resolved $^{12}$CO\,(1--0) observations to test this conclusively. \item By comparing the gas consumption timescales, estimated from L$'_{\rm CO(3-2)}$\,/\,L$_{\rm FIR}$, for far-infrared bright QSOs with SMGs, we find that the QSOs have $\sim$\,50\,$\pm$\,23\% lower consumption timescales, at a significance of 1.8--$\sigma$. Adopting appropriate line brightness ratios, of r$_{31}$=1 for QSOs and r$_{31}$=0.52 for SMGs, strengthens this conclusion to 3.1--$\sigma$. We conclude that far-infrared bright QSOs have a lower mass of warm/dense gas (probed directly through $^{12}$CO\,(3--2)). Combined with previous results~\citep{Riechers11d}, showing that QSOs also lack the extended, cool reservoir of gas seen in SMGs, we interpret this as evidence that the far-infrared bright QSOs are at a different evolutionary stage than typical SMGs. In our evolutionary scenario this is consistent with far-infrared bright QSOs being in `transition' from an SMG to a QSO. \item We show that the gas and the SMBH masses in far-infrared bright QSOs and SMGs are consistent with a model where SMGs transform into QSOs on a timescale of $\sim 100$\,Myr. Furthermore, the relative volume densities and expected durations of the SMG and QSO phases are consistent with all SMGs passing through a subsequent QSO phase, and we estimate that the likely duration of the far-infrared bright QSO phase is just $\sim 1$\,Myr. We note that, if necessary, this duration is still sufficient to allow the QSO to influence the star formation and gas reservoirs across the full extent of the host galaxy through 1000-km\,s$^{-1}$ winds and outflows. \end{itemize} The scale of this study is too small to single-handedly prove or disprove an evolutionary link between SMGs and QSOs. However, the data we have obtained provide further evidence supporting the idea originally proposed by \citet{Sanders88} that these populations are linked by an evolutionary sequence. 
We have compared $^{12}$CO\,(3--2) detected QSOs and SMGs through a number of observable quantities, and find that the timescales for gas depletion and SMBH growth needed to link SMGs to these sources are consistent.
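For reference, the conversion from the $^{12}$CO\,(3--2) line luminosities quoted above to gas masses can be sketched as follows (a minimal illustration following Solomon \& Vanden Bout 2005; the brightness ratio r$_{31}=1$ is the QSO value named in the text, while the ULIRG-like conversion factor $\alpha=0.8$ is an assumption here, not a number from this paper):
\begin{verbatim}
def lprime_co(s_dv_jykms, nu_obs_ghz, d_l_mpc, z):
    """Line luminosity L'_CO in K km/s pc^2 from the velocity-integrated
    flux S dv (Jy km/s), observed frequency (GHz), luminosity distance
    (Mpc) and redshift (Solomon & Vanden Bout 2005)."""
    return 3.25e7 * s_dv_jykms * nu_obs_ghz**-2 * d_l_mpc**2 * (1.0 + z)**-3

def gas_mass(lprime_32, r31=1.0, alpha=0.8):
    """M(H2) in Msun: convert CO(3-2) to CO(1-0) via r31, then apply a
    conversion factor alpha in Msun (K km/s pc^2)^-1."""
    return alpha * lprime_32 / r31

# CO(3-2) luminosity quoted above for J0911+0027:
print("M_gas ~ %.1e Msun" % gas_mass(1.87e10))   # ~1.5e10 Msun
\end{verbatim}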
| 12
| 6
|
1206.4692
|
1206
|
1206.0943.txt
|
We measure the two-point angular correlation function of a sample of 4,289,223 galaxies with $r < 19.4$ mag from the Sloan Digital Sky Survey as a function of photometric redshift, absolute magnitude and colour down to $M_r - 5 \log h = -14$ mag. Photometric redshifts are estimated from $ugriz$ model magnitudes and two Petrosian radii using the artificial neural network package ANNz, taking advantage of the Galaxy and Mass Assembly (GAMA) spectroscopic sample as our training set. The photometric redshifts are then used to determine absolute magnitudes and colours. For all our samples, we estimate the underlying redshift and absolute magnitude distributions using Monte-Carlo resampling. These redshift distributions are used in Limber's equation to obtain spatial correlation function parameters from power law fits to the angular correlation function. We confirm an increase in clustering strength for sub-$L^*$ red galaxies compared with $\sim L^*$ red galaxies at small scales in all redshift bins, whereas for the blue population the correlation length is almost independent of luminosity for $\sim L^*$ galaxies and fainter. A linear relation between relative bias and log luminosity is found to hold down to luminosities $L\sim0.03L^*$. We find that the redshift dependence of the bias of the $L^*$ population can be described by the passive evolution model of \citet{TegmarkPeebles1998}. A visual inspection of a random sample of our $r < 19.4$ sample of SDSS galaxies reveals that about 10 per cent are spurious, with a higher contamination rate towards very faint absolute magnitudes due to over-deblended nearby galaxies. We correct for this contamination in our clustering analysis.
|
Measurement of galaxy clustering is an important cosmological tool in understanding the formation and evolution of galaxies at different epochs. The dependence of galaxy clustering on properties such as morphology, colour, luminosity or spectral type has been established over many decades. Elliptical galaxies or galaxies with red colours, which both trace an old stellar population, are known to be more clustered than spiral galaxies (e.g.~\citealt{Davis1976,Dressler1980,Postman1984,Loveday1995,Guzzo1997,Goto2003}). Recent large galaxy surveys have allowed the investigation of galaxy clustering as a function of both colour and luminosity (\citealt{Norberg2002,Budavari2003,zehavi05,Wang2007,McCracken2008,Zehavi2010}). Among the red population, a strong luminosity dependence has been observed whereby luminous galaxies are more clustered, because they reside in denser environments. The galaxy luminosity function shows an increasing faint-end density to at least as faint as $M_r - 5 \log h = -12$ mag (\citealt{Blanton2005, Loveday2012}), thus intrinsically faint galaxies represent the majority of the galaxies in the universe. These galaxies with luminosity $L \ll L^*$ have low stellar mass and are mostly dwarf galaxies with ongoing star formation. However, because most wide-field spectroscopic surveys can only probe luminous galaxies over large volumes, this population is often under-represented. Previous clustering analyses have revealed that intrinsically faint galaxies have different properties to luminous ones. A striking difference appears between galaxy colours in this regime: while faint blue galaxies seem to cluster on a scale almost independent of luminosity, the faint red population is shown to be very sensitive to luminosity \citep{Norberg01,Norberg2002,Zehavi02,Hogg2003,zehavi05,Swanson2008,Zehavi2010,Ross2011}. As found by \citet{zehavi05}, this trend is naturally explained by the halo occupation distribution framework. In this picture, the faint red population corresponds to red satellite galaxies, which are located in high mass halos with red central galaxies and are therefore strongly clustered. Recently, \citet{Ross2011} compiled from the literature bias measurements for red galaxies over a wide range of luminosities for both spectroscopic and photometric data. They showed that the bias measurements of the faint red population are strongly affected by non-linear effects and thus depend on the physical scales over which they are measured. They conclude that red galaxies with $M_r>-19$ mag are similarly or less biased than red galaxies of intermediate luminosity. In this work, we make use of photometric redshifts to probe the regime of intrinsically faint galaxies. Our sample is composed of SDSS galaxies with $r$-band Petrosian magnitude $r_{\text{petro}}<19.4$. As we have an ideal training set for this sample, thanks to the GAMA survey \citep{Driver2010}, we use the artificial neural network package ANNz \citep{ANNz} to predict photometric redshifts. We then calculate the angular two-point correlation function as a function of absolute magnitude and colour. The correlation length of each sample is computed through the inversion of Limber's equation, using Monte-Carlo resampling for modelling the underlying redshift distribution. Recently, \cite{Zehavi2010} presented the clustering properties of the DR7 spectroscopic sample of SDSS. They extracted a sample of $\sim$ 700,000 galaxies with redshifts to $r\leq17.6$ mag, covering an area of 8000 $\text{deg}^2$. 
Their study of the luminosity and colour dependence uses power law fits to the projected correlation function. Our study is complementary to theirs, since we are using calibrated photo-$z$s of fainter galaxies from the same SDSS imaging catalogue. We use similar luminosity bins to Zehavi et al., with the addition of a fainter luminosity bin $-17 < M_r - 5\log h < -14$. Small-scale $(r<0.1h^{-1}\text{Mpc})$ galaxy clustering provides additional tests of the fundamental problem of how galaxies trace dark matter. Previous studies have used SDSS data and the projected correlation function to study the clustering of galaxies at the smallest scales possible \citep{Masjendi2006}, using extensive modeling to account for the fibre-collision constraints in SDSS spectroscopic data. The interpretation of these results offers unique tests of how galaxies trace dark matter and of the inner structure of dark matter halos \citep{Watson2011}. Motivated by these studies we present measurements of the angular correlation function down to scales of $\theta \approx 0.005$ degrees. We work solely with the angular correlation function and we pay particular attention to systematic errors and the quality of the data. On the other hand, on sufficiently large scales ($r > 60 \ h^{-1} \text{Mpc}$), it is expected that the galaxy density field evolves linearly, following the evolution of the dark matter density field \citep{Tegmark06}. However, it is less clear if this assumption holds on smaller scales, where the complicated physics of galaxy formation and evolution dominates. In the absence of sufficient spectroscopic data to comprehensively study the evolution of clustering, \cite{Ross2010} used SDSS photometric redshifts to extract a volume-limited sample with $M_r<-21.2$ and $\zphot<0.4$. Their analysis revealed significant deviations from the passive evolution model of \citet{TegmarkPeebles1998}. Here we perform a similar analysis, again using photometric redshifts, for the $L^*$ population. This paper is organised as follows. In Section~\ref{clust:2pcorr}, we introduce the statistical quantities to calculate the clustering of galaxies, with an emphasis on the angular correlation function. In Section~\ref{clust:data} we present our data for this study and the method for estimating the clustering errors. In Section~\ref{clust:photoz} we describe the procedure that we followed in order to obtain the photometric redshifts. We then investigate the clustering of our photometric sample, containing a large number of intrinsically faint galaxies, in Section~\ref{clust:corres}. In Section~\ref{clust:bias} we present bias measurements as functions of colour, luminosity and redshift. Our findings are summarised in Section~\ref{clust:concl}. In Appendix~\ref{clust:query} we show how we extracted our initial catalogue from the SDSS DR7 database and finally in Appendix~\ref{clust:systematics} we describe in some detail the tests performed to assess systematic errors. Throughout we assume a standard flat $\Lambda$CDM cosmology, with $\Omega_m=0.30$, $\Omega_{\Lambda}=0.70$ and $H_0 = 100h$ km s$^{-1}$ Mpc$^{-1}$.
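As background for the estimator-level details deferred to Section~\ref{clust:2pcorr}, one widely used option for measuring $w(\theta)$ is the Landy--Szalay estimator, sketched below (an illustrative sketch only; the paper's precise estimator and binning choices are specified in that section, not here):
\begin{verbatim}
import numpy as np

def w_theta_landy_szalay(dd, dr, rr, n_data, n_rand):
    """Angular correlation function per theta bin from raw pair counts:
    dd, dr, rr are data-data, data-random and random-random counts.
    Counts are normalised by the total number of pairs before combining:
    w = (DD - 2 DR + RR) / RR."""
    ddn = np.asarray(dd, float) / (n_data * (n_data - 1) / 2.0)
    drn = np.asarray(dr, float) / (n_data * n_rand)
    rrn = np.asarray(rr, float) / (n_rand * (n_rand - 1) / 2.0)
    return (ddn - 2.0 * drn + rrn) / rrn
\end{verbatim}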
|
\label{clust:concl} Despite their inherent limitations, photometric redshifts offer the opportunity to study the clustering of various galaxy populations using large numbers of objects over a wide range of angular scales with improved statistics, with the caveat that their systematic uncertainties are significantly more complex to deal with. In this section we summarize and discuss the main implications of our results. Using GAMA spectroscopic redshifts as a training set, we have compiled a photometric redshift catalogue for the SDSS DR7 imaging catalogue with $r_{\text{petro}}<19.4$. We carried out extensive tests to check the robustness of the photo-$z$ estimates and use them for calculating $r$-band absolute luminosities. We split our sample of 4,289,223 galaxies into samples selected on photometric redshift, colour and luminosity and estimate their two-point angular correlation functions. Redshift distributions for the Limber inversion are calculated using Monte-Carlo resampling, which we show are very reliable. Our clustering results are in agreement with other clustering studies such as \cite{Norberg2002} and \cite{Zehavi2010}, who used spectroscopic redshifts. We extend the analysis to faint galaxies, where photo-$z$s allow us to obtain representative numbers for clustering statistics. We find that the correlation length decreases almost monotonically toward fainter absolute magnitudes and that the linear relation between $b/b^*$ and $L/L^*$ holds down to luminosities $L\sim0.03L^*$. For the $L^*$ population we observe a bias evolution consistent with the passive evolution model proposed by \citet{TegmarkPeebles1998}. As shown by others \citep{Norberg2002,Hogg2003,zehavi05,Swanson2008,Zehavi2010} and confirmed here, the colour dependence is more intriguing, because faint red galaxies exhibit a larger correlation length than red galaxies at intermediate luminosities. This trend is explained by HOD models, as shown by \cite{zehavi05}. Clustering for blue galaxies depends much more weakly on luminosity. We find that at faint magnitudes the SDSS imaging catalogue is badly contaminated by shreds of over-deblended spiral galaxies, which makes the interpretation of the clustering measurements difficult. We determine an angular scale beyond which our results are not affected by this contamination, and test this by modelling the scale-dependence of the contamination as well as studying its luminosity dependence. The use of photometric redshifts is likely to dominate galaxy clustering studies in the future. A number of assumptions made in this work might need to be reviewed when we have even better imaging data and training sets. In particular, for cosmology, the non-Gaussianity of photo-$z$ errors and the robust reconstruction of redshift distributions will become a very pressing issue. For galaxy evolution studies, it is essential to study the mapping between a photo-$z$-derived luminosity range and the true underlying one, as HOD modelling of the galaxy two-point correlation function relies heavily on the luminosity range considered. In this paper, we report only qualitative agreement and leave any HOD study using these photometric-redshift-inferred clustering results to future work.
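For reference, the passive evolution model of \citet{TegmarkPeebles1998} invoked above predicts, in its simplest test-particle form,
\[
b(z) = 1 + \frac{b_0 - 1}{D(z)}\,,
\]
where $D(z)$ is the linear growth factor normalised to $D(0)=1$ and $b_0$ is the present-day bias: a population conserved after formation drifts towards $b=1$ as the underlying matter density field grows.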
| 12
| 6
|
1206.0943
|
1206
|
1206.4237_arXiv.txt
|
We compare the observed mass functions and age distributions of star clusters in six well-studied galaxies: the Milky Way, Magellanic Clouds, M83, M51, and Antennae. In combination, these distributions span wide ranges of mass and age: $10^2 \lea M/M_{\odot} \lea 10^6$ and $10^6 \lea \tau/ \mbox{yr} \lea 10^9$. We confirm that the distributions are well represented by power laws: $dN/dM \propto M^{\beta}$ with $\beta \approx -1.9$ and $dN/d \tau \propto \tau^{\gamma}$ with $\gamma \approx -0.8$. The mass and age distributions are approximately independent of each other, ruling out simple models of mass-dependent disruption. As expected, there are minor differences among the exponents, at a level close to the true uncertainties, $\epsilon_{\beta} \sim \epsilon_{\gamma} \sim$~0.1--0.2. However, the overwhelming impression is the similarity of the mass functions and age distributions of clusters in these different galaxies, including giant and dwarf, quiescent and interacting galaxies. This is an important empirical result, justifying terms such as ``universal" or ``quasi-universal." We provide a partial theoretical explanation for these observations in terms of physical processes operating during the formation and disruption of the clusters, including star formation and feedback, subsequent stellar mass loss, and tidal interactions with passing molecular clouds. A full explanation will require additional information about the molecular clumps and star clusters in galaxies beyond the Milky Way.
|
Star clusters form in the dense parts (``clumps'') of molecular clouds (Lada \& Lada 2003; McKee \& Ostriker 2007). They are subsequently destroyed by several processes, beginning with the expulsion of residual gas by massive young stars (``feedback''), further mass loss from intermediate- and low-mass stars, tidal disturbances by passing molecular clouds, and stellar escape driven by internal two-body relaxation (Binney \& Tremaine 2008). These processes disperse the stars from clusters into the surrounding stellar field. Star clusters therefore represent an intermediate stage in the transformation of the clumpy interstellar medium (ISM) of a galaxy into a relatively smooth stellar distribution. In this paper, we show that there are some remarkable similarities in the statistical properties of cluster populations in different galaxies. In particular, we focus on the univariate mass and age distributions, $\psi(M) \propto dN/dM$ and $\chi(\tau) \propto dN/d\tau$, and the bivariate mass--age distribution, $g(M,\tau) \propto \partial^2N/ \partial{M}\partial{\tau}$. These distributions encode valuable information about the formation and disruption of clusters, for example, whether less massive ones dissolve faster than more massive ones. In the next section, we present the mass and age distributions of the clusters in six well-studied galaxies, based on published data, but now displayed in a uniform manner. In the following section, we provide a partial theoretical explanation for these observations, and in the last section, we summarize and place our results in the context of other work in this field. As in our previous papers, we use the term ``cluster'' for any concentrated aggregate of stars with a density much higher than that of the surrounding stellar field, whether or not it also contains gas, and whether or not it is gravitationally bound. This is the standard definition in the star formation community (Lada \& Lada 2003; McKee \& Ostriker 2007). Some authors use the term ``cluster'' only for gas-free or gravitationally bound objects. We reject these definitions for several reasons. (1) They ignore the fact that gas-free and gas-rich clusters (including molecular clumps and HII regions) are the same objects observed in different evolutionary phases. (2) It is impossible to tell from images alone which clusters are gravitationally bound (have negative energy) and which are unbound (have positive energy). In principle, spectra would help, but in practice, they are seldom available and are often contaminated by non-virial motions (stellar winds, binary stars, etc.). (3) $N$-body simulations show that unbound clusters retain the appearance of bound clusters for many ($\sim$10--50) crossing times (Baumgardt \& Kroupa 2007). By ``disruption,'' we mean the removal of mass from a cluster, whether this occurs gradually or suddenly, and whether it leaves the cluster bound or unbound.
|
We have reexamined the mass and age distributions of star clusters in the Milky Way, LMC, SMC, M83, M51, and Antennae galaxies. These distributions are well represented by power laws: $\psi(M) \propto dN/dM \propto M^{\beta}$ with $\beta \approx -1.9$ and $\chi(\tau) \propto dN/d\tau \propto \tau^{\gamma}$ with $\gamma \approx -0.8$. Furthermore, the mass and age distributions are approximately independent of each other: $g(M,\tau) \propto \psi(M) \chi(\tau)$. Since the rates of star and hence cluster formation have varied relatively slowly, the more rapid decline of the age distributions must be the result of cluster disruption. We find no evidence for mass-dependent disruption; indeed, simple models with $\tau_d \propto M^{0.6}$ fail by wide margins to match the observed mass and age distributions. The mass functions of the clusters are similar to those of the molecular clouds and clumps in which they form, requiring a nearly constant (but small) efficiency of star formation. This is an expected consequence of stellar feedback, given the observed radius--mass relation of the clumps ($R \propto M^{\alpha}$ with $\alpha \approx 0.5$). The clusters are also disrupted by subsequent stellar mass loss and passing molecular clouds. These processes preserve the power-law shape of the mass function unless there are correlations between the concentration $c$ and $M$ or between the internal density $\rho_h$ and $M$ (until two-body evaporation becomes important). The available data, while limited, reveal no such correlations. The main message here is the similarity among the mass functions and age distributions of clusters in different galaxies, strikingly evident in Figures~1 and~2. The literature on this subject includes some contradictory claims: for prominent features in the mass and age distributions, for strong dependencies on mass or differences among galaxies. We have shown that selection biases are responsible for several of these claims and we suspect they are responsible for the others. It is hard to see how selection biases and/or analysis errors would make the mass and age distributions appear more similar than they really are, whereas it is easy to see how such problems could introduce spurious differences. In any case, since the distributions presented here are based on complete or unbiased samples, we are confident that the similarity they exhibit is real. Nevertheless, we do not expect the mass and age distributions to be exactly the same in all parts of all galaxies. As we have emphasized here and in our previous papers, we expect minor differences in the age distributions as a consequence of different histories of cluster formation and/or disruption by passing molecular clouds. We would also not be surprised to find minor differences in the mass functions of molecular clumps and hence those of young clusters. Indeed, we find minor differences in the observed distributions; the best-fit exponents have dispersions $\sigma_{\beta} = 0.15$ and $\sigma_{\gamma} = 0.18$. The reality of these differences remains questionable, however, because $\sigma_{\beta}$ and $\sigma_{\gamma}$ are close to the true uncertainties in the exponents, $\epsilon_{\beta} \sim \epsilon_{\gamma} \sim $~0.1--0.2. We also expect any differences in the mass and age distributions to depend on spatial scale. In small regions ($\la {\rm few} \times10^2$~pc), there may be large variations in the formation and disruption rates, and hence in the mass and age distributions. 
As more of these small regions are combined into larger ones, the variations will average out, and the differences in the mass and age distributions will diminish. Figures~1 and~2 show the important result that these differences are negligible or barely detectable on the scale of whole galaxies. Evidently, the similarity among the mass and age distributions overwhelms such differences. This is the sense in which we consider the distributions to be ``universal'' or ``quasi-universal.'' Finally, we mention several future studies that would help to advance this subject. On the observational side, it would be interesting to survey the molecular clumps in nearby galaxies, to derive their mass functions, radius--mass relations, and other properties, for comparisons with those in the Milky Way. Another important goal is to measure the internal structure of clusters from {\it HST} images (especially $c$ and $\rho_h$) in large unbiased samples in several nearby galaxies. From our experience, H$\alpha$ measurements are necessary for accurate determinations of the mass and age distributions of young clusters ($\tau \la 3 \times 10^7$~yr) and should be included in all future studies. On the theoretical side, it would be interesting to explore further the effects of stellar mass loss on the evolution, stability, and disruption of weakly-bound clusters in tidal fields.
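As an aside, the power-law exponents discussed above (and their $\sim$0.1--0.2 uncertainties) can be estimated from an unbinned cluster sample with a standard maximum-likelihood fit. The sketch below assumes a pure power law above a completeness limit with no upper cutoff, following Clauset et al. (2009); this is an illustrative method, not necessarily the fitting procedure used for the samples in this paper:
\begin{verbatim}
import numpy as np

def fit_mass_function_slope(masses, m_min):
    """Maximum-likelihood exponent beta for dN/dM ~ M^beta above m_min,
    assuming no upper cutoff. Returns beta and its ~1-sigma error."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    n = m.size
    alpha = 1.0 + n / np.sum(np.log(m / m_min))   # pdf ~ M^-alpha
    return -alpha, (alpha - 1.0) / np.sqrt(n)

# Example: a synthetic sample drawn with beta = -1.9 (alpha = 1.9);
# m_min*(1 + Pareto(alpha-1)) has pdf ~ M^-alpha above m_min:
rng = np.random.default_rng(0)
sample = 1e2 * (1.0 + rng.pareto(0.9, size=1000))
print(fit_mass_function_slope(sample, 1e2))
\end{verbatim}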
| 12
| 6
|
1206.4237
|
1206
|
1206.0691_arXiv.txt
|
{X-ray binaries are usually divided into persistent and transient sources. For ultracompact X-ray binaries (UCXBs), the mass transfer rate is expected to be a strong function of orbital period, predicting persistent sources at short periods and transients at long periods.} % {For 14 UCXBs including two candidates, we investigate the long-term variability and average bolometric luminosity with the purpose of learning how often a source can be expected to be visible above a given luminosity, and we compare the derived luminosities with the theoretical predictions.} % {We use data from the \emph{RXTE} All-Sky Monitor because of its long-term, unbiased observations. Many UCXBs are faint, i.e., they have a count rate at the noise level for most of the time. Still, information can be extracted from the data, either by using only reliable data points or by combining the bright-end variability behavior with the time-averaged luminosity.} % {Luminosity probability distributions show the fraction of time that a source emits above a given luminosity. All UCXBs show significant variability and relatively similar behavior, though the time-averaged luminosity implies higher variability in systems with an orbital period longer than $40$ min.} % {There is no large difference in the statistical luminosity behavior of what we usually call persistent and transient sources. UCXBs with an orbital period below $\sim \! 30$ min have a time-averaged bolometric luminosity that is in reasonable agreement with estimates based on the theoretical mass transfer rate. Around $40$ min the lower bound on the time-averaged luminosity is similar to the luminosity based on the theoretical mass transfer rate, suggesting these sources are indeed faint when not detected. Above $50$ min some systems are much brighter than the theoretical mass transfer rate predicts, unless these systems have helium burning donors or lose additional angular momentum.} %
|
Ultracompact X-ray binaries (UCXBs) are low-mass X-ray binaries characterized by an orbital period of less than one hour. Given that the donor stars in these binary systems fill their Roche lobe, they must be white dwarfs or helium burning stars, since only these types of stars have an average density that corresponds to such a short orbital period \citep{paczynski1981,nelson1986}. The accretors can be neutron stars or black holes, though the latter have not been identified yet. Historically, X-ray sources including UCXBs have been categorized as persistent or transient, based on whether a source was permanently visible to a given instrument or only occasionally. However, such a subdivision is an oversimplification, as sources have been found to vary over large ranges of timescales and amplitudes \citep{levine2011}. One type of variability, first observed in dwarf novae, is attributed to a thermal-viscous instability in the accretion disk \citep{osaki1974,smak1984,lasota2001}. As the disk grows in mass and heats up to close to the ionization temperature of the dominant element, the opacity strongly increases and radiation is trapped, and the disk's surface density approaches a threshold value. The temperature continues to rise more rapidly and the disk enters the hot state. The resulting high viscosity enhances the outward angular momentum transport in the disk, allowing for a higher accretion rate. This can be observed as an outburst, after which the cycle repeats. For UCXBs the mass transfer rate (and thus the disk temperature) is expected to decrease as a function of orbital period, suggesting that systems with orbital periods above $\sim \! 30$ min should undergo this instability in their outer disks \citep[see][]{deloye2003}. \subsection{Present research} We investigate the bolometric luminosity distributions (which show the fraction of time a source spends at a given luminosity) of the known UCXB population, including candidates with tentative orbital periods. The relation between the luminosity distribution and the average luminosity may give insight into the way transient and persistent behavior should be interpreted. We do not investigate periodicities in the light curves since this has been done already \citep{levine2011}, and periodicities are not relevant for the present purposes.
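To fix ideas, consider a toy version of the luminosity probability distributions introduced above (an illustrative single power law, not the parametrisation adopted later in the paper): if the fraction of time spent above luminosity $L$ is
\[
F(>L)=\left(\frac{L}{L_{\min}}\right)^{-s},\qquad L_{\min}\le L\le L_{\max},
\]
then the time-averaged luminosity is
\[
\langle L\rangle=\int_{L_{\min}}^{L_{\max}}L\left(-\frac{{\rm d}F}{{\rm d}L}\right){\rm d}L=\frac{s}{1-s}\,L_{\min}^{\,s}\left(L_{\max}^{\,1-s}-L_{\min}^{\,1-s}\right)\qquad(s\neq1),
\]
which for a flat slope ($s<1$) is dominated by the bright end, $\langle L\rangle\sim L_{\min}^{\,s}L_{\max}^{\,1-s}$, whereas a steep slope ($s>1$) pins $\langle L\rangle$ close to $L_{\min}$. This is why combining the observed bright-end slope with the time-averaged luminosity constrains the faint-end behavior.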
|
We have investigated the long-term X-ray light curves collected by the \emph{RXTE} ASM for $14$ ultracompact X-ray binaries including two candidates. All UCXBs have a significantly varying flux during their brightest phases (above $10^{37}\ \mbox{erg s}^{-1}$) as shown by the relatively flat slopes in Fig. \ref{fig:signi}. By comparing their (integrated) time-averaged bolometric luminosities with theoretical estimates (Fig. \ref{fig:avglum}), we conclude that for short-period systems (systems with an orbital period below $\sim \! 30$ min) these flat slopes probably continue at more typical luminosities (below $10^{37}\ \mbox{erg s}^{-1}$), since these relatively flat cumulative luminosity distributions are needed to match the observed and theoretical average luminosities. Longer-period systems must have a much flatter slope than short-period systems at luminosities below $10^{37}\ \mbox{erg s}^{-1}$ in order to match the theoretical models, in other words, they are more variable when faint relative to short-period systems. This follows from the steep decrease in theoretical luminosity with increasing orbital period. In general, all sources probably vary strongly at all luminosities. This confirms that long-term observations are indeed needed in order to compare a source's luminosity to a theoretical model of the mass transfer rate. Considering the time-averaged ASM luminosity over all dwells, for the long-period systems a single power law yields an average luminosity that is too high. Instead, the slope must become flatter at lower luminosities, as already suggested by the theoretical model. The short-period systems on the other hand require a steeper low-luminosity slope. Furthermore, several long-period systems have an average luminosity that is much higher than predicted by theory in the case of degenerate donors. A possible (partial) explanation is that the low-mass donors in these UCXBs are being heated and evaporated by irradiation from the millisecond pulsar they are orbiting. The angular momentum loss from the system resulting from a stellar wind from the donor enhances the mass transfer and hence increases the luminosity, as has been demonstrated in explaining the millisecond pulsar binary system \object{PSR J1719--1438} \citep{vanhaaften2012j1719}. A larger donor radius in itself also leads to a higher mass transfer rate. There is no clear distinction between transient and persistent behavior in the sense that the estimated average luminosities are similar for sources over a significant range in orbital periods, which is partially a result of the similar bright-end slopes in the cumulative luminosity distributions. However, the trend that long-period systems typically have a flatter slope when they are faint compared to when they are bright can be seen as transient behavior. The variability of short-period UCXBs is more consistent with a single power-law cumulative luminosity distribution that holds up to a fraction of time of $1$, which can be understood as `persistent' behavior. Some UCXBs seem too bright to have a degenerate donor. Especially for the long-period systems, orbital period derivatives would be very useful to decide whether the donor is helium burning or degenerate. This knowledge in turn would limit the range of theoretically predicted time-averaged luminosities, and thus constrain the faint-end luminosity distribution. The orbital period of \object{4U 0614+091} may be much shorter than the $\sim \! 
50$ min that is suggested by several observations (see Table \ref{table:params}). Both the high lower bound on the luminosity compared to the degenerate-donor model for long-period UCXBs, and the apparent steepening of the slope of the cumulative luminosity distribution at low luminosities, resemble the behavior of short-period systems. At least the latter argument is not conclusive, given that \object{SWIFT J1756.9--2508} combines a $54.7$ min orbital period with a steepening slope at low luminosities. Even though the \emph{RXTE} ASM observations are the longest available, there may very well be variability on timescales (much) longer than $16$ yr, which could imply that the data for some of these systems available today are atypical. This could affect the average luminosity as derived by each of the methods, including the lower bounds. As for the long-period UCXBs, apart from being consistently brighter than theoretical estimates for systems with degenerate donors, variability on very long timescales (hundreds or thousands of years) would ensure that during any $16$ yr period, a small part of the population is exceptionally bright. This could be the population we observe. The \emph{Monitor of All-sky X-ray Image} Gas Slit Camera (\emph{MAXI GSC}) has a higher stated sensitivity \citep{matsuoka2009} than the ASM, over a shorter period of $2.5$ yr. Because of the much longer observation baseline, we decided to use only the ASM data. We compared the \emph{MAXI} data with the ASM data and found that the \emph{MAXI} sensitivity is better than that of the ASM for 7 UCXBs with known (or tentative) orbital period, similar for 2, worse for another 2 and unavailable for 4 UCXBs. Overall the difference was not large enough to outweigh the shorter observation baseline. We will use the luminosity distributions found in this paper in a forthcoming study of the UCXB population in the Galactic Bulge, to estimate the X-ray luminosities of systems predicted by a population model. In the context of predicting the observable population of UCXBs, the Galactic Bulge Survey \citep{jonker2011} may constrain the number of faint sources, which also gives information on the luminosity distribution.
| 12
| 6
|
1206.0691
|
1206
|
1206.6886_arXiv.txt
|
\noindent With the advent of the recent measurements in neutrino physics, we investigate the role of high-energy neutrino flux ratios at neutrino telescopes for the possibility of determining the leptonic CP-violating phase $\delta$ and the underlying pattern of the leptonic mixing matrix. We find that the flux ratios show a dependence of ${\cal O}(10~\%)$ on the CP-violating phase, and for optimistic uncertainties on the flux ratios of less than 10~\%, they can be used to distinguish between CP-conserving and CP-violating values of the phase at 2$\sigma$ in a non-vanishing interval around the maximal value $|\delta|=\pi/2$.
|
Ever since the 1998 results of Super-Kamiokande on atmospheric neutrinos \cite{Fukuda:1998mi}, there has been strong evidence for neutrino oscillations as the main mechanism for flavor transitions. Indeed, the fundamental parameters such as the two neutrino mass-squared differences $\Delta m_{31}^2$ and $\Delta m_{21}^2$ as well as the three leptonic mixing angles $\theta_{23}$, $\theta_{12}$, and $\theta_{13}$ are now determined with increasing accuracy. Recently, the third and last mixing angle $\theta_{13}$ has been measured by Daya Bay \cite{An:2012eh}, but Double Chooz, MINOS, RENO, and T2K have also made important contributions \cite{Abe:2011fz,Adamson:2011qu,Ahn:2012nd,Abe:2011sj}. All five experiments indicate that the value of the third mixing angle is around nine degrees, i.e., relatively large. This means that both the bi- and tri-bimaximal mixing patterns for leptonic flavor mixing have essentially been ruled out as zeroth-order approximations. Therefore, appropriate corrections must be taken into account to reconcile them with the experimental data. There now remain two quantities for massive and mixed neutrinos that can be determined with oscillations: ${\rm sgn}(\Delta m_{31}^2)$ and the Dirac CP-violating phase $\delta$. The latter always appears in combination with $\theta_{13}$ in the leptonic mixing matrix as $\sin(\theta_{13}) \exp(\pm {\rm i} \delta)$, and a non-zero value of $\theta_{13}$ means that it is possible to determine $\delta$. The best options to measure $\delta$ are provided by accelerator experiments (e.g.~NOvA and NuMI), future superbeams, beta beams, or even better a neutrino factory \cite{Choubey:2010zz}. However, it will take a long time before such experiments are realized. In this work, we will therefore consider an alternative approach to determine $\delta$ by measuring neutrino flux ratios at neutrino telescopes. The generic example of such a telescope is IceCube \cite{icecube}, which exists and has the potential to measure flux ratios \cite{Lipari:2007su}. The dependence of flux ratios on the mixing parameters has been addressed in the literature, where the focus has been on the dependence on $\theta_{23}$ and $\theta_{13}$ \cite{Meloni:2006gv,th23,mutau}, on $\delta$ \cite{delta}, and on new physics effects \cite{newphys}. The aim of this work is to illustrate the potential of neutrino telescopes to detect $\delta$ through the measurement of flux ratios after the first measurements of $\theta_{13}$. For our numerical evaluations, we adopt the best-fit values and uncertainties for the mixing angles in the normal hierarchy (NH) quoted in Ref.~\cite{Tortola:2012te}, which are presented in Tab.~\ref{mixings}. Note that, although the flux ratios are not sensitive to the mass-squared differences, somewhat dissimilar behavior between NH and the inverted hierarchy (IH) could arise if the allowed ranges for the parameters were sizeably different. However, this is not the case for the 3$\sigma$ ranges considered in this analysis. \begin{table}[h!] 
\centering \begin{tabular}{l|c|c|c|c} \it Parameter & \it Best-fit value & $1\sigma$ \it range & $2\sigma$ \it range & $3\sigma$ \it range \\ \hline $\sin^2 \theta_{12}/10^{-1}$ & 3.2 & 3.03 -- 3.35 & 2.9 -- 3.5 & 2.7 -- 3.7 \\ \hline $\sin^2 \theta_{13}/10^{-2}$ & 2.6 & 2.2 -- 2.9 & 1.9 -- 3.3 & 1.5 -- 3.6 \\ \hline $\sin^2 \theta_{23}/10^{-1}$ & 4.9 & 4.4 -- 5.7 & 4.1 -- 6.2 & 3.9 -- 6.4 \\ \hline $\delta/\pi$ & 0.83 & 0.19 -- 1.37 & $[0,2\pi]$ &$[0,2\pi]$ \\ \end{tabular} \caption{\label{mixings}\it Results of a global analysis \cite{Tortola:2012te} in terms of best-fit values, $1\sigma$, $2\sigma$, and $3\sigma$ ranges for NH only. See also Ref.~\cite{Fogli:2012ua} for another global analysis.} \end{table}
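To make the $\delta$-dependence of the flux ratios concrete, the averaged-oscillation flavour conversion can be evaluated directly from the mixing angles in the table above (a minimal numerical sketch assuming the standard parametrisation of the PMNS matrix with a Dirac phase only; the flux normalisation and the precise definitions of the ratios $R_{\alpha\beta}$ used later are illustrative here):
\begin{verbatim}
import numpy as np

def pmns_abs2(t12, t13, t23, delta):
    """|U_{alpha i}|^2 for the standard-parametrisation PMNS matrix."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    eid = np.exp(1j * delta)
    U = np.array([
        [c12*c13,                    s12*c13,                   s13*np.conj(eid)],
        [-s12*c23 - c12*s23*s13*eid,  c12*c23 - s12*s23*s13*eid, s23*c13],
        [ s12*s23 - c12*c23*s13*eid, -c12*s23 - s12*c23*s13*eid, c23*c13]])
    return np.abs(U)**2

def earth_fluxes(phi_source, t12, t13, t23, delta):
    """Flavour fluxes at Earth for fully averaged oscillations:
    phi_beta = sum_alpha P_ab phi_alpha, P_ab = sum_i |U_ai|^2 |U_bi|^2."""
    A = pmns_abs2(t12, t13, t23, delta)
    return (A @ A.T) @ np.asarray(phi_source, dtype=float)

# piS (pion-beam) source composition (nu_e : nu_mu : nu_tau) = (1 : 2 : 0),
# best-fit angles from the table above, delta = pi/2 for illustration:
angles = [np.arcsin(np.sqrt(s2)) for s2 in (0.32, 0.026, 0.49)]
phi = earth_fluxes([1.0, 2.0, 0.0], *angles, np.pi / 2)
print("phi_e : phi_mu : phi_tau =", np.round(phi / phi[0], 3))
\end{verbatim}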
|
\label{sec:sc} In this paper, we have analyzed the potential of neutrino telescopes to access the leptonic CP-violating phase $\delta$. We have derived expansions for the flux ratios $R_{\alpha\beta}$ up to first (and second) order in small parameters, explicitly showing their dependence on $\delta$, for both kinds of high-energy neutrino sources, i.e., $\pi S$ and $\mu DS$ sources. It turns out that the uncertainty on $\theta_{23}$ contributes the most to the global (theoretical) error on the flux ratios. Considering both kinds of sources, we have shown that a CP fraction of 10~\% can still be obtained with a 5.5~\% uncertainty on $R_{\alpha\beta}$. We urge IceCube to measure the flux ratios, since such a measurement could provide the first hints on the value of $\delta$.
| 12
| 6
|
1206.6886
|
1206
|
1206.3976_arXiv.txt
|
We present general relativistic magnetohydrodynamic (GRMHD) numerical simulations of the accretion flow around the supermassive black hole in the Galactic centre, Sagittarius A* (Sgr A*). The simulations include for the first time radiative cooling processes (synchrotron, bremsstrahlung, and inverse Compton) self-consistently in the dynamics, allowing us to test the common simplification of ignoring all cooling losses in the modeling of Sgr A*. We confirm that for Sgr A*, neglecting the cooling losses is a reasonable approximation if the Galactic centre is accreting below $\sim 10^{-8} M_{\odot}~{\rm yr^{-1}}$, i.e. $\dot{M} < 10^{-7}\dot{M}_{\rm Edd}$. But above this limit, we show that radiative losses should be taken into account, as significant differences appear in the dynamics and the resulting spectra when comparing simulations with and without cooling. This limit implies that most nearby low-luminosity active galactic nuclei are in the regime where cooling should be taken into account. We further make a parameter study of axisymmetric gas accretion around the supermassive black hole at the Galactic centre. This approach allows us to investigate the physics of gas accretion in general, while confronting our results with the well studied and observed source, Sgr A*, as a test case. We confirm that the nature of the accretion flow and outflow is strongly dependent on the initial geometry of the magnetic field. For example, we find it difficult, even with very high spins, to generate powerful outflows from discs threaded with multiple, separate poloidal field loops.
|
Super-massive black holes (SMBHs) of millions to billions of solar masses are believed to exist in the centre of most galaxies. The Galactic centre black hole candidate, Sgr A*, is the closest and best studied SMBH, making it the perfect source to test our understanding of galactic nuclei in general. This compact object was first observed as a radio source by \citet{balick74}. Since then, observations have constrained important parameters of Sgr A*, such as its mass and distance, estimated at $M=(4.3\pm0.5)\times10^6M_\odot$ and $D=8.3\pm0.35~{\rm kpc}$, respectively \citep{reid93, schodel02, ghez08, gillessen09}. The accretion rate has also been constrained by polarisation measurements, using Faraday rotation arguments \citep{aitken00, bower03, marrone07}, and is estimated to be in the range $2\times10^{-9}<\dot{M} <2\times10^{-7} M_{\odot}~{\rm yr^{-1}}$. Other key parameters, such as the spin, inclination, and magnetic field configuration, are still under investigation. Multi-wavelength observations of Sgr A* have been performed from the radio to gamma rays (see reviews by \citealt{ melia01, genzel10}, and references therein). More recently, important progress has been achieved in the infrared \citep[IR; e.g.,][]{schoedel11} and sub-millimeter (sub-mm) domains. All observations agree that Sgr A* is a very under-luminous and weakly accreting black hole; indeed its accretion rate is lower than has been observed in any other accreting system. In the near future, the next milestone will be observing the first black hole shadow from Sgr A* \citep{falcke00,dexter10} with the proposed ``Event Horizon Telescope'' \citep{doeleman08, doeleman09, fish11}, thanks to the capabilities of very long baseline interferometry at sub-mm wavelengths. Such a detection would be the first direct evidence for a black hole event horizon, and may also constrain the spin of Sgr A*. In order to make accurate predictions for testing with the Event Horizon Telescope, however, we need to have reliable models for the plasma conditions and geometry in the accretion (in/out)flow. General relativistic magnetohydrodynamic (GRMHD) simulations offer significant promise for this class of study, as they can provide both geometrical and spectral predictions. Sgr A* has already been modeled in several numerical studies \citep[e.g.,][]{goldston05, moscibrodzka09,dexter09,dexter10, hilburn10, shcherbakov10,shiokawa12,dolence12,dexterfragile12}. All of these models consist of two separate codes: a GRMHD code describing the dynamics, and a subsequent code to calculate the radiative emission based on the output of the first. The under-luminous and under-accreting state of Sgr A* ($L_{\rm bol}\simeq 10^{-9} L_{\rm Edd}$ and $L_{\rm X-Ray}\simeq 10^{33}$ erg/s in the 0.5 to 10 keV band, where $L_{\rm Edd}$ is the Eddington luminosity; \citealt{baganoff03}) is the common argument given to justify ignoring the cooling losses and simplifying the description of Sgr A*. Even though this approach seems reasonable, especially for the peculiar case of Sgr A*, we propose to quantify to what extent it really applies. 
To test whether or not ignoring the radiative cooling losses is a reasonable approximation, we need to take into account the radiative losses self-consistently in the dynamics, which is now possible with the \emph{Cosmos++} astrophysical fluid dynamics code \citep{anninos05,fragile09}. When the gas is allowed to cool, energy is liberated from the accretion flow, potentially changing the dynamics of the system. The basic result that we show in this paper is that cooling is indeed negligible at the lowest end of the possible accretion rate range of Sgr A*, but plays an increasingly important role in the dynamics and the resulting spectra at higher accretion rates (relevant for most nearby LLAGN and even at the high end of the range of Sgr A*). This paper is organized as follows: In Section 2 we describe the new version of \emph{Cosmos++} used to perform this study. In Section 3 we present the initial set-up of the simulations. In Section 4 we show the importance of the accretion rate parameter and assess the effect of including radiative cooling. In Section 5 we perform a parameter survey to investigate the influence of the initial magnetic field configuration and of the spin of the SMBH (including the effect of a retrograde spin) on the resulting accretion disc structure and outflow. In Section 6 we end with our conclusions and outlook. The spectra generated from these simulations are analyzed in detail in a companion paper (Drappeau et al., in preparation, hereafter referred to in the text as D12).
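As a rough illustration of why cooling becomes dynamically relevant at higher accretion rates, one can compare the synchrotron cooling time of the emitting electrons, $t_{\rm sync}=6\pi m_e c/(\sigma_T\gamma B^2)$, with the local dynamical time; the field strengths and electron Lorentz factor below are illustrative placeholder values of our choosing, not outputs of the simulations (both grow with the assumed accretion rate, shortening the cooling time).
\begin{verbatim}
import numpy as np

# Illustrative timescale comparison (not the paper's actual procedure):
# cooling matters dynamically wherever t_cool ~< t_dyn in the flow.
m_e, c, sigma_T = 9.109e-28, 2.998e10, 6.652e-25      # cgs
G, M_sun = 6.674e-8, 1.989e33
M_bh = 4.3e6 * M_sun

def t_sync(gamma, B):
    """Synchrotron cooling time [s] of an electron with Lorentz factor
    gamma in a field B [G]: 6 pi m_e c / (sigma_T * gamma * B^2)."""
    return 6.0 * np.pi * m_e * c / (sigma_T * gamma * B**2)

def t_dyn(r_over_rg):
    """Keplerian dynamical time [s] at a given number of gravitational radii."""
    r = r_over_rg * G * M_bh / c**2
    return np.sqrt(r**3 / (G * M_bh))

# B ~ 10-100 G and gamma of a few tens are in the right ballpark for RIAF
# models of Sgr A*; treat them as placeholders only
for B in (10.0, 30.0, 100.0):
    print(f"B = {B:5.0f} G: t_sync(gamma=50) = {t_sync(50.0, B):.2e} s, "
          f"t_dyn(10 r_g) = {t_dyn(10.0):.2e} s")
\end{verbatim}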
|
For the first time we have been able to assess the importance of radiative losses in numerical simulations of LLAGN, specifically Sgr A* [see also \citet{moscibrodzka11}]. We show that radiative losses can affect the dynamics of the system and that their importance increases with accretion rate. We set a rough limit of $\dot{M} \gtrsim 10^{-7}\dot{M}_{\rm Edd}$, above which radiative cooling losses should be included self-consistently in numerical simulations. Otherwise, many important derived dynamical quantities, such as density, magnetic field magnitude, and temperature, may be off by an order of magnitude or more, especially when the accretion rate reaches $\dot{M} \simeq 10^{-6}\dot{M}_{\rm Edd}$, which corresponds to $10^{-7} M_{\odot}~{\rm yr^{-1}}$ for Sgr A*. Since several recent works suggest that accretion physics is similar across the mass scale, this accretion rate in Eddington units should likely be an important limit for all black holes. Thus, we predict that the inclusion of self-consistent radiative cooling above $\sim10^{-6}\,\dot{M}_{\rm Edd}$ should be important for LLAGN in general. Overall, this study allows us to have a more consistent model of accretion, even for the well-studied and under-luminous source Sgr A*. Not only do we have a more realistic model by including the physics of radiative losses, but we are also able to show the influence of the accretion rate on the resulting spectra. The spin of the central black hole and the initial magnetic field configuration of the torus also have important consequences for the dynamics of the system and the resulting spectra. By including the cooling losses in our study, we can discuss the influence of these free parameters with more accuracy. For instance, initial magnetic field configurations consisting of a single set of poloidal loops result in significantly more powerful outflows than the four-loop cases, and we find that the jet power increases with the spin of the central black hole. As mentioned in Section \ref{sec:time_interval}, there is concern as to whether or not our simulations have run long enough to reach meaningful equilibrium states. This concern is especially pertinent for our 2.5D simulations, as these can never truly reach a steady state. The reason is that, after a period of initial vigorous growth, the MRI in 2.5D simulations steadily decays, because the dynamo action that normally sustains it requires non-axisymmetric modes that are unavailable in axisymmetry. We therefore chose a time interval for analysis when the mass accretion rate closely approximated our target value for most simulations. Clearly, 3D simulations of longer duration, which can reach a proper equilibrium, would be beneficial for comparison. A further limitation of 2.5D, axisymmetric simulations is that they preclude exploring the effects of misalignment between the angular momentum of the gas and the black hole. In reality, most supermassive black holes are unlikely to be accreting from matter whose orbital plane is aligned with the black hole spin. \citet{dexterfragile12} recently showed that accounting for such a ``tilted'' accretion flow can dramatically alter the best-fit characteristics of Sgr A* and produce important new features in the spectra. As a final note, one can justify neglecting full radiative transfer in simulations of Sgr A*, and likely most other weakly accreting LLAGN, because the inner regions are generally thought to be optically thin.
However, to treat the outer regions of the accretion flow, or higher-luminosity sources, a more thorough treatment of radiative transfer will need to be implemented in the simulations. Similarly, to approach important questions such as mass loading and particle acceleration in the jets, resistive (non-ideal) MHD will likely need to be considered. We see this work as an important step towards these next technological ``horizons''.
| 12
| 6
|
1206.3976
|
1206
|
1206.6762_arXiv.txt
|
We present a power spectral density analysis of the short cadence Kepler data for the cataclysmic variable V1504 Cygni. We identify three distinct periods: the orbital period ($1.668\pm0.006$ hours), the superhump period ($1.732\pm0.010$ hours), and the infrahump period ($1.633\pm0.005$ hours). The results are consistent with those predicted by the period excess-deficit relation.
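For reference, applying the standard definitions of the superhump period excess and infrahump period deficit to the quoted periods gives the following (our arithmetic, not a table from the paper); the ratio of deficit to excess, $\sim$0.55, is close to the value of $\sim$0.5 commonly quoted for such systems.
\begin{verbatim}
P_orb, P_sh, P_ih = 1.668, 1.732, 1.633    # hours, from the abstract

eps_plus  = (P_sh - P_orb) / P_orb         # superhump period excess
eps_minus = (P_orb - P_ih) / P_orb         # infrahump period deficit
print(f"excess  eps+ = {eps_plus:.4f}")    # ~0.038
print(f"deficit eps- = {eps_minus:.4f}")   # ~0.021
print(f"ratio eps-/eps+ = {eps_minus / eps_plus:.2f}")   # ~0.55
\end{verbatim}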
|
NASA's \emph{Kepler} mission has been providing high time-resolution optical photometry since its launch on March 6, 2009 \citep{koch:2010}. Designed to survey a fixed 105 square degree field of view, it monitors the brightness of over 100,000 stars primarily for the purpose of detecting and categorizing exoplanets. However, due to the extremely high quality of the instrument, it has proven itself a boon for other astronomical communities as well, including those interested in the study of Cataclysmic Variables (CVs). CVs are classified based on their outburst properties, with the most active category being dwarf novae (DN). These systems are marked by quasi-periodic outbursts of varying strength, typically increasing in brightness by a few magnitudes. Dwarf novae are further separated into subtypes based on the manner in which these outbursts present themselves. \textit{SU UMa}-type DN are characterized by the presence of occasional superoutbursts, dramatic eruptions several magnitudes brighter than normal outbursts. DN that do not exhibit such behavior are classified as \textit{U Gem}-type. Whether a system exhibits \textit{SU UMa} or \textit{U Gem} characteristics depends on the orbital dynamics of the system, and can be characterized by its mass ratio ($q=M_2/M_1$) and its orbital period (see Eq. \ref{eq:smith-dhillon-rel}). The two groups tend to lie within separate distributions on either side of a critical mass ratio ($q_\mathrm{crit}\approx0.33$): \textit{U Gem}-type DN have long periods corresponding to mass ratios greater than the critical value ($q>q_\mathrm{crit}$), whereas \textit{SU UMa}-type DN have shorter periods corresponding to mass ratios less than the critical value ($q<q_\mathrm{crit}$) \citep{hellier:2001}. The \textit{Kepler} field of view includes 10 CVs. Of the \textit{SU UMa} type, only V344 Lyra has been the subject of extensive study \citep{wood:2011}, and to date no equivalent treatment has been undertaken for V1504 Cygni, which is the object of interest in this paper. Like V344 Lyra, V1504 Cygni is an \textit{SU UMa}-type dwarf nova, and as such features superoutbursts in addition to normal outburst behavior. Superhumps, periodic signals which are presumed to arise from the prograde precession of an elliptically elongated accretion disk \citep{hellier:2001}, have previously been identified in the V1504 light curves and studied in detail by \citet{cannizzo:2012}. A thorough analysis of the \emph{Kepler} light curve of V1504 Cygni suggests the presence of \textit{negative} superhumps (henceforth referred to by the shorter term infrahumps) as well. These are presumed to be associated with the nodal precession of a tilted accretion disk. In this paper, we outline the process of identifying and extracting the periods of these signals in Sec. \ref{sec:data-analysis}, and then show that they are consistent with those predicted by the period excess-deficit relation in Sec. \ref{sec:discussion}. We conclude with our main findings in Sec. \ref{sec:conclusion}.
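As an aside, adopting the empirical excess-mass-ratio calibration of Patterson et al. (2005), $\epsilon\simeq0.18q+0.29q^2$ (an assumption on our part, distinct from Eq. \ref{eq:smith-dhillon-rel} used in the paper), the measured superhump excess implies a mass ratio comfortably below the critical value:
\begin{verbatim}
import math

# Assumed calibration (Patterson et al. 2005): eps = 0.18 q + 0.29 q^2.
# Invert the quadratic for q given the measured superhump period excess.
eps = (1.732 - 1.668) / 1.668
q = (-0.18 + math.sqrt(0.18**2 + 4 * 0.29 * eps)) / (2 * 0.29)
print(f"eps = {eps:.4f} -> q ~ {q:.2f}")   # ~0.17 < q_crit ~ 0.33 (SU UMa)
\end{verbatim}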
|
\label{sec:conclusion} By examining the \textit{Kepler} short cadence data for V1504 Cygni, we have identified three distinct periodic signals. Two of these are readily identifiable as the orbital period and the well-studied superhump period, each confirmed by independent measurements. The presence of infrahumps in V1504 is inferred from the measurement of a third periodic signal in the power spectrum that corresponds to neither the established orbital period nor superhump period, and confirmed by comparison of the infrahump period deficit against empirical models. We have measured the infrahump period to be $1.633\pm0.005$ hours.
| 12
| 6
|
1206.6762
|
1206
|
1206.6630_arXiv.txt
|
We briefly review the observations of the solar photosphere and pinpoint some open questions related to the magnetohydrodynamics of this layer of the Sun. We then discuss the current modelling efforts, addressing, among other problems, that of the origin of supergranulation.
|
The solar photosphere is the only place in the Universe where we have a detailed view of stellar convection. Thermal convection is a well-known phenomenon that has been studied for more than a century, but in the case of stars it still has many dark sides that prevent a full understanding. Unfortunately, convective regions of stars like the Sun are the seat of magnetic activity, whose explanation requires detailed investigation of flows in which buoyancy, radiation, and magnetic fields couple. These flows are thus complex, but their darkest side is their turbulent nature, which implies interactions among many scales in the velocity field, in the magnetic field, or between the two. Handling such a multiscale phenomenon became possible once computers reached enough computing power for numerical simulations to be realistic in at least some respects. The early work of \cite{nordlund85} perfectly illustrates the emergence of simulations coupling fluid dynamics and radiative transfer that could be compared to observations (line profiles). With the steady increase of computational power, such simulations have become the preferred tool of astrophysicists working on fluid dynamical problems, and increasingly sophisticated simulations addressing the magnetohydrodynamics of the solar photosphere have emerged \cite[][]{SGSNB09,ustyugov09}. Thus, in this short review, we first set the stage provided by observations of the Sun and pinpoint some open questions. Then, we briefly describe the current modelling efforts, ending this work with some perspectives.
| 12
| 6
|
1206.6630
|
|
1206
|
1206.1360_arXiv.txt
|
A rapidly growing body of evidence, mostly coming from recent gamma-ray observations of Galactic supernova remnants (SNRs), is seriously challenging our understanding of how particles are accelerated at fast shocks. The cosmic-ray (CR) spectra required to account for the observed phenomenology are in fact as steep as $E^{-2.2}$--$E^{-2.4}$, i.e., steeper than the test-particle prediction of first-order Fermi acceleration, and significantly steeper than what is expected in a more refined non-linear theory of diffusive shock acceleration. By accounting for the dynamical back-reaction of the non-thermal particles, such a theory in fact predicts that the more efficient the particle acceleration, the flatter the CR spectrum. In this work we put forward a self-consistent scenario in which accounting for the magnetic field amplification induced by CR streaming produces the conditions for reversing this trend, allowing at the same time for rather steep spectra and for CR acceleration efficiencies (about 20\%) consistent with the hypothesis that SNRs are the sources of Galactic CRs. In particular, we quantitatively work out the details of instantaneous and cumulative CR spectra during the evolution of a typical SNR, also stressing the implications of the observed levels of magnetization for both the expected maximum energy and the predicted CR acceleration efficiency. The latter naturally turns out to saturate around 10--30\%, almost independently of the fraction of particles injected into the acceleration process, as long as this fraction is larger than about $10^{-4}$.
|
\label{sec:conclusions} In this paper we tackled the problem of particle acceleration at SNR shocks in order to provide a theoretical explanation for the abundant evidence of spectra steeper than $E^{-2}$ coming from $\gamma$-ray observations of SNRs and from our current understanding of CR propagation in the Galaxy. The present investigation is motivated by the fact that the prediction of the most natural NLDSA theory, namely that the larger the CR acceleration efficiency, the flatter the CR spectrum, needs to be revised in order to be consistent with current observations. We demonstrated that the magnetic field amplification naturally induced by the super-Alfv\'enic streaming of accelerated particles may significantly alter the properties of the CR scattering, effectively decoupling the compression ratios felt by the fluid and by the diffusing particles. Under reasonable assumptions about the development and the saturation of the plasma instabilities (which still need to be checked against first-principle simulations, given the impossibility of carrying out an analytical treatment of their non-linear regime), we find that the self-generated magnetic turbulence can lead to a steepening in the spectrum of the accelerated particles. More precisely, the expected levels of magnetic field amplification granted by streaming instabilities, in addition to being consistent with the fields inferred in the downstream of young SNRs (figure \ref{fig:Bfield}), can lead to CR spectra as steep as $\sim E^{-2.3}$ even in the early stages of the SNR evolution, i.e., when the sonic Mach number is still much larger than one. The effective steepening of the CR spectra turns out to be a function of the CR acceleration efficiency, which we tune by regulating the fraction of particles extracted from the thermal bath and injected into the acceleration process, $\eta$. Such a dependence is, however, radically different from the one predicted by an NLDSA theory in which the finite velocity of the scattering centers is not taken into account: the spectrum of the accelerated particles is in fact consistent with the test-particle prediction for very low efficiencies, but becomes progressively steeper than $E^{-2}$ as $\eta$ increases. Very interestingly, in this non-linear system a larger $\eta$ on one hand produces a larger $P_{cr}$, but on the other hand also produces a larger self-generated magnetic field, which acts to reduce the shock modification by steepening the CR spectrum. The net effect, depicted in figure \ref{fig:eff-VAN+BC}, is that for $\eta\gtrsim 10^{-4}$ the pressure in CRs saturates around 10--30\% of the shock bulk pressure; the value of the self-generated magnetic field saturates in a similar fashion (figure \ref{fig:Bfield}). This self-regulating interplay between efficient CR acceleration and effective magnetic field amplification may represent a key ingredient in quantitatively explaining both the levels of magnetization inferred in SNRs and the CR acceleration efficiency required for SNRs to be the sources of Galactic CRs. Another important result is that the inferred magnetic fields are also the ones required to accelerate particles up to energies comparable with the observed knee in the diffuse spectrum of Galactic CRs, namely about 3--5$\times 10^{6}$~GeV for protons (top panel of figure \ref{fig:multiTOT}).
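The sense of this effect can be sketched with the test-particle slope written in terms of an effective compression ratio felt by particles scattering off Alfv\'en waves. The prescription below (scattering centres streaming at the Alfv\'en speed against the flow upstream, with the wave drift neglected downstream) is a common simplification used to illustrate the trend, not the paper's full non-linear calculation.
\begin{verbatim}
def spectral_slope(r_gas, M_A):
    """Test-particle slope q (f(p) ~ p^-q) at a shock with gas compression
    ratio r_gas when upstream scattering centres drift at the Alfven speed;
    M_A = u_1 / v_A1 is the Alfvenic Mach number. For M_A -> infinity this
    reduces to the classic q = 3 r / (r - 1)."""
    u1_eff = 1.0 - 1.0 / M_A      # effective upstream velocity, units of u_1
    u2_eff = 1.0 / r_gas          # downstream: wave drift neglected here
    r_eff = u1_eff / u2_eff
    return 3.0 * r_eff / (r_eff - 1.0)

for M_A in (1e9, 30.0, 10.0, 5.0):
    q = spectral_slope(4.0, M_A)
    # energy slope of relativistic particles: N(E) ~ E^-(q-2)
    print(f"M_A = {M_A:12.0f}: N(E) ~ E^-{q - 2.0:.2f}")
\end{verbatim}
Even this crude estimate steepens the energy spectrum from $E^{-2}$ towards $E^{-2.2}$--$E^{-2.4}$ as the magnetic field (and hence $v_A$) is amplified, which is the trend found self-consistently above.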
We also investigated the temporal evolution of the CR spectra during the SNR's ejecta-dominated and adiabatic stages, assessing the differences between instantaneous and cumulative spectra of the particles advected downstream (which undergo adiabatic losses due to the shell expansion) and of the particles escaping the system from the upstream as a consequence of the decrease of the SNR confining power with time \citep{escape}. We find that the CR acceleration efficiency is expected to drop with time, basically because of the slowing down of the shock due to the inertia of the swept-up material, with the net result that in the intermediate/late Sedov phase the SNR content in CRs is invariably dominated by the contributions of the earlier stages. This is particularly important for two reasons. First, it shows that, in order for the total CR contribution during the SNR lifetime to be steeper than $E^{-2}$, particles must be accelerated with steep spectra also during the early stages. Second, when calculating the expected non-thermal emission from middle-aged SNRs, a time-dependent study of the SNR evolution has to be carried out, since a simple snapshot of what is going on at the shock would easily lead to an overestimate of the spectral slope and to an underestimate of the SNR content in non-thermal particles.
| 12
| 6
|
1206.1360
|
|
1206
|
1206.3127.txt
|
Gamma Ray Bursts (GRBs) are unpredictable and brief flashes of gamma rays that occur about once a day in random locations in the sky. Since gamma rays do not penetrate the Earth's atmosphere, they are detected by satellites, which automatically trigger ground-based telescopes for follow-up observations at longer wavelengths. In this introduction to Gamma Ray Bursts we review how building a multi-wavelength picture of these events has revealed that they are the most energetic explosions since the Big Bang and are connected with stellar deaths in other galaxies. However, in spite of exceptional observational and theoretical progress in the last 15 years, recent observations raise many questions which challenge our understanding of these elusive phenomena. Gamma Ray Bursts therefore remain one of the hottest topics in modern astrophysics. \begin{keywords}gamma ray bursts, supernovae, stellar mergers, host galaxies\end{keywords}
|
Gamma Ray Bursts (GRBs) are at the intersection of many different areas of astrophysics: they are relativistic events connected with the end stages of stars; they reveal properties of their surrounding medium and of their host galaxies; they emit radiation from gamma-rays to radio wavelengths, as well as possibly non-electromagnetic signals, such as neutrinos, cosmic rays and gravitational waves. Due to their enormous luminosities, they can be detected even if they occur at vast distances, and are therefore also of great interest for cosmology. Let us first briefly review some basic properties of GRBs. They are unpredictable and non-repetitive violent flashes of gamma rays coming from random directions in the sky at random times and lasting from $\sim 0.01\, $s to $\approx 1000\, $s. When they occur, they outshine all other sources of gamma radiation. Their gamma ray spectrum is non-thermal, with the bulk of energy emitted in the 0.1 to 1 MeV range. Gamma ray emission is followed at longer wavelengths by so-called afterglows: these appear as point-like sources in X-rays, ultraviolet, optical, infrared and radio wavebands, and fade away in a few hours to weeks, sometimes months. Deeper observations usually reveal surrounding host galaxies. Spectroscopic observations of the afterglows of GRBs and host galaxies enable us to measure their cosmological redshifts $z$\footnote[1]{The wavelength of light traveling from a source to an observer through the expanding Universe increases by the factor $\lambda_{\rm observed}/\lambda_{\rm emitted}=1+z$ (i.e., $z=(\lambda_{\rm observed} - \lambda_{\rm emitted})/\lambda_{\rm emitted}= \Delta \lambda/\lambda_{\rm emitted}$). This is also the factor by which the Universe has expanded between the time of the emission and reception of light. Cosmological redshift, $z$, is therefore a measure of the Universe's size (and adopting a certain cosmological model, also the Universe's age) at the time the light left the observed astronomical object.}, and infer their distances. From observed fluxes ($F$) and known distances ($d$), assuming that GRBs emit radiation isotropically, we can estimate that the isotropic equivalent luminosity $L_{\rm iso}= {F \cdot{4\pi d^2}}$ is in the range $L_{\rm iso}\sim 10^{40} - 10^{47}\, $W, i.e., the total isotropic energy output of these events is $E_{\rm iso} \sim 10^{42} - 10^{48}\, $J, which is comparable to the rest energy of the Sun, $M_\odot c^2$ (where $M_\odot$ is the mass of the Sun). Another important clue to the nature of GRBs comes from their gamma ray light curves (i.e., flux in gamma rays vs. time), which exhibit rapid variability on a timescale of milliseconds. Since variability on a timescale $\Delta t$ cannot be produced in a region larger than the distance light travels during this time, we can estimate the source size to be $D \leq c \Delta t \approx 300\, $km. This immediately tells us that we are dealing with a very compact, stellar-mass source and, given the enormous energies involved, with relativistic effects. Sources which fit the bill (small, relativistic, energetic) are compact objects such as neutron stars and black holes, powered by their gravitational energy (with their rotational energy perhaps providing part of the energy needed). There are at least two types of GRB, and according to current understanding, both are connected with end stages of stars. One type is connected with the death of massive, rapidly rotating stars, and occurs in star forming regions of blue dwarf galaxies.
The other type is found in both old and star-forming galaxies; this is much less understood and is thought to be due to mergers of two compact stellar remnants (neutron stars and/or black holes).
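The order-of-magnitude estimates above are easily reproduced; the flux, distance, and variability timescale in the sketch below are illustrative values of our choosing.
\begin{verbatim}
import math

# Back-of-the-envelope GRB estimates (SI units, illustrative inputs).
Mpc = 3.086e22                       # metres
F   = 1e-9                           # gamma-ray flux [W/m^2], bright burst
d   = 3000 * Mpc                     # distance of order z ~ 1
c   = 3.0e8                          # speed of light [m/s]

L_iso = F * 4 * math.pi * d**2       # isotropic-equivalent luminosity
D_max = c * 1e-3                     # size limit for millisecond variability

print(f"L_iso ~ {L_iso:.1e} W")      # ~1e44 W, inside the quoted range
print(f"D    <= {D_max/1e3:.0f} km") # ~300 km: compact, stellar-mass source
\end{verbatim}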
|
GRBs are fascinating events, not only as the most powerful explosions known, but also as laboratories of stellar formation, end stages of stars, extreme relativistic physics, strong gravity regions, and objects probably endowed with strong magnetic fields. They can be observed throughout the Universe, back to the first stars after the Big Bang. It is believed that their enormous electromagnetic output is only a fraction of the energy released, and that the majority of the energy is carried away by neutrinos and gravitational waves. This makes GRBs ideal targets for multi-messenger astronomy projects. The voyage of discovering their secrets has been very exciting so far, and it promises to remain bumpy and full of surprises in the future.
| 12
| 6
|
1206.3127
|
1206
|
1206.4545_arXiv.txt
|
We analyze two regions of the quiet Sun ($35.6\times 35.6$~Mm$^2$) observed at high spatial resolution ($\lesssim$100~km) in polarized light by the IMaX spectropolarimeter onboard the Sunrise balloon. We identify 497 small-scale ($\sim$400~km) magnetic loops, appearing at an effective rate of 0.25~loop~h$^{-1}$~arcsec$^{-2}$; further, we argue that this number and rate are underestimated by $\sim$30\%. However, we find that these small dipoles do not appear uniformly on the solar surface: their spatial distribution is rather filamentary and clumpy, creating {\em dead calm} areas, characterized by a very low magnetic signal and a lack of organized loop-like structures at the detection level of our instruments, which cannot be explained as mere statistical fluctuations of a Poisson spatial process. We argue that this is an intrinsic characteristic of the mechanism that generates the magnetic fields in the very quiet Sun. The spatio-temporal coherence and the clumpy structure of the phenomenon suggest a recurrent, intermittent mechanism for the generation of magnetic fields in the quietest areas of the Sun.
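A schematic version of the counts-in-cells reasoning that separates a clumpy loop distribution from a homogeneous Poisson process is sketched below; the loop positions are synthetic, and this is not the statistical test actually performed in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def counts_in_cells(xy, box=50.0, n_cells=10):
    """Histogram event positions into an n_cells x n_cells grid."""
    H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_cells,
                             range=[[0, box], [0, box]])
    return H.ravel()

N = 500
# homogeneous Poisson process: uniform, independent positions
poisson_xy = rng.uniform(0.0, 50.0, size=(N, 2))
# toy clustered process: positions bunched around a few hotspots
centres = rng.uniform(0.0, 50.0, size=(12, 2))
clustered_xy = (centres[rng.integers(0, 12, N)]
                + rng.normal(0.0, 2.0, size=(N, 2))) % 50.0

for name, xy in (("Poisson  ", poisson_xy), ("clustered", clustered_xy)):
    n = counts_in_cells(xy)
    # Poisson -> variance/mean ~ 1 and almost no empty cells;
    # clumping -> variance/mean >> 1 and many empty ('dead calm') cells
    print(f"{name}: var/mean = {n.var() / n.mean():5.2f}, "
          f"empty cells = {int((n == 0).sum()):3d}/{n.size}")
\end{verbatim}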
|
During the last few years, our understanding of the structure, organization and evolution of magnetic fields in the very quiet Sun (the regions outside active regions and the network) has become increasingly clear. Magnetic fields in the quietest areas of the Sun are relatively weak and organized at small spatial scales, which yields weak polarization signals that are difficult to observe. Until very recently, the general picture of the structure of its magnetism was rather rough: a ``turbulent'' disorganized field \citep[][]{Stenflo82, Solanki93, rafa_04, TrujilloShchukinaEtal04}. It is now clear that even in very quiet areas, magnetic fields may organize as coherent loops at granular and subgranular scales \citep[$\lesssim$1000~km;][]{MartinezColladosEtal07}, that these small loops are dynamic \citep{MartinezColladosEtal07, CentenoSocasEtal07, MartinezBellot09, GomoryBeckEtal10}, and that they pervade the quiet solar surface and may even connect with upper atmospheric layers \citep{MartinezBellot09, MartinezMansoEtal10}. Yet, this picture is still incomplete. For example, we lack a complete mapping of the full magnetic field vector on extended fields of view, because the linear polarization signals are intrinsically weak (they are second order in the transverse magnetic field component), and high spatial resolution maps of linear polarization are rather patchy \citep{DanilovicBeeckEtal10}, which has led to incomplete (and sometimes physically problematic) characterizations of the topology of the field \citep{IshikawaTsunetaEtal08, IshikawaTsuneta09, IshikawaTsuneta10}. Here we look for and trace small magnetic loops in extended regions of the quiet Sun observed at the highest spatial resolution. Loop-like structures are a natural configuration of the magnetic field due to its solenoidal character. While they can be traced as single, individual, coherent entities, they characterize the magnetic field at large, and their statistics and evolution may shed some light on the origin of the very quiet Sun magnetism, in particular on whether local dynamo action operates \citep{Cattaneo99}. On the other hand, the organization of the field at small scales affects the organization of the magnetic field at larger scales and in higher atmospheric layers \citep{SchrijverTitle03, Cranmer09}, as well as its dynamics \citep{CranmervanBallegoijen10}. We find evidence that the small-scale loops appear rather irregularly, in bursts and clumps. Moreover, wide regions of the very quiet Sun show very low magnetic activity and no apparent sign of organized loops at the detection level of the instruments. These extremely quiet ({\em dead calm}) regions are an intrinsic characteristic of the statistical distribution of these events.
|
It is known that even in very quiet areas of the Sun, magnetic fields may organize naturally, forming loops at granular scales. In this study we extended this observation to the smallest observable spatial scales (100--1000~km), finding an increasing number of loops at smaller scales, down to the resolution limit. This finding suggests that the organization of magnetic fields might continue beyond that limit. We cannot reconstruct the complete magnetic field topology because 1) the finite spatial resolution of our observations is (perhaps inherently) above the organization scale of the magnetic fields, and 2) we lack linear polarimetric sensitivity, which gives us only fragmentary information on the transverse (horizontal) component of the magnetic fields. Due to these limitations, the loop structures that we observe are biased towards those that are relatively large and relatively strong with respect to the magnetic flux density in the neighbouring areas. We found evidence that the loops thus detected are not randomly distributed on the solar surface, but rather that they may appear in bursts, and that they are noticeably absent from extended areas which are, also, only weakly magnetized. It is not yet clear what the nature of the magnetic fields in the quiet Sun is, that is, what fundamental physical mechanisms are involved in their generation and evolution. The presence of these {\em dead calm} areas in the quiet Sun (and of small-scale loop hotspots) represents an important constraint on the origin of magnetic fields in the very quiet Sun and on the dominant dynamic and magnetic mechanisms taking place there. It is thought that the magnetism of the quiet Sun can be the result of the emergence of underlying organized magnetic fields \citep{fernando_12} or the dragging of the overlying canopy fields \citep{pietarila+11}. It would then be necessary to understand why there are emergence hotspots and dead calm areas. Another possibility is that they are just the recycled decay products of active regions as they diffuse and migrate to the poles. But it seems unlikely that such a random walk would lead to the kind of organized structures and spatial patterns reported here. It is also possible that they are linked to some type of dynamo action taking place in the solar surface \citep{Cattaneo99}. It now seems clear that coherent velocity patterns are a requisite for dynamo action to take place \citep{tobias_cattaneo08}. The most obvious coherent velocity pattern in the solar surface is granulation. If this velocity pattern is involved in some dynamo action, it is reasonable that it forms coherent magnetic structures (such as the loops), although these need not be organized at the same granular scales; it could well be that they form intermittent patterns such as the ones observed here. Indeed, theoretical considerations \citep{chertkov+99} and laboratory experiments \citep{revelet+08} support the idea that the onset of turbulent dynamo action may be highly intermittent and bursty. Finally, it could just be that these small-scale loops represent the far tail of a continuous range of structures from a global dynamo, lying at the opposite end from sunspots. Their spatial statistics would then reflect the velocity patterns in the last (shallowest) layers of magnetic field emergence. Future models constructed to understand the generation of magnetic fields in the very quiet Sun will have to explain the spatio-temporal coherence that we report.
Further work is needed to extend these results in larger areas of the Sun and along the solar cycle.
| 12
| 6
|
1206.4545
|
1206
|
1206.6892_arXiv.txt
|
We calculate the polarization of synchrotron radiation produced at relativistic reconfinement shocks, taking into account globally ordered magnetic field components, in particular toroidal and helical fields. In these shocks, toroidal fields produce high parallel polarization (electric vectors parallel to the projected jet axis), while chaotic fields generate moderate perpendicular polarization. Helical fields result in a non-axisymmetric distribution of the total and polarized brightness. For a diverging downstream velocity field, the Stokes parameter $U$ does not vanish and the average polarization is neither strictly parallel nor perpendicular. The distance at which the downstream flow changes from diverging to converging can be easily identified on polarization maps as a turning point, at which polarization vectors switch, e.g., from clockwise to counterclockwise.
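For reference, the polarization degree and the electric-vector position angle (EVPA) follow from the Stokes parameters in the standard way; in the convention sketched below the EVPA is measured from the projected jet axis, so $Q>0$ corresponds to parallel and $Q<0$ to perpendicular polarization (one possible convention, not necessarily the one used in the paper).
\begin{verbatim}
import numpy as np

def polarization(I, Q, U):
    """Linear polarization degree and EVPA from Stokes I, Q, U.
    With the EVPA measured from the projected jet axis, (Q>0, U=0) is
    'parallel' and (Q<0, U=0) is 'perpendicular' polarization; any
    non-zero U tilts the average EVPA away from both, which is why a
    non-vanishing <U> gives an average polarization that is neither
    strictly parallel nor perpendicular."""
    p = np.hypot(Q, U) / I                     # degree of linear polarization
    chi = 0.5 * np.degrees(np.arctan2(U, Q))   # EVPA [deg]
    return p, chi

print(polarization(1.0,  0.15, 0.00))          # parallel: p = 15%, chi = 0
print(polarization(1.0, -0.15, 0.00))          # perpendicular: chi = 90 deg
print(polarization(1.0,  0.10, 0.10))          # intermediate EVPA
\end{verbatim}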
|
\label{sec_intro} Relativistic jets are present in all radio-loud active galaxies and are responsible for the bulk of their non-thermal emission. The spectral energy distributions of these sources usually show two broad components. The low-energy one, extending from radio wavelengths to the optical/UV band, and in some cases even to hard X-rays, is commonly interpreted as synchrotron radiation. This interpretation is supported by significant linear polarization measured routinely in the optical \citep[\eg,][]{1978ApJ...220L..67S,1991ApJ...375...46I,1992ApJ...398..454W,2011PASJ...63..639I}, radio \citep[\eg,][]{2001ApJ...562..208L,2002ApJ...577...85M,2002ApJ...568...99H,2003ApJ...589..733P,2011MNRAS.412..318M} and other bands. Polarization measurements in sources dominated by relativistic jets provide important constraints on the structure of the dominant emitting regions. In particular, a significant polarization degree indicates an anisotropic distribution of magnetic fields. This anisotropy may reflect a large-scale order in the magnetic field lines \citep{1984RvMP...56..255B} or may arise from chaotic magnetic fields compressed at shocks \citep{1980MNRAS.193..439L,1985ApJ...298..301H,1988ApJ...332..678J} or sheared at the jet boundary layer \citep{1981ApJ...248...87L,1990MNRAS.242..616M}. For purely chaotic magnetic fields compressed by perpendicular or conical shocks, high polarization degrees are found only when the polarization (electric) vectors are parallel to the projected jet axis (``parallel polarization''), while for polarization vectors perpendicular to the jet axis (``perpendicular polarization''), the polarization degrees are below $10\%$ \citep{1990ApJ...350..536C}. To produce higher perpendicular polarization degrees, \cite{2006MNRAS.367..851C} introduced a large-scale poloidal magnetic field component. Diverging conical shocks can be caused by a collision of the jet with a dense cloud \citep{1985ApJ...295..358L} or can appear when the jet rapidly becomes overpressured with respect to its environment \citep{2012ApJ...752...92A}. A different shock structure can result if a jet becomes underpressured. Then, the interaction between the relativistic jet and its gaseous environment leads to the formation of so-called reconfinement or recollimation shocks \citep{1983ApJ...266...73S,1997MNRAS.288..833K}. These shocks deflect the jet flow and focus it on a cross section much smaller than that of a freely propagating jet. \citet[hereafter \citetalias{2009MNRAS.395..524N}]{2009MNRAS.395..524N} calculated the linear polarization of synchrotron emission associated with reconfinement shocks for purely chaotic magnetic fields. He found that perpendicular polarization degrees can exceed $20\%$. Because a reconfinement shock is an ensemble of conical shocks, this result appears to be in conflict with the work of \cite{1990ApJ...350..536C}. The crucial difference is that while \cite{1990ApJ...350..536C} assumed a parallel upstream flow, in \citetalias{2009MNRAS.395..524N} a spherically diverging upstream flow was considered. Because parallel upstream flows are often adopted in studies of relativistic jet polarization \citep{2005MNRAS.360..869L,2006MNRAS.367..851C}, we would like to point out that jet divergence makes a substantial difference in the resulting polarization. Reconfinement shocks with purely chaotic magnetic fields cannot account for the high parallel polarization often observed in blazars.
Therefore, we extend the study presented in \citetalias{2009MNRAS.395..524N} to include large-scale ordered magnetic field components. High parallel polarization can be achieved by introducing a toroidal magnetic field component, but in general ordered magnetic fields may consist of both toroidal and poloidal components, forming a helical structure. In Section \ref{sec_pol}, we describe our model of synchrotron emission and polarization from relativistic reconfinement shocks, introducing a simply parametrized family of global helical magnetic fields. In Section \ref{sec_res}, we present the results demonstrating the effect of mixing chaotic and ordered (toroidal) magnetic fields (Section \ref{sec_pol_tor}), and the effect of changing the pitch angle of the purely helical magnetic field (Section \ref{sec_pol_heli}). Our results are discussed in Section \ref{sec_dis} and summarized in Section \ref{sec_sum}. Preliminary results were already presented in \cite{2010ASPC..427..205N}.
|
\label{sec_dis} Polarization surveys, combined with knowledge of the inner jet structure inferred from VLBI imaging, can provide information on the orientation of magnetic fields with respect to the projected jet axis. Optical polarization electric vectors in compact radio sources are preferentially parallel to the inner jet \citep[\eg][]{1985AJ.....90...30R}. This polarization alignment can be explained with transverse internal shocks. In the reconfinement shock scenario discussed in this work, strong parallel polarization can only be produced in the presence of ordered magnetic fields that contribute at least half the total magnetic energy density (see Figure \ref{fig_pol_total-tor}) and have a minimum pitch angle of less than $\sim 5^\circ$ (see Figure \ref{fig_pol_total-heli}). Polarization measured in large-scale jets between prominent knots is usually perpendicular, both in the radio \citep[\eg][]{1994AJ....108..766B} and optical bands \citep[\eg][]{2006ApJ...651..735P}. It indicates the presence of a strong poloidal component of the magnetic field. However, in an expanding jet the poloidal magnetic field component decays faster than the toroidal component. In large-scale jets, the poloidal component of the magnetic field can arise through the velocity shear at the jet boundary \citep{1981ApJ...248...87L}. \cite{2006MNRAS.367..851C} studied polarization of synchrotron radiation from conical shocks filled with a combination of chaotic and poloidal magnetic fields. He calculated polarized emission maps and compared them with a resolved polarimetric VLBI map of blazar 3C~380. He was able to reproduce a fan-like structure of the polarization vectors for an equal contribution of chaotic and poloidal magnetic field components ($f=0.5$). His motivation for introducing the poloidal component was to obtain higher degrees of the perpendicular polarization. Reconfinement shocks, or conical shocks with a diverging upstream flow, can produce higher perpendicular polarization degrees without an ordered magnetic field component. \cite{2005MNRAS.360..869L} studied polarization from helical magnetic fields in relativistic cylindrical jets. In this case, the pitch angle is constant along the jet and therefore their results are difficult to compare directly with the results of Section \ref{sec_pol_heli}, where the pitch angle is strongly position-dependent. However, the dependence of the average polarization degree on the viewing angle and the minimum pitch angle (Figure \ref{fig_pol_total-heli}) is in qualitative agreement with their Figure 7a. One important difference is that in their model the average Stokes parameter $\left<U\right>$ vanishes. In our model, $\left<U\right>\ne 0$, since we have defined the poloidal component of the magnetic field along the fluid velocity unit vector $\bm{e}$ (see Equation \ref{eq_pol_vecb_heli}). Only when $\bm{e}$ is parallel to the jet axis, as is the case for a cylindrical jet model, does $\left<U\right>$ vanish. This provides a physical interpretation for the turning point found in Figure \ref{fig_pol_prof-pol-1a-alpha}. It corresponds to the distance $z$ along the jet at which the post-shock velocity inclination angle $\theta_{\rm s}=0$. Since $\theta_{\rm s}$ is a monotonically decreasing function of $z$, there can be only one such turning point, located close to the point of maximum jet radius. \cite{2005MNRAS.360..869L} also studied the transverse profiles of polarization degree.
In their Figure 9a,b they show that those profiles should be symmetric with respect to the jet axis for $\theta_{\rm obs}=1/\Gamma$ and strongly asymmetric for $\theta_{\rm obs}=1/(2\Gamma)$ or $\theta_{\rm obs}=2/\Gamma$. Polarization vectors close to the jet boundary should always be perpendicular. Using the emission maps shown in Figure \ref{fig_pol_map-1a-alpha}, one can evaluate the transverse polarization degree profiles along the lines indicating the turning point. These models correspond to $\theta_{\rm obs}\Gamma_{\rm j}=0.9$ and $\theta_{\rm obs}\Gamma_{\rm s}$ slightly lower. The $Q/I$ profiles are strongly asymmetric, with the $Q/I$ value increasing from the lower rim to the upper rim. Polarization vectors are not always perpendicular at the jet boundary, but this may be due to the low resolution of the maps. Our polarization profiles appear to be roughly consistent with their case of $\theta_{\rm obs}=1/(2\Gamma)$. This may indicate that the transverse polarization profile for cylindrical jets filled with helical magnetic fields is very sensitive to $\theta_{\rm obs}$ for values close to $1/\Gamma$. On the other hand, some observed radio structures in jets can be understood without invoking any ordered magnetic field component. An interesting recent example is the C80 knot in the jet of radio galaxy 3C~120, reported by \cite{2012ApJ...752...92A}. It has a bow-like shape with polarization vectors aligned perpendicularly to its outline. It was well reproduced by a conical shock model with a compressed chaotic magnetic field distribution. However, this model can hardly explain the polarization vectors perpendicular to the jet axis observed immediately downstream of the C80 knot, even when a poloidal field component is introduced. Additional perpendicular shock waves can explain the parallel polarization measured even farther downstream (knot C99), but are inconsistent with a perpendicular polarization. We propose a simple explanation of the perpendicular polarization vectors downstream of the C80 knot: a gradual collimation of the conical shock into a roughly cylindrical structure. In the absence of large-scale magnetic fields, the polarization vectors will be perpendicular to the local shock outline \citepalias{2009MNRAS.395..524N}. Our model predicts a relatively uniform distribution of the total brightness, especially at large viewing angles. Hence, it cannot reproduce very bright and compact features like HST-1 in the radio galaxy M87. Clearly, an additional dissipation mechanism is required to explain its behavior, in particular the high optical/UV polarization reported by \cite{2011ApJ...743..119P}.
| 12
| 6
|
1206.6892
|
1206
|
1206.0366_arXiv.txt
|
The effect of temperature inhomogeneity on the periods, their ratios (fundamental vs. first overtone), and the damping times of standing slow modes in gravitationally stratified solar coronal loops is studied. The effects of optically thin radiation, compressive viscosity, and thermal conduction are considered. The linearized one-dimensional magnetohydrodynamic (MHD) equations (under the low-$\beta$ condition) were reduced to a fourth-order ordinary differential equation for the perturbed velocity. The numerical results indicate that the periods of non-isothermal loops ({\it i.e.}, loops in which the temperature increases from base to apex) are shorter than those of isothermal loops. In the presence of radiation, viscosity, and thermal conduction, an increase in the temperature gradient is followed by a monotonic decrease in the periods (compared with the isothermal case), while the period ratio turns out to be a sensitive function of the temperature gradient and the loop length. We verify that radiative dissipation is not a main cooling mechanism of either isothermal or non-isothermal hot coronal loops and has a small effect on the periods. Thermal conduction and compressive viscosity are the primary mechanisms in the damping of slow modes of hot coronal loops. The periods and damping times in the presence of compressive viscosity and/or thermal conduction dissipation are consistent with the observed data in specific cases. By tuning the dissipation parameters, the periods and the damping times could be made consistent with the observations in more general cases.
|
Through SOHO and TRACE observations, propagating slow magnetoacoustic waves have been found in coronal plumes, loop footpoints, above sunspots and even non-sunspot regions ({\it e.g.}, Ofman {\it et al.}, 1997; DeForest and Gurman, 1998; Berghmans and Clette, 1999; De Moortel {\it et al.}, 2000; \'{O}Shea {\it et al.}, 2002; Brynildsen {\it et al.}, 2002; Marsh {\it et al.}, 2003). Standing longitudinal slow waves with strong damping and large Doppler-shift oscillations have been detected in hot postflare loops recorded by SOHO/SUMER. In cooler loops, these waves were observed by the EUV imaging spectrometer {\it Hinode}/EIS (Srivastava and Dwivedi, 2010). These oscillations have a phase shift of about one-quarter period between velocity and intensity. The periods and damping times of the standing slow waves are in the ranges of 8.6--32.3 min and 3.1--42.3 min, respectively (Kliem {\it et al.}, 2002; Wang {\it et al.}, 2002a, 2002b, 2003, 2005; Banerjee {\it et al.}, 2007; Erd\'{e}lyi {\it et al.}, 2008). The effects of energy dissipation through thermal conduction, compressive viscosity, radiative cooling, and heating on the periods, period ratios, and damping times of slow waves have formed the focus of recent studies. Ofman and Wang (2002), for instance, studied the oscillations and damping of standing slow modes in isothermal loops. They found that thermal conduction is the dominant dissipation mechanism. Taking into account the effects of thermal conduction and compressive viscosity, De Moortel and Hood (2003) investigated both propagating and standing slow magnetoacoustic waves in homogeneous coronal loops. They found a minimum damping time that can be obtained by thermal conduction alone; for stronger dissipation, however, an additional mechanism such as viscosity has to be added. Sigalotti {\it et al.} (2007) studied the dissipation of standing slow modes in hot, isothermal loops by integrating the effects of gravitational stratification, thermal conduction, compressive viscosity, radiative cooling, and heating in their study. They concluded that thermal conduction and compressive viscosity are the main sources of the wave damping. Pandey and Dwivedi (2006) have shown that the separate effects of thermal conduction and viscosity in isothermal loops are not sufficient to explain the observed damping times of the oscillations; only with the combined effect of thermal conduction and viscosity are the results consistent with the observations. Moreover, Taroyan {\it et al.} (2005) investigated the effect of temperature inhomogeneity on the dissipation of standing slow waves, considering thermal conduction and optically thin radiative losses. They found that the damping time of the isothermal loops is proportional to the wave period and that the oscillations are rapidly damped mainly by thermal conduction. The ratio between the period of the fundamental mode ($p_1$) and the first overtone ($p_2$), $p_1/(2p_2)$, for both the fast kink mode and the slow longitudinal mode is a useful seismological indicator. Departure of this ratio from unity is a consequence of the density, pressure, temperature, magnetic field (both radial and longitudinal structures), heating functions and the dissipation mechanisms (D\'{i}az {\it et al.}, 2006; Dymova and Ruderman, 2006; Donnelly {\it et al.}, 2006; Erd\'{e}lyi and Verth, 2007; Safari {\it et al.}, 2007; Verth {\it et al.}, 2007; Ruderman {\it et al.}, 2008; McEwan {\it et al.}, 2008; Andries {\it et al.}, 2009; Fathalian and Safari, 2010).
The results of the numerical modeling of the oscillations allow a comparison between the model and the observational results (McEwan {\it et al.}, 2006; Abedini and Safari, 2011). Macnamara and Roberts (2010) studied the effects of thermal conduction and compressive viscosity on the slow mode of isothermal loops, and concluded that the effect of thermal conduction on the period ratio is negligible. In line with the above-mentioned works, we investigate here the dissipation of standing slow MHD modes in hot coronal loops, taking into account gravitational stratification, inhomogeneity of temperature, thermal conduction, compressive viscosity, heating, and optically thin radiative losses. For this purpose, the linearized one-dimensional magnetohydrodynamic (MHD) equations are reduced to a fourth-order differential equation for the perturbed velocity. The paper is organized as follows. In Section 2, we present a brief description of the models and equations. In Section 3, the differential equation is solved numerically, based on a boundary value problem solver (finite difference code), for different cases. Finally, conclusions are drawn in Section 4.
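To fix ideas, a drastically simplified, dissipationless analogue of this eigenvalue problem (ideal slow modes with line-tied footpoints and a prescribed temperature profile, with stratification and all damping terms dropped) can be solved with a few lines of finite differences; the temperature profile below is an illustrative choice of ours, not the paper's model.
\begin{verbatim}
import numpy as np

# Ideal, unstratified standing slow modes: v'' + (w^2 / c_s(s)^2) v = 0,
# v = 0 at both footpoints. Finite differences turn this into an
# eigenvalue problem for w^2; periods follow as p = 2 pi / w.
L, N = 2.0e8, 400                        # loop length [m] (~200 Mm), grid
s = np.linspace(0.0, L, N + 2)[1:-1]     # interior grid points
ds = s[1] - s[0]

T = 6e6 * (1.0 + 0.5 * np.sin(np.pi * s / L))        # apex hotter than base
cs2 = (5.0 / 3.0) * 1.38e-23 * T / (0.6 * 1.67e-27)  # gamma k T / (mu m_p)

# second-derivative matrix with v = 0 boundary conditions
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / ds**2
w2 = np.sort(np.linalg.eigvals(-np.diag(cs2) @ A).real)

p1, p2 = 2.0 * np.pi / np.sqrt(w2[:2])   # fundamental and first overtone
print(f"p1 = {p1/60:.1f} min, p2 = {p2/60:.1f} min, "
      f"p1/(2 p2) = {p1/(2*p2):.3f}")
# for a constant temperature the ratio is exactly 1 (up to discretisation);
# temperature structure shifts it away from unity
\end{verbatim}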
|
The coronal loop has been modeled as a half-circle magnetic flux tube with a uniform magnetic field along the loop axis. The effect of the inhomogeneity of the temperature on the periods, the period ratio, and the damping times of standing slow modes in gravitationally stratified coronal loops was investigated in this study. The effects of optically thin radiation, compressive viscosity, and thermal conduction were considered. The linearized one-dimensional MHD equations were reduced to a fourth-order ordinary differential equation for the perturbed velocity. Under solar coronal conditions (low-$\beta$ plasma) the resultant equation was solved numerically. Moreover, the oscillations and the damping of various flux tube models in the presence of dissipation mechanisms were compared with the oscillations and damping of adiabatic and isothermal loops. The main results are as follows: \begin{itemize} \item In a gravitationally stratified loop, the periods and their ratio (fundamental vs. first overtone) are sensitive functions of the temperature inhomogeneity. \item In hot coronal loops ($T\ge6$ MK), the effect of optically thin radiation on the periods and damping of the oscillations is negligible for both isothermal and non-isothermal cases. \item In the presence of compressive viscosity, the damping times change significantly, but viscous dissipation is more effective in shorter and hotter loops. Compressive viscosity introduces a cutoff frequency on the oscillations. In the case of non-isothermal and hotter loops, the computed damping times agree with the observed data. \item Thermal conduction changes the periods more significantly than the other dissipation mechanisms, and the damping of oscillations is in the weak regime ($\tau_{\rm d}/p\ge2$). We note that thermal conduction is a function of the temperature gradient and depends on higher derivatives, which is why it changes the periods more, in addition to the changes due to the inhomogeneity in the temperature. \item In the most general case, where optically thin radiation, compressive viscosity, and thermal conduction are all considered, the periods are positively correlated with the loop length and negatively correlated with the temperature. The damping time and damping quality are complex functions of both the loop length $L$ and the inhomogeneity parameter $\lambda$. We also found that in isothermal, gravitationally stratified loops, thermal conduction and compressive viscosity are needed to reproduce the observed periods and damping times. In the presence of a temperature gradient in gravitationally stratified non-isothermal loops, the periods and damping times resulting from compressive viscosity and/or thermal conduction dissipation are consistent with the observed data in specific cases. However, by tuning the dissipation parameters, the periods and the damping times could be brought into good agreement with the observations in more general cases. \end{itemize} \begin{acks} The authors thank the anonymous referee for very helpful comments and suggestions. \end{acks}
| 12
| 6
|
1206.0366
|
1206
|
1206.2363_arXiv.txt
|
The origin of cosmic rays still holds many mysteries a hundred years after they were first discovered. Supernova remnants have long been the most likely sources of Galactic cosmic rays. I discuss here some recent evidence suggesting that supernova remnants can indeed efficiently accelerate cosmic rays. For this conference devoted to the Astronomical Institute Utrecht, I put the emphasis on work done in my group, but place it in a broader context: efficient cosmic-ray acceleration and the implications for cosmic-ray escape; synchrotron radiation and the evidence for magnetic-field amplification; and potential X-ray synchrotron emission from cosmic-ray precursors. I conclude with the implications of cosmic-ray escape for a Type Ia remnant like Tycho and a core-collapse remnant like Cas A.
|
This year marks the 100th anniversary of the detection of cosmic rays by Victor Hess \citep{hess12}.\footnote{See \citet{carlson12} for the history behind the discovery of cosmic rays.} The source(s) of these highly energetic particles, and the way these particles are accelerated, are still a matter of debate. The energy density of cosmic rays in the Galaxy, about 1~eV\,cm$^{-3}$, has long been attributed to the power provided by supernovae \citep[e.g.][]{ginzburg64}. Supernovae are the most energetic events in the Galaxy. Nevertheless, a large fraction, 10-20\%, of their explosion energy is needed for particle acceleration in order to explain the flux of cosmic rays observed on Earth. In addition, the lack of spectral features in the cosmic-ray spectrum up to $3\times 10^{15}$~eV suggests that the sources of cosmic rays should be able to accelerate particles to this energy.\footnote{Galactic sources should probably even accelerate up to $3\times 10^{18}$~eV, as only above this energy does the Galaxy become transparent to cosmic rays.} The power provided by supernova explosions may be used to accelerate particles immediately after the explosion, in the supernova remnant (SNR) phase, or perhaps in OB associations, due to the combined effects of multiple supernovae and strong stellar winds \citep{bykov92}. Most evidence now supports the view that at least part of the cosmic rays originate from the SNR phase. Radio observations have long shown that relativistic electrons are present in SNRs. But over the last two decades, evidence has accumulated that SNRs can accelerate particles to energies of at least 10~TeV. The first proof was the discovery of X-ray synchrotron radiation from SN1006 \citep{koyama95}, followed by further evidence for hard X-ray synchrotron emission from Cas A \citep{the96,allen97,favata97}. For many young SNRs, regions close to the shock front have now been identified whose X-ray emission is dominated by synchrotron radiation. Further proof for acceleration to TeV energies is the detection of TeV gamma-ray emission from many young SNRs by Cherenkov telescopes such as \hegra, \hess, \magic\ and \veritas. In the last few years, the \fermi\ and \agile\ \gray\ observatories have also detected many SNRs, both young and mature, in the GeV \gray\ range.\footnote{See \citet{reynolds08,vink12,helder12} for recent reviews.} However, there is no conclusive observational evidence yet that SNRs are capable of accelerating particles up to, or beyond, $3\times 10^{15}$~eV, or that 10\% of the explosion energy is transferred to cosmic rays. Nevertheless, observations give us several hints that SNRs can indeed accelerate sufficient numbers of particles to these very high energies.
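The classic power-budget argument behind the 10--20\% figure runs as follows; the confinement volume, residence time, and supernova rate used below are order-of-magnitude inputs assumed here, and the resulting efficiency is correspondingly rough.
\begin{verbatim}
import math

# Order-of-magnitude supernova power budget for Galactic cosmic rays (cgs).
eV, pc, yr = 1.602e-12, 3.086e18, 3.156e7

u_cr  = 1.0 * eV                                  # CR energy density
V_gal = math.pi * (15e3 * pc)**2 * (2e3 * pc)     # disc R=15 kpc, halo +-1 kpc
t_res = 2e7 * yr                                  # CR residence time
P_cr  = u_cr * V_gal / t_res                      # power to sustain the CRs

P_sn = 1e51 * 2.0 / (100.0 * yr)                  # ~2 SNe/century, 1e51 erg
print(f"P_CR ~ {P_cr:.1e} erg/s, P_SN ~ {P_sn:.1e} erg/s, "
      f"efficiency ~ {P_cr / P_sn:.0%}")          # of order 10-20%
\end{verbatim}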
| 12
| 6
|
1206.2363
|