Dataset schema: subfolder (string, 367 distinct values); filename (string, 13-25 characters); abstract (string, 1-39.9k characters); introduction (string, 0-316k characters); conclusions (string, 0-229k characters); year (int64, 0-99); month (int64, 1-12); arxiv_id (string, 8-25 characters).
1609
1609.03674_arXiv.txt
{Radial-velocity (RV) signals arising from stellar photospheric phenomena are the main limitation for precise RV measurements. Those signals induce RV variations an order of magnitude larger than the signal created by the orbit of Earth-twins, thus preventing their detection.} {Different methods have been developed to mitigate the impact of stellar RV signals. The goal of this paper is to compare the efficiency of these different methods at recovering extremely low-mass planets despite stellar RV signals. However, because the observed RV variations at the meter-per-second precision level or below are a combination of signals induced by unresolved orbiting planets, by the star, and by the instrument, performing such a comparison using real data is extremely challenging.} {To circumvent this problem, we generated simulated RV measurements including realistic stellar and planetary signals. Different teams blindly analyzed those simulated RV measurements, each using their own method to recover planetary signals despite stellar RV signals. By comparing the results obtained by the different teams with the planetary and stellar parameters used to generate the simulated RVs, it is possible to compare the efficiency of these different methods.} {The most efficient methods to recover planetary signals {\bf take into account the different activity indicators,} use red-noise models to account for stellar RV signals, and rely on a Bayesian framework to provide model comparison in a robust statistical approach. Using the most efficient methodology, planets can be found down to $K/N= K_{\mathrm{pl}}/\mathrm{RV}_{\mathrm{rms}}\times\sqrt{N_{\mathrm{obs}}}=5$, although a threshold of $K/N=7.5$ is required to reach the 80-90\% recovery rates achieved by a number of methods. These recovery rates drop dramatically for $K/N$ smaller than this threshold. In addition, for the best teams, no false positives with $K/N > 7.5$ were detected, while a non-negligible fraction of them appear for smaller $K/N$. A limit of $K/N = 7.5$ therefore seems a safe threshold to attest the veracity of planetary signals for RV measurements with properties similar to those of the different RV fitting challenge systems.} {}
\label{sect:1} The radial-velocity (RV) technique is an indirect method that measures, with Doppler spectroscopy, the stellar wobble induced by a planet orbiting its host star. The technique is sensitive not only to possible companions, but also to signals induced by the host star. Now that the \ms\, precision level has been reached by the best spectrographs, it is clear that solar-like stars introduce signals at a similar level. Those stellar signals, often referred to as \emph{stellar jitter}, currently prevent the RV technique from detecting and measuring the mass of Earth-twins orbiting solar-type stars, i.e., Earth analogues orbiting in the habitable zone of GK dwarfs, because such planets induce signals an order of magnitude smaller. It is therefore extremely important to investigate new approaches to mitigate the impact of stellar signals if we want the RV technique to be efficient at characterizing the Earth-twins that will be found by TESS \citep[][]{Ricker-2014} and PLATO \citep[][]{Rauer-2014}. At the \ms\,precision level, RV measurements are affected by stellar signals that depend on the spectral type of the observed star { \citep[][]{Dumusque-2011a,Isaacson-2010,Wright-2005}}. For GK dwarfs, those stellar signals can be decomposed, to our current knowledge, into four different components: \begin{itemize} \item solar-type oscillations \citep{Dumusque-2011a,Arentoft-2008,OToole-2008,Kjeldsen-2005}, \item granulation phenomena \citep{Dumusque-2011a,Del-Moro-2004a,Del-Moro-2004b,Lindegren-2003,Dravins-1982}, \item short-term activity signals on the stellar rotation period timescale \citep[][]{Haywood-2016,Borgniet-2015,Robertson-2015a,Robertson-2014,Dumusque-2014b,Boisse-2012b,Saar-2009,Meunier-2010a,Saar-1997b}, \item and long-term activity signals on the magnetic cycle period timescale \citep[][]{Lanza-2016,Diaz-2016,Meunier-2013,Lovis-2011b,Dumusque-2011c,Makarov-2010}. \end{itemize} {For more details about these signals and their origins, readers are referred to Section 2 in \citet{Dumusque-2016a} and references therein.} Stellar signals create RV variations that are larger than the signals induced by small-mass exoplanets, such as Earth-twins. There are several examples in the literature where, by analyzing the same RV measurements, different teams detected different planetary configurations. This is the case for the famous planetary system GJ581, for which the number of planets detected ranges between 3 and 6 \citep[][]{Hatzes-2016,Anglada-Escude-2015, Robertson-2014, Baluev-2013, Vogt-2012, Gregory-2011, Vogt-2010b, Mayor-2009b}, for HD40307, for which 4 to 6 planets have been announced \citep[][]{Diaz-2016,Tuomi-2013a}, and for GJ667C, for which 3 to 7 planets have been detected \citep[][]{Feroz-2014,Anglada-Escude-2012a,Gregory-2012}. All those systems are affected by stellar signals, and therefore, depending on the model used to analyze the data, different teams arrive at different conclusions. This shows that optimal models to analyze RV measurements affected by stellar signals do not exist at the moment, which pushes the community towards finding an optimal solution. The RV fitting challenge is one of the efforts pursued today in this direction. The development of the HARPS-N solar telescope \citep[][]{Dumusque-2015b} is another one that should deliver the optimal data set for characterizing and understanding stellar signals in detail. In principle, the nature of RV stellar and planetary signals is different.
The RV signal induced by a planet is periodic over time, while stellar signals are, at best, semi-periodic. In addition, a planet induces a pure Doppler shift of the observed stellar spectrum, while stellar signals change the shape of the spectral lines. Therefore, it should be possible to find techniques to differentiate between planetary and stellar signals. Stellar oscillations are often averaged out in RV surveys by fixing the exposure time to 15 minutes. To obtain the best RV precision, it is also possible to observe the same star several times per night, with measurements spread out during the night, to better sample the signature of granulation and supergranulation \citep[][]{Dumusque-2011a}. It has been shown that this simple approach reduces the observed daily RV rms of the measurements; however, it does not fully average out this signal \citep[][]{Meunier-2015,Dumusque-2011a}, and more optimal techniques need to be investigated. For short-term activity, which is by far the most difficult stellar signal to deal with due to the non-periodic, stochastic, long-term signals arising from the evolution and decay of active regions, several correction techniques have been investigated: \begin{itemize} \item fitting sine waves at the rotation period of the star and its harmonics \citep[][]{Boisse-2011}, \item using red-noise models to fit the data \citep[e.g.][]{Feroz-2014, Gregory-2011, Tuomi-2013a}, \item using the FF${}^\prime$ method if contemporaneous photometry exists \citep[][]{Dumusque-2015b, Haywood-2014,Aigrain-2012}, \item modeling activity-induced signals in RVs with Gaussian process regression, whose covariance properties are shared either with the star's photometric variations \citep[][]{Haywood-2014,Grunblatt-2015} or with a combination of several spectroscopic indicators \citep[][]{Rajpaul-2015}, or are determined from the RVs themselves \citep[][]{Faria-2016a}, \item using linear correlations between the different observables, i.e., RV, bisector span (BIS SPAN) and full width at half maximum (FWHM) of the cross correlation function \citep[CCF,][]{Baranne-1996,Pepe-2002a}, photometry \citep[][]{Robertson-2015a,Robertson-2014,Boisse-2009,Queloz-2001}, and magnetic field strength \citep[][]{Hebrard-2014}, \item checking for season-by-season phase incoherence of signals \citep[][]{Santos-2014,Dumusque-2014a,Dumusque-2012}, \item avoiding the impact of activity by using wavelength-dependence criteria for the RV signal \citep[e.g. in HD40307 and HD69830,][]{Tuomi-2013a, Anglada-Escude-2012}. \end{itemize} Finally, long-term activity seems to correlate well with the calcium chromospheric activity index, which provides a promising approach to mitigating this source of stellar RV noise \citep[][]{Lanza-2016, Diaz-2016,Meunier-2013,Dumusque-2012}. The goal of this paper is to test the efficiency of different approaches to retrieve low-eccentricity planetary signals despite stellar signals. To do so, we present the results of an RV fitting challenge, in which several teams blindly analyzed the same set of real and simulated RV measurements affected by planetary and stellar signals. Each team used their own method to recover planetary signals despite stellar signals. At the \ms\,precision level reached by the best spectrographs, RV measurements are affected not only by unresolved planets, but also by stellar and instrumental signals.
Without knowing which part of the RV variations is due to planets and which is due to the star or the instrument, it is extremely difficult to test which method is the most efficient at finding low-mass planets despite stellar signals. For such an exercise, it is crucial to use simulated RV measurements so that a comparison can be performed between the results of the different analyses and what was initially injected into the data. The set of simulated and real RV measurements used for this RV fitting challenge is described in detail in \citet{Dumusque-2016a}. { As discussed in that paper, most of the planets injected in the data have very low eccentricities, which is common in observed multi-planetary systems.} Those RVs correspond to the typical quiet { solar-like stars} targeted by high-precision RV surveys. Therefore, the conclusions of this paper are relevant for most high-precision RV surveys. In Sections \ref{sect:2} and \ref{sect:3}, we describe the methods used by the different teams to recover planetary signals despite stellar signals; Section \ref{sect:2} focuses on methods relying on a Bayesian framework, while Section \ref{sect:3} focuses on other methods. For those sections, the number assigned to each team does not have any particular meaning. In Section \ref{sect:4}, we discuss the results of the different teams and compare the efficiency of their methods at recovering low-mass planetary signals despite stellar signals. We conclude in Section \ref{sect:5}.
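Among the short-term activity mitigation techniques listed above, Gaussian process regression with a quasi-periodic covariance function is one of the most widely used. The following snippet is a minimal, illustrative sketch of such a model written in Python; it is not the implementation used by any of the challenge teams, and the kernel hyperparameters (amplitude, rotation period, active-region evolution timescale, smoothing factor) as well as the toy data are placeholders introduced here for demonstration only.
\begin{verbatim}
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, p_rot, lam_e, lam_p):
    """Quasi-periodic covariance often used for activity-induced RV signals.
    amp: amplitude [m/s]; p_rot: stellar rotation period [d];
    lam_e: active-region evolution timescale [d]; lam_p: smoothing factor."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-dt**2 / (2.0 * lam_e**2)
                           - np.sin(np.pi * dt / p_rot)**2 / (2.0 * lam_p**2))

def gp_log_likelihood(t, rv, rv_err, theta):
    """Gaussian-process log-likelihood of the RVs under the activity model."""
    amp, p_rot, lam_e, lam_p = theta
    K = quasi_periodic_kernel(t, t, amp, p_rot, lam_e, lam_p)
    K += np.diag(rv_err**2)                      # white-noise term
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, rv))
    return (-0.5 * rv @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2.0 * np.pi))

# Toy example (all numbers are placeholders, not challenge data):
t = np.sort(np.random.uniform(0.0, 200.0, 80))   # observation epochs [d]
rv = np.random.normal(0.0, 2.0, t.size)          # RVs [m/s]
rv_err = np.full(t.size, 0.7)                    # uncertainties [m/s]
print(gp_log_likelihood(t, rv, rv_err, (2.0, 25.0, 50.0, 0.5)))
\end{verbatim}
In a full analysis, such a covariance term would be combined with one or more Keplerians and the hyperparameters would be marginalized within a Bayesian framework, as done by several of the teams described in the following sections.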
\label{sect:5} In total, 8 different teams participated in the analysis of the RV fitting challenge data set. They all used different techniques to find the {low-eccentricity planets that were hidden inside stellar signals}. Except for systems 14 and 15, which present the real and simulated RVs of the active star Corot-7, all the other systems present a level of stellar signal typical of inactive G-K dwarfs. Those stars are the typical targets of most high-precision RV surveys searching for low-mass planets, and therefore the conclusions made here can be applied to most of the RV measurements gathered up to now. With 14 different systems, 48 planets with semi-amplitudes ranging between 0.16 and 5.85 \ms, and different models of stellar signals, the parameter space is huge, and it is difficult to draw strong conclusions from the analysis of only 8 different teams. In addition, the data set of the RV fitting challenge was given to the different teams 8 months before the deadline. {The techniques used by teams 1 to 5, based on a Bayesian framework with red-noise models, required significantly more computational time than the techniques used by teams 6 to 8. As a result, teams 2 and 4 could only analyze the first five systems out of 14, team 5 only the first two, and teams 1 to 5 used statistical shortcuts to find planetary signals in the data, or could not test all possible models, at the risk of biasing their final results. Readers should therefore be aware that the results presented in this paper are preliminary, and depend on (1) how much time each team was able to invest in the challenge, (2) how mature their analytical methods were, and (3) how experienced the team members were with such analyses. Looking at the results presented in this paper, it seems that some techniques work better than others at recovering planets despite stellar signals; however, further investigation needs to be performed to be confident in the conclusions presented here. Note that the best techniques all require intensive computational effort}. A first important step before finding planets is the detection of the stellar rotation period. For teams 1 and 2, this period is used in their model that accounts for short-term activity; for the other teams, this period and its harmonics define regions in period space where planetary signals should be excluded because they are likely due to short-term activity. Finding the correct stellar rotation period is therefore crucial to reduce the number of false positives in the end, {and team 3, using its moving average model to account for stellar signals, performed the best at this exercise.} Among all the teams that explicitly reported a stellar rotation period, we notice that only a small number of mistakes were made. However, in many cases, a harmonic of the stellar rotation period was found, which can be dangerous because a signal at the true rotation period can then be confused with a planet. To distinguish between the true stellar rotation period and one of its harmonics, an activity-rotation calibration such as the one developed by \citet{Mamajek-2008} can be used. This was, however, not possible here due to the lack of information in the RV fitting challenge data set, {but it is something that people analyzing RV data should strongly consider to prevent false positives (see Section \ref{sect:4-0}).} When looking at the recovery rate of planetary signals for each team, the teams can be separated into two groups:
teams 1, 2, 3, 4, and 5, which used a Bayesian framework with red-noise models, and teams 6, 7, and 8, which used \emph{pre-whitening}, {compressed sensing}, and/or filtering techniques in the frequency domain to deal with stellar signals. The first group discovered more true planetary signals than the second one, and also made fewer mistakes. In addition, when asked if those detections were significant enough to lead to publication, the first group of teams was also more confident in announcing a planetary signal. {The planets for which the $K/N$ ratio (see Eq. \ref{eq:4-1-0}) was above 7.5 were nearly all recovered by the best teams. Below this threshold, the detection rate drops to 20\% at best. Note that team 3 was able to find the true planetary signals with the smallest $K/N$ ratios, between 5 and 7.5, without announcing false positives. Below $K/N=5$, no planetary signals were confidently recovered; this value is therefore a lower limit for planetary detections using data with properties similar to those of the RV fitting challenge (see Section \ref{sect:4-1-0}).} Regarding the accuracy of the orbital parameters estimated for the planetary signals qualified as publishable, most of the teams recovered the correct orbital parameters within 3$\sigma$ of the truth. A few signals were, however, outside the 3$\sigma$ limit. {This is not surprising, as the models used in this paper to account for stellar signals are not perfect; however, they are the best we have so far (see Section \ref{sect:4-1-1}).} {\bf Besides recovering real planetary signals in the data and giving correct orbital parameters, it is very important that the false-positive rate stays low. Above a threshold of 7.5 in $K/N$ ratio, team 7 announced nine false positives, team 1 six, team 6 one, and the other teams none. The technique used by team 7 is therefore prone to false positives and cannot be used to reliably detect planets. Team 1 also announced several false positives; however, a few of them} correspond to the stellar rotation period, despite the fact that the correct rotation period was found a priori. Therefore, although their GP regression has the correct stellar rotation period, it seems that the GP regression cannot fully model stellar signals and that an extra sinusoidal signal is needed. Further investigation of GP modeling therefore needs to be performed to be sure that GP regression does not create false positives. For the time being, signals close to the stellar rotation period or its harmonics should always be attributed to stellar activity to prevent false positives (see Section \ref{sect:4-1-2}). For planetary signals with periods longer than 500 days, several effects make their detection difficult. It is common that drifts in the data are observed due to magnetic cycle effects and long-period binaries. To remove such long-period signals, the different teams corrected the RVs for magnetic cycle effects by using the observed long-term correlation between the RVs and the different activity observables (\logrhk, BIS SPAN, FWHM), and removed the effect of binaries by fitting polynomials as a function of time.
{People analyzing RV data should be aware that such a model can absorb the signal of planets that have orbital periods similar to or longer than the time span of the data, and that orbits need to close before planet parameters can be inferred (see Section \ref{sect:4-1-3}).} When analyzing the recovery of planets with periods shorter than 5 days, team 3 found 4 out of 6 planets, including 3 with $K/N\le7.5$, while all the other teams found only the planet for which $K/N>7.5$. It seems, therefore, that the moving average model used by team 3 is more sensitive to short-period planets, because such a model considers the correlation between measurements on short timescales, which mitigates the effect of granulation on quiet stars and the strong short-timescale effect of short-term activity on active stars like Corot-7. We would therefore encourage people using GP modeling, or apodized Keplerians, to add on top of their model a correlation between measurements on short timescales, as this seems critical for detecting short-period planetary signals with small $K/N$ ratios (see Section \ref{sect:4-1-4}). {The RV rms of the real and simulated systems was similar, suggesting that the different sources of stellar signals were realistically taken into account. Team 3, which performed the best in the RV fitting challenge exercise, found very similar solutions for the real and simulated data. However, it was slightly more difficult for the other teams to analyze the simulated data. We therefore believe that, even if not perfect, the simulated data are realistic enough to be used to test the efficiency of techniques at recovering planetary signatures despite stellar signals (see Section \ref{sect:4-3}).} With more time, each technique can be improved, and the different teams are making progress \citep[see][]{Gregory-2016,Hara-2016}. {The Oxford team has also made important progress (priv. comm.): they are now able to perform a full Bayesian marginalisation over all parameters (planets + GP), which gives them much more reliable Bayesian model evidences.} Following a private communication with N. C. Hara from team 8, it seems that their method now delivers a performance in terms of planetary detection similar to that of Bayesian framework techniques using red-noise models {and with a much shorter computational time} \citep[see bottom plot in Fig. \ref{imcce_rvchallenge2} and][]{Hara-2016}. However, following the first results of the RV fitting challenge presented here, techniques using a Bayesian framework and red-noise models seem the most efficient at modeling the effect of stellar signals, and therefore at detecting true planetary signals while limiting the number of false positives. Moving average, GP regression, and apodized Keplerian models should be investigated further to assess their sensitivity to planets at short and long periods, to planets with periods similar to the stellar rotation period, to planets with high and low $K/N$ ratios, and to multi-planet systems. The goal of the RV fitting challenge was to test the efficiency of the different techniques at recovering planets in RV data given the presence of stellar signals, while limiting the number of false positives. As we can see from the different discussions above, the Bayesian framework and moving average model used by team 3 performed the best. {In second position comes the Bayesian framework and apodized Keplerian model used by team 4, followed in third position by the Bayesian framework and GP model used by team 1.
Although team 1 performed well in analyzing systems 6 to 15 in terms of true planetary signals detected, they announced many false positives at the stellar rotation period. Further investigation needs to be performed to test whether those false positives originate from the GP regression they used or from another part of their method.} {Team 3 was able to confidently discover a few planetary signals with $K/N$ ratios between 5 and 7.5 without announcing false positives, as well as nearly all the planetary signals with $K/N>7.5$. Teams 4 and 1 confidently detected most of the signals for which $K/N>7.5$, and none below this threshold. In conclusion, for RV measurements similar to those of the RV fitting challenge, a ratio $K/N=7.5$ seems to be a threshold separating confident detection from non-detection of planetary signals. Note, however, that the method used by team 3 could confidently detect $\sim$20\% of the planetary signals with $K/N$ ratios as low as 5, without announcing false positives.}
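As an illustration of the detection threshold discussed above, the short sketch below evaluates the $K/N = K_{\mathrm{pl}}/\mathrm{RV}_{\mathrm{rms}}\times\sqrt{N_{\mathrm{obs}}}$ ratio defined in the abstract and compares it with the empirical limits quoted in this paper; the numerical values in the example are placeholders, not challenge data.
\begin{verbatim}
import math

def k_over_n(k_pl, rv_rms, n_obs):
    """K/N = (planet semi-amplitude / RV rms) * sqrt(number of observations)."""
    return k_pl / rv_rms * math.sqrt(n_obs)

# Placeholder example: a 1 m/s semi-amplitude planet, 2.5 m/s RV rms, 400 epochs.
ratio = k_over_n(k_pl=1.0, rv_rms=2.5, n_obs=400)
if ratio >= 7.5:
    verdict = "confident detection expected (above the 7.5 threshold)"
elif ratio >= 5.0:
    verdict = "marginal: only the best method recovered such signals"
else:
    verdict = "below K/N = 5: no confident recovery in the challenge"
print(f"K/N = {ratio:.1f} -> {verdict}")
\end{verbatim}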
16
9
1609.03674
1609
1609.09169.txt
Employing a tidally enhanced stellar wind, we studied the effects of metallicity, mass ratio of primary to secondary, tidal enhancement efficiency, and helium abundance on the formation of blue hook (BHk) stars in binaries in globular clusters (GCs). A total of 28 sets of binary models with different input parameters are studied. For each set of binary models, we present the range of initial orbital periods needed to produce BHk stars in binaries. All the binary models could produce BHk stars, within different ranges of initial orbital periods. We also compared our results with observations in the $T_{\rm eff}$--$\log g$ diagrams of the GCs NGC 2808 and $\omega$ Cen. Most of the BHk stars in these two GCs lie well within the region predicted by our theoretical models, especially when C/N-enhanced model atmospheres are considered. We found that the mass ratio of primary to secondary and the tidal enhancement efficiency have little effect on the formation of BHk stars in binaries, while metallicity and, especially, helium abundance play important roles. Specifically, as the helium abundance of the binary models increases, the range of initial orbital periods needed to produce BHk stars becomes markedly wider, regardless of the other input parameters adopted. Our results are discussed in the context of recent observations and other theoretical models.
Horizontal branch (HB) stars in globular clusters (GCs) are low-mass stars that are burning helium in their cores. These stars are considered to be the progeny of red giant branch (RGB) stars (Hoyle \& Schwarzschild 1955). To settle at different positions on the HB, RGB stars need to lose different amounts of envelope mass before or while the helium core flash takes place (Catelan 2009). Some stars settle at red HB (RHB) positions in the colour-magnitude diagram (CMD) of GCs after losing only a little envelope mass, while other stars occupy blue HB (BHB) or extreme HB (EHB) positions because they lose most of, or nearly all of, their envelope mass on the RGB. However, the physical mechanism of mass loss for RGB stars in GCs is still unclear (Willson 2000; Dupree et al. 2009). In the late 1990s, a special kind of hot EHB star was found in some massive GCs (e.g., NGC 2808, $\omega$ Cen; Whitney et al. 1998; D'Cruz et al. 2000): the so-called blue hook (BHk) stars. These stars present very high temperatures (e.g., $T_{\rm eff}$ $>$ 32000 K; Moni Bidin et al. 2012) and very faint luminosities when compared with normal EHB stars in GCs. Therefore, BHk stars cannot be predicted by canonical stellar evolution models, and their formation mechanism is still unclear. So far, several formation scenarios have been proposed for BHk stars in GCs (see Heber 2016 for a recent review). D'Antona et al. (2010) proposed that BHk stars in $\omega$ Cen could be the progeny of blue main-sequence (MS) stars that belong to the second generation in this GC (also see Lee 2005). These stars would undergo extra mixing during the RGB stage and thus present very high helium abundances at their surfaces (e.g., up to $Y\approx$ 0.8). After helium ignites in their cores, these stars settle at very blue and faint HB positions and become BHk stars in GCs. On the other hand, Brown et al. (2001) suggested that BHk stars could be produced through a late hot flash on the white dwarf (WD) cooling curve (also see Castellani \& Castellani 1993; D'Cruz et al. 1996; Brown et al. 2010, 2012). In this scenario, low-mass stars in GCs experience huge mass loss on the RGB and undergo the helium core flash not at the RGB tip, but either on the way towards the WD stage (early hot flash; Brown et al. 2001; Cassisi et al. 2003; Miller Bertolami et al. 2008) or on the WD cooling curve (late hot flash; Brown et al. 2001; Cassisi et al. 2003; Miller Bertolami et al. 2008). During the late hot flash, internal convection can penetrate the thin hydrogen-rich envelope and lead to helium and carbon enhancement at the surface. Therefore, when settling on the zero-age horizontal branch (ZAHB), the stars that experience a late hot flash can be hotter and fainter than normal EHB stars in GCs. Recently, more and more evidence from both photometry and spectroscopy has supported the idea that multiple populations could be a universal phenomenon in most Galactic GCs (Piotto et al. 2007; Gratton, Carretta \& Bragaglia 2012; Gratton et al. 2013, 2014; Milone 2015), and this phenomenon is considered to be closely correlated with helium enrichment in GCs (D'Antona \& Caloi 2008; Marino et al. 2014; Milone 2015; but see Jiang et al. 2014 for an alternative solution). If this is the case, stars in GCs would belong to different populations with different helium abundances, and BHk stars would be the progeny of the helium-enriched populations that belong to the second-generation stars in GCs (D'Antona et al. 2002; Brown et al. 2012).
Following this scenario, Tailo et al. (2015) also studied the effects of rapidly rotating second-generation stars on the formation of BHk stars in GCs. They found that an increase of the helium core mass of up to 0.04 $M_{\odot}$ is required to solve the luminosity range problem for BHk stars in $\omega$ Cen. Lei et al. (2015, hereafter Paper I) followed the late hot flash scenario and proposed that the tidally enhanced stellar wind in binary evolution (Tout \& Eggleton 1988) is a possible formation channel for BHk stars in GCs. This kind of wind naturally provides the huge mass loss on the RGB that is needed in the late hot flash scenario (Brown et al. 2001). Their results indicated that binaries could produce BHk stars under a tidally enhanced stellar wind, and that this channel may play an important role in the formation of BHk stars in some GCs. However, Paper I did not study the effects of other input parameters on the results, such as metallicity, mass ratio of primary to secondary, tidal enhancement efficiency, and helium abundance. These parameters influence the evolution of binary systems (e.g., the mass loss of the primary, the stellar mass, the helium core mass, the luminosity), and thus may affect the formation of BHk stars in GCs. As a further study following Paper I, to investigate the role of binaries in the formation of BHk stars, in this paper we include the tidally enhanced stellar wind in binary evolution and study the effects of metallicity, mass ratio, tidal enhancement efficiency, and helium abundance on the binary evolution, and hence on the formation of BHk stars in GCs. The structure of this paper is as follows. In Section 2, we describe the models and method used in this study; results are given in Section 3; and finally, the discussion and conclusions are given in Sections 4 and 5, respectively. \section[]{Methodology} As in Paper I, we use equation (1) to describe the tidally enhanced stellar wind in binary evolution, which was first suggested by Tout \& Eggleton (1988): \begin{equation} \dot{M}=-\eta\,4\times10^{-13}\,(RL/M)\left\{1+B_{\rm w}\,\min\left[(R/R_{\rm L})^{6},\,1/2^{6}\right]\right\}, \end{equation} where $\eta$ is the Reimers mass-loss efficiency (Reimers 1975) and $B_{\rm w}$ is the efficiency of tidal enhancement for the stellar wind. Here $R$, $L$, and $M$ are the radius, luminosity, and mass of the primary star in solar units, and $R_{\rm L}$ is the Roche-lobe radius of the primary. Equation (1) was added to the detailed stellar evolution code Modules for Experiments in Stellar Astrophysics ({\scriptsize MESA}, version 6208; Paxton et al. 2011, 2013, 2015) to study its effect on the mass loss of the primary during the RGB stage.
\begin{table*}
\small
%\centering
\begin{minipage}{80mm}
\caption{Main input parameters used in the study.
The masses of the primary stars at the ZAMS in each set correspond to an age of about 12 Gyr at the RGB tip.}
\end{minipage}\\
\begin{tabularx}{9.4cm}{XcccccX}
\hline\noalign{\smallskip}
Model &$M_{\rm ZAMS}/M_{\odot}$ & $Y$ &$P_{1}/{\rm d}$ &$P_{2}/{\rm d}$ &$P_{3}/{\rm d}$ &$P_{4}/{\rm d}$\\
\hline\noalign{\smallskip}
I &$Z$=0.003, &$q$=1.6, &$B_{\rm w}$=10000\\
\hline\noalign{\smallskip}
set 1 & 0.87 & 0.24 & 2850 & 2700 & 2160 & 2150 \\
set 2 & 0.81 & 0.28 & 3100 & 2850 & 2230 & 2220 \\
set 3 & 0.75 & 0.32 & 3500 & 3100 & 2330 & 2320 \\
set 4 & 0.64 & 0.40 & 10000 & 4600 & 2680 & 2670 \\
\hline\noalign{\smallskip}
II &$Z$=0.001, &$q$=1.6, &$B_{\rm w}$=10000\\
\hline\noalign{\smallskip}
set 5 & 0.83 & 0.24 & 2200 & 2000 & 1610 & 1600 \\
set 6 & 0.77 & 0.28 & 2300 & 2150 & 1670 & 1660 \\
set 7 & 0.72 & 0.32 & 2600 & 2260 & 1700 & 1690 \\
set 8 & 0.62 & 0.40 & 10000 & 3100 & 1890 & 1880 \\
\hline\noalign{\smallskip}
III &$Z$=0.0003, &$q$=1.6, &$B_{\rm w}$=10000\\
\hline\noalign{\smallskip}
set 9 & 0.81 & 0.24 & 1700 & 1560 & 1300 & 1290 \\
set 10 & 0.758 & 0.28 & 1800 & 1650 & 1320 & 1310 \\
set 11 & 0.70 & 0.32 & 2050 & 1800 & 1400 & 1390 \\
set 12 & 0.61 & 0.40 & 10000 & 2350 & 1500 & 1490 \\
\hline\noalign{\smallskip}
IV &$Z$=0.001, &$q$=1.2, &$B_{\rm w}$=10000\\
\hline\noalign{\smallskip}
set 13 & 0.83 & 0.24 & 2250 & 2150 & 1730 & 1720 \\
set 14 & 0.77 & 0.28 & 2600 & 2300 & 1800 & 1790 \\
set 15 & 0.72 & 0.32 & 2650 & 2400 & 1820 & 1810 \\
set 16 & 0.62 & 0.40 & 10000 & 3200 & 2020 & 2010 \\
\hline\noalign{\smallskip}
V &$Z$=0.001, &$q$=2.4, &$B_{\rm w}$=10000\\
\hline\noalign{\smallskip}
set 17 & 0.83 & 0.24 & 1900 & 1750 & 1440 & 1430 \\
set 18 & 0.77 & 0.28 & 2100 & 1900 & 1500 & 1490 \\
set 19 & 0.72 & 0.32 & 2350 & 2050 & 1530 & 1520 \\
set 20 & 0.62 & 0.40 & 10000 & 2750 & 1720 & 1710 \\
\hline\noalign{\smallskip}
VI &$Z$=0.001, &$q$=1.6, &$B_{\rm w}$=5000\\
\hline\noalign{\smallskip}
set 21 & 0.83 & 0.24 & 1800 & 1680 & 1350 & 1340 \\
set 22 & 0.77 & 0.28 & 1960 & 1780 & 1410 & 1400 \\
set 23 & 0.72 & 0.32 & 2100 & 1900 & 1430 & 1420 \\
set 24 & 0.62 & 0.40 & 10000 & 2700 & 1590 & 1580 \\
\hline\noalign{\smallskip}
VII &$Z$=0.001, &$q$=1.6, &$B_{\rm w}$=1000\\
\hline\noalign{\smallskip}
set 25 & 0.83 & 0.24 & 1200 & 1130 & 910 & 900 \\
set 26 & 0.77 & 0.28 & 1350 & 1200 & 940 & 930 \\
set 27 & 0.72 & 0.32 & 1500 & 1260 & 960 & 950 \\
set 28 & 0.62 & 0.40 & 10000 & 1750 & 1070 & 1060 \\
\hline\noalign{\smallskip}
\end{tabularx}
\end{table*}
We use the binary module in {\scriptsize MESA} to evolve the primary star in a binary system from the zero-age main sequence (ZAMS) to the WD cooling curve. Since the stellar wind on the RGB can be tidally enhanced by the secondary star, some of the primary stars lose nearly their whole envelope mass, evolve off the RGB, and then experience a late hot helium flash on the WD cooling curve (Castellani \& Castellani 1993; D'Cruz et al. 1996; Brown et al. 2001). Due to the very thin hydrogen envelope, the internal convective mixing triggered by the helium core flash is able to penetrate to the surface and lead to helium and carbon enhancement. When settling on the HB, these stars present higher temperatures and fainter luminosities than normal EHB stars. In this study, the default values of the input physics in {\scriptsize MESA} are used, except for the opacity tables: {\scriptsize OPAL} type II tables, which are more suitable for helium-burning stars, are used in our model calculations (see Paper I).
The Reimers mass-loss efficiency, $\eta$, is set to 0.45 (Renzini \& Fusi Pecci 1988; McDonald \& Zijlstra 2015). All the input parameters used in this study are listed in Table 1. Columns 1-3 give the model number, the stellar mass of the primary star at the ZAMS ($M_{\rm ZAMS}$), and the initial helium abundance. The value of $M_{\rm ZAMS}$ adopted for each model corresponds to an age of about 12 Gyr at the RGB tip. Columns 4-7 present the critical orbital periods of the binary models, which are introduced in the next paragraph. Unlike Paper I, we adopted three values of metallicity in this study (i.e., $Z$=0.003, 0.001 and 0.0003) to investigate its effect on the final results. For each metallicity, the mass ratio of primary to secondary (i.e., $q$) and the tidal enhancement efficiency (i.e., $B_{\rm w}$) are set to 1.6 and 10000, respectively. These models are labelled I, II and III in Table 1. Besides $q$=1.6, we also adopted two other values of the mass ratio (i.e., $q$=1.2 and 2.4) for metallicity $Z$=0.001; these two models are labelled IV and V in Table 1. Moreover, two other values of $B_{\rm w}$ (i.e., 5000 and 1000) were used for the models with $Z$=0.001 and $q$=1.6 to investigate its effect on our final results; these models are labelled VI and VII in Table 1. For each model, four values of the initial helium abundance were used, i.e., $Y$=0.24, 0.28, 0.32 and 0.40. This gives a total of 28 sets of binary models with different input parameters, from model I to VII. In the tidally enhanced stellar-wind model (Lei et al. 2013a, b; also see Han, Chen \& Lei 2010; Han et al. 2012; Han \& Lei 2014), the mass loss of the primary star in a binary system is determined by the initial orbital period. Thus, primary stars in binary systems with long orbital periods may settle at RHB positions owing to little mass loss on the RGB, while primary stars in binary systems with shorter periods lose much more envelope mass and settle at hotter HB positions, or even fail to ignite helium in their cores and finally become helium WDs. For each set of binary models listed in Table 1, we adopted different initial orbital periods in the binary evolution to find out which initial periods produce BHk stars. In columns 4 to 7 of Table 1, we list four critical periods for each binary model that are important in our calculations. Specifically, the initial orbital periods listed in column 4 (labelled $P_{1}$) represent the minimum periods for the primary stars to experience a normal helium core flash at the RGB tip. If the initial orbital period is shorter than this, the primary star loses more envelope mass on the RGB and experiences an early or late hot flash before settling on the HB. The initial periods listed in column 5 (labelled $P_{2}$) give, for each set of models, the minimum periods for the primary stars to undergo an early hot flash, while the periods labelled $P_{3}$ in column 6 denote the minimum periods for the primary stars to experience a late hot flash. If the initial orbital period of a binary is shorter than $P_{3}$ (e.g., the periods listed in the last column of Table 1, labelled $P_{4}$), the primary star loses too much envelope mass on the RGB, fails to ignite helium in its core, and then dies as a helium WD (see Fig. 1).
In Table 1, for all the binary models with the highest helium abundance of $Y$=0.40 (i.e., sets 4, 8, 12, 16, 20, 24 and 28), the minimum orbital periods for the primary stars to experience a normal helium core flash at the RGB tip (i.e., the orbital periods labelled $P_{1}$ in Table 1) are set to 10 000 d. This is because these models have the highest helium abundance and thus the lowest stellar mass at the ZAMS. Therefore, even given a very long initial orbital period, e.g., $P$=10 000 d, for which the two components in these binaries can be considered as two single stars, the primary star still experiences an early hot flash instead of a normal helium flash at the RGB tip [see panel (m) in Fig. 1]. These stars become hot EHB stars rather than BHk stars after the helium core flash, which is beyond the scope of this study.
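For illustration, the following minimal Python sketch evaluates the tidally enhanced wind prescription of equation (1); the stellar parameters in the example are placeholders and are not taken from the models in Table 1.
\begin{verbatim}
def tidally_enhanced_wind(R, L, M, R_L, eta=0.45, B_w=10000.0):
    """Mass-loss rate of eq. (1) (Tout & Eggleton 1988) in M_sun/yr.
    R, L, M: radius, luminosity and mass of the primary in solar units;
    R_L: Roche-lobe radius of the primary in solar units;
    eta: Reimers efficiency; B_w: tidal enhancement efficiency."""
    reimers = eta * 4.0e-13 * R * L / M
    enhancement = 1.0 + B_w * min((R / R_L) ** 6, 0.5 ** 6)
    return -reimers * enhancement

# Placeholder example: an RGB primary with R = 100 R_sun, L = 1000 L_sun,
# M = 0.8 M_sun, filling about 60 per cent of its Roche lobe.
print(tidally_enhanced_wind(R=100.0, L=1000.0, M=0.8, R_L=170.0))
\end{verbatim}
For small $R/R_{\rm L}$ the enhancement factor is modest and the rate approaches the ordinary Reimers wind; as the primary approaches its Roche lobe, the factor saturates at $1+B_{\rm w}/2^{6}$, which for $B_{\rm w}$=10000 boosts the wind by roughly a factor of 150.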
As a further study following Paper I, by including the tidally enhanced stellar wind in binary evolution, we studied in detail the effects of metallicity, mass ratio, tidal enhancement efficiency, and helium abundance on the formation of BHk stars. A total of 28 sets of binary models with different values of the parameters mentioned above were adopted to study their effects on the formation of BHk stars. For each set of models, the range of initial orbital periods needed for binaries to produce BHk stars was presented. We also showed the evolutionary tracks of the primary stars from the ZAMS to the WD stage, including the late hot helium flash, as well as the parameters on the ZAHB for some primary stars. Our results were compared with the observations in the $T_{\rm eff}$--$\log g$ plane. Even though the input parameters of these binary models differ from one another, all of the models could produce BHk stars within different ranges of initial orbital periods. Most of the BHk stars in NGC 2808 and $\omega$ Cen lie well within the region predicted by our models, especially when C/N-enhanced model atmospheres are considered in deriving the atmospheric parameters of the BHk stars. The effects of metallicity, mass ratio, tidal enhancement efficiency, and helium abundance on the range of initial orbital periods needed to produce BHk stars were discussed in detail. We found that the mass ratio of primary to secondary and the tidal enhancement efficiency have little effect on the formation of BHk stars in our models, while metallicity and helium abundance seem to play important roles in the formation of BHk stars in binaries. In particular, as the helium abundance increases from $Y$=0.24 to 0.40, the range of initial orbital periods that produce BHk stars becomes markedly wider for all the models. Assuming a flat distribution of initial orbital periods for binaries in GCs, one would expect BHk stars to be produced much more easily if these stars have a higher initial helium abundance. The results presented here indicate that the tidally enhanced stellar wind in binary evolution is a possible formation channel for BHk stars in GCs, but further studies and more evidence are needed to establish the role of binaries in this problem before a conclusive result can be reached.
16
9
1609.09169
1609
1609.09495_arXiv.txt
In molecular clouds of the Galactic Disk Region (GDR), a number of filamentary structures have been found by Herschel survey observations (\cite{Pilbratt2010}). These observations have revealed that molecular clouds in the GDR ubiquitously contain filamentary structures, with or without star formation. The widths of these filamentary structures are always $\sim0.1$ pc, even though the column densities vary by one or more orders of magnitude ($\sim10^{20-23}\rm\ cm^{-2}$) (\cite{Arzoumanian2011}). Prestellar dense cores and deeply embedded protostars are found along the filamentary structures whose column densities are greater than $\sim10^{22}\rm\ cm^{-2}$. In contrast, the non-star-forming filaments have much lower column densities, up to $\sim10^{21}\rm\ cm^{-2}$ (\cite{Andre2010}). Thus, the column densities of the filamentary structures in molecular clouds are closely related to star formation in the GDR. The Central Molecular Zone (CMZ) is a molecular cloud complex in the inner $300$ pc of the Galactic Center (GC) region. In the CMZ, the molecular gas is very dense and warm, and its velocity dispersion is very large compared to that in the GDR. Filamentary structures have scarcely been identified in the CMZ, with the exception of G0.253+0.016 (\cite{Rathborne2015}). Therefore, we observed the GC 50 km s$^{-1}$ molecular cloud (50MC) to search for filaments.
\begin{table}
\begin{center}
\caption{Physical parameters of the 50MC and the GDR.}
\label{tab:pp}
{\scriptsize
\begin{tabular}{c|c|c|c|c}\hline
region & Width & Column Density $N$ & Line Mass $M_{\rm line}$ & Critical Line Mass $M_{\rm crit,line}$$^1$\\%& Virial Line Mass $M_{\rm vir,line}$\\
 & (pc) & ($\times10^{22}\rm\ cm^{-2}$) & ($M_{\odot}\rm\ pc^{-1}$) & ($M_{\odot}\rm\ pc^{-1}$)\\%& ($\times 10^5M_{\odot}\rm\ pc^{-1}$)\\
\hline
\multirow{2}{*}{50MC} & $0.150-0.384$ & $2.32-21.1$ & $103-1430$ & $\sim100$\\%& 0.5-9.2\\
 & (ave.$=0.268\pm0.060$) & (ave.$=9.97\pm5.19$) & (ave.$=529\pm285$) & (assuming 50 K)\\%& (ave.$=1.93\pm1.72$)\\
GDR & $0.10\pm0.03$ & $\sim0.1-10$ & $\sim10-100$ & $\sim20$\\%& -\\
\hline
\end{tabular}
}
\end{center}
\vspace{1mm}
\scriptsize{
{\it Notes:}\\
$^1$ $M_{\rm crit,line}=\frac{2c_{\rm s}^2}{G}$, where $c_{\rm s}$ and $G$ are the sound speed and the gravitational constant, respectively.
}
\end{table}
\begin{figure}[h]
\begin{minipage}{0.5\textwidth}
\hspace{-7ex}
\centering
\includegraphics[width=7.cm]{map_cs_20-40_chan00000.eps}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\hspace{-18ex}
\centering
\includegraphics[bb=-54 0 666 756,width=5.cm]{allfilament_with_core_on2.eps}
\end{minipage}
\hspace{-6.9ex}
\begin{minipage}{0.2\textwidth}
\centering
\includegraphics[width=3.5cm]{filament_width_hist.eps}
\includegraphics[width=3.5cm]{width_vs_cd.eps}
\end{minipage}
\caption{[left] The 50 $\rm km\ s^{-1}$ molecular cloud in an integrated intensity map of CS $J=2-1$. The integrated velocity range is $V_{\rm LSR}=20-40\rm\ km\ s^{-1}$. The synthesized beam size is $1.78''\times1.26''$. [middle] The locations of the filaments identified using the DisPerSE algorithm over the full velocity range. Thick gray lines show the central axes of the MCFs, and black filled circles show the molecular cloud cores. [upper right] The histogram of the widths of the filaments. [lower right] The relation between the widths and the column densities of the filaments. The dashed line shows the synthesized beam size in CS $J=2-1$.}
\label{fig:fil}
\end{figure}
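As a worked example of the critical line mass quoted in the table note, the sketch below evaluates $M_{\rm crit,line}=2c_{\rm s}^2/G$ for $T$ = 50 K. The mean molecular weight $\mu$ = 2.33 and the $\sim$10 K temperature used for the GDR comparison are assumptions introduced here for illustration; they are not stated in the text.
\begin{verbatim}
# Physical constants in CGS units.
K_B = 1.380649e-16      # Boltzmann constant [erg/K]
M_H = 1.6726e-24        # hydrogen mass [g]
G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33        # solar mass [g]
PC = 3.086e18           # parsec [cm]

def critical_line_mass(T, mu=2.33):
    """M_crit,line = 2 c_s^2 / G, returned in M_sun per pc.
    mu = 2.33 is an assumed mean molecular weight per particle."""
    c_s2 = K_B * T / (mu * M_H)     # isothermal sound speed squared [cm^2/s^2]
    m_line = 2.0 * c_s2 / G         # critical line mass [g/cm]
    return m_line * PC / M_SUN

print(critical_line_mass(50.0))  # ~80 M_sun/pc, of order the ~100 in the table
print(critical_line_mass(10.0))  # ~15-20 M_sun/pc, cf. ~20 quoted for the GDR
\end{verbatim}
The value obtained for 50 K is of the same order as the $\sim100\ M_{\odot}\rm\ pc^{-1}$ listed in the table; the exact number depends on the adopted mean molecular weight.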
16
9
1609.09495
1609
1609.03865_arXiv.txt
The origin of a neutron star glitch, a sudden spin-up of the rotational frequency, has been one of the unsolved problems in nuclear astrophysics for a long time. It has been suggested \cite{Anderson-Itoh} that the glitch is caused by a catastrophic unpinning of a huge number of vortices from the pinning sites formed by a Coulomb lattice of nuclei immersed in a neutron superfluid in the inner crust of neutron stars. Although the vortex-nucleus interaction is undoubtedly one of the most important ingredients needed to explain the glitches, so far contradictory predictions have been made about both its magnitude and even its sign. The difficulty in extracting the vortex-nucleus interaction is related to the enormous number of degrees of freedom that one has to take into account. The vortex itself represents a topological excitation of the superfluid, which may stretch and bend in various ways, and the susceptibility to vortex deformations should be derived from a microscopic equation of motion describing the neutron superfluid. Moreover, the nuclear impurity may easily deform, as its surface tension is significantly smaller than that of isolated nuclei. Thus, the various symmetry assumptions made in the past to simplify the analyses may dramatically change the results, as one can accidentally omit important degrees of freedom of either the vortex or the impurity. Recently, we have reported the first fully microscopic, three-dimensional, and symmetry-unconstrained dynamical simulations, which enabled us to extract the vortex-nucleus interaction~\cite{vortex}.
We have performed 3D, symmetry-unrestricted, microscopic, dynamical simulations of a vortex-nucleus system using a time-dependent extension of DFT for superfluid systems. We have determined that the vortex-nucleus force is repulsive and increases in magnitude with density, for densities characteristic of the neutron star crust (0.014~fm$^{-3}$ and 0.031~fm$^{-3}$). It is instructive to note that a repulsive force is not ruled out by a purely hydrodynamical approach (see Supplemental Material of \cite{vortex}). It is, however, difficult to unambiguously associate the superfluid density with the actual neutron density. Therefore, the hydrodynamical description can be used only to estimate the asymptotic behavior of the vortex-nucleus interaction ($\propto 1/r^3$). It is worth mentioning that the extracted force is at least one order of magnitude larger than those predicted in a recent phenomenological analysis~\cite{Seveso(2016)}. The repulsive force is compatible with so-called interstitial pinning, where vortices are trapped at positions that maximize the overall separation from the nearest nuclei \cite{Link(2009)}.
16
9
1609.03865
1609
1609.04674_arXiv.txt
On April 23, 2014, the Swift satellite responded to a hard X-ray transient detected by its Burst Alert Telescope, which turned out to be a stellar flare from the nearby, young M dwarf binary DG~CVn. We utilize observations at X-ray, UV, optical, and radio wavelengths to infer the properties of two large flares. The X-ray spectrum of the primary outburst can be described over the 0.3-100 keV bandpass by either a single very high temperature plasma or a nonthermal thick-target bremsstrahlung model, and we rule out the nonthermal model on energetic grounds. The temperatures were the highest seen spectroscopically in a stellar flare, with T$_{X}$ of 290 MK. The first event was followed by a comparably energetic event almost a day later. We constrain the photospheric area involved in each of the two flares to be $>$10$^{20}$ cm$^{2}$, and find evidence from flux ratios in the second event of contributions to the white light flare emission in addition to the usual hot, T$\sim$10$^{4}$ K blackbody emission seen in the impulsive phase of flares. The radiated energies in X-rays and white light reveal these events to be the two most energetic X-ray flares observed from an M dwarf, with X-ray radiated energies in the 0.3-10 keV bandpass of 4$\times$10$^{35}$ and 9$\times$10$^{35}$ erg, and optical flare energies at E$_{V}$ of 2.8$\times$10$^{34}$ and 5.2$\times$10$^{34}$ erg, respectively. The results presented here should be integrated into updated modelling of the astrophysical impact of large stellar flares on close-in exoplanetary atmospheres.
Most of what is known about the mechanisms producing stellar flares is informed by the detailed observations of flares on the Sun. Solar flares occur in close proximity to active regions (ARs), which are effectively localized magnetic field regions of 1-2 kG strength. Loops from these ARs extend into the solar corona; as the footpoints of these loops are jostled by solar convective motions, they are twisted and distorted until magnetic reconnection occurs near the loop tops \citep{parker1988,benzgudel2010}. The reconnection event is accompanied by a sudden release of energy, resulting in the acceleration of electrons and ions in these loops up to MeV energies, which stream both towards and away from the Sun, emitting nonthermal radio (gyrosynchrotron) and X-ray emission (particularly at the loop footpoints) as they move \citep{dennis1989}. These energetic particles stream down to the loop footpoints and deposit substantial energy in the lower solar atmosphere (the chromosphere), ``evaporating'' and heating plasma from this region to fill the flaring loop(s) with plasma \citep{lin2011}. In the ``decay'' phase of the flare, the thermal emission dominates the X-ray emission, although in some large solar flares a nonthermal X-ray component may persist as a continuous source of energy \citep{kontar2008}. Young stars and stars in close binary systems rotate much more rapidly than the Sun and, in consequence, have much stronger levels of magnetic activity, i.e., greater coverage by starspots and ARs, stronger chromospheric and coronal emission, and more frequent and powerful flares \citep{meibom2007,morgan2016}. There is a large disparity between the extremes of solar and stellar flares: while the largest solar flares have radiated energies exceeding 10$^{32}$ erg and maximum coronal temperatures of a few tens of MK \citep{hotsolarflares}, large stellar flares can be 10$^{6}$ times more energetic, with coronal temperatures around 100 MK \citep{osten2007} and energy releases up to 10$^{38}$ erg \citep{cftucflare,osten2007}. A 2008 flare of the nearby 30-300 Myr old M dwarf flare star EV Lac \citep{osten2010} had a lower limit on its energy release of 6$\times$10$^{34}$ erg. \citet{caramazza2007} found X-ray flares on very young low-mass stars to range up to 2$\times$10$^{35}$ erg, and \citet{tsuboi2014} found flares from active binary systems to range up to 10$^{38}$ erg. The interpretation of these stellar flaring events assumes that the same physical processes are at work as in the solar case, as confirmed by multiwavelength observations of plasma heating and particle acceleration in stellar flares \citep{benzgudel2010}. The largest stellar flares, with their extreme parameters of temperature and energy release, clearly test this correspondence. Initial indications of a departure from solar-stellar flare scaling laws have come from the work of \citet{getman2008}, but those data could not determine flare temperatures accurately. DG CVn (GJ 3789) is an interesting, albeit poorly studied, member of this class of nearby, very young low-mass stars. It is noted as having an unusually active chromosphere \citep{beers1994} and corona \citep{hunsch1999}, as well as being one of the brightest nearby stellar radio emitters \citep{helfand1999}. Subsequent studies confirm that it exhibits optical flares and sub-day rotational modulation \citep{robb1994}, with a measured photospheric line broadening of 51 km/s \citep{mohantybasri2003} indicative of a very short rotational period of $<$ 8 hours.
DG~CVn is a binary, as revealed by the double-lined spectrum noted in \citet{gizis2002}. Adaptive optics imaging of DG CVn \citep{beuzit2004} reveals it to be a close (0.2$^{\prime\prime}$ separation) visual binary system, with two components of near-equal optical brightness ($\Delta$V $\sim$ 0.3) and spectral types of M4Ve. The distance to DG~CVn, from a large study of the trigonometric parallaxes and kinematics of nearby active stars, is 18 pc, with a space motion consistent with the system being a member of the population of 30-Myr old stars in the solar neighborhood \citep{riedel2014}. They quote a combined systemic $\log L_{X}$ and $\log L_{X}/L_{\rm bol}$, from which (by dividing $L_{\rm bol}$ equally between the two components) a luminosity $\log (L_{\rm bol}/L_{\odot})$=$-1.72$ is obtained. \citet{mohantybasri2003} determine a system T$_{\rm eff}$ of 3175 K, which, combined with $L_{\rm bol}$, yields a radius estimate of 0.46 R$_{\odot}$. \citet{demory2009} plot stellar radius versus absolute magnitude in the K band, M(K), for low-mass and very low-mass stars using interferometric measurements; their 5 Gyr isochrones, together with the absolute K magnitude of the A component of the binary \citep[6.12;][]{riedel2014}, suggest a radius of about 0.4 R$_{\odot}$. These numbers are consistent with a larger radius than obtained for other nearby M dwarfs of the same temperature \citep[such as described in][]{newton2015,mann2015}, and we adopt R$_{\star}$=0.4 R$_{\odot}$ in this paper. In young stars, accretion episodes can provide an additional optical and X-ray signature to that expected from magnetic reconnection \citep{stassun2006,brickhouse2010}. However, the WISE $w_{1} - w_{3}$ and $w_{1}- w_{4}$ colors of this system show no evidence for an infrared excess \citep{allwise}, indicating that there is no active accretion, as would be expected for stars older than several million years. On 2014 April 23, one of the two stars in this system flared to a level bright enough ($\sim$3.4 $\times$ 10$^{-9}$ erg s$^{-1}$ cm$^{-2}$ in the 15-100 keV band) that it triggered the Swift Burst Alert Telescope (BAT), as described in \citet{dgcvnATel}. Two minutes later, after Swift had slewed to point in the direction of this source, the Swift X-ray Telescope (XRT) and the Ultraviolet/Optical Telescope (UVOT) commenced observing this flare. These observations, as well as supporting ground-based optical and radio observations, continued (intermittently) for about 20 days and yielded a fascinating case history of this colossal event, the decay of which took more than two weeks in the soft X-ray band and included a number of smaller superimposed secondary flares (see Fig.~\ref{fig:lc}). Recent papers have reported additional data indicating radio and optical bursts from this system during this time period \citep{fender2015,caballero2015}. In this paper, we discuss the observations and their interpretation in light of the standard solar flare scenario. The paper is organized as follows: \S 2 describes the entire set of Swift and ground-based observations used in the study, \S 3 describes the analysis of the two main flaring events observed, and \S 4 discusses what can be determined for the second event and applies this to an interpretation of the first event. Finally, \S 5 concludes. \begin{figure}[!h] \includegraphics[scale=0.5]{f1new.eps} \caption{Comprehensive light curve of the event as seen in soft X-rays, UVOT bands, and ground-based optical photometry.
The initial impulsive event took only a few hours to decay, but was followed by a series of flares which spanned more than two weeks. The legend lists the UV optical filters and the central wavelength of each. \label{fig:lc}} \end{figure}
We presented a detailed study of two of the most energetic flare events seen on a young low-mass star. In addition to the measurements made for each flare event, we used the properties of the second flare, F2, to infer some of the properties of the BFF. The results confirmed that the basic flare scenario applies to hyperactive stars as it does to solar flares, and revealed evidence of departures in the trend between temperature and emission measure for the highest temperature stellar flares compared with lower temperature solar flares. The object, DG~CVn, has been relatively poorly characterized in terms of its flaring and extreme magnetic activity, and we hope that this report will spur additional studies. Based on the flare properties described in this paper, we expect the existence of very strong magnetic fields in the photosphere. Starspot modelling should confirm the nature of the starspot sizes implied by the flare footpoint modelling. Uncertainties in the rotation period and $v\sin i$ mentioned in the introduction are likely the result of the previously unrecognized binary nature of the system. While X-ray flares from stars are commonly known, observations with Swift have revealed that stellar flares can be bright enough to trigger the BAT with their intense hard X-ray ($>$15 keV) emission. These events reveal the nature of magnetic reconnection processes occurring in a regime vastly different from that of the Sun, yet exhibiting continuity with solar events. Supporting data from both space- and ground-based observatories enable more constraints on the extremes of energetics and plasma parameters. In contrast with the claim of nonthermal emission from the superflare on II~Peg reported by \cite{osten2007}, for the DG~CVn event the possibility of a nonthermal interpretation is confronted with constraints on the kinetic energy and photospheric flare area provided by the radio and optical observations, respectively. Since the nonthermal interpretation is disfavored in the DG~CVn flares because of these constraints from the radio and optical data, the II~Peg nonthermal interpretation is in doubt. The extreme nature of the flare temperature of the BFF, coupled with results from other extreme flares, suggests that the scaling between solar and stellar flare temperatures and emission measures exhibits a flattening at high temperatures. The opportunity these flares present to confirm this flattening by using spectroscopically derived temperatures is important and may reveal departures from canonical solar flare behavior. Planets around M dwarfs will likely experience millions of these kinds of superflares during their infancy. This pair of well-studied flares on an M dwarf should be used to provide updated constraints on the impact of flare radiation on close-in terrestrial exoplanets. This confirms the conclusion reached for EV Lac that the ``habitable zone'' $\sim$ 0.1 AU from a young M dwarf star is likely inimical to life: the flare peak luminosity in the GOES (1.5 - 8 keV) band would be equivalent to an X60,000,000 flare. If the energetic proton fluxes and coronal mass ejection energies scale with the radiated flare energy, the impact upon the atmosphere and magnetosphere of any hypothetical terrestrial planet would be catastrophic.
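To make the quoted GOES equivalence concrete, the sketch below inverts the X60,000,000 figure into the implied peak 1.5-8 keV flux and luminosity. The GOES X1 threshold of $10^{-4}$ W m$^{-2}$ is the standard definition; treating 0.1 AU as the relevant distance is an interpretation of the habitable-zone argument made in the text, so the derived luminosity should be read as an order-of-magnitude illustration only.
\begin{verbatim}
import math

X1_FLUX = 1.0e-4            # GOES X1 threshold [W/m^2]
AU_CM = 1.496e13            # astronomical unit [cm]

def goes_equivalent(goes_multiplier, distance_au=0.1):
    """Peak flux and luminosity implied by a GOES class of X<multiplier>
    as seen at the given distance from the star (1.5-8 keV band)."""
    flux_si = goes_multiplier * X1_FLUX            # [W/m^2]
    flux_cgs = flux_si * 1.0e3                     # [erg/s/cm^2]
    lum = 4.0 * math.pi * (distance_au * AU_CM)**2 * flux_cgs   # [erg/s]
    return flux_cgs, lum

flux, lum = goes_equivalent(6.0e7)
print(f"flux at 0.1 AU ~ {flux:.1e} erg/s/cm^2, luminosity ~ {lum:.1e} erg/s")
\end{verbatim}
The implied peak GOES-band luminosity, of order 10$^{32}$ erg s$^{-1}$, is broadly consistent with the 0.3-10 keV radiated energies and hours-long durations quoted in the abstract.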
16
9
1609.04674
1609
1609.03748_arXiv.txt
We performed millimeter observations in CO lines toward the supernova remnant (SNR) \snr. Substantial molecular gas around $-45$~\km\ps\ is detected in the conjunction region between the SNR~\snr\ and the nearby W3 complex. This molecular gas is distributed along the radio continuum shell of the remnant. Furthermore, the shocked molecular gas indicated by line wing broadening features is also distributed along the radio shell and inside it. By both morphological correspondence and dynamical evidence, we confirm that the SNR~\snr\ is interacting with the $-45$~\km\ps\ molecular cloud (MC), in essence, with the nearby H~{\sc ii} region/MC complex W3. The red-shifted line wing broadening features indicate that the remnant is located on the near side of the MC. With this association, we can place the remnant at the same distance as the W3/W4 complex, which is $1.95\pm0.04$~kpc. The spatial distribution of aggregated young stellar object candidates (YSOc) shows a correlation with the shocked molecular strip associated with the remnant. We also find a binary clump of CO at ($l=132^{\circ}.94, b=1^{\circ}.12$) around $-51.5$~\km\ps\ inside the projected extent of the remnant, and it is associated with significant mid-infrared (mid-IR) emission. The binary system also has a tail structure resembling the tidal tails of interacting galaxies. According to the analysis of CO emission lines, the larger clump in this binary system is roughly stable, while the smaller clump is significantly disturbed.
The supernova remnant (SNR) \snr\ was discovered in the radio band \citep{BrownHazard1953,Williams+1966,Caswell1967}, and emission from it has subsequently been detected at multiple wavelengths. It has an angular size of $90'\times120'$ and a radio spectral index of $-0.56$ \citep{Landecker+1987,Fesen+1995,Reich+2003,TianLeahy2005,Green2007}. \snr\ is considered to be an evolved SNR, as indicated by a strong radio-optical correlation plus a multishell structure \citep{Fesen+1995}. Characterized by shell-like radio continuum morphology and centrally peaked thermal X-ray emission, \snr\ is identified as a mixed-morphology (MM) or thermal composite SNR \citep[][and references therein]{LazendicSlane2006}. From spectral analysis of {\it ASCA} and {\it XMM-Newton} \xray\ observations, \cite{LazendicSlane2006} derived a blast-wave velocity of $340\pm{37}$~\km\ps, an ambient particle density of $0.32\pm0.10$~cm$^{-3}$, an age of ($3.00\pm0.33$)$\E{4}$~yr, and an explosion energy of ($3.4\pm1.5$)$\E{50}$~ergs for the remnant. The remnant is adjacent in the sky to the H~{\sc ii} region/molecular cloud (MC) complex W3, with a potential association between them \citep{Landecker+1987}. \cite{Routledge+1991} examined the H~{\sc i} and \twCO~(J=1--0) line emissions, and found a bright \twCO\ ``bar'' near $-43$~\km\ps\ that morphologically corresponds to \snr's enhanced radio continuum emission, which supports the association between the remnant and the W3 complex. The distance of the W3 complex is $1.95\pm0.04$~kpc, as determined from trigonometric parallax measurements \citep{Xu+2006}. A large H~{\sc i} shell surrounding the remnant was found at velocities from $-25$ to $-43$~\km\ps\ \citep{Routledge+1991, Normandeau+1997}. No OH 1720~MHz maser was found to be associated with the shock of \snr\ \citep{Koralesky+1998}. Broadened \twCO~(J=2--1) line emission was detected toward the north of \snr, but it was confirmed to be associated with the H~{\sc ii} region W3~(OH) rather than with \snr\ \citep[][and references therein]{Kilpatrick+2016}. In this paper, we present CO line observations fully covering the SNR~\snr, and confirm that \snr\ is interacting with the nearby W3 complex. For convenience, we introduce the distance factor \du, which stands for $d$/(1.95~kpc), where $d$ is the distance to \snr. The observations are described in Section~\ref{sec:obs}. In Section~\ref{sec:result} and Section~\ref{sec:discuss}, we present the results and physical interpretations, respectively. The conclusions are summarized in Section~\ref{sec:conclusion}.
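The quantities quoted above can be checked for rough internal consistency with the Sedov--Taylor relation $v_{\rm s}=(2/5)\,R_{\rm s}/t$ (this check is ours, not from the cited work; the shock radius is taken as half of the $\sim105'$ mean angular diameter at a distance of 1.95~kpc, with $3438'$ per radian):
\begin{equation}
R_{\rm s}\approx\frac{52.5'}{3438'}\times1.95\,{\rm kpc}\approx30\,{\rm pc},\qquad
t\approx\frac{2}{5}\,\frac{R_{\rm s}}{v_{\rm s}}\approx\frac{2}{5}\times\frac{30\,{\rm pc}}{340\,{\rm km\,s^{-1}}}\approx3.4\times10^{4}\,{\rm yr},
\end{equation}
in good agreement with the age derived by \cite{LazendicSlane2006}.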
\label{sec:conclusion} We present millimeter observations in CO emission lines toward \snr. Substantial molecular gas around $-45$~\km\ps\ is detected in the conjunction region between the SNR~\snr\ and the nearby H~{\sc ii} region/MC complex W3. This molecular gas is distributed along the radio continuum shell of the remnant. Furthermore, the shocked molecular gas indicated by line wing broadening features is also distributed along the radio shell and inside it. By both morphological correspondence and dynamical evidence, we confirm that the SNR~HB~3 is interacting with the $-45$~\km\ps\ MC, in essence, with the nearby H~{\sc ii} region/MC complex W3. The red-shifted line wing broadening features indicate that the remnant is on the near side of the MC. With this association, we can place the remnant at the same distance as the W3/W4 complex, which is $1.95\pm0.04$~kpc. We also find a spatial correlation between the aggregated YSOc and the shocked molecular strip that is associated with the remnant. In particular, a binary clump at ($l=132^{\circ}.94, b=1^{\circ}.12$) around $-51.5$~\km\ps\ inside the remnant's radio shell has been found, and it is associated with significant mid-IR emission. The binary system also has a tail structure resembling the tidal tails of interacting galaxies. According to the analysis of CO emission lines, the larger clump in this binary system is close to being stable, while the smaller clump is significantly disturbed.
16
9
1609.03748
1609
1609.08346_arXiv.txt
We present a simple method for the identification of weak signals associated with gravitational wave events. Its application reveals a signal with the same time lag as the GW150914 event in the released LIGO strain data with a significance around $3.2\sigma$. This signal starts about 10 minutes before GW150914 and lasts for about 45 minutes. Subsequent tests suggest that this signal is likely to be due to external sources.
\label{sec:introduction} The announcement by LIGO of the first observed gravitational wave (GW) event GW150914~\citep{LIGO PRL} has opened a new era in astrophysics and generated considerable interest in the observation and identification of associated signals. Currently, attention has largely been focussed on electromagnetic signals, especially gamma rays~\citep{Fermi-GW150914,AGILE-GW150914,XSL-GW150914}, that have the potential to confirm both the existence and nature of GW150914. In this work, we consider a rather different approach intended to identify weaker signals in the LIGO strain data that have the same time lag as GW150914 itself. The observation of such associated signals is potentially useful in understanding the nature of the primary GW event.
\label{sec:Conclusion} We have presented a method for identifying weak signals associated with gravitational wave events that is based on time integrals of the Pearson cross-correlation coefficient. Applying this method to the LIGO GW150914 event, we have found indications of an associated signal. This signal has the same time delay between the Hanford and Livingston detectors as the GW150914 event and has a duration of approximately 45 minutes (from approximately 12 minutes before to 33 minutes after GW150914). Due to the weakness of this associated signal and its duration (which appears to be 4 orders of magnitude greater than that of GW150914 itself), it is not possible to determine its shape in the time domain. In spite of its weakness, however, we have argued that it is unlikely that this signal is of systematic origin. Numerical simulations show that this associated signal has a statistical significance of $3.2 \sigma$. While this is suggestive (but not conclusive) evidence that the signal is real, it is not possible to offer a convincing suggestion regarding its origin --- astrophysical or otherwise. More generally, however, we have shown the value of studying the cross-correlation between two identical signals as a function of the time shift, $\tau$, between them. In such cases the Pearson cross-correlation coefficient (as a function of $\tau$) is independent of the amplitude of the signals and directly related to the power spectrum of the common signal. Thus, the remarkable similarity of the cross-correlators found here for the associated signal {\em before\/} GW150914, the GW150914 event itself, and the associated signal {\em after\/} GW150914 suggests that their power spectra (but not their time domain shapes) are also similar. Given the dramatic physical differences that are to be expected in a system before, during and after a strong GW event, such a constraint on the associated power spectra could provide a valuable diagnostic tool.
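To make the basic quantity concrete, the following sketch (ours, not the authors' code; the sampling rate, toy common signal, and noise level are placeholder assumptions) evaluates the Pearson coefficient as a function of the relative shift between two strain-like series and recovers an injected inter-detector lag:
\begin{verbatim}
import numpy as np

def pearson_vs_lag(h, l, max_lag):
    """Pearson correlation of two equally sampled series as a function
    of the integer shift (in samples) applied between them."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, k in enumerate(lags):
        if k >= 0:
            a, b = h[k:], l[:l.size - k]
        else:
            a, b = h[:h.size + k], l[-k:]
        r[i] = np.corrcoef(a, b)[0, 1]
    return lags, r

fs = 4096.0                                   # assumed sampling rate [Hz]
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
# band-limited toy signal common to both "detectors"
common = np.convolve(rng.standard_normal(t.size), np.ones(16) / 16.0, mode="same")
h = common + 0.3 * rng.standard_normal(t.size)
l = np.roll(common, -29) + 0.3 * rng.standard_normal(t.size)  # ~7 ms offset
lags, r = pearson_vs_lag(h, l, max_lag=60)
print(lags[np.argmax(r)] / fs * 1e3, "ms")    # recovers the injected ~7 ms lag
\end{verbatim}
Scanning such a curve over long stretches of data and integrating it over $\tau$ is in the spirit of the time-integrated statistic described above.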
16
9
1609.08346
1609
1609.04660_arXiv.txt
{Stellar activity influences radial velocity (RV) measurements and can also mimic the presence of orbiting planets. As part of the search for planets around the components of wide binaries performed with the SARG High Resolution Spectrograph at the TNG, it was discovered that \object{HD 200466A} shows strong variation in RV that is well correlated with the activity index based on \Ha. We used SARG to study the \Ha\ line variations in each component of the binaries and in a few bright stars, in order to test the capability of the \Ha\ index to reveal the rotation period or activity cycle. We also analysed the relations between the average activity level and other physical properties of the stars. We finally searched for signals in the RVs that are due to activity, and found that at least in some cases the variation in the observed RVs is indeed caused by stellar activity. We confirm that \Ha\ can be used as an activity indicator for solar-type stars and as an age indicator for stars younger than 1.5 Gyr.}
Studying the variation in the radial velocity (RV) induced by chromospheric activity is important in order to distinguish it from the Keplerian motion of the star that may be caused by a planet \citep[see e.g.][]{Queloz01, Dumusque11, 2014Robertson}. On long timescales the active regions can modify measured RVs by introducing a signal related to the stellar activity cycle, while on short timescales the rotational period can become evident. The most widely used activity indicators are based on the Ca II H\&K lines \citep{2010Isaacson, Lovis11, Gomes11}, which have been shown to correlate with the radial velocity jitter. Other lines were investigated and it was found that the \Ha\ line can be a good alternative \citep{ 1990Robinson, 1990Stassmeier,2010Santos, Gomes11}. However, the correlation of \Ha\ with the Ca II H\&K indices is high for the most active stars but decreases at lower activity levels, and sometimes becomes an anti-correlation \citep{Gomes11}. Similar results were also found by \cite{2007Cincunegui}, who added, using simultaneous observations of stars with spectral type later than F, that the correlation is lost when studying individual spectra of single stars and that there is no dependence on activity. The correlation between the averaged fluxes in the Ca II and \Ha\ lines can be explained by considering the dependence of the two indices on the stellar colour or spectral type, while the absence of a general relation between the simultaneous Ca II and \Ha\ indices can be due to differences in the formation regions of the two lines \citep{2007Cincunegui, Gomes14}. Studying the solar spectrum as a prototype and extrapolating the results to other stars, \cite{2009Meunier} discovered that plages and filaments in the chromosphere contribute differently to the Ca II and \Ha\ lines: while plages contribute to the emission in all these lines, the absorption due to filaments is significant only for \Ha. Therefore the saturation of the plage filling factor seems to enhance the correlation between the two indices in the case of high stellar activity and low filament contribution. On the other hand, the anti-correlation between the emission in Ca II and \Ha\ for low-activity stars seems to depend only on a strong filament contrast, provided the filaments are well correlated with plages \citep[see also][]{Gomes14}. A search for planets around the components of wide binaries was performed using SARG (Spettrografo Alta Risoluzione Galileo) at the Telescopio Nazionale Galileo (TNG) in the past years. Two planetary companions were detected around \object{HD 132563B} and \object{HD 106515A} \citep{Desidera11, Desidera12}. \citet{Carolo14} found strong variations in the RVs of \object{HD 200466A} that could not be explained by a stable planetary system, but which were well correlated with an \Ha-based activity indicator, showing that they are due to an $\sim$1100-day activity cycle. Stimulated by this finding, we started a systematic analysis of \Ha\ in the binaries of the SARG sample to identify activity-induced RV variations and distinguish them from planetary signatures. We report here on the main results of the activity study made within this survey. We also include the measurements for additional stars observed by our group for other programs carried out with SARG.\\
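For readers unfamiliar with such indices, the sketch below (ours; the band centres and widths are illustrative placeholders, not necessarily those adopted for the SARG spectra) shows the usual construction: the mean flux in a narrow band centred on the \Ha\ core, normalized by the mean flux in two nearby pseudo-continuum bands.
\begin{verbatim}
import numpy as np

def halpha_index(wave, flux,
                 core=(6562.8, 0.8),       # centre, half-width [Angstrom]
                 ref_blue=(6550.9, 5.0),   # illustrative reference bands
                 ref_red=(6580.3, 4.0)):
    """Mean Halpha-core flux divided by the average flux in two
    reference bands; band definitions here are illustrative only."""
    def band_mean(centre, half_width):
        m = np.abs(wave - centre) < half_width
        return flux[m].mean()
    continuum = 0.5 * (band_mean(*ref_blue) + band_mean(*ref_red))
    return band_mean(*core) / continuum
\end{verbatim}
Temporal variations of such an index (or of its excess above a basal level) then trace rotational modulation and activity cycles in the RV time series.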
\label{sec:conclusions} The activity of 104 stars observed with the SARG spectrograph was studied using an index based on the \Ha\ line. We found that this index, \Haexcess, correlates well with the index based on the Ca II lines, \RHK, and therefore it can be used to estimate the average activity level, confirming previous results. It also correlates with the rotation of the star: low activity corresponds to slow rotation, especially for cool stars. After removing a few targets for which contamination of the spectra by their companion is the dominant source of RV scatter, we found that \Haexcess\ also correlates with the scatter in RV. We find that a low-mass companion might be the source of the high residual RV scatter at least for \object{HD 76073B}. We also found a strong correlation between the average activity levels \Ham\ of the two components in each binary system, and that roughly half of our systems are active. Finally, we showed that activity as measured by \Haexcess\ is correlated with the age derived from isochrone fitting. Although these ages have large error bars due to uncertainties in temperature and parallaxes, we found that active stars are typically younger than 1.5 Gyr, while older stars are typically inactive. We then analysed the time series of the stars: 11 stars ($\sim 8.5$ \%) of the SARG sample show a periodicity in \Ha\ with false-alarm probability $<0.5\%$. All these stars have a moderate activity level ($0.029 < \Delta H\alpha < 0.077$) except for the pair HD 76037A and B, but in these cases we only have a hint of a long-term period or magnetic cycle. When we focused on the long-term cycles, we found that these stars also span a limited temperature range, corresponding to late-G and early-K types. Other stars show variability on timescales clearly different from the rotational periods. In the bright stars sample, we found five stars out of ten with significant periodic variations in \Ha. In some cases the physical origin of this type of signal is unclear. Only five stars show a significant correlation between \Ha\ and RVs. We conclude that, if care is exercised, \Ha\ is a useful activity indicator and can be a good alternative to the Ca II \RHK\ index for studies based on radial velocity techniques, especially for solar-type stars.\\ \textit{Acknowledgements}. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W.M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. We thank the TNG staff for contributing to the observations and the TNG TAC for the generous allocation of observing time. This work was partially funded by PRIN-INAF 2008 “Environmental effects in the formation and evolution of extrasolar planetary systems”.
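As an illustration of the kind of period search summarized above (ours, using a standard public tool; the cadence, amplitude and false-alarm method are placeholders rather than the survey's actual implementation), a Lomb--Scargle periodogram with an analytic false-alarm probability can be computed as follows:
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3000.0, 80))     # irregular epochs [days]
y = (0.01 * np.sin(2 * np.pi * t / 1100.0)    # toy ~1100-day activity cycle
     + 0.005 * rng.standard_normal(t.size))

ls = LombScargle(t, y)
frequency, power = ls.autopower(maximum_frequency=1.0)  # up to 1 cycle/day
best_period = 1.0 / frequency[np.argmax(power)]
fap = ls.false_alarm_probability(power.max())           # analytic estimate

print(f"best period ~ {best_period:.0f} d, FAP ~ {fap:.1e}")
\end{verbatim}
A detection would then be retained only if the false-alarm probability falls below a chosen threshold (0.5\% in the analysis described above).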
16
9
1609.04660
1609
1609.03040_arXiv.txt
{ We provide the basic integrated physical properties of all the galaxies contained in the full Cornell Atlas of Spitzer/IRS Sources (CASSIS) with available broad-band photometry from the UV to 22 $\mu$m. We have collected broad-band photometric measurements in 14 bands from available public surveys in order to study the spectral energy distribution (SED) of each galaxy in CASSIS, thus constructing a final sample of 1,146 galaxies in the redshift range $0 < \rm z < 2.5$. The SEDs are modelled with the CIGALE code, which relies on the energy balance between the absorbed stellar emission and the dust emission while taking into account the possible contribution due to the presence of an active galactic nucleus (AGN). We split the galaxies into three groups, a low-redshift (z$<0.1$), a mid-redshift ($0.1 \leq \rm z<0.5$) and a high-redshift (z$ \geq 0.5$) sub-sample, and find that the vast majority of the Spitzer/IRS galaxies are star-forming and lie on or above the star-forming main sequence of the corresponding redshift. Moreover, the emission of Spitzer/IRS galaxies with z$<0.1$ is mostly dominated by star formation, galaxies in the mid-redshift bin are a mixture of star-forming and AGN galaxies, while half of the galaxies with z$ \geq 0.5$ show moderate or high AGN activity. Additionally, using the rest-frame $NUV-r$ colour, S\'ersic indices, and the optical [O$_{\rm III}$] and [N$_{\rm II}$] emission lines, we explore the nature of these galaxies by investigating further their structure as well as their star-formation and AGN activity. Using a colour-magnitude diagram we confirm that 97\% of the galaxies with redshift smaller than 0.5 have experienced a recent star-formation episode. For a sub-sample of galaxies with available structural information and redshift smaller than 0.3 we find that early-type galaxies are placed below the main sequence, while late-type galaxies are found on the main sequence, as expected. Finally, for all the galaxies with redshift smaller than 0.5 and available optical spectral line measurements we compare the ability of CIGALE to detect the presence of an AGN with the classification based on optical spectral lines. We find that galaxies with high AGN luminosity, as calculated by CIGALE, are most likely to be classified as composite or AGNs by the optical spectral lines. }
\label{sec:intro} Understanding how galaxies obtain their baryonic matter over cosmic time is an open question in extragalactic astronomy. Multiple physical processes can influence the star-formation history (SFH) of a galaxy, including external processes such as minor mergers, gas accretion, and dynamical heating of stellar populations, as well as internal processes, including massive wind outflows or feedback due to active galactic nuclei (AGN). To constrain the star-formation history and explain the evolution of the baryonic content of galaxies, we need accurate measurements of physical parameters, in particular stellar masses ($M_{\star}$), star-formation rates (SFR), and dust content, together with measurements of the possible AGN contribution to the total galaxy luminosity at different epochs of galaxy evolution. The spectral energy distribution (SED) of a galaxy, typically estimated by collecting broad-band photometry across all accessible wavelengths, is a valuable source from which one can extract key physical properties of the unresolved stellar population (see \citet{tex:WG11} for a review). A SED comprises the emission from the stars as well as from the interstellar gas and dust, with stars emitting mainly at UV-optical wavelengths, while dust absorbs part of the stellar light and re-emits it at infrared (IR) and submillimeter wavelengths. Since the absorbed UV-optical energy is re-emitted up to submillimeter wavelengths, the intrinsic stellar emission can be constrained by gathering observations from the UV to the far-IR and applying energy balance arguments (e.g. \citealt{tex:dC08,tex:NB09}). When a galaxy hosts an accreting supermassive black hole, the emission from the central active galactic nucleus (AGN) may also contribute to the global optical and IR power output of the galaxy and should be taken into account in order to properly estimate both the stellar mass and the star-formation rate (\citealt{tex:CC15}). In addition to the SED analysis, moderate or high resolution spectroscopy can provide even more information on galaxy physical properties. One can use detailed theoretical predictions regarding the strength of fine-structure lines or molecular spectral features to infer the details of excitation mechanisms, chemical composition, strength of the radiation field, and amount of dust extinction. In particular, mid- and far-IR spectroscopy, even though challenging to obtain, is an extremely powerful probe of the nuclear activity, since it is less affected by obscuration and samples a wealth of ionic and rotational/vibrational features (\citealt{tex:CL01, tex:D03}). In the present work, we use as a basis for our study the Cornell AtlaS of Spitzer/IRS Sources (CASSIS\footnote{http://cassis.astro.cornell.edu/atlas/}; \citealt{tex:LB11b,tex:LB15}), which contains all pointed observations obtained by the Infrared Spectrograph (IRS; \citealt{tex:HR04b}) on board the Spitzer Space Telescope (\citealt{tex:WR04}). We select all the galaxies for which, in addition to the 5-37$\mu$m mid-IR spectrum, ancillary broad-band photometry from the UV to the mid-IR (22$\mu$m) is publicly available. The main goal of this paper is to present a panchromatic atlas of the broadband SEDs of these galaxies and to provide their global properties such as stellar mass, star-formation rate, AGN luminosity, as well as the contribution of an AGN to the total IR luminosity (${\rm frac}_{\rm AGN}$), as derived via a global SED modelling. 
It is envisioned that the availability of Spitzer/IRS mid-IR spectroscopy for this infrared-selected sample will enable us to better constrain the energy production mechanism in the often dust-enshrouded nuclear regions of these galaxies. The Spitzer/IRS observations provide the 5-37$\mu$m galaxy emission with the highest spatial resolution currently available, which allows new mid-IR diagnostics to be developed, something that was not possible with previous shallow mid-IR spectra. So far, Spitzer/IRS spectra have enabled the detailed study of the various AGN types based on the silicate (9.7$\mu$m, 18$\mu$m) and polycyclic aromatic hydrocarbon (PAH, 6.2$\mu$m) emission features (e.g. \citealt{tex:WH10,tex:HH11,tex:SY12,tex:SA14}). It has been shown that silicate features can vary with the AGN type (\citealt{tex:SM07,tex:HW07,tex:HH15}), while in the case of early-type galaxies they provide a strong diagnostic tool for the population content of the galaxies (\citealt{tex:BP06b}). For the luminous and ultra-luminous IR galaxies (LIRGs/ULIRGs), the PAH and silicate features allowed the further classification of the various subtypes of infrared galaxies (\citealt{tex:SA13} and references therein). Furthermore, Spitzer/IRS spectra can provide an additional method of separating the nuclear emission from AGN and star formation when both are present in a galaxy (e.g. \citealt{tex:PA11}). Thus, this work aims to produce a complete catalogue of stellar masses, SFRs, and stellar ages, together with measurements of the dust content and AGN luminosity of all the Spitzer/IRS observed galaxies, which will allow future studies to compare the mid-IR spectral properties with those derived globally from the galaxy SED. In Section 2 we describe the sample construction. In Section 3 we present the methodology used in order to measure the global physical properties. In Section 4 we explore the nature of the galaxies found in the CASSIS sample, while our conclusions are presented in Section 5. Throughout this paper we use $H_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}=0.3$ and $\Omega_{\Lambda}=0.7$. All magnitudes are given in the AB system.
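Schematically (this is a summary of the energy-balance principle, not CIGALE's exact implementation), the fit enforces
\begin{equation}
L_{\rm dust} \;=\; \int_{0}^{\infty}\left[1-10^{-0.4\,A(\lambda)}\right]L_{\lambda}^{\rm unattenuated}\,{\rm d}\lambda ,
\end{equation}
i.e. the stellar (and AGN) luminosity removed by the attenuation law $A(\lambda)$ is re-emitted by dust in the IR. Fitting the UV-to-IR photometry simultaneously therefore ties together the attenuation, the stellar mass and SFR, and the dust luminosity.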
\label{sec:conclusions} We have analysed the broadband SEDs of 1,146 galaxies with available mid-IR spectra from Spitzer/IRS and broadband photometry from the UV to 22$\mu$m. We have collected photometric measurements from GALEX, SDSS, 2MASS/UKIRT and WISE and fitted the photometric SED with the code CIGALE. The CASSIS galaxies span a wide range of distances, from just 5 Mpc in the local Universe up to z$\sim$2.5. Based on the CIGALE measured parameters, it is shown that the local sample (z$<$0.1) consists of 584 galaxies that have a wide range of SFRs and stellar masses, from massive passive galaxies to dwarf galaxies with SFRs above the MS. The mid-redshift sample (0.1$ \leq $z$<$0.5) consists of 360 galaxies and the high-redshift sample (z$\geq $0.5) of 190 galaxies; both samples are dominated by massive, star-forming galaxies that are placed above the MS corresponding to each redshift bin. Employing additionally the CIGALE parameter ${\rm frac}_{\rm AGN}$, we find that the low-redshift galaxies are mainly star-forming, and only 10\% and 2\% host moderate and strong AGN emission, respectively. The percentages of composite and AGN galaxies increase from 18\% and 13\% in the mid-redshift sample to 29\% and 24\% in the high-redshift sample, respectively. The star-formation properties of all the galaxies at z$<$0.5 with available rest-frame $NUV-r$ colours, 915 in total, are further explored based on the $NUV-r$ colour versus absolute $r-$band magnitude diagram. With the use of this diagram we confirm that the vast majority (97\%) of the galaxies in this sample have experienced a recent star-formation event, in agreement with the high sSFR as measured by CIGALE. A subset of 256 galaxies with z$<$0.3 and available single S\'ersic index $n_{\rm g}$ measurements is divided, based on structure, into early-, ``red \& low $n$''-, late- and ``blue \& high $n$''-type galaxies. CASSIS galaxies display a wide range of structures and, when placed on the $\log (\rm SFR) - \log (M_{\star})$ plane, show a gradual transition from early-type galaxies to ``blue \& high $n$'' galaxies, indicating a connection between their structure and the sSFR. More specifically, early-type galaxies are located below the main star-forming sequence and ``blue \& high $n$'' galaxies are found above the MS. On the contrary, ``red \& low $n$'' galaxies together with late-type galaxies settle mainly on the MS, with the former having on average lower SFR and higher stellar mass. A subsample of 586 galaxies for which optical spectral line measurements could be acquired is classified into AGN, composite and star-forming galaxies based on the BPT diagram. It is found that the optical spectral line classification is not always in agreement with the CIGALE model parameter ${\rm frac}_{\rm AGN}$, but there is a correlation between the CIGALE AGN luminosity and the optical spectral line classification, with the AGN luminosity gradually increasing as we move from star-forming, to composite, to AGN galaxies. We speculate that this mismatch results, first, from the different regions over which the light is collected by each method, i.e. the central 3 arcsec in the case of the SDSS spectra versus the total light in the case of the broadband photometry. Secondly, it is due to the different wavelength ranges studied by each methodology: the SDSS lines probe only the optical signs of AGN or star-forming activity, while the SED modelling uses the information from the UV, optical and IR measurements. 
Placing these galaxies on the MS and using the classification acquired from the BPT diagram, we see that the CASSIS AGN galaxies do not occupy a specific region of the diagram and can be found above, on and below the MS, revealing that they exhibit a range of sSFRs. Finally, this study provides a catalogue with all the CIGALE measured physical parameters along with a structure classification and a star-forming/AGN activity classification. The two classifications are derived from the colour--$n_{\rm g}$ and BPT diagrams, respectively. The SED derived physical parameters contained in the catalogue are the stellar mass ($M_\star$), the instantaneous star-formation rate (SFR), the stellar age, the E(B-V) attenuation of the young stellar population, the AGN luminosity fraction (${\rm frac}_{\rm AGN}$), the AGN luminosity (L$_{\rm AGN}$), the dust luminosity (L$_{\rm dust}$) and the dust attenuation in the $FUV$ (A$_{\rm FUV}$). It should be stressed that the availability of Spitzer/IRS nuclear spectra for all the galaxies in our sample provides a unique advantage compared to other studies. In a subsequent paper we will explore how the nuclear properties of the sample, as derived from spectral features in the rest-frame 5-37 $\mu$m range, which are not (or only marginally) affected by obscuration, compare with the integrated galaxy properties obtained by modelling their global SED.
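For reference, BPT classifications of the kind used above are commonly implemented with the Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves; the short sketch below (ours; the exact curves adopted in this work are not restated in this excerpt) classifies a galaxy from its measured line ratios.
\begin{verbatim}
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a galaxy on the [NII]/Halpha vs [OIII]/Hbeta diagram."""
    x, y = log_nii_ha, log_oiii_hb
    # below the Kauffmann et al. (2003) curve: purely star-forming
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.30:
        return "star-forming"
    # between that curve and the Kewley et al. (2001) curve: composite
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"
    return "AGN"

print(bpt_class(-0.6, -0.3))   # star-forming
print(bpt_class(-0.3,  0.0))   # composite
print(bpt_class( 0.0,  0.5))   # AGN
\end{verbatim}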
16
9
1609.03040
1609
1609.04726_arXiv.txt
{BHR\,160 is a virtually unstudied cometary globule within the Sco~OB4 association in Scorpius at a distance of 1600~pc. It is part of a system of cometary clouds which face the luminous O star HD155806. BHR\,160 is special because it has an intense bright rim. } {We attempt to derive physical parameters for BHR\,160 and to understand its structure and the origin of its peculiar bright rim. } {BHR\,160 was mapped in the \twcop, \thco and \ceo (2--1) and (1--0) and CS (3--2) and (2--1) lines. These data, augmented with stellar photometry derived from the ESO VVV survey, were used to derive the mass and distribution of molecular material in BHR\,160 and its surroundings. Archival mid-infrared data from the WISE satellite were used to find IR excess stars in the globule and its neighbourhood.} {An elongated 1\arcmin\ by 0\farcm6 core lies adjacent to the globule's bright rim. \twco emission covers the whole globule, but the \thcop, \ceo and CS emission is more concentrated towards the core. The \twco line profiles indicate the presence of outflowing material near the core, but the spatial resolution of the mm data is not sufficient for a detailed spatial analysis. The BHR\,160 mass estimated from the \ceo mapping is { 100$\pm$50\,\Msun\ (d/1.6\,kpc)$^2$, where $d$ is the distance to the globule. Approximately 70\% of the mass lies in the dense core. The total mass of molecular gas in the direction of BHR\,160 is 210$\pm$80\,(d/1.6\,kpc)$^2$\,\Msun\ when estimated from the more extended VVV NIR photometry.} We argue that the bright rim of BHR\,160 is produced by a nearby early B-type star, HD~319648, that was likely recently born in the globule. This star is likely to have triggered the formation of a source, IRS~1, that is embedded within the core of the globule and detected only in \Ks\ and by WISE and IRAS. } {}
\label{sect:introduction} Bok globules are small compact clouds with typical dimensions of approximately 0.15 to 0.8~pc \citep{Bok1977, reipurth2008}. { Many of these are cloud cores} which have been exposed from the interior of large molecular clouds when the more tenuous material has been swept away by the formation of nearby OB stars \citep{reipurth1983}. Cometary globules are a subset of Bok globules in a transition phase, still showing the windswept appearance of their formation. The compression that the globules suffer as they are being excavated in many cases leads to the formation of stars, so cometary globules frequently contain young stars \citep[for example][]{haikalareipurth2010,haikalaetal2010}. In a survey for globules in the Galactic plane, we have come across a remarkable yet virtually unexplored cometary globule. It is listed as object \object {353.3+2.4} in the \citet{hartleyetal1986} list of southern dark clouds. \citet{bourkeetal1995a, bourkeetal1995b} surveyed the optical appearance of the opaque \citet{hartleyetal1986} clouds smaller than 10$'$ in diameter and searched for ammonia (1,1) emission in this selection. Globule 353.3+2.4 is object 160 in the \citet{bourkeetal1995a} list and will be identified as \object {BHR\,160} in the following. The globule is clearly cometary, but its striking aspect is its bright rim (Figure~1). The brightest part of this rim has a width of $\sim$2$'$ (0.9~pc) with fainter extensions on both sides. We have obtained a poor, noisy red spectrum { at the ESO 3.6m telescope} of the bright rim which shows a strong H$\alpha$ line and much weaker [SII] $\lambda$~6717/6731 lines, as expected in photoionized gas. None of the other cometary clouds in the region show similar bright rims, which leads to the suspicion that a bright star $\sim$30\arcsec\ away from the midpoint of the bright rim is the source of the UV radiation. This star is \object {HD 319648}, an early B star. The general region of BHR\,160 contains a number of cometary globules and cometary-shaped clouds, which all { face towards another, more distant,} bright star, \object {HD 155806} (= HR~6397 = V1075~Sco), see Figure~2. This is an O7.5Ve star \citep{walborn1973}, and the hottest known Galactic Oe star \citep{fullertonetal2011}. HD~155806 is the most luminous star in the little-studied \object {Sco OB4} association \citep{the1961, roslund1966}. Within a radius of one degree around HD~155806, \citet{the1961} found 40 OB stars and 85 A-type stars, for which he adopted a distance of $\sim$1400~pc. In a follow-up study, \citet{roslund1966} found that the Sco OB4 association extends towards the south, with its main concentration in the young H~II region NGC 6334. \citet{persitapia2008} have determined the distance to NGC~6334 to be 1.61$\pm$0.08~kpc. In the following we adopt a distance of 1.6~kpc for BHR\,160. In this paper, we present detailed millimetre-wavelength multi-transition observations of the virtually { unstudied} globule BHR\,160, and we use the data to determine the mass and structure of the globule. The observations are augmented with archival near- and mid-infrared data which reveal that star formation is ongoing within the globule.
\label{sect:conclusions} We have carried out dedicated molecular line observations in the CO (and isotopologues) (1--0) and (2--1) and in the CS (2--1) and (3--2) transitions to study the basic properties of the bright-rimmed dark cloud BHR\,160. Combining these data with data available in various public surveys provides insight into the globule and its surroundings: 1. AAO/UKST imaging in the \Halpha\ line reveals that BHR\,160 is part of the shell of an extinct HII region. Besides BHR\,160, several other globules are associated with the same shell of neutral material at the edge of the Sco OB4 association. 2. BHR\,160 is elongated with dimensions of approximately 5\arcmin$\times$3\arcmin. The globule is delineated on three sides by a halo emitting in the \Halpha\ line, while a cometary-like tail is present on the NE side. 3. The 1\arcmin\ by 0\farcm6 BHR\,160 core is dense and bounded sharply in the west by the bright \Halpha\ rim, which is also seen faintly in reflected light at other wavelengths. 4. The maximum \Htwo\ column density in the globule core is $4.4\times10^{22}$\,\persqcm\ as estimated from the \ceo data, or half of this when estimated from extinction. The mass estimated from the \ceo observations { using the LTE approximation within the mapped region is 100$\pm$50(d/1.6\,kpc)$^2$\,\Msun, of which 70\% is contained in the core. The total BHR\,160 mass estimated from optical extinction is 210$\pm$80(d/1.6\,kpc)$^2$\,\Msunp\ Approximately 40\% of the total optical mass is contained in the BHR\,160 appendix, which is argued to be a separate cloud seen in projection.} 5. Only one infrared excess star, BHR\,160\,IRS1, was detected within BHR\,160. It lies off the core on the opposite side to the bright rim. Only a faint, slightly extended object is seen in the VVV \Ks\ image, but the star is detected in all four WISE channels from 3.4~$\mum$ to 20~$\mum$. An IRAS source lies nominally 15\arcsec\ to the west of IRS\,1, but in the HIRES-enhanced IRAS 60\,$\mum$\ image the emission maximum coincides with IRS\,1. A SED fit to the WISE and IRAS 60\,$\mum$\ fluxes points to a low-mass Class I protostar of about 2~\Msun. Other fits from subsolar masses up to 5~\Msun\ are, however, possible, and the fits must be considered only indicative until better FIR data become available. 6. Analysis of seven bright stellar WISE sources seen towards BHR\,160 provides evidence for recent star formation in the region. According to SED fits to \jhks\ and WISE data, six of these are high-mass objects in different stages of early stellar evolution. Only one low-mass source is detected, but only the WISE fluxes are available for this star. As all the fits are based on a restricted wavelength range, they should all be considered as indicative. \vspace{0.5cm} {\em Acknowledgements:} We thank Noel Cramer for obtaining the Geneva photometry of HD~319648, and Bambang Hidayat for sending us a copy of the Th\'e~(1961) paper. 
This research has made use of the following resources: data products from observations made with ESO Telescopes at the La Silla or Paranal Observatories under ESO programme ID 179.B-2002; SIMBAD database, operated at CDS, Strasbourg, France; NASA’s Astrophysics Data System Bibliographic Services; data from the Southern H$\alpha$\ Sky Survey Atlas (SHASSA), which was produced with support from the National Science Foundation; data products from the 2MASS, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the NASA and the US National Science Foundation; data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
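For orientation (a schematic form of the standard LTE estimate; the excitation temperature and the abundance ratio actually adopted by the authors are not given in this excerpt), the \ceo-based mass quoted above is of the form
\begin{equation}
M_{\rm LTE} \simeq \mu_{\rm H_2}\,m_{\rm H}\,d^{2}\left[\frac{{\rm H_2}}{{\rm C^{18}O}}\right]\sum_{\rm pixels}N({\rm C^{18}O})\,\Delta\Omega ,
\end{equation}
with $\mu_{\rm H_2}\approx2.8$ (mass per hydrogen molecule, including helium) and an assumed ${\rm H_2/C^{18}O}$ abundance ratio of order $10^{7}$; the explicit $d^{2}$ factor is what produces the $(d/1.6\,{\rm kpc})^{2}$ scaling of the quoted masses.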
16
9
1609.04726
1609
1609.06723_arXiv.txt
In this paper, we provide a simple and modern discussion of rotational superradiance based on quantum field theory. We work with an effective theory valid at scales much larger than the size of the spinning object responsible for superradiance. Within this framework, the probability of absorption by an object at rest completely determines the superradiant amplification rate when that same object is spinning. We first discuss in detail superradiant scattering of spin 0 particles with orbital angular momentum $\ell=1$, and then extend our analysis to higher values of orbital angular momentum and spin. Along the way, we provide a simple derivation of vacuum friction---a ``quantum torque'' acting on spinning objects in empty space. Our results apply not only to black holes but to arbitrary spinning objects. We also discuss superradiant instability due to formation of bound states and, as an illustration, we calculate the instability rate $\Gamma$ for bound states with massive spin 1 particles. For a black hole with mass $M$ and angular velocity $\Omega$, we find $\Gamma \sim (G M \mu)^7 \Omega$ when the particle's Compton wavelength $1/\mu$ is much greater than the size $GM$ of the spinning object. This rate is parametrically much larger than the instability rate for spin 0 particles, which scales like $(GM \mu)^9 \Omega$. This enhanced instability rate can be used to constrain the existence of ultralight particles beyond the Standard Model.
Superradiance\footnote{Throughout this paper we will use the more general term ``superradiance'' in place of the more specific one ``rotational superradiance''.} is a surprising phenomenon where radiation interacting with a rotating object can be amplified if prepared in the correct angular momentum state~\cite{zel1971generation,zel1972amplification}. For an axially symmetric object, such amplification occurs whenever the following ``superradiant condition'' is met: \be \label{SR_condition} \omega-m\Omega < 0 \, , \ee where $\omega$ is the angular frequency of the incoming radiation, $m$ its angular momentum along the axis of rotation (which coincides with the axis of symmetry), and $\Omega$ is the magnitude of the angular velocity of the rotating object. The importance of superradiance in astrophysics stems from the fact that it is a mechanism for extracting energy from spinning compact objects, and in particular from black holes~\cite{misner1972stability,starobinskii1973amplification,starobinskiichurilov1973amplification,teukolsky1974perturbations}. Because this rotational energy reservoir can be tremendous, any such mechanism could in principle have observable consequences and serve as a probe of strong gravity. Historically, however, it has been difficult to detect this phenomenon in real astrophysical systems. One main difficulty is that the amplification efficiency is generally very low\footnote{This is especially true in the long wavelength limit. However, it should be noted that the amplification factors for scalar and electromagnetic radiation are small at all frequencies ($<0.4\%$ and $<10\%$ respectively), whereas the amplification of gravitational radiation can become as high as $140\%$ at very high frequencies ($\omega$ of order the inverse light-crossing time)\cite{starobinskiichurilov1973amplification}.} for massless radiation~\cite{starobinskii1973amplification, starobinskiichurilov1973amplification,Press:1972zz}. This necessitates contrived scenarios such as the ``Black Hole Bomb'' of Press and Teukolsky~\cite{Press:1972zz}, where some sort of perfect spherical mirror encases the rotating object and reflects the amplified modes back, allowing them to grow exponentially in energy. Consequently, it seems now that other astrophysical mechanisms, such as for instance the Blandford-Znajek process~\cite{Blandford:1977ds}, play a much more important role in the dynamics and evolution of compact objects than superradiant scattering of electromagnetic or gravitational radiation. Recently however, two distinct developments have led to a renewed interest in superradiance. First, the development of the gauge-gravity correspondence~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} has spurred a great deal of activity surrounding black hole solutions in asymptotically AdS spacetimes. For such solutions, the boundary of AdS acts as a perfect mirror reflecting gravitational radiation back to the black hole in a finite time, making it possible for instabilities to develop~\cite{Cardoso:2004hs}. Secondly, in the context of particle physics it was realized that the existence of light particles beyond the Standard Model can affect the spin distribution of astrophysical black holes~\cite{Arvanitaki:2009fg}. Such light particles can become gravitationally bound to a black hole and, if they are bosons (such as axions), acquire extremely high occupation numbers. 
This instability can have a host of fascinating---and more importantly observable---consequences~\cite{Arvanitaki:2010sy, Arvanitaki:2014wva, Arvanitaki:2016qwi, Arvanitaki:2016fyj}.\footnote{We have given short shrift to a great deal of work on the subject matter of superradiance. The interested reader can find a much more complete record of the literature in the excellent reviews~\cite{Bekenstein:1998nt} and~\cite{Brito:2015oca}.} These developments are representative of the fact that high energy physicists' interest in superradiance is often in the context of black holes. This may give the false impression that superradiance is somehow related to the existence of an ergosphere in the Kerr solution.\footnote{See for instance remarks to this end in the authoritative account of the Kerr metric by Teukolsky \cite{Teukolsky:2014vca}.} It is instead a much more general phenomenon that can occur for {\em any} rotating object that is capable of absorbing radiation. As a matter of fact, the original papers on the subject by Zel'dovich~\cite{zel1971generation,zel1972amplification}---beautiful in their brevity and clarity---are about scattering of electromagnetic radiation off a cylinder with finite conductivity. Nevertheless, discussions of superradiance are often obscured by the algebraic complexity of the Kerr solution. In this paper, we will show that, at least in the long wavelength limit, dealing with the details of the Kerr solution is neither necessary nor helpful. It is also very restrictive, because there is no analogue of Birkhoff's theorem for the Kerr solution~\cite{Teukolsky:2014vca}. This means that the metric outside a rotating star generically differs from the Kerr metric and can depend on additional parameters besides the mass $M$ and the spin $J$. It is therefore necessary to go beyond the Kerr solution in order to describe superradiant scattering off astrophysical objects other than black holes. This is however not feasible in the usual approach, which is based on finding solutions to the wave equation on a fixed curved background, because (\emph{a}) in general the exact form of the metric is not known, and (\emph{b}) even if it were, the resulting wave equation would likely be much more complicated to solve analytically than for the Kerr metric. The purpose of this work is to give a modern and comprehensive account of superradiance based on effective field theory (EFT) techniques, and to provide a simple framework to carry out perturbative calculations in the context of superradiant processes. By focusing on the long wavelength limit, our approach is capable of describing the onset of superradiance for any slowly rotating object, be it a star or a black hole. This shows explicitly that superradiance is just a consequence of dissipation and spin, and nothing else. In particular, there is no need for an ergosphere. Moreover, it is possible to infer superradiant scattering efficiencies and superradiant instability rates by matching a single quantity (e.g. absorption cross section) which can be calculated \emph{even when the object is at rest}. For instance, one can extract the leading order results for rotating black holes from calculations carried out in a Schwarzschild background, without knowing anything about the Kerr metric. The rest of this paper is organized as follows. In Section~\ref{EFT_for_spin}, we review the effective theory of relativistic spinning objects coupled to external fields discussed in~\cite{Delacretaz:2014oxa}. 
This approach is valid in the slowly-rotating regime, i.e. whenever the angular frequency is smaller than the object's characteristic frequencies.\footnote{For maximally rotating objects, the expansion in angular velocities breaks down~\cite{Delacretaz:2014oxa} and one needs to reorganize the effective action following for instance~\cite{Porto:2005ac,Levi:2015msa}.} We also discuss how to incorporate the effects of dissipation---critical for superradiance---in a way that is consistent with unitarity and the EFT framework~\cite{Goldberger:2005cd,Porto:2007qi}. With the basic formalism in hand, we illustrate our approach to superradiance in Section \ref{spin_0} by considering at first the $\ell =1$ modes of a spin $0$ field. Here, we argue that superradiant scattering follows from a tension between {\em absorption} and {\em stimulated emission}. Calculations are carried out in some detail, as similar manipulations take place also in the subsequent sections. In particular, we calculate the probabilities of absorption and spontaneous emission by considering processes that involve single quanta. In order to justify this approach and make contact with the standard one based on classical wave equations~\cite{Brito:2015oca}, in Section \ref{coherent states} we recalculate these probabilities using coherent states and find perfect agreement with the single-quantum approach in the limit of large occupation number. We then generalize our results to higher values of the orbital angular momentum (Section \ref{higher multipoles}) and higher integer spins (Section \ref{higher spins}). These cases are a bit more cumbersome from a purely algebraic viewpoint, although conceptually they are simple generalizations of the results derived in Section~\ref{spin_0}. Finally, in Section \ref{bound states} we consider superradiant instability due to formation of bound states and show how to compute the instability rate using the formalism developed in the previous sections. For concreteness, we carry out explicit calculations for spin 0 and spin 1 particles. The latter case would be exceedingly complicated to analyze with conventional methods, because the wave equation for massive spin 1 fields is not factorizable on a Kerr background (let alone on more general axially symmetric backgrounds). Interestingly, we find the instability rate for spin 1 particles to be parametrically larger than the one for spin 0 particles, in agreement with the results of~\cite{Pani:2012bp}. We conclude in Section~\ref{conclusions} by summarizing our results. \\ \noindent \emph{Conventions:} throughout this paper, we will work in units such that $c = \hbar =1$ and we will adopt a ``mostly plus'' metric signature. Greek indices $\mu$, $\nu$, $\cdots$ run over $0$, $1$, $2$, $3$ and capital Latin indices $I$, $J$, $\cdots$ run over $1$, $2$, $3$. Other conventions and technical details are summarized in the appendices.
\la{conclusions} In this work, we have discussed a modern, perturbative approach to (rotational) superradiance based on effective field theory techniques. Our formalism describes slowly spinning objects interacting with particles of any mass and spin whose energy is much smaller than the inverse size of the object (in natural units). Within this framework, we show unambiguously that superradiance is not peculiar to the Kerr solution, but rather is a generic feature of any dissipative rotating object. As such, our results apply also to astrophysical systems other than black holes, which generically are not described by a Kerr metric. For simplicity, we have restricted our attention to spherically symmetric objects, although it would be interesting and in principle straightforward to relax this assumption. We argued that, at lowest order in perturbation theory, the \emph{same} parameters determine (1) the absorption probability of a particle with a given spin, mass and polarization, (2) the superradiant amplification rate of a beam of such particles, (3) the rate of superradiant instability due to formation of bound states, and (4) the relaxation time scale due to vacuum friction. These parameters can be extracted from analytic (for black holes) or numerical (for other astrophysical objects) calculations of the absorption probability in the (relatively simpler) static limit, and then used in the more complicated spinning case. This is an improvement on a similar EFT treatment given in~\cite{Porto:2007qi}, which required an additional matching procedure for the spinning case. For spin 0 particles, the same parameters describe both the massless and massive case. This is not the case for higher spin particles, which require two distinct sets of parameters. Within our framework, we were able to unify a variety of results that were previously scattered across the literature, as well as to derive some interesting new ones. In particular, we calculated the absorption probability and superradiant amplification rate for massive particles with \emph{arbitrary} integer spin scattering off a \emph{generic} spinning object (that is, not necessarily a black hole). We also calculated the instability rate due to formation of bound states with massive spin 0 and spin 1 particles. By working at lowest order in the gravitational coupling, we showed that the former scales like $(G M \mu)^9$ and the latter like $(G M \mu)^7$ for small values of $G M \mu$. This estimate was obtained by studying states with $\ell =1$ and $\ell =0$ respectively, as these are the most unstable ones. There is however no conceptual obstacle to extending our calculations to higher values of $\ell$. Our results for vector bound states are consistent with the recent numerical results of~\cite{Rosa:2011my,Pani:2012bp}, but disagree with earlier results obtained analytically in~\cite{Galtsov:1984nb}. Finally, when combined with our previous work~\cite{Endlich:2015mke}, our results make explicit the ``correspondence'' between tidal distortion and gravitational superradiance put forward in~\cite{Glampedakis:2013jya}. From our EFT perspective, the connection between these two seemingly unrelated phenomena follows immediately from the fact that they are governed by the same dissipative couplings in the effective action.
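Keeping only the parametric dependences quoted above and ignoring order-one prefactors (which are not reproduced here), the relative strength of the two instabilities is
\begin{equation}
\frac{\Gamma_{s=1}}{\Gamma_{s=0}} \sim (GM\mu)^{-2} ,
\end{equation}
so that for, say, $GM\mu=0.1$ the vector instability grows roughly a hundred times faster than the scalar one, while both rates are strongly suppressed as $GM\mu\to0$. This is the sense in which the spin 1 instability is parametrically enhanced.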
16
9
1609.06723
1609
1609.02336_arXiv.txt
We study the main astrophysical properties of differentially rotating neutron stars described as stationary and axisymmetric configurations of a moderately stiff $\Gamma=2$ polytropic fluid. The high level of accuracy and of stability of our relativistic multidomain pseudo-spectral code enables us to explore the whole solution space for broad ranges of the degree of differential rotation, but also of the stellar density and oblateness. Staying within an astrophysically motivated range of rotation profiles, we investigate the characteristics of neutron stars with maximal mass for all types of families of differentially rotating relativistic objects identified in a previous article \citep{AGV}. We find that the maximum mass depends both on the degree of differential rotation and on the type of solution. It turns out that the maximum allowed mass can be up to 4 times higher than it is for non-rotating stars with the same equation of state. Such values are obtained for a modest degree of differential rotation but for one of the newly discovered types of solutions. Since such configurations of stars are not that extreme, this result may have important consequences for the gravitational wave signal to be expected from coalescing neutron star binaries or from some supernova events.
Binary neutron star (BNS) mergers are thought to be promising sources of gravitational waves and of neutrinos \citep{R15, Bernuzzi16}, as well as the progenitors of short gamma ray bursts \citep*{BNPP84,ELPS89}. After the detection of gravitational waves from binary black holes by the LIGO experiment \citep{Abbotta, Abbottb}, they are even among the most anticipated next targets. The outcome of a BNS merger is the formation of a stellar-mass black hole, but the latter can either occur promptly or be slightly delayed and include a stage during which a massive and warm differentially rotating neutron star is produced \citep{SU02}. Whatever the actual scenario, highly sophisticated and realistic numerical simulations are needed to ascertain the signals to be expected and consequently to enable us to extract information on both the gravitational and the high energy underlying physics. Although a lot of progress has been made, building numerical codes that take into account all the pertinent physical ingredients is such a difficult task that it is still far from being achieved, which makes simplified studies concentrating on specific aspects useful. One basic but key issue is the maximum life span of the potential short-lived material remnant, a question which can be approached by focussing first on its maximum mass. The analysis of the involved timescales and of the results of numerical simulations \citep{STU05} shows that a rough but reasonable approximation to address this problem consists in modeling the central body as a stationary axisymmetric relativistic star in differential rotation, neglecting complicated inner motions of the matter, nuclear reactions, thermal effects, magnetic fields, etc. Doing so, it is for instance possible to study the influence of the degree of differential rotation or of the stiffness of the equation of state (EOS) on the maximum mass which, for rotating stars, can be much higher than for static stars [see for instance \cite{CST92,CST94a,CST94b,BSS0,LBS03}]. In a previous article \citep{AGV} (later referred to as Paper~I), a new investigation was presented of the structure of differentially rotating neutron stars, modelled as constant-density stars or relativistic $N=1\,(\Gamma=2)$ polytropes. This study, extended to other polytropic EOSs in \cite{SKGVA}, relied on a multi-domain spectral code \citep*[based on the so-called ``AKM-method'',][]{AKM} that was previously shown to enable the calculation of very extreme configurations of rigidly rotating relativistic stars \citep*{AKMb,SA} or rings \citep*{Ansorg2005,AP05}. In Paper~I, only star-like configurations, \emph{i.e.} with a spheroidal topology (without a hole), were considered, but allowing what are sometimes called ``quasi-toroidal'' configurations in which the maximal density is not the central one. The focus was put on the solution space, and a notable result was the discovery of four ``types'' of configurations that co-exist with each other even for reasonable profiles of angular momentum. The purpose of the present article is to extend the study of Paper I by calculating, for $\Gamma=2$ polytropes, astrophysically relevant quantities, such as the maximal mass, the angular momentum, the ratio between kinetic and potential energies, etc. Our highly accurate and stable spectral code enables us, for the first time, to study in detail those properties of differentially rotating neutron stars taking into account the whole solution space identified in Paper~I. 
Moreover, the understanding of the global structure of the parameter space makes it possible to explain the results of preceding studies, especially the works by \cite{BSS0} and \cite{LBS03}: we show that strange features of some of the sequences they obtained arise from the fact that their codes were jumping from one type of configuration to another because of numerical limitations. Finally, the configurations we have calculated could be used as initial data to perform, in a systematic way, the stability analysis of differentially rotating neutron stars and to determine stability criteria for such objects. The plan of the article is as follows: in Section~\ref{s:rotlaw}, we start by recapitulating the issue of differential rotation in relativistic stationary rotating stars, with the primary goal of describing the assumptions made in our work and of defining variables and notation. Then, in Section~\ref{s:mass}, we briefly review current knowledge concerning the maximum mass of rotating relativistic stars and present our results in the context of the existence of the various types that were introduced in Paper~I. Section~\ref{s:rotpr} then focuses on the angular momentum and other quantities related to rotation and stability, after which Section~\ref{s:conc} summarizes the results achieved and contrasts them with those of previous studies. Finally, Appendix~\ref{s:NumScheme} reviews the features of the numerical code, presenting tests of the convergence and accuracy of our results and putting emphasis on the specificities of the version used in this article as well as in Paper~I.
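For orientation, and assuming the standard conventions rather than quoting Paper~I verbatim, the differential-rotation profile referred to throughout is the so-called j-constant law of \cite{KEH},
\begin{equation}
  F(\Omega) \equiv u^t u_\phi = A^2 \left( \Omega_{\rm c} - \Omega \right),
\end{equation}
where $\Omega_{\rm c}$ is the central angular velocity and $A$ is a constant with the dimension of a length that fixes the degree of differential rotation (rigid rotation is recovered in the limit $A \to \infty$, and the degree of differential rotation is commonly quoted through the dimensionless ratio of $A$ to the equatorial radius). Likewise, the ``Kerr parameter'' discussed in the conclusions is the dimensionless spin $j \equiv cJ/(GM^2)$, so that $j>1$ corresponds to the ``supra-Kerr'' configurations mentioned there.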
\label{s:conc} Using a highly accurate spectral code based on the Newton-Raphson scheme, we calculated configurations of relativistic differentially rotating neutron stars modeled as $\Gamma=2$ polytropes for broad ranges of maximal densities and of the degree of differential rotation. We were able to fully explore the solution space for stars with a rotation profile described by the law proposed in \cite{KEH}, although we considered only models with spheroidal topology (without a hole). For the first time, the maximum mass and various other astrophysical quantities were calculated for all types of differentially rotating neutron stars, as defined in Paper~I. The maximum mass of differentially rotating neutron stars was shown to depend not only on the degree of differential rotation but also on the type of the solution. Its value is an increasing function of the degree of differential rotation for type A solutions (associated with a low or modest degree of differential rotation) and a decreasing function for types B (with a modest degree of differential rotation) and C (with a modest or high degree of differential rotation). The largest increase of the maximum mass, 3--4 times the maximum non-rotating mass, is obtained for intermediate degrees of differential rotation, indicating that the corresponding configurations could be relevant in some astrophysical scenarios. Those configurations belong to sequences of type B, which were not taken into account in previous studies, mainly because of numerical difficulties. In addition, the thorough investigation performed in the present article allowed us to understand the partial results obtained with other codes \citep{BSS0,LBS03} and to show that the maximum possible mass of a differentially rotating neutron star could be much higher than previously thought, even for astrophysically reasonable configurations. In order to go a step further in deciding whether configurations of the new types are relevant in actual astrophysical situations, we performed a rough analysis of their rotational parameters, such as their angular momentum and other quantities linked to the possible appearance of instabilities. We observed that the ratio between the kinetic and potential energies is indeed quite large for many of the newly discovered configurations, but we also noticed that so is their Kerr parameter (always higher than 1 for stars with the maximum mass belonging to types B and D and for some of type C), which could imply that they are somehow stabilized. However, the definitive answer to that question has to come from other analyses, be they perturbative or fully dynamical. A few hydrodynamical studies \citep{BSS0,SBS0,GRS11} have already shown that supra-Kerr stellar models seem dynamically stable but are subject to various secular instabilities leading to the emission of gravitational waves. Another complementary approach would naturally be to use the configurations we have calculated as initial data to perform dynamical evolutions of differentially rotating neutron stars and to study the stability criteria for such objects. To be of physical interest, the conclusions drawn in our study should also be supported by further investigations with more realistic descriptions of the microphysics, such as the equation of state. In \cite{SKGVA}, we present results concerning the influence of the stiffness of a polytropic equation of state on the various types of configurations and on their properties, in order to examine how robust our results are.
In other articles, we shall study the maximum mass of strange stars \citep{SGVA} and analyse in detail the rotational properties of neutron stars described by realistic EOSs.
16
9
1609.02336
1609
1609.00569_arXiv.txt
The distribution of the sunspot group size (area) and its dependence on the level of solar activity is studied. It is shown that the fraction of small groups is not constant but decreases with the level of solar activity so that high solar activity is largely defined by big groups. We study the possible influence of solar activity on the ability of a realistic observer to see and report the daily number of sunspot groups. It is shown that the relation between the number of sunspot groups as seen by different observers with different observational acuity thresholds is strongly non-linear and cannot be approximated by the traditionally used linear scaling ($k-$factors). The observational acuity threshold [$A_{\rm th}$] is considered to quantify the quality of each observer, instead of the traditional relative $k-$factor. A nonlinear $c-$factor based on $A_{\rm th}$ is proposed, which can be used to correct each observer to the reference conditions. The method is tested on a pair of principal solar observers, Wolf and Wolfer, and it is shown that the traditional linear correction, with the constant $k-$factor of 1.66 to scale Wolf to Wolfer, leads to an overestimate of solar activity around solar maxima.
\label{Sec:Intro} The sunspot number series was introduced in the 1860s by Rudolf Wolf of Z\"urich and has remained the most commonly used index of long-term solar variability ever since. The sunspot number series spans more than 400 years, including the Maunder minimum (\opencite{eddy76,sokoloff04,usoskin_MM_15}), and is composed of observations from a large number of different observers. Since they used different instruments and different techniques for observing and recording sunspots, it is unavoidable that data from different observers need to be calibrated to each other to produce a homogeneous dataset. The first inter-calibration of data from different observers was performed by Rudolf Wolf in the mid-19th century. He proposed a simple linear scaling between the different observers (the so-called $k-$factors), so that the data (counts of groups and sunspots) from one observer are multiplied by a $k-$factor to rescale them to another, reference observer. The value of the correction $k-$factor is assumed to be rigidly fixed, as found by a linear regression, for each observer, and it characterizes the observer's quality in a relative way with respect to the reference observer. Since then, this method has been used almost universally, until very recently (\opencite{clette14,svalgaard16}). The $k-$factor approach utilizes the method of ordinary linear least-squares regression forced through the origin. This method is based on several formal assumptions which are usually not discussed, but whose violation may lead to incorrect results: \begin{enumerate} \item {\it Linearity}, \textit{i.e.} the relation between two variables $X$ and $Y$ can be described as linear in the entire range of the $X$-values. This assumption is invalid for the sunspot (group) numbers, as shown by \inlinecite{lockwood_SP3_2016} or \inlinecite{usoskin_ADF_16} and discussed here, because of the essential nonlinearity. \item {\it Random sample}, \textit{i.e.} the pairs of $X$- and $Y$-values are taken randomly from the same population and have sufficient lengths. This assumption is valid in this case. \item {\it Zero conditional mean}, \textit{i.e.} normality and independence of errors, implying that all errors are normally distributed around the true values. This assumption is also invalid since the errors are asymmetric and not normal (\opencite{usoskin_ADF_16}). \item {\it Constant variance (homoscedasticity)}. This assumption is violated since the variance of the data is not constant but depends on the level of solar activity, so that the variance of the data points is much larger for periods of high activity than around solar minima. \item {\it Exactly known $X$-values}, \textit{i.e.} the $X$-values are supposed to be known exactly, without errors. This assumption is invalid since data from the calibrated observer ($X$-axis) can be even more uncertain than those from the reference observer ($Y$-axis). \item Additionally, forcing through the origin is assumed for the $k-$factors. This assumption is also invalid, as shown by \inlinecite{lockwood_SP3_2016}, since the fact that an observer reports no spots does not necessarily mean that an observer with a better instrument would not see some small spots. \end{enumerate} We do not discuss here the issue of collinearity, since this assumption does not directly apply to the regression problem considered here. Accordingly, five out of the six assumptions listed above are invalid in the case of sunspot numbers, making the linear scaling calibration by $k-$factors formally invalid.
This method was reasonable in the mid-19th century for interpolations to fill short gaps in observations, but now we aim to develop a more appropriate method for a direct calibration of different observers to each other. Several indirect methods of solar-observer calibration have been introduced recently (\opencite{friedli16,usoskin_ADF_16}), but here we focus on a direct inter-calibration based on modern statistical methods. \begin{figure} \centering \resizebox{10cm}{!}{\includegraphics[bb = 28 177 486 805]{MAP.eps}} \caption{ Size distribution of sunspot groups from the reference database. Panel a: Cumulative distribution function (CDF) of sizes of sunspot groups above the given threshold $A_{\rm th}$. The solid line depicts the entire population of sunspot groups, the red dashed and blue dotted lines show the CDF for low-activity ($G=1$) and high-activity days ($G=20$), respectively. Panel b: The 2D map of the CDF (color code is shown on the right) as a function of the activity level (the daily $G_{\rm ref}$ -- X-axis) and the group size (Y-axis), normalized to the CDF at $G=1$ (the red dashed curve in panel a). } \label{Fig:map} \end{figure} It was proposed recently (\opencite{lockwood_SP3_2016,usoskin_ADF_16}) that the ``quality'' of a solar observer can be quantified not by a relative $k-$factor but by the observational acuity threshold, \textit{i.e.} the minimum size of a sunspot group the observer can see, given the instrumentation, technique, and eyesight used. This quantity [$A_{\rm th}$] (in millionths of the solar disc, msd) has a clear meaning -- all sunspot groups bigger than $A_{\rm th}$ are reported, while all the groups smaller than $A_{\rm th}$ are missed by the observer. We note that weather conditions, age, or experience may lead to variations of the actual threshold of a given observer in time, but here we consider the threshold to be constant in time. This is also assumed in the $k-$factor methodology. The threshold would be consistent with the $k-$factor if the fraction of small ($<A_{\rm th}$) groups on the solar disc were roughly constant and independent of the level of solar activity. However, as many studies imply (\opencite{kilcik11,jiang11_2,nagovitsyn12,obridko14}), the fraction of small groups varies with solar activity: it is large around solar minima and decreases with the level of solar activity. Accordingly, the use of the linear $k-$factor method may lead to a distortion of the calibrated sunspot numbers (\opencite{lockwood_ApJ_16}). In this article we study the relation between sunspot group counts by a ``poor'' observer and those by the reference ``perfect'' observer, using the reference dataset described in Section~\ref{Sec:data}. In Section~\ref{Sec:dist} we study the distribution of sunspot group sizes and its dependence on the level of solar activity. Its effect on observations by solar observers of different quality, and on their inter-calibration, is discussed in Section~\ref{Sec:obs}. We propose the use of a new nonlinear $c-$factor to calibrate data from a ``poor'' observer to the reference conditions depending on the level of solar activity, in a more realistic manner than that offered by the traditionally used linear $k-$factor.
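As a schematic illustration of why a single $k-$factor cannot work (our sketch, not the authors' code: the exponential group-size distribution, its assumed dependence on activity, and all numerical values below are purely illustrative), one can simulate a reference observer, degrade the daily counts with an acuity threshold $A_{\rm th}$, and compare the best-fit linear $k-$factor with the activity-dependent correction that is actually required:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "reference" observer: on a day with G_ref groups, draw group
# areas (in msd) from an exponential distribution whose mean grows with the
# activity level, mimicking the activity-dependent size distribution above.
# (The functional form and all numbers are illustrative assumptions.)
g_true = rng.integers(1, 21, size=3000)                    # daily G_ref, 1..20
areas = [rng.exponential(scale=50.0 + 15.0 * g, size=g) for g in g_true]

a_th = 100.0                                               # acuity threshold [msd]
g_ref = np.array([a.size for a in areas])                  # "perfect" observer
g_poor = np.array([np.sum(a > a_th) for a in areas])       # "poor" observer

# Traditional linear k-factor, i.e. least squares forced through the origin ...
k = np.sum(g_ref * g_poor) / np.sum(g_poor ** 2)

# ... versus the activity-dependent ratio a nonlinear correction must capture.
for g in (2, 5, 10, 20):
    sel = g_true == g
    print(g, g_ref[sel].mean(), g_poor[sel].mean(), k * g_poor[sel].mean())
\end{verbatim}
In such a toy setup the ratio between the reference and the degraded counts decreases with activity, so a single $k-$factor overestimates activity on high-activity days -- the same qualitative behaviour that motivates the $c-$factor introduced below.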
We have studied the distribution of sunspot group size (area) and its dependence on the level of solar activity. We show that the size distribution of sunspot groups cannot be assumed constant but varies significantly with solar activity. The fraction of small groups, which can potentially be missed by an ``imperfect'' solar observer, is found not to be constant but to decrease with the level of solar activity. An empirical relation (Equation (\ref{Eq:exp})) is proposed which describes the number of small groups as a function of the solar activity level. It is shown that the number of small groups asymptotically approaches a saturation level, so that high solar activity is largely defined by big groups. We have studied the effect of this changing sunspot group size distribution on the ability of realistic observers to see and report the daily number of sunspot groups. It is shown that the relation between the numbers of sunspot groups as seen by different observers with different observational acuity thresholds (defined by the quality of their instrumentation and eyesight) is strongly non-linear and cannot be approximated by a linear scaling, in contrast to the traditional approach. We propose to use the observational acuity threshold $A_{\rm th}$ to quantify the quality of each observer, instead of the relative $k-$factor used earlier. The value of $A_{\rm th}$ means that all sunspot groups bigger than $A_{\rm th}$ would be reported, while all the groups smaller than $A_{\rm th}$ would be missed by the observer. We have introduced the non-linear $c-$factor, based on the observer's acuity threshold $A_{\rm th}$, which can be used to correct each observer to the reference conditions. The method has been applied to a pair of principal solar observers of the 19th century, Rudolf Wolf and Alfred Wolfer of Z\"urich. We have shown that the earlier used linear method to correct Wolf data to the conditions of Wolfer, using the constant $k-$factor of 1.66, tends to overestimate solar activity around solar maxima. This result presents a new tool to recalibrate different solar observers to the reference conditions. A full recalibration based on the new method will be the subject of forthcoming work.
16
9
1609.00569
1609
1609.00872_arXiv.txt
\noindent We compute the entanglement and R\'enyi entropy growth after a global quench in various dimensions in free scalar field theory. We study two types of quenches: a boundary state quench and a global mass quench. Both of these quenches are investigated for a strip geometry in 1, 2, and 3 spatial dimensions, and for a spherical geometry in 2 and 3 spatial dimensions. We compare the numerical results for massless free scalars in these geometries with the predictions of the analytical quasiparticle model based on EPR pairs, and find excellent agreement in the limit of large region sizes. At subleading order in the region size, we observe an anomalous logarithmic growth of entanglement coming from the zero mode of the scalar.
A global quench is a simple setting in which we can study thermalization in isolated quantum systems: at $t=0$ we start with an atypical translationally invariant, short range entangled initial state $\ket{\psi_0}$, and let the state evolve in time.\footnote{We use the word quench somewhat loosely; a more narrow definition describes a process in which the abrupt change of the Hamiltonian turns the ground state of the pre-quench Hamiltonian into an excited state of the post-quench Hamiltonian.} In a generic quantum system, during the process of thermalization all simple observables converge to the value they take in the Gibbs ensemble. A good characterization of thermalization is how close the reduced density matrix of a small subsystem $\sA$, $\rho_\sA\le[\ket{\psi(t)}\ri]$, is to the reduction of the thermal density matrix to the region $\sA$, $\rho_\sA^\text{th}\propto \Tr_{\bar \sA} e^{-\beta H}$, where $\beta$ is to be chosen such that the expectation value of the energy agrees between the two density matrices. One way to quantify the proximity to thermal behavior is to calculate the von Neumann entropy of $\rho_\sA(t)$, and follow it as it evolves from an area-law value to saturation at the thermal entropy. In a free theory, because of the infinitely many conserved charges, the above picture requires modification. The time evolution leads to simple observables converging to their values in the Generalized Gibbs Ensemble (GGE)~\cite{rigol2007relaxation} instead of the Gibbs ensemble. In this paper we will work with Gaussian states in free scalar field theories: for these states it is known that the charges that one has to include in the GGE are the particle numbers in each momentum mode~\cite{Sotiriadis:2014uza}.\footnote{For non-Gaussian states the story is more complicated~\cite{Cardy:2015xaa,Sotiriadis:2015xia}.} After the quench we focus on the case of massless fields. We investigate the entanglement entropy growth of these fields for geometric subregions in diverse dimensions. We discretize the theory on a lattice and use the correlator matrix approach to numerically compute entanglement and R\'enyi entropies. The continuum limit can be achieved by taking a scaling limit: \es{Scaling1}{ {R\ov a}\,,{t\ov a} \gg 1\,, \qquad {\beta_k\ov a} \gg 1\,, } where $R$ is the characteristic size of the region $\sA$, $t$ is the time measured from the quench, $a$ is the lattice constant, and $\beta_k$ is the inverse of the effective temperature in the mode with wavenumber $k$. Let us introduce \es{Shat}{ \hat{S}_\sA(t) = S_\sA(t) - S_\sA(0) } to get rid of the vacuum area-law pieces in the entropy.\footnote{We want the subtracted entropy $\hat{S}_\sA(t)$ to have a good continuum limit. In theories with low-dimension scalar operators the entropy can exhibit a state-dependent divergence structure~\cite{Marolf:2016dob}. In these theories there should exist a corresponding ambiguity in the definition of the entropy that allows us to regularize the entropy in a way that $\hat{S}_\sA(t)$ is finite for all times. In theories with state-independent divergence structure any regularization will yield a finite result. Free scalar theories are of the latter type. Of course, $S_\sA(t)$ itself is well-defined on the lattice. 
} In the limit of large region sizes and times \es{Scaling2}{ {R,t} \gg \beta_k\,, } it is expected that the entropy obeys a scaling form: \es{ScalingLaw}{ \hat{S}_\sA(t)&=s\,\vol(\sA)\, f\le(t\ov R\ri)\\ f(0)&=0\,, \qquad f(\infty)=1 \,, } where $s$ is the entropy density in the GGE,\footnote{In a generic system without any conserved quantity other than the energy, it would be the thermal entropy density.} $f(0)=0$ follows from the definition, and $f(\infty)=1$ assumes that the entropy reaches the equilibrium value predicted by GGE. In the limits~\eqref{Scaling1},~\eqref{Scaling2} the finite area law pieces in the entropy are suppressed by the factor $R/\beta$. In summary, we want to work in the double scaling limit \es{Scaling3}{ {R,t} \gg \beta_k\gg a\,. } There is a useful toy model for entanglement growth introduced in~\cite{Calabrese:2005in,Calabrese:2007rg} and generalized to higher dimensions in~\cite{Casini:2015zua}. This model assumes that the quench creates quasiparticle EPR pairs\footnote{In higher dimensions we can consider more complicated patterns of entanglement, as explored in~\cite{Casini:2015zua}. Intuitively, however, for Gaussian states that we consider in this paper the bipartite entanglement structure encoded in EPR pairs seems to be the appropriate choice.} localized on length scales $O(\beta)$. In the scaling limit~\eqref{Scaling3} the pairs can be taken to be pointlike. In a massless theory, the pairs then propagate with the speed of light,\footnote{In massive integrable models, they follow a nontrivial dispersion relation~\cite{fagotti2008evolution,2016arXiv160800614A}.} and $\hat{S}_\sA(t)$ counts the number of pairs that have one member in $\sA$ and the other in $\bar{\sA}$. While the model reproduces the entropy of one interval in any $1+1$ dimensional conformal field theory (CFT)~\cite{Calabrese:2005in}, for more complicated geometries it only works in integrable CFTs~\cite{Asplund:2015eha}. In this work we find overwhelming evidence that the quasiparticle model reproduces the growth of entanglement in higher dimensional free massless scalar field theories in the scaling limit~\eqref{Scaling3}, by comparing the predictions of the quasiparticle picture to numerical computations in strip and sphere geometries, see Fig.~\ref{fig:geometry}. We study two types of quenches: the boundary state quench corresponds to starting the evolution from a regularized boundary state of the CFT, which leads to $\beta_k=\beta$, while the mass quench corresponds to abruptly changing the Hamiltonian of the system by changing the mass parameter, and leads to a $k$-dependent effective temperature. The quasiparticle picture works for both quenches equally well. \begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{Shapes1.pdf}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \includegraphics[width=6cm]{Shapes2b.pdf} \caption{\small{The two types of geometries we examine in this work. Regions $\mathcal{A}$ and $\bar{\mathcal{A}}$ partition the system into two distinct regions. Starting with a pure state, we trace out region $\bar{\mathcal{A}}$ to obtain a reduced density matrix $\rho_{\mathcal{A}}$, from which we compute the entanglement and R\'enyi entropies. Left: The strip geometry with two sides separated by a distance $2R$. Right: A spherical geometry of radius $R$.} \label{fig:geometry}} \end{center} \end{figure} We emphasize three key features of our findings. 
First, we find (in the two geometries we consider) that at early times the entropy exhibits linear growth of the form: \es{LinGrowth}{ \hat{S}_\sA(t) &= v_E\,s\, \text{area}(\sA)\,t\,, \quad \beta_k\ll t\ll R \,, } where by $\text{area}(\sA)$ we mean the area of the entangling surface, $\vol(\p\sA)$. The dependence on the shape only appears through $\text{area}(\sA)$, and the entanglement velocity $v_E$ is shape independent.\footnote{In the regime $\beta_k\ll t\ll R$ the curvature of $\sA$ should be irrelevant for the process, so~\eqref{LinGrowth} is intuitive. Equation~\eqref{LinGrowth} is also known to hold in strongly coupled theories with a holographic dual~\cite{Hartman:2013qma,Liu:2013iza,Liu:2013qca}.} Second, we comment on the saturation time $t_S$. For spherical geometries entanglement saturates as fast as allowed by causality~\cite{Hartman:2015apr}, \es{SatTime}{ t_S^\text{(sphere)} = R \,. } We find that $t_S$ is strongly shape dependent,\footnote{In chaotic (holographic) examples the shape dependence of $t_S$ is mild, but still non-trivial~\cite{Liu:2013iza,Liu:2013qca,Mezei:2016wfz}.} and for a strip geometry\footnote{The intuition behind~\eqref{SatTime2} is that there are quasiparticle pairs propagating almost parallel to $\p\sA$ that take an arbitrarily long time to start to contribute to the entropy. } \es{SatTime2}{ t_S^\text{(strip)} = \infty \,. } We reiterate that the results~\eqref{LinGrowth},~\eqref{SatTime}, and~\eqref{SatTime2} are in agreement with the quasiparticle model. They are just simple properties of the function $\hat{S}_\sA(t)$ in the limit~\eqref{Scaling3}, which according to our findings is in complete agreement between the numerical computation in the free massless scalar field theory and the quasiparticle model. Third, we point out an unexpected aspect of our numerical results: we see a logarithmic growth of entropy even after the saturation time~\eqref{SatTime}, which, however, is subleading in the limit~\eqref{Scaling3}, and therefore does not spoil the agreement with the quasiparticle model in the appropriate regime.\footnote{Unless we extrapolate this growth to exponentially large times.} We identify the scalar zero mode as the source of this logarithmic growth, but the phenomenon deserves further investigation. Besides the intrinsic interest in the study of equilibration in free field theory, we are also motivated by the scarcity of computations of entropy growth in field theories. Our results elevate the status of the higher-dimensional quasiparticle model from a toy model to an actual description of entanglement growth in free massless scalar field theories.\footnote{In $1+1$ dimensions the quasiparticle model has already been solidly established as a valid description of entanglement growth in integrable models.}\textsuperscript{,}\footnote{Of course, the outstanding problem is to give an analytic derivation of the quasiparticle picture from the field theory.} The results then provide a useful benchmark for strongly coupled theories: the conclusion of~\cite{Casini:2015zua} that in strongly interacting (holographic) theories entanglement spreads faster than allowed by free streaming, \es{vEsmall}{ v_E^\text{(free)} < v_E^\text{(holographic)}\,, } is reinforced. In general, collecting results on entanglement growth from various systems could lead to further insight into the workings of equilibration in quantum systems, both integrable and chaotic. For further discussion from this viewpoint see~\cite{Mezei:2016wfz}. 
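The EPR-pair counting behind these statements is simple enough to be illustrated with a short Monte-Carlo sketch (ours, not the authors' code; the pair density, region size, and sample sizes are arbitrary illustrative choices). Pairs are created homogeneously at $t=0$, their members fly back to back at unit speed, and the entropy is taken to be proportional to the number of pairs with exactly one member inside the region:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def quasiparticle_f(times, R=1.0, d=3, n_pairs=200000, box=4.0):
    """Monte-Carlo estimate of the quasiparticle-model entropy for a ball of
    radius R in d spatial dimensions, normalised to its saturated value."""
    x = rng.uniform(-box, box, size=(n_pairs, d))   # pair creation points
    v = rng.normal(size=(n_pairs, d))               # isotropic directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    counts = []
    for t in times:
        in1 = np.linalg.norm(x + t * v, axis=1) < R
        in2 = np.linalg.norm(x - t * v, axis=1) < R
        counts.append(np.count_nonzero(in1 ^ in2))  # exactly one member inside
    counts = np.asarray(counts, dtype=float)
    return counts / counts[-1]

times = np.linspace(0.0, 2.0, 21)
for t, f in zip(times, quasiparticle_f(times)):
    print(f"t/R = {t:4.2f}   f = {f:5.3f}")
\end{verbatim}
For the ball this toy count rises linearly at early times and saturates at $t=R$, reproducing the early-time linear growth and the spherical saturation time quoted above.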
Using similar techniques, it is possible to study global quenches in free fermion theories. The analytical and numerical techniques for analyzing global quenches in free scalar fields could potentially be extended to interacting field theories either perturbatively \cite{Hertzberg:2016eescalar} or non-perturbatively \cite{Cotler:2016ee1, Cotler:2016ee2}. Such generalizations could shed new light on the dynamics of entanglement in interacting systems. In this paper we restrict our attention to instantaneous quenches. It would be very interesting to extend our analysis to smooth quenches, where the duration of the quench $\de t$ introduces a new time scale. In the limit $R,t,\de t\gg \beta$, we expect the entropy to again obey a scaling form~\eqref{ScalingLaw}, but the scaling function would become a function of two variables, $f(t/R,\, \de t/R)$. Correlation functions obey universal scaling laws in this limit~\cite{Das:2014jna,Das:2014hqa,Berenstein:2014cia,Das:2015jka}, and it would be interesting to explore whether those results carry over to the case of entanglement entropy. It would also be interesting to see whether a modification of the quasiparticle model could reproduce the scaling function $f(t/R,\, \de t/R)$. Perhaps smearing the time of origin of the EPR pairs could be a useful starting point~\cite{Casini:2015zua}. The plan of the paper is as follows. In Sec.~\ref{sec:setup} we provide an introduction to our setup: we review the correlation matrix approach of computing entropies, and we discuss the quenches considered, along with the quasiparticle model. Sec.~\ref{sec:numres} contains the numerical results for the entanglement and R\'enyi entropies, and a comparison with the quasiparticle model gives excellent agreement. A brief investigation into the logarithmically growing mode is also included. Some further details of the setup are relegated to the Appendices.
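To make the correlator-matrix approach mentioned above concrete, the following self-contained sketch (our illustration, not the paper's code) evaluates the von Neumann and R\'enyi entropies of a block of sites in a harmonic chain from the symplectic eigenvalues of the restricted $\langle\phi\phi\rangle$ and $\langle\pi\pi\rangle$ correlators. For brevity it uses the static ground state, whereas the quench calculation of Sec.~\ref{sec:setup} evolves the same correlators with the post-quench Hamiltonian before applying the identical entropy formula; the small mass is the zero-mode regulator alluded to above.
\begin{verbatim}
import numpy as np

def chain_correlators(N, m):
    """Ground-state correlators <phi_i phi_j> and <pi_i pi_j> for a periodic
    harmonic chain of N sites with mass m (lattice spacing a = 1)."""
    k = 2.0 * np.pi * np.arange(N) / N
    omega = np.sqrt(m**2 + 4.0 * np.sin(k / 2.0)**2)
    d = np.arange(N)
    cos_kd = np.cos(np.outer(d, k))              # cos(k d), d = site separation
    x_of_d = cos_kd @ (1.0 / (2.0 * N * omega))
    p_of_d = cos_kd @ (omega / (2.0 * N))
    sep = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    return x_of_d[sep], p_of_d[sep]

def entropies(X, P, sites, n=2):
    """Von Neumann and n-th Renyi entropy of the reduced state on `sites`,
    from the symplectic eigenvalues of the restricted correlators."""
    XA = X[np.ix_(sites, sites)]
    PA = P[np.ix_(sites, sites)]
    nu = np.sqrt(np.linalg.eigvals(XA @ PA).real)
    nu = np.clip(nu, 0.5 + 1e-12, None)          # unentangled modes have nu = 1/2
    s_vn = np.sum((nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5))
    s_n = np.sum(np.log((nu + 0.5)**n - (nu - 0.5)**n)) / (n - 1)
    return s_vn, s_n

# ground state of a chain of 200 sites; a small mass regulates the zero mode
X, P = chain_correlators(N=200, m=1.0e-3)
for L in (5, 10, 20, 40):
    print(L, entropies(X, P, np.arange(L)))
\end{verbatim}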
16
9
1609.00872
1609
1609.00796_arXiv.txt
Volatiles, especially CO, are important gas tracers of protoplanetary disks (PPDs). Freeze-out and sublimation processes determine their division between gas and solid phases, which affects both which disk regions can be traced by which volatiles, and the formation and composition of planets. Recently, multiple lines of evidence suggest that CO is substantially depleted from the gas in the outer regions of PPDs, i.e. more depleted than would be expected from a simple balance between freeze-out and sublimation. In this paper, we show that the gas dynamics in the outer PPDs facilitates volatile depletion through turbulent diffusion. Using a simple 1D model that incorporates dust settling, turbulent diffusion of dust and volatiles, as well as volatile freeze-out/sublimation processes, we find that as long as turbulence in the cold midplane is sufficiently weak to allow a majority of the small grains to settle, CO in the warm surface layer can diffuse into the midplane region and deplete by freeze-out. The level of depletion sensitively depends on the level of disk turbulence. Based on recent disk simulations that suggest a layered turbulence profile with very weak midplane turbulence and strong turbulence at disk surface, CO and other volatiles can be efficiently depleted by up to an order of magnitude over Myr timescales.
\label{sec:intro} Protoplanetary disks (PPDs) consist of gas and dust. Both components play a major role in planet formation through dynamical processes in the gaseous disk, as well as physical and chemical coupling between the gas and dust components. The dust can be probed via the disk spectral energy distribution and resolved dust continuum emission up to millimeter/centimeter grain sizes \citep{Andrews15}. Despite uncertainties in dust opacity, the dust mass can be derived from sub-millimeter continuum flux (e.g., \citealp{WilliamsCieza11}). There is no corresponding direct constraint on the gas because molecular hydrogen hardly radiates; instead, the gas mass is usually estimated by assuming a canonical gas-to-dust mass ratio of 100 from the interstellar medium (e.g., \citealp{Bohlin_etal78}), leading to large uncertainties. Recently, a number of works have attempted to measure the gas content of PPDs using CO and its isotopologues (e.g., \citealp{Bruderer2012,WilliamsBest14,Kama2016a,Kama2016,Eisner_etal16,Ansdell_etal16}). As a volatile species, CO freezes out onto dust grains in the cold midplane regions of the outer PPDs, while it remains in the gas phase in the warmer disk surface layer (e.g., \citealp{HenningSemenov13}). These studies, which incorporate CO freeze-out and different levels of disk chemistry, found that if one assumes a standard gas-to-dust ratio and a canonical $\rm CO/H_2$ ratio of $\sim10^{-4}$ (e.g., \citealp{Frerking_etal82,Ripple2013}), CO is frequently underabundant by a factor of $\gtrsim10$ in the warm disk surface layer. This result also holds if isotopologue-selective photodissociation is taken into account (\citealp{Miotello14,Schwarz16}). Therefore, either CO is intrinsically depleted, or the gas-to-dust mass ratio is significantly lower than the standard value. \begin{figure*}[!ht] \centering \includegraphics[width =0.8\textwidth]{PPDs.pdf} \caption{Schematic picture of turbulent-diffusion-mediated volatile (CO) depletion in PPDs. Freeze-out of CO onto dust grain surfaces at the low-temperature midplane allows surface CO to turbulently diffuse down to the midplane, where it further freezes out onto the grains. Dust grains settle to the midplane without mixing back to the disk surface because of weak midplane turbulence, leading to systematic CO depletion.} \label{fig:PPDs} \end{figure*} Theoretically, both scenarios are plausible. The gas-to-dust ratio can be reduced via a disk wind, where mass loss from the disk surface primarily removes gas instead of dust \citep{Gorti_etal15,Bai16}. In the meantime, through chemical processes, a significant fraction of carbon can be converted to complex organic molecules over the disk lifetime \citep{Bergin2014,Yu2016,Bergin_etal16}. The presence of CO depletion is supported at least in the case of TW Hya \citep{Favre_etal13,Du_etal15}, where a constraint on the disk gas mass is available from HD observations \citep{Bergin_etal13}. Volatile depletion has also been inferred in the case of water, whose freeze-out temperature is much higher. Based on {\it Spitzer} mid-infrared observations of H$_2$O lines \citep{Salyk08,CarrNajita08}, \citet{Meijerink09} showed that the water vapor abundance at the disk surface is sharply truncated beyond $\sim1$AU, inconsistent with pure chemical models (e.g., \citealp{Glassgold09}). They hypothesized that beyond $\sim1$AU, warm water vapor at the disk surface diffuses vertically towards the midplane by turbulence and freezes out onto the solids, accounting for the truncation.
This water vapor depletion mechanism by turbulent diffusion is related to the ``cold-finger effect'' of \citet{StevensonLunine88}, but working in the vertical direction. Using Monte-Carlo simulations of dust/vapor dynamics, \citet{RosJohansen13} and \citet{Krijt16} indeed found rapid depletion of water vapor in the surface layer of inner PPDs near the water ice line. They further showed that the depletion process strongly promotes grain growth, and hence planetesimal formation. In this paper, we apply the picture of turbulent-diffusion-mediated volatile depletion to CO. Note that this picture has also been invoked recently by \citet{Kama2016} to interpret carbon depletion based on ALMA observations of CO and [CI] lines from two PPDs. We focus on the outer regions of PPDs ($\gtrsim10-30$AU), which are where most of the CO mass resides, and are also where significant CO depletion has been observationally measured. Compared to the inner disk, the outer disk is characterized by much lower gas density and much longer dynamical timescales. Correspondingly, dust grains of the same size are more loosely coupled to the gas in the outer disk, and settle more strongly towards the midplane, which, as this paper shows, has a large impact on volatile depletion. In this work, we present a simple semi-analytical model for the evolution of CO abundances to quantify the efficiency of gas-phase CO depletion. It incorporates dust settling, turbulent diffusion, adsorption (freeze-out), and thermal desorption (sublimation) processes. We do not attempt to model the entire disk in full scale, but restrict ourselves to a simple one-dimensional (1D) model in the vertical dimension. Our goal is to demonstrate and clarify the relevant physics, which can be incorporated into more sophisticated models in the future. We highlight that the level of disk turbulence we adopt is motivated by recent gas dynamic simulations of the outer PPDs \citep{Simon2013,Bai2015} that take into account more realistic disk physics: the level of turbulence in the outer disk is layered, with strong turbulence in the warm disk surface layer and much weaker turbulence in the midplane region (see Section \ref{sec:model} for more details). We will show that this layered structure of turbulence facilitates the depletion of gas-phase CO. We describe the basic picture of turbulent-diffusion-mediated CO depletion, as well as our physical model, in Section \ref{sec:model}. Model results are presented in Section \ref{sec:result}. In Section \ref{sec:con}, we summarize and discuss the caveats and applications.
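To illustrate the mechanism quantified by the model of Section \ref{sec:model}, the following one-dimensional sketch (ours, not the paper's code; all numbers are order-of-magnitude placeholders, and dust settling, grain surface area, and desorption above the snow surface are collapsed into a single freeze-out timescale) diffuses the gas-phase CO concentration vertically through a layered turbulence profile and removes it below the atmospheric snow line:
\begin{verbatim}
import numpy as np

H = 1.0                                   # gas scale height (code units, Omega = 1)
z = np.linspace(0.0, 4.0 * H, 200)
dz = z[1] - z[0]
rho = np.exp(-0.5 * (z / H)**2)           # vertically isothermal gas density
z_snow = 1.0 * H                          # atmospheric CO snow line (illustrative)
t_freeze = 0.05                           # freeze-out time below z_snow [1/Omega]
# layered turbulence: weak in the midplane, strong in the surface layer
alpha = 1e-4 + (1e-2 - 1e-4) * 0.5 * (1.0 + np.tanh((z - 1.5 * H) / (0.3 * H)))
D = alpha * H**2                          # turbulent diffusivity (c_s = H Omega = 1)

c = np.ones_like(z)                       # initial gas-phase CO abundance (normalised)
dt = 0.2 * dz**2 / D.max()                # explicit-scheme stability limit

def step(c):
    # turbulent diffusion of the concentration c = n_CO / n_gas:
    #   d(rho c)/dt = d/dz [ rho D dc/dz ]
    flux = np.zeros(len(z) + 1)
    flux[1:-1] = 0.5 * (rho[1:] * D[1:] + rho[:-1] * D[:-1]) * np.diff(c) / dz
    c = c + dt * np.diff(flux) / (rho * dz)
    # crude freeze-out sink onto settled grains below the atmospheric snow line
    return np.where(z < z_snow, c * np.exp(-dt / t_freeze), c)

for _ in range(int(100.0 / dt)):          # evolve for ~100 orbital times
    c = step(c)

print("column-averaged gas-phase CO depletion factor:", np.sum(rho) / np.sum(rho * c))
\end{verbatim}
Because the frozen-out CO resides on settled grains and the weak midplane turbulence does not mix it back up, the column-averaged gas-phase abundance declines on the timescale over which turbulence resupplies CO from the surface layer -- the behaviour explored systematically in Section \ref{sec:result}.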
\label{sec:con} In this paper, we have presented a simple semi-analytical model which demonstrates that, as a result of dust settling and turbulent diffusion, CO in the warm surface layer of the outer regions of PPDs is subject to turbulent diffusion into the cold midplane and subsequent depletion. The most important condition for turbulent-diffusion-mediated CO depletion is that midplane turbulence must be sufficiently weak so that the bulk of the small grains that dominate the surface area can settle to within the atmospheric snow line. The process is facilitated by stronger turbulence in the disk surface layer. Both conditions are likely realizable in the outer regions of PPDs \citep{PerezBeckerChiang11,Simon2013,Bai2015}. Our results suggest that turbulent diffusion likely contributes to the observed carbon (especially CO) depletion in PPDs, particularly in the disk surface region (e.g., \citealp{Du_etal15}). Its contribution depends on the level of midplane turbulence and the grain size distribution, and can be a factor of a few to more than one order of magnitude. In reality, we expect that additional mechanisms also contribute to carbon depletion. Conversion of carbon to complex organic molecules likely yields a factor of a few of depletion over the disk lifetime \citep{Bergin2014,Yu2016}. Gas removal by a disk wind likely contributes another factor of two to a few to the reduced gas-to-dust mass ratio, given that the wind mass loss rate is comparable to the mass accretion rate \citep{Bai16}. Altogether, these processes are likely able to account for a wide range (two orders of magnitude) of the CO depletion factor, and/or the apparent gas-to-dust ratio inferred from observations. While the mechanism of CO depletion in the outer disk discussed here is similar to previous studies on the depletion of water vapor in the inner disk \citep{Meijerink09,RosJohansen13,Krijt16}, there are important differences. One may not directly apply our calculation results to the inner disk, and vice versa. We first note that the level of turbulence in the inner region of PPDs is likely very weak yet highly uncertain. On the one hand, the MRI is largely suppressed (e.g., \citealp{BaiStone13}), yet some turbulence is needed to keep small grains suspended to match the observed disk spectral energy distributions (SEDs), which may be attributed to certain poorly understood hydrodynamic instabilities (e.g., \citealp{Turner_etal14}). We choose to focus on the outer regions of PPDs, where a layered turbulence structure is likely a natural outcome of the MRI operating in the surface FUV layer \citep{Perez-BeckerChiang11}. Another important difference between the inner and outer regions of PPDs is the timescale. The inner disk is characterized by fast collision and grain growth timescales, and sub-micron grains are mainly the product of destructive collisions of large grains. Recently, \citet{Krijt2016} showed that because of their short collisional coagulation timescale, small grains can be effectively trapped in the midplane region without diffusing to the disk upper layers. This effect can substantially enhance volatile depletion, and in the case of water, such depletion further enhances grain growth \citep{RosJohansen13,Krijt16}. We have ignored the growth and collisional evolution of grains in our calculations. While this is less self-consistent, we note that in the outer disk, grain growth is much slower and is found to be drift-limited rather than fragmentation-limited (e.g., \citealp{Birnstiel12}). 
Therefore, the smallest grain population in the outer disk is mostly primordial rather than produced by collisional fragmentation. As we have mentioned in Section \ref{sec:dust}, it takes $\sim1$Myr for $\sim0.1\mu$m grains to settle to within $\sim2H$ at $\sim50$AU. We also estimate a similar $\sim$Myr timescale for collisions between sub-micron grains at that height (which is more relevant for CO depletion than the midplane). Therefore, because of the long timescales in the outer disk, we expect grain growth to play only a minor role in the outer-disk physics discussed in this work. As an initial effort, we focus on the physics of the mechanism using a simple 1D model, which captures the essence of the problem. One important limitation is that we have ignored the radial dimension, in which grains undergo radial drift and the disk itself evolves over Myr timescales \citep{TakeuchiLin2002}. We note that significant improvement in our knowledge of disk evolution is needed before more reliable dust transport models can be made, especially given the prevalence of disk substructures that has been recognized in recent years (e.g., \citealp{ALMA15,Nomura_etal16,Andrews_etal16,Zhang_etal16}). Regardless of the details of radial dust transport, we expect our main conclusions to be robust as long as the vertical profile of turbulence does not vary significantly with radius in the outer disk. Overall, our work has demonstrated the importance of incorporating more realistic disk dynamics (i.e., turbulent diffusion) into models of volatile evolution (e.g., \citealp{CieslaCuzzi06}). The outcome would be important for determining, for instance, the location of the volatile condensation fronts/snow lines \citep{Oberg2011,Qi_etal13,Piso2016}, and volatile delivery to planets, which would affect the planets' bulk and atmospheric composition \citep{Madhusudhan2011}. More generally, volatiles play an important role in the overall disk chemistry \citep{HenningSemenov13}. As initially pursued in \citet{SemenovWiebe11}, we expect future studies of PPD chemical evolution to pay more attention to, and eventually benefit from, incorporating more realistic PPD gas dynamics. We thank Chunhua Qi, Fred Ciesla, Til Birnstiel and Ted Bergin for helpful discussions, and Mihkel Kama, Klaus Pontoppidan, Eugene Chiang and an anonymous referee for useful comments that greatly improved the paper. XNB acknowledges support from the Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics. KI\"O acknowledges funding through a Packard Fellowship for Science and Engineering from the David and Lucile Packard Foundation.
16
9
1609.00796
1609
1609.00275_arXiv.txt
{ Evaporating rocky exoplanets, such as KIC~12557548b, eject large amounts of dust grains, which can trail the planet in a comet-like tail. When such objects occult their host star, the resulting transit signal contains information about the dust in the tail. } { We aim to use the detailed shape of the \textit{Kepler} light curve of KIC~12557548b to constrain the size and composition of the dust grains that make up the tail, as well as the mass loss rate of the planet. } { Using a self-consistent numerical model of the dust dynamics and sublimation, we calculate the shape of the tail by following dust grains from their ejection from the planet to their destruction due to sublimation. From this dust cloud shape, we generate synthetic light curves (incorporating the effects of extinction and angle-dependent scattering), which are then compared with the phase-folded \textit{Kepler} light curve. We explore the free-parameter space thoroughly using a Markov chain Monte Carlo method. } { Our physics-based model is capable of reproducing the observed light curve in detail. Good fits are found for initial grain sizes between 0.2 and 5.6~$\upmu$m and dust mass loss rates of 0.6 to 15.6~M$\sub{\oplus}$~Gyr$^{-1}$ ($ 2 \sigma $ ranges). We find that only certain combinations of material parameters yield the correct tail length. These constraints are consistent with dust made of corundum (Al$_2$O$_3$), but do not agree with a range of carbonaceous, silicate, or iron compositions. } { Using a detailed, physically motivated model, it is possible to constrain the composition of the dust in the tails of evaporating rocky exoplanets. This provides a unique opportunity to probe the interior composition of the smallest known exoplanets. }
Determining the chemical composition of exoplanets is an important step in advancing our understanding of the Earth's galactic neighbourhood and provides valuable benchmarks for theories of planet formation and evolution. For small (i.e., Earth-sized and smaller) exoplanets, most efforts so far have been directed at determining a planet's mean density from independent measurements of its size and mass, which gives an indication of the bulk composition \citep{2007ApJ...659.1661F,2013Natur.503..381H}. This method, however, is restricted by the observational lower limits for which planetary radii and masses can reliably be determined \citep[although recent progress is pushing the limits ever further down;][]{2015Natur.522..321J}. Another, more fundamental, problem is that exoplanets with different (combinations of) chemical compositions can have the same mean density, making it impossible to distinguish between these compositions using just the planet's size and mass \citep{2007ApJ...669.1279S}. In particular, whether small carbon-based exoplanets exist is an open question that cannot be resolved with bulk density measurements alone \citep[see Fig.~9 of][]{2007ApJ...669.1279S,2012ApJ...759L..40M}. Another method of investigating the chemical composition of exoplanetary material is the study of white-dwarf atmospheres that are polluted by the accretion of tidally disrupted asteroids or minor planets (see \citealt{2014AREPS..42...45J} for a review and \citealt{2015Natur.526..546V}; \citealt{2016ApJ...816L..22X} for some recent results). This method allows the bulk composition of the accreted bodies to be measured with unprecedented precision. However, it can only be applied to white-dwarf systems, and the exact relation between the measured compositions and those of the exoplanets in the original, main-sequence-stage planetary systems is not yet clear. \begin{figure*}[!t] \includegraphics[width=\linewidth]{intro_sketch} \caption{ The phase-folded long-cadence \textit{Kepler} light curve of KIC~1255b (\textbf{bottom}), together with schematic views of the system at different orbital phases (\textbf{top}), illustrating how an asymmetric dust cloud can explain the peculiar transit profile. Arrows indicate which sketch corresponds to which orbital phase. For details on the observational data, see Sect.~\ref{s:meth_obs} of this work and Sect.~2 of \citet{2014A&A...561A...3V}. The error bars on flux include the spread caused by the variability in transit depth, making the in-transit error bars greater than the out-of-transit ones (which are mostly smaller than the size of the symbols). In the sketches, the star, the orbit of the planet, and the length of the dust tail are all drawn to scale; the tick marks on the axes are spaced one stellar radius apart. The vertical thickness of the dust cloud and its colour gradient (which illustrates the gradually decreasing dust density) are chosen for illustrative purposes. } \label{fig:intro_sketch} \end{figure*} The discovery of transiting evaporating rocky exoplanets \citep{2012ApJ...752....1R} has opened up a possible new channel for determining the chemical compositions of small exoplanets that is complementary to the methods mentioned above. Through the evaporation of their surface, these objects present material from their interior to the outside, where it can be examined as it blocks and scatters star light. 
We recently showed how the composition of the outflowing material can be determined from the shape of the object's transit light curve using semi-analytical expressions \citep[][hereafter \citetalias{2014A&A...572A..76V}]{2014A&A...572A..76V}. In the present paper, we revisit this problem using a numerical model, which allows us to let go of several of the simplifying assumptions made in \citetalias{2014A&A...572A..76V} and to use more directly all the information contained in the light curve. \subsection{Evaporating rocky exoplanets} To date, there are three known (candidates of) transiting evaporating rocky exoplanets: KIC~12557548b \citep[hereafter KIC~1255b;][]{2012ApJ...752....1R}, \mbox{KOI-2700b} \citep{2014ApJ...784...40R}, and \mbox{K2-22b} \citep{2015ApJ...812..112S}, all three discovered using the \textit{Kepler} telescope \citep{2010Sci...327..977B}.\footnote{% A search for more such objects amongst short-period \textit{Kepler} exoplanet candidates did not find any additional ones \citep{2014AN....335.1018G}.} They all orbit K- and \mbox{M-type} main-sequence stars in orbital periods of less than a day and their light curves are marked by asymmetric transit profiles and variable transit depths. Both light-curve properties can be explained by a scenario in which the extinction of star light is caused by an asymmetric cloud of dust grains, whose collective cross-section changes from transit to transit. This scenario was first put forward by \citet{2012ApJ...752....1R} for the prototype KIC~1255b; we briefly summarise it here. The dust grains that make up the cloud originate in a small evaporating planet. Once they have left the planet, radiation pressure from the host star pushes them into a comet-like tail trailing the planet. With increasing distance from the planet, the dust grains speed up with respect to the planet. They also gradually sublimate due to the intense stellar irradiation, decreasing their size. Both effects cause the angular density of extinction cross-section to decrease further into the tail. The resulting asymmetric shape of the dust cloud can explain the sharp ingress and gradual egress of the observed transit light curve (see Fig.~\ref{fig:intro_sketch}).\footnote{% \label{fn:k2_22b} The light curve of \mbox{K2-22b} has a markedly different shape, which can be explained by a streamer of dust grains leading the planet (instead of trailing it). In this object, whose host star is less luminous, the initial launch velocities of the dust grains (rather than radiation pressure) may dominate the dynamics \citep{2015ApJ...812..112S}. } In addition, scattering of star light by dust grains results in a brightening just before the transit, when the bulk of the dust cloud is not in front of (the brightest part of) the stellar disk, but close enough to yield small scattering angles. The asymmetry of the dust cloud means that this effect is much stronger around ingress than egress. The dust cloud scenario has been validated by the colour dependence of the transit depth \citep{2015ApJ...800L..21B,2015ApJ...812..112S,2016arXiv160507603S}, while many false positive scenarios for this type of event have been ruled out based on radial velocity measurements, high angular resolution imaging, and photometry \citep{2014ApJ...786..100C}. Morphological modelling of the KIC~1255b light curve has allowed some properties of its dust cloud to be determined \citep{2012A&A...545L...5B,2013A&A...557A..72B,2014A&A...561A...3V}. 
In particular, both the wavelength dependence of the transit depth and the morphological dust cloud models indicate that the dust grains have radii in the range 0.1 to 1.0~$\upmu$m. To explain the variation in transit depth, the dust cloud scenario invokes erratic variations in the planet's dust production rate. By making some assumptions about the dust grains, it is also possible to infer the average dust mass loss rate of the evaporating planet (i.e., excluding the mass lost in gas) from the light curve. For KIC~1255b and \mbox{K2-22b}, the dust mass loss rates are estimated to be of the order of 0.1 to 1~M$\sub{\oplus}$~Gyr$^{-1}$ (\citealt{2012ApJ...752....1R,2013MNRAS.433.2294P,2013ApJ...776L...6K}; \citetalias{2014A&A...572A..76V}; \citealt{2015ApJ...812..112S}), while for \mbox{KOI-2700b} it may be one to two orders of magnitude lower (\citealt{2014ApJ...784...40R}; \citetalias{2014A&A...572A..76V}). The planet's mass loss is thought to be fuelled by the total bolometric flux from the host star \citep{2012ApJ...752....1R}.\footnote{% \mbox{X-ray}-and-ultraviolet-driven evaporation was suggested as an alternative, based on a relation between transit depth and stellar rotational phase \citep{2013ApJ...776L...6K}. A more straightforward explanation for this relation, however, is the occultation of starspots by the transiting dust cloud \citep{2015MNRAS.449.1408C}. } Stellar radiation heats the planetary surface to a temperature exceeding 2000~K, which causes the solid surface to vaporise, creating a metal-rich atmosphere \citep[as has been modelled in detail for super-Earths;][]{2009ApJ...703L.113S,2011ApJ...742L..19M,2012ApJ...755...41S}. This atmosphere is hot and expands into the open space around the planet, driving a ``Parker-type'' thermal wind \citep{2012ApJ...752....1R}. As the gas expands and cools, its refractory constituents can condense into dust grains.\footnote{% Another mechanism that could be responsible for loading the planet's atmosphere with dust is explosive volcanism \citep{2012ApJ...752....1R}.} Small dust grains are entrained in the gas flow until the gas thins out, from which point the dust dynamics are controlled by stellar gravity and radiation pressure. \citet{2013MNRAS.433.2294P} modelled the planetary outflow in detail, finding that the mass loss rate is a strong function of the mass of the evaporating body. According to their model, the mass loss rate of KIC~1255b indicates that the planet cannot be more massive than about 0.02~M$\sub{\oplus}$ (i.e., less than twice the mass of the Moon) and will disintegrate completely within about 40 to 400~Myr. The planetary radius corresponding to the mass limit is consistent with the upper limits on the size of the planet -- derived from the non-detection of transits in some parts of the light curve \citep{2012A&A...545L...5B} and secondary eclipses in the entire light curve \citep{2014A&A...561A...3V} -- and, if correct, would make KIC~1255b one of the smallest exoplanets known. \subsection{Dusty tail composition} Regardless of how exactly the evaporating planet produces and ejects dust, the composition of the dust in the tail will reflect that of the planet. The precise relation between the two compositions may be complicated by selection effects such as preferential condensation of certain dust species in the atmosphere \citep[e.g., Sect.~3.2.2 of][]{2012ApJ...755...41S} and possibly the fractional vaporisation of a magma ocean \citep[Sect.~6.1 of][]{2011Icar..213....1L}. 
Nevertheless, identifying the composition of the dust can lead to insights into the composition of the surface of the planet and possibly its interior (if prior evaporation has already removed the original surface, exposing deeper layers). Such insights are invaluable for theories of planet formation and evolution. Building upon the work of \citet{2002Icar..159..529K} and \citet{2012ApJ...752....1R,2014ApJ...784...40R}, we recently demonstrated how the length of a dusty tail trailing an evaporating exoplanet can be used to constrain the composition of the dust in this tail \citepalias{2014A&A...572A..76V}. In a nutshell, the length of the tail is determined by the interplay of radiation-pressure-induced azimuthal drift of dust grains and the decrease in size of these grains due to sublimation. Because the sublimation rate of the dust is strongly dependent on its compositions, the tail length is a proxy for grain composition. By comparing tail-length predictions for potential dust species with the observed tail length (derived from the duration of the transit egress), it is possible to put constraints on the composition of the dust in the tail. \citetalias{2014A&A...572A..76V} presents a semi-analytical description of dusty tails, in which the shape of the tail is described using just two parameters: the tail's characteristic length and its initial angular density. The values of these two parameters are taken from the morphological tail models, which derive them from light curve fitting \citep{2012A&A...545L...5B,2013A&A...557A..72B,2014A&A...561A...3V,2014ApJ...784...40R}. However, describing the tail morphology in just two parameters ignores many details of the tail's shape, which may be used to constrain the dust composition from the detailed shape of the transit light curve. Furthermore, the derivation of a semi-analytical description of the dust tail in \citetalias{2014A&A...572A..76V} requires many assumptions, which may undermine the applicability of the resulting equations. To take the next step in modelling the dusty tails of evaporating planets, it is desirable to employ a physics-based (in contrast to morphological) model of the tail that self-consistently takes into account the interplay of grain-size-dependent radiation-pressure dynamics and temperature-dependent grain-size evolution. In this paper, we develop such a model. In brief, the model consists of a particle-dynamics-and-sublimation simulation, followed by a transit-profile generation using a light-scattering code. Similar modelling work has been done previously to predict the light curves due to possible extrasolar comets in the $\upbeta$~Pictoris system \citep{1999A&A...343..916L,1999A&AS..140...15L}, with the major differences that these comets have orbital periods of years rather than hours and sublimation does not have to be taken into account. For the dusty tails of evaporating planets, \citet[][their Sect.~4.6]{2012ApJ...752....1R} and \citet[][their Sect.~6.2]{2015ApJ...812..112S} did particle-dynamics simulations, but using a constant lifetime of the dust grains against sublimation and without generating light curves. In order to derive constraints on the dust composition from broadband transit profiles, it is essential to treat dust sublimation in a self-consistent, time-dependent way. 
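The interplay just described -- radiation pressure making a grain drift azimuthally behind the planet while sublimation shrinks it -- can be sketched with a few lines of code (ours, not the model of Sect.~\ref{s:meth}; the $\beta(s)$ scaling, the constant shrinking rate, and all numerical values are illustrative placeholders rather than the size- and composition-dependent optical efficiencies and temperature-dependent sublimation used in the actual model):
\begin{verbatim}
import numpy as np

GM, a_p = 1.0, 1.0   # code units: stellar gravitational parameter and planet orbit

def beta(s):
    """Ratio of radiation pressure to gravity for a grain of radius s [m].
    Crude placeholder: beta ~ 1/s, capped for very small grains."""
    return min(0.2, 1.0e-7 / s)

def trailing_angle(s0, t_sub, dt=1.0e-3, n_max=100000):
    """Follow one grain released at the planet's position with the planet's
    circular velocity, under stellar gravity reduced by radiation pressure,
    while its radius shrinks linearly to zero over the sublimation time t_sub.
    Returns the orbital phase by which the grain trails the planet when it
    finally sublimates away."""
    omega_p = np.sqrt(GM / a_p**3)
    r = np.array([a_p, 0.0])
    v = np.array([0.0, a_p * omega_p])
    s, phi_planet, n = s0, 0.0, 0
    while s > 0.0 and n < n_max:
        acc = -GM * (1.0 - beta(s)) * r / np.linalg.norm(r)**3
        v = v + acc * dt              # semi-implicit (symplectic) Euler step
        r = r + v * dt
        s -= s0 / t_sub * dt
        phi_planet += omega_p * dt
        n += 1
    return (phi_planet - np.arctan2(r[1], r[0])) % (2.0 * np.pi)

# e.g. a 1-micron grain surviving for half an orbital period trails the planet by:
P_orb = 2.0 * np.pi * np.sqrt(a_p**3 / GM)
print(trailing_angle(s0=1.0e-6, t_sub=0.5 * P_orb))
\end{verbatim}
Because a grain with a larger $\beta$ or a longer survival time drifts further before it disappears, the tail length encodes the combination of optical and sublimation properties that Sect.~\ref{s:res} turns into composition constraints.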
\begin{table} \centering \small \caption{Host star and system parameters of KIC~1255b} \label{tbl:sys_pars} { \renewcommand{\arraystretch}{1.16} \begin{tabular}{lccc} \hline \hline Parameter & Symbol & Units & Value \\ \hline Stellar effective temperature & $ T\sub{eff,\star} $ & K & $ 4550\substack{+140 \\ -131} $ \\ Stellar mass & $ M\sub{\star} $ & M$ \sub{\odot} $ & $ 0.666\substack{+0.067 \\ -0.059} $ \\ Stellar radius & $ R\sub{\star} $ & R$ \sub{\odot} $ & $ 0.660\substack{+0.060 \\ -0.059} $ \\ Stellar luminosity & $ L\sub{\star} $ & L$ \sub{\odot} $ & $ 0.168\substack{+0.037 \\ -0.036} $ \\ \hline Planet's orbital period & $ P\sub{p} $ & days & $ 0.6535538(1) $ \\ Planet's semi-major axis & $ a\sub{p} $ & AU & $ 0.0129(4) $ \\ \hline \end{tabular} \tablefoot{ The stellar parameters are taken from \citet{2014ApJS..211....2H}; the planet's orbital period is from \citet{2014A&A...561A...3V}. Numbers in brackets indicate the uncertainty on the last digit. } } \end{table} We apply our model to the prototypical evaporating rocky exoplanet KIC~1255b, which has the best quality data of the three candidates. In principle, the model can be applied to the other two candidate evaporating rocky exoplanets after some additional work. Specifically, for \mbox{KOI-2700b} it would be necessary to obtain a better constraint on the dust survival time from possible correlations between subsequent transits or lack thereof. Modelling \mbox{K2-22b} would require the initial launch velocity of the grains to be explored in more detail (see footnote~\ref{fn:k2_22b}). The basic parameters of the KIC~1255b system are listed in Table~\ref{tbl:sys_pars}. These are the values that we use in calculations throughout the rest of this paper. For the stellar parameters, there are different estimates in the literature, casting doubt on whether the star has evolved off the main sequence or not. In Appendix~\ref{app:star_param}, we investigate the different claims and conclude that the star is most likely still on the main sequence. The rest of this paper is organised as follows. Section~\ref{s:meth} provides a detailed description of the dust cloud model and of how it is compared to the observations. Section~\ref{s:res} gives the resulting constraints on the free parameters of the model and shows what they imply for the dust composition. In Sect.~\ref{s:disc}, we discuss our findings in the light of previous work and examine one of our modelling assumptions. Finally, we summarise our work and draw conclusions in Sect.~\ref{s:conclusions}.
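As a quick consistency check of Table~\ref{tbl:sys_pars}, the planet's semi-major axis follows from the stellar mass and orbital period listed there via Kepler's third law; a minimal sketch in Python (constants rounded, so the last digit is not significant):
\begin{verbatim}
import math

G     = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30             # kg
AU    = 1.496e11             # m

M_star = 0.666 * M_sun             # stellar mass from Table 1
P      = 0.6535538 * 86400.0       # orbital period in seconds

a = (G * M_star * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print("a_p = %.4f AU" % (a / AU))  # ~0.0129 AU, as listed in Table 1
\end{verbatim}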
\label{s:conclusions} We have developed a numerical model to simulate the dusty tails of evaporating planets and their transit light curves, with the primary goal of putting constraints on the composition of the dust in such tails. The model solves the dynamics and sublimation of dust particles in the orbital plane of the planet (2D) and then generates a synthetic light curve of the dust cloud transiting the star (after rotating the dust cloud to take into account its inclined orbit; hence, 3D), which can be compared with an observed light curve. We applied this model to the phase-folded \textit{Kepler} light curve of the prototypical evaporating planet KIC~1255b, using an MCMC optimisation technique to constrain the free parameters of the model, including those describing the dust composition. Although the precise best-fit values and uncertainties we find for the model parameters may depend on modelling details (e.g., simulating only a single initial grain size), our analysis shows that by using a physically motivated model it is possible to put meaningful constraints on the composition of the dust in the tail of an evaporating planet based on the shape of its broadband transit light curve. Since the dust composition is related to that of the planet, such constraints can provide helpful input for theories of planet formation and evolution. Regarding KIC~1255b, we draw the following conclusions. \begin{enumerate} \item \textit{Dust composition.} We find that only certain combinations of material properties (specifically, of sublimation parameters and the imaginary part of the complex refractive index) can reproduce the observed transit profile. To obtain the observed tail length while avoiding the correlations between subsequent transits that arise when grains survive longer than an orbital period of the planet, the dust grains need to have the right sublimation rate and temperature. The constraints we find allow us to rule out or disfavour many of the pure materials we tested for the dust composition (see Fig.~\ref{fig:heatmaps}): iron, silicon monoxide, fayalite, enstatite, forsterite, quartz, silicon carbide, and graphite. The only material we found to match the constraints is corundum (i.e., crystalline aluminium oxide). Grains made of combinations of the tested materials, however, cannot be ruled out. The present results agree with those found earlier using a semi-analytical approach \citepalias{2014A&A...572A..76V}, which gives credence to this simpler method. \item \textit{Grain sizes.} We simulate the dust cloud assuming the dust grains all have the same initial size, but let this size evolve as a result of sublimation. Good fits to the observed light curve are produced by initial grain sizes between $ 0.2 $ and $ 5.6 $~$\upmu$m ($ 2 \sigma $ range). The shape of the pre-ingress brightening favours initial grain sizes at the lower edge of this range (0.2 to 0.3~$\upmu$m). \item \textit{Mass loss rate.} We find that the planet loses 0.6 to 15.6~M$\sub{\oplus}$~Gyr$^{-1}$ in dust alone ($ 2 \sigma $ range; a rough order-of-magnitude cross-check is sketched after this list). \item \textit{Tail morphology.} It is not necessary to invoke an object consisting of multiple components (e.g., a coma and a tail) to explain the detailed shape of the averaged transit light curve. The shape emerges naturally from the distribution of the dust extinction cross-section in the tail (see Fig.~\ref{fig:eom_example}). We also find evidence that the head of the dust cloud may be optically thick in the radial direction (see Fig.~\ref{fig:opt_depth_chk}). \end{enumerate}
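The order of magnitude of the mass loss rate quoted above can be recovered with a few lines of arithmetic. The sketch below assumes an optically thin cloud of single-size grains that blocks a fraction of the stellar disk equal to the transit depth and is replenished roughly once per orbit; the transit depth, grain radius and bulk density used here are illustrative round numbers, not the fitted model parameters.
\begin{verbatim}
import math

R_sun   = 6.96e8                  # m
M_earth = 5.97e24                 # kg
Gyr     = 3.156e16                # s

depth   = 0.005                   # ~0.5 % transit depth (typical deep transit)
R_star  = 0.66 * R_sun            # stellar radius (Table 1)
P_orb   = 0.6535538 * 86400.0     # orbital period [s]
s_grain = 0.5e-6                  # assumed grain radius [m]
rho     = 3000.0                  # assumed grain bulk density [kg m^-3]

# Mass per unit geometric cross-section of a single grain:
# (4/3) pi s^3 rho / (pi s^2) = (4/3) rho s
sigma_mass = (4.0 / 3.0) * rho * s_grain             # kg m^-2
M_cloud = sigma_mass * depth * math.pi * R_star**2   # dust mass in the cloud

# If grains survive of order one orbit, the cloud is replaced every P_orb:
Mdot = M_cloud / P_orb
print("dust mass in cloud : %.1e kg" % M_cloud)
print("dust mass loss rate: %.2f M_earth / Gyr" % (Mdot * Gyr / M_earth))
# -> a few tenths of an Earth mass per Gyr, the same order as the range
#    found from the full model.
\end{verbatim}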
16
9
1609.00275
1609
1609.06138.txt
The European Space Agency has invested heavily in two cornerstone missions: {\it Herschel} and {\it Planck}. The legacy data from these missions provide us with an unprecedented opportunity to study cosmic dust in galaxies so that we can answer fundamental questions about, for example, the origin of the chemical elements, physical processes in the interstellar medium (ISM), the effect of dust on stellar radiation, its relation to star formation and how this relates to the cosmic far infrared background. In this paper we describe the DustPedia project, which is enabling us to develop tools and computer models that will help us relate observed cosmic dust emission to its physical properties (chemical composition, size distribution, temperature), to its origins (evolved stars, supernovae, growth in the ISM) and to the processes that destroy it (high energy collisions and shock heated gas). To carry out this research we will combine the {\it Herschel/Planck} data with data from other sources, providing observations at numerous wavelengths (up to 41) across the spectral energy distribution, thus creating the DustPedia database. To maximise our spatial resolution and sensitivity to cosmic dust we limit our analysis to 4231 local galaxies ($v<3000$ km s$^{-1}$) selected via their near infrared luminosity (stellar mass). To help us interpret the data we have developed a new physical model for dust (THEMIS), a new Bayesian method of fitting and interpreting spectral energy distributions (HerBIE) and a state-of-the-art Monte Carlo photon tracing radiative transfer model (SKIRT). In this, the first of the DustPedia papers, we describe the project objectives and data sets used, and provide an insight into the new scientific methods we plan to implement.
In this paper we describe a major collaborative project to enable a much better understanding of cosmic dust and particularly how it influences both physical processes in the interstellar medium and the observations we make. The DustPedia\footnote{DustPedia is a collaborative focused research project supported by European Union Grant 606847 awarded under the FP7 call. Further information can be found at www.dustpedia.com.} project aims to bring together expertise in various aspects of the study of cosmic dust to develop a coherent interpretation of recent state-of-the-art observations, particularly those that are now available after the successful {\it Herschel Space Telescope} mission (Pilbratt et al. 2010). We have constructed from the {\it Herschel Science Archive} (HSA) a sample of nearby galaxies (within $\sim$3000 km s$^{-1}$ or about 40 Mpc), to study at far-infrared wavelengths. These observations will also be combined with data that is available at many other wavelengths from numerous other databases, thus extending our studies from the ultraviolet to the radio. The far-infrared emission we detect using {\it Herschel} arises mainly from cosmic dust in the interstellar medium that is heated by galactic stars, hence the name of our project - DustPedia. Cosmic dust forms by nucleation and growth from the vapour phase in the cool atmospheres of low mass stars as they come to the end of their lives and probably also in the gas ejected from supernovae as more massive stars expire. Once deposited into the interstellar medium the dust grains are subject to various physical processes that allow them to grow via the accretion of atoms and molecules and disintegrate in shock heated gas or via high-energy photon or cosmic ray processing. The dust is composed of a mixture of carbonaceous and amorphous silicate grains with a size distribution governed by the growth and destruction mechanisms (sizes of order 0.01-1.0$\mu$m). A comprehensive discussion of dust in the inter-stellar medium can be found in Whittet (2002) and Tielens (2005). A large part of our project is to study the dust formation process, dust evolution and destruction, to eventually construct a model able to explain the diversity of observations we make. Our observations sample a wide range of physical conditions (density, temperature and composition) within the inter-stellar medium of galaxies and as such our models need to be versatile. There are three primary physical mechanisms that enable us to detect cosmic dust in galaxies. Firstly, dust absorbs and scatters radiation from the stars, which not only causes the extinction of the light, but also causes the stellar spectrum to become "redder". Secondly, the alignment of dust grains in the Galactic magnetic field leads to the polarisation of starlight. Finally, because the absorbed stellar light heats the dust it subsequently radiates some of its energy away. Dust temperatures range from about 10-100K and so the energy radiated from dust is emitted predominantly in the mid-infrared and sub-mm part of the electromagnetic spectrum (wavelengths of about 10$\mu$m to 1mm). Imprinted on this spectrum are signatures of dust composition, structure and chemistry, which provide input into a realistic physical dust model. In addition the combination of the observed absorption and scattering with that of the measured emission, via a radiative transfer code, provides other important measurements of the physical properties of the dust grains (for example, dust emissivity). 
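The quoted wavelength range follows directly from the quoted temperatures: Wien's displacement law gives the approximate peak of the thermal emission. A minimal sketch (treating the grains as blackbodies; the emissivity of real grains shifts the peak somewhat):
\begin{verbatim}
# Peak wavelength of thermal emission for dust at 10-100 K, using
# Wien's displacement law as a blackbody approximation.
b_wien = 2.898e-3                 # Wien constant, m K
for T in (10.0, 20.0, 50.0, 100.0):
    lam_peak = b_wien / T         # metres
    print("T = %5.1f K -> emission peaks near %6.1f micron"
          % (T, lam_peak * 1e6))
# 100 K dust peaks near ~30 micron (mid-infrared), 10 K dust near
# ~300 micron (sub-mm), matching the ~10 micron to 1 mm range above.
\end{verbatim}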
A comprehensive review of how dust observations can be related to the properties of inter-stellar dust grains can be found in Draine (2003). Dust extinction and polarisation are effects imprinted by dust on the radiation from stars, while the only direct measure of the dust itself comes from the radiation it emits. The detection and measurement of this radiation had to await the arrival of space telescopes, because the majority of the cosmic far-infrared and sub-mm radiation is efficiently absorbed by molecules in the Earth's atmosphere. The first far infrared space telescope ({\it IRAS}, Neugebauer et al. 1984) revolutionised our ideas about the physical properties of cosmic dust, just how much dust there was and how important the dust is in governing physical processes in the interstellar medium (Rice et al. 1988 and references therein). For a typical galaxy like the Milky Way, from one third to a half of the radiation produced by its stars is subsequently re-processed through cosmic dust (Popescu and Tuffs 2002, Davies et al. 2012, Davies et al. 2013). We now know of other, more extreme galaxies, in which 99\% of the stellar radiation is reprocessed in this way (Sanders and Mirabel 1996). Subsequent pre-{\it Herschel} space missions ({\it ISO}, Kessler et al. 1996, {\it Spitzer}, Werner et al., 2004) have greatly extended our understanding of cosmic dust in galaxies, providing detailed information on the physical properties of the dust and how it is spatially distributed (see, for example, Alton et al. 1998, Munoz-Mateos et al. 2011, Galametz et al. 2011). In addition, a wealth of information has been gleaned about dust in our own galaxy by telescopes primarily designed to observe the cosmic microwave background, i.e. {\it COBE}, {\it WMAP}, and {\it Planck} (Sodroski et al. 1997, Bennett et al. 2003, Green et al. 2015). The {\it Herschel Space Observatory} (Pilbratt et al. 2010) has hugely advanced our understanding of cosmic dust, because with a mirror diameter of 3.5 m its collecting area is 5-6 times larger than any of the previous far-infrared telescopes, giving both improved sensitivity and spatial resolution. One of the major discoveries from previous far-infrared missions was that the cosmic dust was somewhat colder than expected (Alton et al. 1998, Galametz et al. 2011, Galliano et al. 2003, 2005), so the {\it Herschel} instruments were also designed to look at a previously unexplored part of the electromagnetic spectrum between the far-infrared and sub-mm (250-500$\mu$m). This was in addition to the spectral regions previously explored with smaller telescopes (70-160$\mu$m). A major part of the DustPedia project is to exploit the unique capabilities of the data in the {\it Herschel} legacy archive and use them, not just to constrain the physical dust model, but to explore broader science issues as well (see section 6). Recent mid-infrared to sub-mm observations of cosmic dust ({\it WISE}, Wright et al. 2010, {\it Spitzer}, Werner et al., 2004, {\it Herschel}, Pilbratt et al. 2010) and their interpretation can be considered important for four primary reasons. \begin{enumerate} \item As a repository of metals, the dust content of a galaxy at face value is a measure of how far along the evolutionary path a galaxy has progressed (Edmunds and Eales 1998, Dunne et al. 2003, Davies et al. 2014). \item Cosmic dust plays an important role in many of the physical processes that regulate the evolution of galaxies. 
For example, it provides opacity so that giant clouds of gas collapsing under gravity can heat up to temperatures sufficient for stars to form and nucleosynthesis to start. It is on the surface of dust grains that molecular hydrogen, which is the crucial gaseous ingredient for star formation, forms. \item Dust traces other physical processes and galaxy constituents that are not so easily measured. For example, far infrared emission from galaxies is closely related to the rate at which stars form (Bell 2003, Calzetti et al. 2007, Davies et al. 2014) and the relatively easily measured mass of dust relates closely to the difficult-to-measure mass of molecular hydrogen (Galametz et al. 2011, Magrini et al. 2011, Eales et al. 2012, Remy-Ruyer et al. 2013). \item Dust can greatly affect what you measure at other wavelengths. The ultra-violet emission of hot young stars, for example, is greatly attenuated by dust and may lie hidden, and the reddening effect can mislead us in our determination of the ages of stellar populations. \end{enumerate} On cosmological scales the diffuse far infrared background can be used to infer the far infrared luminosity and dust temperature of distant galaxies and as a measure of the star formation history of the Universe (Puget et al. 1996). However, the interpretation of this background depends very much on an accurate local measurement of the far infrared luminosity density and temperature. We will use the DustPedia sample to measure the local far infrared luminosity density in all available bands and then use this, via a suitable calibration (Davies et al. 2014), to measure the local star formation rate density. In addition, dust extinction through the Universe may noticeably influence our observations of the most distant objects (Menard et al. 2010). We will be able to provide the most up-to-date analysis of the dust extent and column density profile of galaxies and so use this to revisit the question of the line of sight extinction and reddening to distant objects. \subsection{The Importance of Studying Nearby Galaxies} Many observations of galaxies over large look-back times correspond very well with cosmological models of galaxy and larger scale structure formation. This has led to the widespread belief that the current cosmological model is broadly correct. Although there is good agreement over large spatial scales, there are some challenging disagreements between theory and observation when one looks over smaller scales, and particularly at the properties of nearby galaxies. The distributions of galaxy mass and size, their locations within larger scale structures in the Universe and their star formation histories as a function of galactic mass are all examples of disparity with the currently favoured model. In particular, Peebles and Nusser (2010) stress the importance of nearby galaxies if we want to understand the detailed processes of galaxy evolution and hence develop a complete model of how galaxies change with time. They specifically say that "...nearby galaxies offer rich and still far from completely explored clues to a better picture of how galaxies form." The reason, of course, is that nearby galaxies can be studied in far greater detail than those that lie at the edge of the Cosmos and, importantly, typical cosmological surveys have covered such small areas of sky that they do not sample the local population very well, if at all. Observations of cosmic dust address many aspects of the current galaxy evolutionary model, i.e. 
star formation rate, growth of the metal abundance, loss of metals in galactic winds, physical processes in the interstellar medium, etc., and so offer the potential for a much better understanding. Within the DustPedia project we will carry out five specific tasks, utilising nearby galaxies, that will provide important inputs into current evolutionary models: \begin{enumerate} \item Measure the complete UV-mm/radio spectral energy distributions (SEDs) for a large number ($>$1000) of galaxies, and for different environments within individual galaxies. \item Interpret the galaxy SEDs using radiative transfer and full SED models, to derive stellar, gas and dust properties, star formation rates and histories as a function of morphological type (see the sketch after this list). \item Determine how the dust NIR-mm/radio SED evolves throughout the Universe and how this is related to the underlying dust properties. \item Develop a dust evolution model that is consistent with the SEDs of galaxies of different morphological types and determine the primary sources and sinks for cosmic dust. \item Derive dust mass functions to the lowest possible luminosities and masses and compare these with cosmological surveys and the cosmic far infrared background. \end{enumerate}
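A common first step in the SED interpretation of task (ii) is the optically thin, single-temperature estimate of the dust mass from a single far-infrared flux density, $M_{\rm d} = S_\nu D^2 / (\kappa_\nu B_\nu(T_{\rm d}))$. The sketch below is illustrative only: the flux, distance, temperature and, in particular, the value of the mass absorption coefficient $\kappa_\nu$ are assumed numbers (in DustPedia the emissivity comes from the adopted grain model), not measured values.
\begin{verbatim}
import math

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def planck_nu(nu, T):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / (math.exp(h * nu / (k * T)) - 1.0)

def dust_mass(S_jy, D_mpc, lam_um, T_dust, kappa):
    """Optically thin dust mass M_d = S_nu D^2 / (kappa_nu B_nu(T_d)),
    returned in solar masses.  kappa is the dust mass absorption
    coefficient at the chosen wavelength [m^2 kg^-1]; its value and
    frequency dependence depend on the grain model and are a major
    source of systematic uncertainty."""
    S  = S_jy * 1e-26                     # Jy -> W m^-2 Hz^-1
    D  = D_mpc * 3.086e22                 # Mpc -> m
    nu = c / (lam_um * 1e-6)
    return S * D**2 / (kappa * planck_nu(nu, T_dust)) / 1.989e30

# Illustrative numbers only: a 1 Jy source at 250 micron, 20 Mpc away,
# with T_d = 20 K and kappa_250 = 0.4 m^2 kg^-1.
print("M_dust ~ %.1e M_sun" % dust_mass(1.0, 20.0, 250.0, 20.0, 0.4))
\end{verbatim}
In practice the temperature is fitted from several bands (and HerBIE fits the full SED with a physical dust model), but this single-band form shows how directly the derived dust mass inherits the assumed emissivity.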
The DustPedia project has primarily been funded to exploit the {\it Herschel} and {\it Planck} data archives that contain many observations of nearby galaxies. We have selected a {\it WISE} 3.4 $\mu$m sample of nearby galaxies and then looked for their counterparts in the {\it Herschel} and {\it Planck} Science Archives. From the archives we have been able to select 876 galaxies that have far infrared and/or sub-mm data. We have carried out our own data reduction of these galaxies and ingested these data into the DustPedia galaxy database. We have then searched for auxiliary data for these galaxies in databases ranging from the ultra-violet to the near infrared. This has resulted in just over 25,000 images, with many of our sample galaxies observed in twenty or more bands; these have again been ingested into the DustPedia database. We have carried out photometry on all the data so that calibrated images and global flux values are available for the whole multi-wavelength data set. We are developing modelling tools to interpret the observations. These include a new SED fitting tool (HerBIE) that uses a Bayesian approach, a Monte Carlo photon tracing radiative transfer model (SKIRT) of galaxies and a new physical model for the dust (THEMIS). We will use the data and the output from our models to explore the origins of cosmic dust, its evolution in the inter-stellar medium and its ultimate fate. On more global scales we will study the influence dust has on processes in the interstellar medium and how it affects our interpretation of the global properties of galaxies of various morphological types and galaxies within different environments. The complete data sets will eventually be used to study the luminosity and mass functions of galaxies and how these can be related to issues regarding the evolution of galaxies through cosmic time. \begin{center} {\bf Acknowledgements} \end{center} J. Fritz acknowledges the financial support from UNAM-DGAPA-PAPIIT IA104015 grant, Mexico. The {\it Herschel} spacecraft was designed, built, tested, and launched under a contract to ESA managed by the {\it Herschel/Planck} Project team by an industrial consortium under the overall responsibility of the prime contractor Thales Alenia Space (Cannes), and including Astrium (Friedrichshafen) responsible for the payload module and for system testing at spacecraft level, Thales Alenia Space (Turin) responsible for the service module, and Astrium (Toulouse) responsible for the telescope, with in excess of a hundred subcontractors. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC (UK); and NASA (USA). 
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Based on observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. \newpage \begin{center} {\bf References} \end{center} Agius et al., 2015, MNRAS, 451, 3815 \\ Ahn et al., 2012, ApJS, 203, 21 \\ Alton et al., 1998, A\&A, 335, 807 \\ Alton et al., 2004, A\&A, 425, 109 \\ Amblard et al., 2014, ApJ, 783, 135 \\ Aniano et al. 2012, ApJ, 756, 138 \\ Asboth et al., 2016, arXiv160102665 \\ Auld et al., 2013, MNRAS, 428, 1880 \\ Baes and Dejonghe, 2001, MNRAS, 326, 722 \\ Baes et al., 2003, MNRAS, 343, 1081 \\ Baes et al., 2008, MNRAS, 343, 1081 \\ Baes et al., 2010, A\&A, 518, L39 \\ Baes et al., 2011, ApJS, 196, 22 \\ Baes M. 
and Camps P., 2015, Astronomy and Computing, 12, 33 \\ Baes et al., 2016, A\&A, 590, A55 \\ Bell E., 2003, ApJ, 586, 794 \\ Bendo et al., 2012, MNRAS, 419, 1833 \\ Bendo et al., 2015, MNRAS, 448, 135 \\ Bennett et al., 2003, ApJS, 148, 97 \\ Berriman et al., 2016, AAS, 22734813 \\ Bianchi et al., 2000, A\&A, 359, 65 \\ Bianchi, 2007, A\&A, 471, 765 \\ Bianchi, 2008, A\&A, 490, 461 \\ Bianchi, 2013, A\&A, 552, 89 \\ Bianchi et al., 2016, MNRAS, submitted \\ Bocchio et al., 2012, A\&A, 545, 124 \\ Bocchio, M., Jones, A.P., Verstraete, L., Xilouris, E.M., Micelotta, E.R., Bianchi, S. 2013, A\&A 556, A6 \\ Bocchio, M., Jones, A.P., Slavin, J.D. 2014, A\&A, 570, A32 \\ Bolatto, Wolfire \& Leroy, 2013, ARA\&A, 51, 207 \\ Boquien et al., 2011, AJ, 142, 111 \\ Calzetti et al., 2007, ApJ, 666, 870 \\ Camps P. and Baes M., 2015, Astronomy and Computing, 9, 20 \\ Camps P., Baes M. and Saftly W., 2013, A\&A, 560, A35 \\ Cannon et al., 2006, ApJ, 652, 1170 \\ Clark et al., 2016, MNRAS, in press \\ Clark et al., 2015, MNRAS, 452, 397 \\ Compiegne et al., 2011, A\&A, 525, 103 \\ Conselice, 2008, "Pathways Through an Eclectic Universe", ASP Conference Series, 390, 403 \\ Cormier et al., 2010, A\&A, 518, 57 \\ da Cunha E., Charmandaris V., Díaz-Santos T., Armus, L., Marshall J. A. and Elbaz, D., 2010, A\&A, 523A, 78 \\ Dale et al., 2001, ApJ, 549, 215 \\ Dale et al., 2012, ApJ, 745, 95 \\ Davies J. and Burstein D., 1994, Proc of the NATO ARW "The Opacity of Spiral Discs", NATO ASI series, Vol 469 \\ Dasyra et al., 2005, A\&A, 437, 447 \\ Davies J. et al., 2010, A\&A, 518, 48 \\ Davies J., et al., 2011, MNRAS, 415, 1883 \\ Davies J. et al., 2011, MNRAS, 419, 3505 \\ Davies J. et al., 2012, MNRAS, 419, 3505 \\ Davies J. et al., 2013, MNRAS, 428, 834 \\ Davies J. et al., 2014, MNRAS, 438, 1922 \\ Davies L. et al., 2016, arXiv:1606.06299 \\ De Serego Alighieri et al., 2013, A\&A, 552, 8 \\ De Geyter et al., 2013, A\&A, 550, 74 \\ De Geyter et al., 2014, MNRAS, 441, 869 \\ De Geyter et al., 2015, MNRAS, 451, 1728 \\ De Jong et al. and the KiDS and Astro-WISE consortiums, 2013, Experimental Astronomy, 35, 25. \\ De Looze et al., 2010, A\&A, 518, 54 \\ De Looze et al., 2012a, MNRAS, 419, 895 \\ De Looze et al., 2012b, MNRAS, 427, 2797 \\ De Looze et al., 2014, A\&A, 571, 69 \\ Deschamps et al., 2015, A\&A, 577, A55 \\ Desert et al., 2008, A\&A, 481, 411 \\ Disney M., Davies J. and Phillipps S., 1989, MNRAS, 239, 939 \\ Draine, 2003, ARA\&A, 41, 241 \\ Draine \& Li, 2007, ApJ, 657, 810 \\ Driver et al., 2007, MNRAS, 379, 1022 \\ Dunne et al., 2003, MNRAS, 341, 589 \\ Dunne et al., 2011, MNRAS, 417, 1510 \\ Edge et al., 2013, ESO Msngr, 154, 32. \\ Eales et al., 2000, AJ, 120, 2244 \\ Eales et al., 2012, ApJ, 761, 168 \\ Eales et al., 2015, MNRAS, 452, 3489 \\ Edmunds M. and Eales S., 1998, MNRAS, 299, 29 \\ Egami et al., 2010, A\&A, 518, 12 \\ Eskew et al., 2012, AJ, 143, 139 \\ Fairclough, 1986, MNRAS, 219, 1p \\ Fanciullo et al., 2015, A\&A, 580, 136 \\ Galametz et al., 2011, A\&A, 532, 56 \\ Galliano et al., 2003, A\&A, 407, 159 \\ Galliano et al., 2005, A\&A, 434, 867 \\ Galliano et al., 2011, A\&A, 536, 88 \\ Galliano et al., 2016, in preparation \\ Gomez et al., 2010, A\&A, 518, 45 \\ Green et al., 2015, ApJ, 810, 25 \\ Griffin et al., 2010, A\&A, 518, 3 \\ Grossi et al., 2010, A\&A, 518, 52 \\ Groves B. et al., 2015, MNRAS, 426, 892 \\ Hendrix T., Keppens R. and camps P., 2015, A\&A, 575, A110 \\ James A. et al., 2002, MNRAS, 335, 753 \\ Jones, 2012a, A\&A, 542, 98 \\ Hughes et al. 
2013, A\&A, 550, 115 \\ Hughes et al. 2015, Proceedings of the IAU Symposium "Galaxies in 3D across the Universe", Volume 309, 320 \\ Hunt L. et al., 2015, A\&A, 576, 33 \\ Jones 2012b, A\&A, 540, 1 \\ Jones 2012c, A\&A, 545, 2 \\ Jones, A. P. 2013, A\&A, 555, A39 \\ Jones, A.P., Fanciullo, L., K\"{o}hler, M., Verstraete, L., Guillet, V., Bocchio, M., Ysard, N. 2013, A\&A, 558, A62 \\ Jones, A.P., Ysard, N., K\"{o}hler, M., Fanciullo. L., Bocchio, M., Micelotta, E., Verstraete, L., Guillet, G. 2014, Faraday Discussion Meeting 168, 313 \\ Jones, A.P., Habart, E, A\&A, 581, A92 \\ Jones, A.P., K\"{o}hler, M., Ysard, N., Dartois, E., Godard, M., Gavilan, L. 2015, A\&A, 588, 43 \\ Kelly et al., 2012, ApJ, 752, 55 \\ Kessler M. et al., 1996, A\&A, 315, 27 \\ Kirkpatrick et al., 2014, ApJ, 789, 130 \\ K\"{o}hler, M., Stepnik, B., Jones, A.P., Guillet, V., Abergel, A., Ristorcelli, I., Bernard, J.-P. 2012, A\&A, 548, A61 \\ Knapp et al., 1989, ApJS, 70, 329 \\ Lawrence et al., 2013, ViZier online data catalogue:UKIDSS-DR9, LAS, GCS and DXS surveys \\ Lebouteiller et al., 2012, A\&A, 546, 94 \\ Lianou S., Xilouris E., Madden S. C. and Barmby P., 2016, MNRAS, 461, 2856 \\ MacLachlan J. et al., 2011, ApJ, 741, 6 \\ Madden S. et al., 2012, IAUS, 284, 141 \\ Magrini et al., 2011, A\&A, 535, 13 \\ Makarov et al. 2014, A\&A, 570, A13 \\ Menard et al., 2010, MNRAS, 405, 1025 \\ Meidt et al., 2014, ApJ, 788, 144 \ Misiriotis and Bianchi, 2002, A\&A, 384, 866 \\ Mosenkov A. et al., 2016, A\&A, in press \\ Munoz-Mateos et al., 2011, ApJ, 731, 10 \\ Negrello et al., 2013, MNRAS, 429, 1309 \\ Neugebauer G. et al., 1984, ApJ, 278, L1 \\ Nguyen et al., 2010, A\&A, 518, 5 \\ Oliver et al., 2012, MNRAS, 424, 1614 \\ Panter B., Jimenez R., Heavens A. and Charlot S., (2007), MNRAS, 378, 1550 \\ Peebles and Nusser, 2010, Nature, 465, 565 \\ Pilbratt et al., 2010, A\&A, 518, 1 \\ Planck collaboration XXV, 2015, A\&A, 582, A28 \\ Poglitsch et al., 2010, A\&A, 518, 2 \\ Popescu et al. 2000, A\&A, 362, 138 \\ Popescu C. and Tuffs R., 2002, MNRAS, 335, 41 \\ Popescu et al. 2011, A\&A, 527, A109 \\ Puget et al., 1996, A\&A, 308, L5 \\ Remy-Ruyer et al., 2012, IAUS, 284, 149 \\ Remy-Ruyer et al., 2013, A\&A, 557, A95 \\ Remy-Ruyer et al., 2014, A\&A, 563, 31 \\ Remy-Ruyer et al., 2015, A\&A, 582, 121 \\ Rice et al., 1988, ApJS, 68, 91 \\ Roussel et al., 2013, PASP, 125, 125, 1126 \\ Saftly W. et al., 2013, A\&A, 554, A10 \\ Saftly W., Baes M. and Camps P., 2014, A\&A, 561, A77 \\ Saftly W. et al., 2015, A\&A, 576, A31 \\ Sanders D. and Mirabel I., 1996, ARA\&A, 34 749 \\ Sanders et al., 2003, AJ, 126, 1607 \\ Sandstrom et al., 2013, ApJ, 775, 5 \\ Saunders et al., 1990, MNRAS, 242, 318 \\ Schechtman-Rook et al., 2012, ApJ, 746, 70 \\ Schruba et al., 2012, AJ, 143, 138 \\ Shetty et al., 2009a, ApJ, 696, 223 \\ Shetty et al., 2009b, ApJ, 696, 676 \\ Skrutskie et al., 2006, AJ, 131, 1163 \\ Smith, 2012a, PhD thesis University of Cardiff \\ Smith et al., 2012b, ApJ, 756, 40 \\ Smith et al., 2012c, ApJ, 748, 123 \\ Smith et al., 2016, MNRAS, in press (arXiv160701020) \\ Sodroski et al., 1997, ApJ, 480, 173 \\ Solarz A., Takeuchi T. and Pollo A., 2016, arXiv:1607.08747 \\ Stalevski et al., 2012, MNRAS, 420, 2756 \\ Tabatabaei et al., 2014, A\&A, 561, 95 \\ Tielens A., 2005, "The Physics and Chemistry of the Interstellar Medium", Cambridge University Press, Cambridge, UK. \\ Tsai and Mathews, 1996, ApJ, 468, 571 \\ Valentijn E., 1990, Nature, 346, 153 \\ Viaene S. et al., 2015, A\&A, 579, 103 \\ Viaene S. 
et al., 2016, A\&A, submitted \\ Viero et al., 2015, ApJ, 809, 22 \\ Werner M. et al., 2004, ApJS, 154, 1 \\ Whittet, D., 2002, "Dust in the Galactic Environment (2nd Edition)", IOP Series in Astronomy and Astrophysics \\ Wright et al., 2010, AJ, 140, 1868 \\ Xilouris et al., 1997, A\&A, 325, 135 \\ Xilouris et al., 1998, A\&A, 331, 894 \\ Xilouris et al., 1999, A\&A, 344, 868 \\ Ysard, N., K\"{o}hler, M., Jones, A., Miville-Desch\^{e}nes, M.-A., Abergel, A., Fanciullo, L. 2015, A\&A, 577, A110 \\ Ysard, N., K\"{o}hler, M., Jones, A.P., Dartois, E., Godard, M., Gavilan, L. 2015b, A\&A, 588, 44 \\ Zubko et al., 2004, ApJS, 152, 211
16
9
1609.06138
1609
1609.08058_arXiv.txt
We study how close-in systems such as those detected by \textit{Kepler} are affected by the dynamics of bodies in the outer system. We consider two scenarios: outer systems of giant planets potentially unstable to planet--planet scattering, and wide binaries that may be capable of driving Kozai or other secular variations of outer planets' eccentricities. Dynamical excitation of planets in the outer system reduces the multiplicity of Kepler-detectable planets in the inner system in $\sim20-25\%$ of our systems. Accounting for the occurrence rates of wide-orbit planets and binary stars, $\approx18\%$ of close-in systems could be destabilised by their outer companions in this way. This provides some contribution to the apparent excess of systems with a single transiting planet compared to multiple; however, it only contributes at most 25\% of the excess. The effects of the outer dynamics can generate systems similar to Kepler-56 (two coplanar planets significantly misaligned with the host star) and Kepler-108 (two significantly non-coplanar planets in a binary). We also identify three pathways to the formation of eccentric warm Jupiters resulting from the interaction between outer and inner systems: direct inelastic collision between an eccentric outer and an inner planet; secular eccentricity oscillations that may ``freeze out'' when scattering resolves in the outer system; and scattering in the inner system followed by ``uplift'', where inner planets are removed by interaction with the outer planets. In these scenarios, the formation of eccentric warm Jupiters is a signature of a past history of violent dynamics among massive planets beyond $\sim1$\,au.
\noindent The population of planet candidates detected by \textit{Kepler} shows a surplus of systems showing only one transiting planet \citep{Johansen+12,BallardJohnson16}, a finding that has been dubbed the ``\textit{Kepler} Dichotomy''. This translates into an excess of systems with only one \textit{planet} in the region probed by \textit{Kepler,} as altering the distribution of mutual inclinations amongst triple-planet systems cannot simultaneously account for the numbers of single-, double- and triple-transit systems \citep{Johansen+12}: a large fraction of the double-transit systems could be produced by intrinsically triple-planet systems, but this still requires an additional population of intrinsically single-planet systems to match the large observed number of single-transit systems. This suggests that Nature produces two distinct populations of inner planetary systems: one population of intrinsically single planets, and an additional population of multiple systems whose multiplicity peaks at three planets or higher. There are three possible explanations for this excess: \begin{itemize} \item There is a high false positive rate amongst single-transit systems. While most \textit{Kepler} multiple systems appear to be genuine \citep{Rowe+14}, \cite{Santerne+16} find a $\sim50\%$ false-positive rate for single giant \textit{Kepler} candidates. They speculate that the absolute number of false positives may be higher for the smaller candidates, although the rate could fall due to the increased frequency of smaller planets. \item Many systems form only one planet within $\lesssim1$\,au, while a smaller number form multiple. \cite{ColemanNelson16} find that, when starting from a large number of embryos, systems resembling the \textit{Kepler} singles only arise if one planet grows to be massive enough to clear out its neighbours, and speculate that many systems must form small numbers of embryos. Unfortunately, predicting the formation times, locations and numbers of these embryos is challenging, despite the significant effects that these initial conditions have on the embryos' subsequent growth and migration \citep[e.g.,][]{Bitsch+15}. \item Many systems form multiple planets within $\lesssim1$\,au, but many are later reduced in multiplicity by subsequent dynamical evolution as planets collide. This route may be supported by an additional ``dichotomy'' in the distributions of orbital eccentricities \citep[see][who argue for a two-component model for the eccentricity distribution, with a low-$e$ ($\sim0.01$) and a high-$e$ ($\sim0.2$) component; \citealt{Xie+16} argue for a similar ``eccentricity dichotomy'']{Shabram+16} and stellar obliquities (\citealt{MortonWinn14} find that stars with a single transiting planet have higher obliquity than those with multiple planets, while \citealt{Campante+16} favour a mixture model for the obliquities of single-planet host stars but a single model for hosts of multiple planets). This evolution may be driven by the internal dynamics of the \textit{Kepler} multiples \citep[e.g.,][]{Johansen+12,PuWu15,VolkGladman15} or by the effects of outer bodies such as binary stars or outer giant planets on the inner system \citep[e.g.,][]{Mustill+15}. \end{itemize} In this paper, we further explore the effects a dynamically active outer system can have on systems of multiple inner planets. 
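The transit-multiplicity argument summarised above can be made concrete with a toy geometric Monte Carlo: draw mutual inclinations from a Rayleigh distribution, draw an isotropic line of sight, and count how many planets of each system transit. The semi-major axes and inclination dispersions below are illustrative choices, not fits to the \textit{Kepler} sample, and detection efficiency is ignored; the point is only that changing the inclination dispersion reshuffles singles, doubles and triples in a correlated way.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def transit_multiplicity(a_over_rstar, sigma_inc_deg, n_sys=200000):
    """Count transiting planets per system for planets whose orbit
    normals are tilted from a common reference axis (the z axis) by
    Rayleigh-distributed mutual inclinations, viewed from an isotropic
    line of sight.  a_over_rstar: semi-major axes in stellar radii."""
    # isotropic line of sight per system
    cos_t = rng.uniform(-1.0, 1.0, n_sys)
    phi   = rng.uniform(0.0, 2.0 * np.pi, n_sys)
    sin_t = np.sqrt(1.0 - cos_t**2)
    los = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

    counts = np.zeros(n_sys, dtype=int)
    for a in a_over_rstar:
        tilt = rng.rayleigh(np.radians(sigma_inc_deg), n_sys)
        node = rng.uniform(0.0, 2.0 * np.pi, n_sys)
        n_pl = np.stack([np.sin(tilt) * np.cos(node),
                         np.sin(tilt) * np.sin(node),
                         np.cos(tilt)], axis=1)        # orbit normal
        # a (circular) orbit transits if |n_hat . los| < R_star / a
        counts += np.abs(np.sum(n_pl * los, axis=1)) < 1.0 / a
    return np.bincount(counts, minlength=len(a_over_rstar) + 1)

# Three planets at 15, 25 and 40 stellar radii (illustrative spacing).
for sigma in (1.0, 5.0):
    n1, n2, n3 = transit_multiplicity([15.0, 25.0, 40.0], sigma)[1:]
    print("sigma_i = %.0f deg : singles %d, doubles %d, triples %d"
          % (sigma, n1, n2, n3))
\end{verbatim}
Raising the mutual inclination dispersion boosts the single-transit fraction only at the cost of suppressing doubles and triples together, which is the sense in which inclinations alone cannot reproduce all three observed counts.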
We build on our previous work \citep{Mustill+15}, in which we considered the effects of a planet with an arbitrarily-imposed eccentricity on an inner system, by consistently modelling the dynamics in the outer system leading to such eccentricity excitation, through Kozai cycles or planet--planet scattering. We gauge the contribution of disruptive outer bodies to the \textit{Kepler} multiplicity function by destabilising and inclining inner systems, show that it is possible to occasionally generate large mutual inclinations or obliquities as in the tilted two-planet system Kepler-56 \citep{Huber+13} and the mutually inclined Kepler-108 \citep{MillsFabrycky16}, and identify several routes to forming eccentric warm Jupiters at a few tenths of an au. Can the \textit{Kepler} Dichotomy be resolved by appealing to instabilities driven by the internal dynamics of inner systems (henceforth, anything with an orbital period $P<240$\,d)? Probably not entirely: \textit{Kepler} triple-planet systems, for example, are robust to internal dynamical evolution. \cite{Johansen+12} showed that these triple-planet systems are too widely separated to undergo instability unless their masses are increased unrealistically, by a factor of around 100. Furthermore, when forced into instability in this way the outcome is typically only a reduction to a two-planet system. However, \cite{PuWu15} show that the higher-multiplicity systems are less stable, consistent with being the survivors from a continuous primordial population where the more closely-spaced systems were unstable. Meanwhile, \cite{BeckerAdams16} find that \emph{Kepler} multi-planet systems are inefficient at self-exciting their mutual inclinations: flat systems remain flat, and retain their high probability of multiple transits. A high occurrence rate of inner planetary systems ($\sim50\%$) has been revealed by both RV surveys \citep{Mayor+11} and \textit{Kepler} \citep{Fressin+13}. But many of these inner systems do not exist in isolation. They may have wide-orbit companion planets, as in the case of Kepler-167, which possesses three super-Earths within 0.15\,au together with a transiting giant planet at 1.9\,au \citep{Kipping+16}. A number of studies have found wide-orbit candidates in the \textit{Kepler} light curves which transit only a small number of times and therefore are excluded from the KOI listings \citep{Wang+15,Osborn+16}; \cite{Uehara+16} estimate that at least 20\% of compact multi-planet systems also host giant planets beyond 3\,au, based on single-transit events in the KOIs; and \cite{Foreman-Mackey+16} estimate an average of 2 planets per star with periods between 2 and 25 years and radii between $0.1$ and $1\mathrm{\,R_J}$, $0.4$ planets per star in the same period range with radii between $0.4$ and $1\mathrm{\,R_J}$, and that these wide-orbit planets occur disproportionately often around stars already hosting inner planet candidates. \cite{Knutson+14} find that $50\%$ of hot Jupiter hosts also have a giant planet companion between 1 and 20\,au, while \cite{Bryan+16} similarly find an occurrence rate of $50\%$ for outer planetary companions to RV-detected inner planets of a range of masses, although their sample is more metal-rich than the \textit{Kepler} targets. \cite{Wang+15} found that half of their long-period \textit{Kepler} candidates exhibited transit timing variations, suggesting multiplicity. 
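A rough version of the stability argument quoted above (that \textit{Kepler} triples would need roughly 100 times their nominal masses to become unstable) can be made with the mutual-Hill-radius spacing criterion. The masses and semi-major axes below are illustrative, and the often-quoted critical spacing of roughly ten mutual Hill radii for long-term stability is only indicative; the true threshold depends on eccentricities, multiplicity and integration time (see \citealt{PuWu15}).
\begin{verbatim}
# Spacing of adjacent planet pairs in units of their mutual Hill radius,
# and how it shrinks when all planet masses are scaled up.
M_sun, M_earth = 1.989e30, 5.97e24

def hill_spacings(a_list_au, m_list_mearth, m_star_msun=1.0):
    out = []
    pairs = zip(zip(a_list_au, m_list_mearth),
                zip(a_list_au[1:], m_list_mearth[1:]))
    for (a1, m1), (a2, m2) in pairs:
        r_hill = (((m1 + m2) * M_earth / (3.0 * m_star_msun * M_sun))
                  ** (1.0 / 3.0)) * 0.5 * (a1 + a2)
        out.append((a2 - a1) / r_hill)
    return out

a = [0.10, 0.15, 0.22]      # au, illustrative spacing
m = [8.0, 8.0, 8.0]         # Earth masses, illustrative
for factor in (1.0, 100.0):
    spac = hill_spacings(a, [factor * mi for mi in m])
    print("mass x %5.0f -> pair spacings [mutual Hill radii]:" % factor,
          ["%.1f" % s for s in spac])
# A factor-100 mass increase shrinks the spacing by 100^(1/3) ~ 4.6,
# pushing a comfortably spaced triple towards the unstable regime.
\end{verbatim}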
Regarding planetary systems in wide binaries, some \textit{Kepler} systems, such as Kepler-444 \citep{Campante+15} and Kepler-108 \citep{MillsFabrycky16}, reside in such binaries. \cite{Ngo+15,Ngo+16} estimate that $50\%$ of hot Jupiters have a stellar companion between 50 and 2\,000\,au, around twice the rate for the average field star. There is currently debate about the extent to which the presence of an outer binary companion affects the existence of inner \textit{Kepler} planets \citep[e.g.,][]{Wang+14,Deacon+16,Kraus+16}. Statistics from systems without detected inner planets also reveal the prevalence of outer bodies. RV surveys reveal a population of ``Jupiter analogues'' (variously defined as low-eccentricity $\sim$Jupiter-mass planets at several au) of a few percent \citep{Rowan+16,Wittenmyer+16}. Direct imaging surveys are sensitive to super-Jovian planets at tens of au, finding an occurrence rate of around $10\%$ \citep{Vigan+12} for stars more massive than the Sun, falling to $1-2\%$ for Solar-type stars \citep{Galicher+16}. Microlensing reveals an occurrence rate of $\sim50\%$ for ice-line planets more massive than Neptune, where the host stars were typically sub-Solar in mass \citep{Shvartzvald+16}. Around half of Sun-like stars are members of multiple stellar systems, with a period distribution peaking at $\sim10^5$ days \citep{Raghavan+10,DucheneKraus13}. Compared to the statistics in the previous paragraph, it may be that stars with known inner planets are more likely than other stars to host wide-orbit giant planets, although one should be wary of biases such as, for example, in the stellar metallicities. The configuration and evolution of bodies in the outer system can have significant dynamical effects on these inner systems. In \cite{Mustill+15}, we showed that a high-eccentricity giant planet \textit{en route} to becoming a hot Jupiter will destroy any existing close-in planets, thus explaining why hot Jupiters are typically not seen with close, low-mass companions. \cite{Mustill+15} also showed that, as the orbital binding energy of the eccentric giant can be comparable to that of the inner planets, the giant can in fact be ejected as a result of the interactions with the inner system, which may itself be reduced in multiplicity. Although hot Jupiters are relatively rare, being found around only $\sim1\%$ of stars \citep{Mayor+11,Howard+12,Fressin+13,Santerne+16}, models of high-eccentricity migration of hot Jupiters typically find that many more migrating giants are tidally disrupted than go on to become hot Jupiters \citep[e.g.,][]{Petrovich15b,Anderson+16,Munoz+16,PetrovichTremaine16}. Lower-mass planets may well be injected into the inner systems by the same dynamical mechanisms---scattering and Kozai perturbations---that give rise to hot Jupiters, and many outer planets thus sent inwards will attain pericentres insufficiently small for tidal circularisation, yet small enough to interact with inner planets at a few tenths of an au. All this motivates a general investigation into the influence of outer systems on inner \textit{Kepler}-detectable planets. While the bulk of \textit{Kepler}-detected planets lie at a few tenths of an au, work has shown that instabilities in outer systems can be devastating for material in the habitable zone at $\sim1$\,au \citep{VerasArmitage05,VerasArmitage06,Raymond+11,Raymond+12,Matsumura+13,KaibChambers16}. 
\cite{Carrera+16} find that the survivability of bodies increases closer to the star, and \citep{Huang+16b} study the effects on the excitation of \textit{Kepler}-like super Earths. Direct scattering is the most obvious effect of eccentricity enhancement in the outer system, but secular resonances can also play a role in destabilising inner systems \citep{Matsumura+13,Carrera+16}. Secular interactions can also have more subtle effects on inner systems, resulting in gentle tilts \citep{GratiaFabrycky17}, or excitation of mutual inclination \citep{Hansen16,LaiPu16}, which in turn can contribute to the observed multiplicities seen by \textit{Kepler}. Dynamical interaction between inner and outer systems may also account for the existence of eccentric warm Jupiters: giant planets with semi-major axes of a few tenths of an au and eccentricities of order $0.5$. While planet--planet scattering has long been recognised as a source of eccentricity excitation of giant planets \citep[e.g.][]{RasioFord96,WeidenschillingMarzari96,Chatterjee+08,JuricTremaine08,Raymond+11,Kaib+13}, \cite{Petrovich+14} showed that this process is ineffective at exciting eccentricities close to the star: on tight orbits, planets have a higher Keplerian velocity and so in order to impart a given change in velocity, a close encounter must occur at a smaller separation due to the reduced gravitational focusing, and such close encounters result instead in physical collision. Nor can eccentric warm Jupiters be explained by ``fast'' tidal migration of giant planets \emph{en route} to forming hot Jupiters, as the eccentricities of the observed planets lie below the tidal circularisation tracks along which such planets would migrate. Possible explanations are ``slow'' tidal migration, in which tidal dissipation only occurs briefly at the tip of a secular eccentricity cycle \citep{DawsonChiang14,Dong+14,Petrovich15b,PetrovichTremaine16}, and the physical collision of eccentric migrating giant planets with other planets on close-in orbits \citep{Mustill+15}. In this paper, we describe several other routes to the formation of eccentric warm Jupiters. In summary, at least a few 10s of per cent of inner systems can be expected to host outer planets and/or stars. In this paper we study the effects of these outer bodies on inner systems with $N$-body integrations. We set up two scenarios: outer planets in binary systems that may be subject to Lidov--Kozai oscillations \citep{Lidov62,Kozai62,Naoz16}, and tightly-packed systems of outer planets that are unstable to scattering; \cite{JuricTremaine08} and \cite{Raymond+11} show that the eccentricity distribution of giant planets is consistent with around $75-83\%$ of them having originally come from unstable multiple systems. We investigate the effects that the dynamics of the outer system have on the multiplicities of the inner planets and on their mutual inclinations. In Section~\ref{sec:nbody} we describe the set-up of our $N$-body integrations. We describe the outcomes of a set of control integrations in Section~\ref{sec:control}. We give the results for planets in binary systems in Section~\ref{sec:binaries} and for unstable scattering systems in Section~\ref{sec:scattering}, describing the effects on the multiplicities and mutual inclinations of inner planetary systems. 
In Section~\ref{sec:warmj} we describe three mechanisms leading to the formation of eccentric warm Jupiters: collision between an inner and an eccentric outer planet (Section~\ref{sec:collide-warmj}), secular forcing aided by ``freeze-out'' (Section~\ref{sec:secular}), and \emph{in-situ} scattering aided by ``uplift'' from the outer system (Section~\ref{sec:uplift}). We discuss our results in Section~\ref{sec:discuss}, notably the effects on \textit{Kepler} systems' multiplicities (Section~\ref{sec:kepler-multi}) and mutual inclinations (Section~\ref{sec:kepler-inc}), the generation of large obliquities or mutual inclinations (Section~\ref{sec:inc}), and summarise in Section~\ref{sec:conclude}.
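The velocity argument of \cite{Petrovich+14} summarised above reduces to comparing a planet's surface escape velocity with the local Keplerian velocity (the square of this ratio is, up to a factor of order unity, the Safronov number). The minimal sketch below assumes a Jupiter-like planet around a solar-mass star; it is an order-of-magnitude illustration, not part of the simulations presented in this paper.
\begin{verbatim}
import math

G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11
M_jup, R_jup = 1.898e27, 7.15e7

def vesc_over_vkep(a_au, m_p=M_jup, r_p=R_jup, m_star=M_sun):
    """Escape velocity from the planet's surface divided by the local
    circular Keplerian velocity around the star."""
    v_esc = math.sqrt(2.0 * G * m_p / r_p)
    v_kep = math.sqrt(G * m_star / (a_au * AU))
    return v_esc / v_kep

for a in (0.1, 0.3, 1.0, 3.0, 10.0):
    print("a = %5.1f au : v_esc / v_K = %.2f" % (a, vesc_over_vkep(a)))
# Well below 1 at ~0.1 au and well above 1 beyond ~1 au: close
# encounters tend to end in collisions rather than strong scattering
# on warm/hot-Jupiter orbits, while eccentricity pumping by scattering
# is easy for giants beyond ~1 au.
\end{verbatim}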
\label{sec:conclude} We have run $N$-body simulations to study the effects of the dynamics of outer systems---experiencing Kozai perturbations and planet--planet scattering---on close-in inner systems such as those detected by \textit{Kepler}. Our main simulation sets are \textsc{Binaries}, where we add an extra outer planet and a stellar binary companion, and \textsc{Giants}, where we add a close-packed system of four outer planets. We address the issues of: the contribution of the ensuing perturbations to the ``\textit{Kepler} Dichotomy'' of an excess of single-transit systems, by excitation of mutual inclinations or outright destabilisation and loss of planets; the excitation of extreme mutual inclinations as in Kepler-108, or obliquities as in Kepler-56; and the formation of eccentric warm Jupiters. Our key findings are: \begin{itemize} \item In the most destructive cases, $40-50\%$ of inner systems lose one or more planets within 10\,Myr as a result of dynamics in the outer system. This applies to systems where Kozai cycles excite a large ($>0.5$) eccentricity on the outer planet, and to a subset of planet--planet scattering simulations that reproduces the observed eccentricity distribution for giant exoplanets. \item Over our entire set of simulation runs, including quiescent outer systems where Kozai cycles were not excited due to low inclination or extra precession, and where planet--planet scattering was weak or non-existent, this destabilisation fraction falls to $20-25\%$. \item In the inner systems that keep all their inner planets, mutual inclinations are not excited significantly. This is true both for inner systems starting with three and with two planets. Triple-planet systems that are reduced to double-planet systems experience more excitation however. \item These rates make some contribution to the \textit{Kepler} Dichotomy, but the majority must be explained through other means: with plausible estimates of the occurrence of suitable outer architectures, we find that $\approx18\%$ of \textit{Kepler} triple-planet systems would lose one or more planets, with $\approx10\%$ of triple-planet systems being reduced to singles, meaning that at least $75\%$ of the single-planet \textit{Kepler} systems do not arise from the dynamical mechanisms that we have studied. As the internal evolution of inner systems is inefficient at reducing multiplicities to zero or unity, formation or a high false positive rate amongst the single-planet candidates may play dominant roles. \item Similarly, there is a small contribution to the population of stars with \emph{no} inner planetary system, with $\approx5\%$ of triple-planet systems being reduced to ``zeros''. \item Although inclination effects are relatively unimportant for the population of \textsc{Kepler} planets as a whole, occasional interesting systems emerge. We find both tilted but coplanar systems such as Kepler-56, as well as highly-misaligned two-planet systems such as Kepler-108. \item We identify three routes to the formation of eccentric warm Jupiters: \emph{in-situ} scattering (possibly helped by ``uplift'' from outer system); secular eccentricity oscillations which can be ``frozen out'' if an outer planet is ejected; and direct inelastic collision between an outer and an inner planet as in \cite{Mustill+15}. Eccentric warm Jupiters form in 15\% of our scattering simulations which reproduce the observed eccentricity distribution of more distant giant planets. \end{itemize}
16
9
1609.08058
1609
1609.08091_arXiv.txt
A search for dark matter line-like signals was performed in the vicinity of the Galactic Centre by the H.E.S.S. experiment on observational data taken in 2014. An unbinned likelihood analysis was developed to improve the sensitivity to line-like signals. The upgraded analysis along with newer data extend the energy coverage of the previous measurement down to 100~GeV. The 18~h of data collected with the H.E.S.S. array allow one to rule out at 95\%~CL the presence of a 130~GeV line (at $l = -1.5^{\circ}, b = 0^{\circ}$ and for a dark matter profile centred at this location) previously reported in {\it{Fermi}}-LAT data. This new analysis overlaps significantly in energy with previous {\it{Fermi}}-LAT and H.E.S.S. results. No significant excess associated with dark matter annihilations was found in the energy range 100~GeV to 2~TeV and upper limits on the gamma-ray flux and the velocity weighted annihilation cross-section are derived adopting an Einasto dark matter halo profile. Expected limits for present and future large statistics H.E.S.S. observations are also given.
Weakly interacting massive particles (WIMPs) are among the most studied candidates to explain the longstanding elusive nature of dark matter (DM) and have been the target of a large number of searches (see \cite{Bertone:2004pz} for a review). In particular, the indirect detection of DM using gamma rays is considered one of the most promising avenues as it can probe both its particle properties and its distribution in the universe. WIMP annihilations produce a continuum energy spectrum of gamma rays up to the DM mass, as well as one or several gamma-ray lines. Although the fluxes of such mono-energetic features are mostly suppressed compared to the continuum, a line spectrum is easier to distinguish in regions of the sky with high astrophysical gamma-ray backgrounds~\cite{bib:Conrad}. A previous search for line signatures using H.E.S.S. in phase I (H.E.S.S.~I) has been published~\cite{hess_line} with 112~h of observation time. As no significant excess was found, the study presented upper limits on the flux and velocity-averaged annihilation cross-section $\langle\sigma v\rangle$ at the level of $10^{-6} \mathrm{m}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}$ and $10^{-27} \mathrm{cm}^{3}\mathrm{s}^{-1}$ for WIMP masses between 500~GeV and 20~TeV. The space-borne {\it{Fermi}} Large Area Telescope ({\it{Fermi}}-LAT)~\cite{Atwood:2009ez} was until recently the only instrument capable of probing a DM-induced gamma-ray line signal of around 100 GeV in energy in the direction of the Galactic Centre. Analyses based on public data have found indications of an excess signal at around 130 GeV in the vicinity of the Galactic Centre, with a best-fit position for the centroid of the excess at ($l = -1.5^{\circ}, b = 0^{\circ}$) \cite{Bringmann:2012vr,bib:4,su_fink,Bringmann:2012ez}. Later, revised analyses by the {\it{Fermi}}-LAT team found background-compatible results~\cite{Ackermann:2013uma,Ackermann:2015lka}. In order to resolve the controversy with an independent measurement, the H.E.S.S. collaboration performed dedicated observations of the Galactic Centre vicinity using its newly commissioned fifth telescope. The larger effective area and lower energy threshold make it possible to close the energy gap between the previously reported {\it{Fermi}}-LAT and H.E.S.S.~I results. The present paper is organised as follows: first the H.E.S.S. experiment and event reconstruction are briefly described, then the analysis method is discussed, followed by the presentation of the results and concluding remarks.
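The expected flux of such a line scales in a simple way with the particle-physics and astrophysical inputs, $\Phi_\gamma = \langle\sigma v\rangle \, N_\gamma \, J / (8\pi m_{\rm DM}^2)$, where $J$ is the integral of the squared DM density along the line of sight and over the solid angle of the search region. The sketch below only illustrates this scaling; the J-factor used is a placeholder order-of-magnitude value, not the Einasto J-factor adopted in this analysis, and the quoted flux limits include a per-steradian normalisation that is not reproduced here.
\begin{verbatim}
import math

def line_flux(sigma_v_cm3s, m_dm_gev, j_gev2cm5, n_gamma=2.0):
    """Monochromatic photon flux [cm^-2 s^-1] from DM annihilating to
    two photons, Phi = <sigma v> N_gamma J / (8 pi m^2)."""
    return sigma_v_cm3s * n_gamma * j_gev2cm5 / (8.0 * math.pi * m_dm_gev**2)

# Illustrative inputs: <sigma v> = 1e-27 cm^3 s^-1, m = 130 GeV,
# J = 5e21 GeV^2 cm^-5 (placeholder, region-integrated).
phi = line_flux(1e-27, 130.0, 5e21)
print("line flux ~ %.1e ph cm^-2 s^-1 = %.1e ph m^-2 s^-1"
      % (phi, phi * 1e4))
# Broadly comparable to the flux scales probed by the limits quoted
# above, which is why cross-sections of order 1e-27 cm^3 s^-1 are
# within reach.
\end{verbatim}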
Analysis of data from dedicated H.E.S.S.~II observations of 18~h towards the vicinity of the Galactic Centre leads to the 95\%~CL exclusion of the $\langle\sigma v\rangle$ value associated with the 130 GeV excess reported in \cite{bib:4} in the {\it{Fermi}}-LAT data. The likelihood method developed for this study has been successfully applied to estimate for the first time the sensitivity for a DM line search with the five-telescope configuration of the H.E.S.S. experiment. New constraints on line-like DM signals have been obtained from the line scan in the energy range between 100~GeV and 2~TeV, bridging the gap between previously reported H.E.S.S. phase I and {\it{Fermi}}-LAT results. The analysis reported here has been performed under the hypothesis of a DM halo centred at the 130 GeV excess position, displaced with respect to the gravitational centre of the Galaxy. Moving the centre of the DM halo to $l = 0, b = 0$ implies a loss of sensitivity by a factor of at least eight for the line search studies. The conclusions about the sensitivity of H.E.S.S. in phase II remain valid for explorations close to the Galactic Centre and the current method will be employed on larger observational datasets in the future.
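As an illustration of the kind of construction referred to above, the following is a minimal sketch of an extended unbinned likelihood fit for a line-like signal on top of a smooth background. It is not the H.E.S.S. analysis itself: the power-law background, the Gaussian line shape at 0.13~TeV and the 17\% relative energy resolution are all illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
EMIN, EMAX = 0.1, 2.0   # TeV, assumed analysis energy range

def sample_background(n, index=2.7):
    """Draw n background-like energies from a power law dN/dE ~ E^-index on [EMIN, EMAX]."""
    u = rng.uniform(size=n)
    a, b = EMIN**(1.0 - index), EMAX**(1.0 - index)
    return (a + u * (b - a))**(1.0 / (1.0 - index))

def nll(params, energies, e_line=0.13, sigma_rel=0.17):
    """Extended unbinned negative log-likelihood: power-law background + Gaussian line."""
    n_b, n_s, index = params
    norm_b = (EMAX**(1.0 - index) - EMIN**(1.0 - index)) / (1.0 - index)
    pdf_b = energies**(-index) / norm_b                               # normalised background PDF
    pdf_s = norm.pdf(energies, loc=e_line, scale=sigma_rel * e_line)  # resolution-smeared line PDF
    return (n_b + n_s) - np.sum(np.log(n_b * pdf_b + n_s * pdf_s))

energies = sample_background(5000)  # background-only pseudo-data set
fit = minimize(nll, x0=[5000.0, 10.0, 2.7], args=(energies,),
               bounds=[(1.0, None), (0.0, None), (1.5, 4.0)])
print("fitted (n_bkg, n_line, index):", fit.x)   # n_line should come out close to zero here
\end{verbatim}
In a real analysis the same likelihood, profiled over the background parameters, would be scanned in line energy and translated into flux and $\langle\sigma v\rangle$ limits through the instrument response and the J-factor.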
16
9
1609.08091
1609
1609.08572_arXiv.txt
The motion of the solar system with respect to the cosmic rest frame modulates the monopole of the Epoch of Reionization 21-cm signal into a dipole. This dipole has a characteristic frequency dependence that is dominated by the frequency derivative of the monopole signal. We argue that although the signal is weaker by a factor of $\sim100$, there are significant benefits in measuring the dipole. Most importantly, the direction of the cosmic velocity vector is known exquisitely well from the cosmic microwave background and is not aligned with the galaxy velocity vector that modulates the foreground monopole. Moreover, an experiment designed to measure a dipole can rely on differencing patches of the sky rather than making an absolute signal measurement, which helps with some systematic effects.
The earliest direct image of the universe comes from the observations of the Cosmic Microwave Background (CMB), which arises when photons decouple from the cosmic plasma a few hundred thousand years after the big bang at a redshift of $z\sim 1100$. The universe then enters a period known as the ``dark ages'', where neutral hydrogen slowly cools and collapses into halos, but the first stars have not yet ignited. The first luminous objects form at redshifts of $z\sim 20-40$, but very little is actually known about this early period. With time, galaxies form and start filling the universe with photo-ionizing radiation which re-ionizes the hydrogen in the inter-galactic medium, in a process that is thought to have completed by a redshift of around $z\sim 6$. This period in the evolution of the universe is known as the epoch of reionization (EoR). It is thought that structure in the universe during this period is characterized by growing bubbles of ionized hydrogen surrounded by yet-to-be-ionized neutral hydrogen. The neutral hydrogen shines in the radio through the 21-cm hydrogen line. Measurements of the redshifted 21-cm line are thus thought to be the most promising way of constraining reionization \cite{2006PhR...433..181F,Morales2009:0910.3010v1,Pritchard2011:1109.6012v2}. They will teach us both about the astrophysics of this complex era in the evolution of the universe, and provide strong constraints on the value of the total optical depth to the surface of last scattering, which will help with measurements of many cosmologically relevant parameters, most importantly the neutrino mass \cite{Liu2015:1509.08463v2}. Up to now, most experiments in the field have focused on either measuring the fluctuations in the 21-cm line by measuring the 21-cm brightness temperature and relying on the foreground smoothness to isolate it \cite{Tingay2012:1212.1327v1,Parsons2009:0904.2334v2,Rottgering2006:astro-ph/0610596v2,Paciga2013:1301.5906v2}, or on attempting to measure the global signal, the monopole of the 21-cm radiation from the EoR \cite{Monsalve2016:1602.08065v1,Ellingson2012:1204.4816v3,Greenhill2012:1201.1700v1}. The latter measurement is tempting, since the signal is relatively strong and simple back-of-the-envelope calculations show that it should be easily achievable based on SNR considerations. However, the experimental challenges are daunting: the foregrounds are brighter than the signal by orders of magnitude and vary very strongly across the sky, which makes calibration of the instrument and beams to the required level of precision very difficult. In this note we make a very simple point: one could attempt to measure the dipole of the signal rather than the monopole. Although the signal is reduced by a factor of around 100, the systematic gains are very significant. The problem is in many ways analogous to the Cosmic Microwave Background -- measuring the CMB dipole is significantly easier than measuring the CMB monopole or the CMB temperature fluctuations. However, one should not take this analogy too far for two reasons. First, while the sky signal on large scales is dominated by the CMB at frequencies above $\sim$ 1 GHz, this is not true for the 21-cm EoR signal: the total dipole is going to be dominated by the foregrounds by orders of magnitude at the relevant frequencies. 
Second, while the dipole signal in the CMB is two orders of magnitude larger than the higher-order multipoles, the same is not true for the EoR signal, which has comparable or higher power at degree scales compared to the dipole. Nevertheless, as we will discuss in this paper, the dipole measurement still has several attractive features with regard to systematic effects.
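The frequency dependence alluded to above can be made explicit with a first-order Doppler-boosting argument (our own short derivation, assuming the Rayleigh-Jeans limit and the invariance of $I_\nu/\nu^3$; it is not quoted from the paper). For a monopole brightness temperature $T(\nu)$ and an observer moving with speed $\beta = v_d/c$,
\begin{equation}
T_{\rm obs}(\nu,\hat{n}) = (1+\beta\mu)\, T\!\left(\frac{\nu}{1+\beta\mu}\right) \simeq T(\nu) + \beta\mu\left[T(\nu) - \nu\frac{{\rm d}T}{{\rm d}\nu}\right],
\end{equation}
where $\mu$ is the cosine of the angle between the line of sight and the velocity. The dipole term is dominated by $\nu\,{\rm d}T/{\rm d}\nu$ wherever the monopole varies rapidly with frequency, which is why the suppression relative to the monopole is somewhat milder than the bare factor $\beta\sim10^{-3}$.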
Differencing has proven to be one of the most successful paradigms in experimental physics: differential measurements are easy, absolute measurements are hard. We apply this principle to the problem of measuring the EoR monopole. Due to our motion with respect to the cosmic rest frame, this signal is modulated in a dipole fashion. The amplitude of this dipole is suppressed, but by somewhat less than the $v_d/c$ factor, due to the non-trivial frequency structure of the signal. This suppression of the signal could be more than compensated by the considerably easier systematic control in the dipole measurement: \begin{itemize} \item The direction of the CMB dipole is known very well and, more importantly, the galactic foreground has a different intrinsic dipole, and the Doppler dipole of the foregrounds is also different: the motion of the solar system with respect to the CMB is not the same as its motion with respect to the galaxy. This can be used to estimate the residual foreground contamination. \item The standard differencing techniques well known in radio astronomy can be used to great advantage in this setup. This should help in dealing with radio frequency interference, amplifier $1/f$ noise and the Earth's atmosphere. However, the mean beam chromaticity will remain a significant issue. \item The signal derived in this way could be used to cross-check measurements derived from the monopole, since the information content is the same. In fact, one could imagine an experiment that would measure both at the same time. \item Since the signal is proportional to the derivative of the monopole with respect to the frequency, this technique could be potentially very efficient for reionization scenarios that happen rapidly. \end{itemize} We have made a back-of-the-envelope estimate of the required signal-to-noise and determined that the signal is in principle measurable in a reasonable amount of time with a reasonable experiment. We hope that this warrants more accurate forecasts, which would take into account the spatial and frequency variation of the foregrounds and work out an optimal map-making scheme. \appendix
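As a rough numerical illustration of the suppression factor discussed above (a toy calculation of ours, not taken from the paper), the sketch below evaluates the dipole amplitude $\beta\,[T(\nu)-\nu\,{\rm d}T/{\rm d}\nu]$ for an assumed tanh-like reionization history; the amplitude, transition frequency and width of the model are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Toy monopole: 21-cm emission signal that vanishes as reionization completes.
# T0, nu_r (transition frequency) and dnu (width) are illustrative assumptions.
T0 = 0.030             # K, assumed pre-reionization 21-cm brightness temperature
nu_r = 150.0           # MHz, assumed mid-point of reionization
dnu = 10.0             # MHz, assumed width of the transition
beta = 370.0 / 3.0e5   # solar-system velocity w.r.t. the cosmic rest frame, over c

nu = np.linspace(100.0, 200.0, 2001)                # MHz
T = 0.5 * T0 * (1.0 - np.tanh((nu - nu_r) / dnu))   # monopole T(nu)

dT_dnu = np.gradient(T, nu)                 # numerical derivative dT/dnu
dipole = beta * (T - nu * dT_dnu)           # first-order Doppler dipole amplitude

print("peak monopole: %.1f mK" % (1e3 * np.max(np.abs(T))))
print("peak dipole  : %.3f mK" % (1e3 * np.max(np.abs(dipole))))
print("suppression  : ~%.0f" % (np.max(np.abs(T)) / np.max(np.abs(dipole))))
\end{verbatim}
With these assumptions the peak dipole comes out near 0.3 mK against a 30 mK monopole, i.e. a suppression of order $10^{2}$, consistent with the factor of $\sim100$ quoted in the abstract.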
16
9
1609.08572
1609
1609.01179_arXiv.txt
{The magnification effect due to gravitational lensing enhances the chances of detecting moderate-redshift ($z \sim 1$) sources in very-high-energy (VHE; $E > 100$~GeV) $\gamma$-rays by ground-based Atmospheric Cherenkov Telescope facilities. It has been shown in previous work that this prospect is not hampered by potential $\gamma\gamma$ absorption effects by the intervening (lensing) galaxy, nor by any individual star within the intervening galaxy. In this paper, we expand this study to simulate the light-bending effect of a realistic ensemble of stars. We first demonstrate that, for realistic parameters of the galaxy's star field, it is extremely unlikely (probability $\lesssim 10^{-6}$) that the direct line of sight between the $\gamma$-ray source and the observer passes by any star in the field close enough to be subject to significant $\gamma\gamma$ absorption. Our simulations then focus on the rare cases where $\gamma\gamma$ absorption by (at least) one individual star might be non-negligible. We show that gravitational light bending will have the effect of avoiding the $\gamma\gamma$ absorption spheres around massive stars in the intervening galaxy. This confirms previous results by Barnacka et al. and reinforces arguments in favour of VHE $\gamma$-ray observations of lensed moderate-redshift blazars to extend the redshift range of objects detected in VHE $\gamma$-rays, and to probe the location of the $\gamma$-ray emission region in those blazars.}
16
9
1609.01179
1609
1609.01209_arXiv.txt
We present results from Johnson $UBV$, Kron-Cousins $RI$ and Washington $CT_1T_2$ photometry for seven van den Bergh-Hagen (vdBH) open clusters, namely vdBH\,1, 10, 31, 72, 87, 92, and 118. The high-quality, multi-band photometric data sets were used to trace the cluster stellar density radial profiles and to build colour-magnitude diagrams (CMDs) and colour-colour (CC) diagrams, from which we estimated their structural parameters and fundamental astrophysical properties. The clusters in our sample cover a wide age range, from $\sim$ 60 Myr up to 2.8 Gyr, are of relatively small size ($\sim$ 1 $-$ 6 pc) and are located at distances from the Sun that vary between 1.8 and 6.3 kpc. We also estimated lower limits for the cluster present-day masses as well as half-mass relaxation times ($t_r$). The resulting values, in combination with the structural parameters, suggest that the studied clusters are in advanced stages of their internal dynamical evolution (age/$t_r$ $\sim$ 20 $-$ 320), possibly in the phase typical of tidally filled clusters with mass segregation in their core regions. Compared to open clusters in the solar neighbourhood, the seven vdBH clusters are among the more massive ($\sim$ 80 $-$ 380$\msun$), more concentrated ($c$ $\sim$ 0.75$-$1.15) and more dynamically evolved ones.
\citet{vdbh75} performed a uniform survey over a $\sim$ 12$\degr$ strip of the southern Milky Way extending from {\it l} $\approx$ 250$\degr$ to {\it l} $\approx$ 360$\degr$. They employed the Curtis-Schmidt telescope of the Cerro Tololo Interamerican Observatory and a pair of blue and red filters. From that survey the authors recognised 262 star clusters, 63 of which had not been previously catalogued. For each identified object, they assessed the richness of stars on both plates as well as the likelihood of its being a genuine star cluster. To date, fewer than 25 per cent of the van den Bergh-Hagen (vdBH) objects have estimates of their fundamental properties (reddening, distance, age, etc). In general terms, according to the most updated version of the open cluster catalogue compiled by \citet[][version 3.5 as of January 2016]{detal02}, vdBH clusters are mostly of relatively small size, with diameters smaller than $\sim$ 5 pc, although a few have diameters twice this value. On the other hand, although $\sim$ 60 per cent of them are located inside a circle of 2 kpc in radius from the Sun, the remaining ones reach distances as large as $\sim$ 12 kpc. Indeed, nearly 15 per cent of the sample is located at distances larger than 5 kpc. As for their ages, the vdBH clusters span an interesting age regime, from a few Myr up to more than 3 Gyr. It therefore appears worthwhile to estimate fundamental parameters for those overlooked vdBH objects, particularly those located far away from the Sun, in order to improve our knowledge of the Galactic open cluster system beyond the solar neighbourhood. In this paper, we present a comprehensive photometric study of vdBH\,1, 10, 31, 72, 87, 92 and 118; the last four clusters were discovered by \citet{vdbh75}. As far as we are aware, previous photometric studies were performed for vdBH\,1 (=Haffner\,7), 10 (= Ruprecht\,35) and 31 (= Ruprecht\,60) \citep{moitinhoetal2006,vetal08,bb10,cetal13,giorgietal2015}. In Section 2 we describe the collection and reduction of the available photometric data and their thorough treatment in order to build extensive and reliable data sets. The cluster structural and fundamental parameters are derived from star counts and colour-magnitude and colour-colour diagrams as described in Section 3. The analysis of the different astrophysical parameters obtained is carried out in Section 4, where implications about the stage of their dynamical evolution are suggested. Finally, Section 5 summarizes the main conclusions of this work.
We present results for seven star clusters identified by \citet{vdbh75}, namely, vdBH\,1, 10, 31, 72, 87, 92, and 118, which have received little attention until now. The clusters were observed through the Johnson $UBV$, Kron-Cousins $RI$ and Washington $C$ filters; four of them (vdBH\,72, 87, 92 and 118) are photometrically studied for the first time. The multi-band photometric data sets were used to trace the cluster stellar density radial profiles and to build CMDs and CC diagrams, from which we estimated their structural parameters and fundamental astrophysical properties. Their radial profiles were built from star counts carried out throughout the observed fields using the final photometric catalogues. We derived the cluster radii from a careful placement of the background levels and fitted King and Plummer models to derive cluster core, half-mass and tidal radii. We then applied a subtraction procedure developed by \citet{pb12} to statistically clean the star cluster CMDs and CC diagrams from field star contamination in order to disentangle star cluster features from those belonging to their surrounding fields. The employed technique makes use of variable cells in order to reproduce the field CMD as closely as possible. The availability of three CC diagrams and six CMDs covering wavelengths from the blue up to the near-infrared allowed us to derive reliable ages, reddenings and distances for the studied clusters. We exploited this wealth of data in combination with a new age-metallicity diagnostic diagram for the Washington system and theoretical isochrones computed by \citet{betal12} to find that the clusters in our sample cover a wide age range, from $\sim$ 60 Myr up to 2.8 Gyr, are of relatively small size ($\sim$ 1 $-$ 6 pc) and are located at distances from the Sun that vary between 1.8 and 6.3 kpc. They are located between the Carina and Perseus spiral arms, with the exception of vdBH\,118, which is closer to the Galactic centre than the Carina arm. It also belongs to the sample of the farthest known clusters placed between the Carina and Crux spiral arms. We estimated lower limits for the cluster present-day masses as well as half-mass relaxation times. Their resulting values suggest that the studied clusters are in advanced stages of their internal dynamical evolution, i.e., their ages are many times the relaxation times ($\sim$ 20 $-$ 320). When combined with the obtained structural parameters, we found that the clusters are possibly in the phase typical of tidally filled clusters with mass segregation in their core regions. We compared the cluster masses, concentration parameters and age/$t_r$ ratios with those for 236 clusters located in the solar neighbourhood. The seven vdBH clusters are among the more massive ($\sim$ 80 $-$ 380$\msun$), more concentrated (higher $c$ values) and more dynamically evolved ones. There is also a broad correlation between the cluster masses and the dynamical state, in the sense that the more advanced the internal dynamical evolution, the lower the cluster mass. The apparent paucity of clusters with masses smaller than $\sim$ 200$\msun$ and ages $\sim$ 100 times their relaxation times could suggest the beginning of cluster dissolution, with some exceptions. Finally, we found that the mass functions (MFs) of vdBH\,1, 72 and 87 follow approximately the \citet{salpeter55} relationship, while vdBH\,10, 31 and 92 seem to show shallower MFs in the lower-mass regime.
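For reference, the half-mass relaxation time quoted above is commonly estimated with the standard Spitzer expression (reproduced here as a reader aid; the exact form adopted by the authors is not given in this excerpt),
\begin{equation}
 t_{r} \simeq \frac{0.138\, N^{1/2}\, r_{h}^{3/2}}{\bar{m}^{1/2}\, G^{1/2}\, \ln(0.4\,N)} \, ,
\end{equation}
where $N$ is the number of cluster stars, $r_h$ the half-mass radius and $\bar{m}$ the mean stellar mass, so that age/$t_r$ ratios of $\sim$ 20 $-$ 320 indeed indicate dynamically evolved systems.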
16
9
1609.01209
1609
1609.01515_arXiv.txt
The Galactic bulge is dominated by an old, metal-rich stellar population. The possible presence and the amount of a young (a few Gyr old) minor component is one of the major issues debated in the literature. Recently, the bulge stellar system Terzan 5 was found to harbor three sub-populations with iron content varying by more than one order of magnitude (from 0.2 up to 2 times the solar value), with chemical abundance patterns strikingly similar to those observed in bulge field stars. Here we report on the detection of two distinct main sequence turn-off points in Terzan 5, providing the ages of the two main stellar populations: 12 Gyr for the (dominant) sub-solar component and 4.5 Gyr for the component at super-solar metallicity. This discovery classifies Terzan 5 as a site in the Galactic bulge where multiple bursts of star formation occurred, thus suggesting a quite massive progenitor possibly resembling the giant clumps observed in star-forming galaxies at high redshifts. This connection opens a new route of investigation into the formation process and evolution of spheroids and their stellar content.
\label{intro} The picture of galaxy bulge formation is still highly debated in the literature \citep[for a review of the Milky Way bulge, see, e.g.,][]{rich13,origlia14}. Among the proposed scenarios, three main channels can be grossly distinguished: dissipative collapse \citep[e.g.,][]{ballero07, mcwilliam08}, with possibly an additional component formed with a time delay of a few Gyr \citep[e.g.][]{tsu12, grieco12}, dynamical secular evolution of a massive disk that buckles into a bar \citep[e.g.,][]{combes81, raha91, saha13}, and merging of substructures either of primordial galaxies embedded in a dark matter halo \citep[e.g.,][]{scannapieco03, hopkins10}, or massive clumps generated by early disk fragmentation \citep[e.g.][]{immeli04, carollo07, elmegreen08}. In the merging scenarios most of the early fragments rapidly dissolved/merged together to form the bulge. However, a few of them could have survived total disruption \citep[e.g.][]{immeli04, carollo07, elmegreen08} and it is very possible that such fossil relics are still observable somewhere in the host galaxy, grossly appearing as normal globular clusters (GCs). Because of their large original mass, these clumps should have been able to retain the iron-enriched ejecta and the stellar remnants of the supernova (SN) explosions, and they likely experienced more than one burst of star formation. As a consequence, we expect these fossil relics to host multi-\emph{iron} sub-populations and, possibly, also a large number of neutron stars. Clearly, finding stellar systems with these properties would provide crucial observational support to the ``merging'' scenario for bulge formation. Until recently, no empirical probes of such fossil clumps in galaxy bulges were available. The situation changed in 2009, when \citet{ferraro09} discovered two stellar components with very different \emph{iron} abundances in Terzan 5, a stellar system in the Milky Way bulge previously catalogued as a GC, with the only peculiarity of hosting the largest population of millisecond pulsars in the Galaxy \citep{ransom05}. The two components of Terzan 5 appear clearly distinct at the level of the red clump (RC) and red giant branch (RGB) in the combined NIR-optical color-magnitude diagram (CMD; \citealp{ferraro09, massari12}). Moreover, they display significantly different iron and $\alpha$-element abundances, the metal-poor population having [Fe/H]$=-0.25$ dex and [$\alpha/$Fe]$=+0.34$, the metal-rich one showing [Fe/H]$=+0.27$ dex and [$\alpha/$Fe]$=+0.03$ (\citealp{origlia11}; a minor component at [Fe/H]$=-0.8$ has also been detected, extending the internal metallicity range of Terzan 5 over more than 1 dex; see \citealp{origlia13, massari14}). Given these abundance patterns, the observed RC split could be explained either in terms of an age difference of a few Gyr, or in terms of a different helium content in two nearly coeval sub-populations \citep{dantona10, lee15}. The chemical abundance patterns measured in Terzan 5 are strikingly similar to those observed toward the Galactic bulge, while no other stellar system within the Milky Way outer disk and halo or in the Local Group shows analogous properties \citep{chiappini99, matteucci05, lemasle12}. This opens the intriguing possibility that Terzan 5 is a fossil relic of one of the structures that contributed to generate the Galactic bulge. The bulge is known to be dominated by an old ($>10$ Gyr) stellar population with solar-like metallicity \citep{clarkson08,valenti13,rich13}. 
However, growing indirect evidence \citep{bensby13, dekany15, nataf15} supports the existence of a minor, significantly younger component, whose age has not yet been precisely determined. Thus, measuring the absolute ages of the two major stellar components in Terzan 5 is of paramount importance also in the context of the bulge formation history. Here we present the accurate determination of the main sequence turn-off (MS-TO) region in Terzan 5, from which we determined the ages of its two main stellar populations. The paper is organized as follows. The data-sets used and the data reduction procedures are presented and discussed in Section 2. The analysis of the CMDs and the measurement of the ages and radial distributions of the two populations are presented in Section 3. Section 4 is devoted to discussion and conclusions.
\label{discussion} Both the multi-peak iron distribution \citep{ferraro09, origlia11, origlia13, massari14} and the clear separation of the two MS-TO/SGBs (this paper) observed in Terzan 5 suggest that the star formation process in the system was not continuous, but characterized by distinct bursts. Dating the two main sub-populations provides the time scale of the enrichment process. After an initial period of star formation, which occurred at the epoch of the original assembly of the system ($\sim 12$ Gyr ago) and generated the two sub-solar components from gas enriched by SNII, Terzan 5 experienced a long phase ($t \sim 7.5$ Gyr) of quiescence during which the gas ejected from both SNeII and SNeIa accumulated in the central region. Then, approximately 4.5 Gyr ago, the super-solar component formed. This requires the Terzan 5 ancestor to be quite massive (at least a few $10^8 M_\odot$; \citealt{baum08}). The amount of gas retained by the proto-Terzan 5 system was huge: on the basis of the observed red clump populations \citep{ferraro09,lanzoni10}, we estimate that the young component in Terzan 5 currently amounts to roughly $7.5\times 10^5 M_\odot$ (comparable in size to a 47 Tucanae-like globular cluster), corresponding to approximately $38\%$ of the total mass of the system. These results definitely put Terzan 5 outside the context of genuine GCs. Indeed, apart from the overall morphological appearance, Terzan 5 shares no other property with globulars, neither in terms of chemistry (GCs show inhomogeneities only in the light elements; \citealt{carretta09}), nor in the enrichment history and age spread of the sub-populations (the enrichment timescale in GCs is of a few $10^8$ yrs and their light-element sub-populations are thus almost coeval; \citealt{dercole08}), nor in terms of the mass of the progenitor (GCs did not retain the high-velocity SN ejecta and therefore do not need to have been as massive as Terzan 5 in the past; \citealt{baum08}). {\it If Terzan 5 is not a genuine GC, what is it then?} As shown in Figure \ref{alpha}, the metallicity distribution and chemical abundance patterns of Terzan 5 are strikingly similar to those observed in bulge stars \citep{gonzalez11,ness13,johnson14,rojas14,ryde16}: not only the iron abundance, but also the amount of $\alpha$-enhancement and its dependence on metallicity are fully consistent with those measured in the Galactic bulge. Note that setting such an abundance pattern requires quite special conditions. In fact, the large value of [Fe/H] ($\sim -0.2$) at which the $[\alpha/$Fe] vs [Fe/H] relation changes slope necessarily implies that an exceptionally large number of SNeII exploded over a quite short timescale (i.e. before the explosion of the bulk of SNeIa, which polluted the medium with iron, generating the knee in the relation). In turn, this requires a very high star formation efficiency. Indeed, the stellar populations in the Galactic halo/disk system \citep{chiappini99,matteucci05} and in the dwarf galaxies observed in the vicinity of the Milky Way \citep{lemasle12} show abundance patterns inconsistent with those plotted in the figure: they do not reach the same high iron content and the knee in the $[\alpha/$Fe] vs [Fe/H] relation occurs at much lower metallicities ([Fe/H]$\sim -1$ and [Fe/H]$\sim -1.5$ in the disk/halo and in dwarfs, respectively), indicating a significantly less efficient SNII enrichment process. 
Hence, within the Local Group, the abundance patterns shown in Figure \ref{alpha} can be considered as an unambiguous signature of stellar populations toward the Galactic bulge. Because of the tight chemical link between Terzan 5 and the bulge, it seems reasonable to ask whether there is any connection also in terms of stellar ages, in particular, between the super-solar (and young) population of Terzan 5 and the super-solar component of the bulge. Deep photometric studies \citep{clarkson08,valenti13} of a few bulge fields properly decontaminated from the disk population indicate that the vast majority of the bulge is significantly old (up to $80\%$ is older than 10 Gyr; \citealt{clarkson08, nataf15}). However, a multi-peak metallicity distribution very similar to that of Terzan 5 is observed in samples of giant, red clump, and lensed dwarf stars in the bulge (\citealp{ness13, rojas14, bensby13}, respectively; but see \citealp{johnson14}). Moreover, in close analogy with what is found for Terzan 5, two prominent epochs of star formation are estimated for the lensed dwarfs (more than 10 Gyr ago for stars at [Fe/H]$<0$, and $\sim 3$ Gyr ago for the youngest super-solar objects; \citealp{bensby13,nataf15}). The possible presence of a younger population was also recently suggested in the photometric and spectroscopic study of the bulge within the GAIA-ESO Survey. \citet{rojas14} detected a bimodal magnitude distribution for the two RC samples with different metallicities observed in their surveyed field closest to the Galactic plane (Baade's Window), where geometric effects due to the X-shaped morphology of the bulge should be negligible, thus concluding that this might be due to an intrinsic age difference of the order of 5 Gyr. Interestingly enough, the difference in magnitude ($\Delta K \sim 0.3$), the metallicities (super-solar in the bright RC and sub-solar in the fainter one) and the proposed difference in age ($\Delta t \sim 5$ Gyr) of their two RC populations turn out to be in good agreement with those measured in the two main sub-populations of Terzan 5. The collected evidence suggests that the proto-Terzan 5 system was a massive structure formed at the epoch of the Milky Way bulge formation. Indeed, $10^8-10^9 M_\odot$ clumps are observed in early disks at high redshifts, thus confirming that such massive structures existed at the remote epoch of the Galaxy's assembly ($\sim 12$ Gyr ago), when Terzan 5 also formed. These giant clumps are thought to form from the fragmentation of highly unstable disks, to experience intense star formation episodes over quite short timescales (less than 1 Gyr)\footnote{The possibility that such clumps survive the energetic feedback from young stars has been critically discussed in the literature \citep[see, e.g.,][]{hopkins12}. However, recent observations suggest that these structures can indeed survive for several $10^8$ yr (a lifetime of 500 Myr has been estimated by \citet{zanella15}), thus supporting the scenario in which these clumps can contribute to bulge formation.} and then to coalesce and give rise to galaxy bulges (e.g., \citealp{elmegreen08}). 
Notably, the metallicity and the $\alpha$-element enhancement of the high-z clumps ([Fe/H]$= -0.6$ and $[\alpha/$Fe$]\sim 0.25-0.7$; \citealp{pettini02,halliday08}) are in agreement with the values measured in the two oldest stellar populations of Terzan 5 (and in the Galactic bulge), thus suggesting that the major star formation events occurred at early epochs and over a quite short time-scale, before the bulk of SNeIa exploded. Hence the progenitor of Terzan 5 might have been one of these clumps, which survived complete destruction. Most of the pristine gas was rapidly converted into stars within the giant clumps that populated the forming bulge. The coalescence of most of these systems in the very early epoch contributed to form the ``old bulge'' that we currently observe.\footnote{The proposed scenario might also naturally account for the $\gamma$-ray emission recently detected in the central region of our Galaxy \citep{brandt15}. This emission has been proposed to originate from Millisecond Pulsars dispersed in the field by the dissolution of globular clusters in the early epoch of the Galaxy formation. Further support for this hypothesis is obtained by assuming that the disrupting structures were metal-rich and massive clumps similar to the proposed progenitor of Terzan 5. In fact, a high metallicity (testifying to the explosion of a large number of SNeII; see Figure 9) combined with a large total mass (allowing the retention of both SNeII ejecta and neutron stars), and with a high collision rate (as measured by \citet{lanzoni10} for Terzan 5), makes the proto-bulge massive clumps the ideal environment for forming Millisecond Pulsars. Indeed, Terzan 5 is the stellar system hosting the largest known population of such objects in the Galaxy \citep{ransom05}. The large number of Millisecond Pulsars generated within the clumps and then dispersed in the bulge may therefore account for the $\gamma$-ray emission detected in the Galactic center. Note that, in contrast with ordinary pulsars, the emission of the reaccelerated pulsars is expected \citep{bhatta91} to persist for a long time ($10^{10}$ yr).} However, a few of them survived the destruction. A possibility is that the proto-Terzan 5 system was originally less massive \citep{behrendt16} and more compact than other clumps and, because of its lower mass, it did not rapidly sink and merge in the central region, but was scattered out to large radii by dynamical interactions with more massive sub-structures (indeed such events are observed to occur in simulations; \citealt{bournaud16}). Thus the proto-Terzan 5 clump might have evolved in an environment significantly less violent than the central forming bulge, and, as a giant ``cocoon'', it may have retained a large amount of gas ejected by SNe for several Gyr, before producing a new generation of stars (such bursty star formation histories, with long periods of quiescence, are observed in the Universe; e.g., \citealp{van15}). For instance, the most recent burst in Terzan 5 could have been promoted by a major interaction with bulge sub-structures, possibly also driving the mass-loss phenomenon that significantly reduced its mass (from the initial $\sim 10^8 M_\odot$, to the current $10^6 M_\odot$; \citealt{lanzoni10}). An alternative possibility is that Terzan 5 is the former nuclear cluster of a massive satellite that contributed to generate the Milky Way bulge via repeated mergers at high redshift \citep[e.g.][]{hopkins10}. 
In order to account for the observed chemical abundance patterns (Figure \ref{alpha}), the satellite hosting the Terzan 5 progenitor should have been much more metal-rich (and massive) than the dwarf galaxies currently populating the Local Group. Although such objects are no longer present in the Milky Way's vicinity \citep{chiappini99,matteucci05,lemasle12}, they possibly existed at the epoch of the Galaxy assembly\footnote{On the other hand, a scenario where the structure hosting the proto-Terzan 5 was accreted in a single event well after the bulge formation seems to be quite implausible, since it would require that such a structure independently experienced a chemical evolutionary history very similar to that of the bulge (see Figure \ref{alpha}).}. Thus, the proto-Terzan 5 host might have been one of several satellites that merged together to originate the bulge \citep{hopkins10}. A speculation on its mass can be derived from the mass-metallicity relation of $z=3-4$ galaxies \citep{mannucci09}, by assuming that the satellite was somehow ($<0.5$ dex) more metal-poor than its nucleus (which seems to be the case at least for the most massive structures; see Figure 4 in \citealt{paudel11}). This yields a mass of a few $10^9 M_\odot$ for the proto-Terzan 5 host galaxy. Thus similar primordial massive structures (see \citealt{hopkins10}) could have contributed to the mass budget of the Galactic bulge ($2 \times 10^{10} M_\odot$; \citealt{valenti16}). The bulge formation history in general, its broad metallicity distribution and its kinematics cannot be explained within a simple scenario (either merging, dissipative collapse or some disk instability/secular evolution), but likely all these processes and perhaps others have played some role. Complexity is a matter of fact in all the galactic components, especially in the central region of the Galaxy. Any ad hoc model that neglects this complexity in order to support one specific scenario is necessarily incomplete. While the presence of bar(s) and disks with complex chemistry and kinematics in the central region of our Galaxy has been well established for quite a long time, observational evidence for the existence of fossil remnants of massive clumps in our bulge, such as those observed at high-z, is still lacking. Terzan 5 could be the first such evidence, providing a direct link between the distant and the local Universe and also calling for models able to account for the survival of similar structures and for the existence of a younger (minor) stellar population in galaxy bulges. From an observational point of view, it is urgent to check whether other stellar systems similar to Terzan 5 still orbit the Milky Way bulge and to constrain the amount of intermediate-age stars therein. Finally, it is also worth mentioning that the presence of massive clumps that merged/dissolved to form the bulge, or survived disruption and evolved as independent systems, does not necessarily exclude the formation (and possible secular evolution) of disks, bars and other sub-structures such as those observed toward the Galactic bulge.
16
9
1609.01515
1609
1609.05931_arXiv.txt
It is unclear how frequently life and intelligence arise on planets. I consider a Bayesian prior for the probability $\PETI$ that intelligence evolves at a suitable site, with weight distributed evenly over $\ln(1 - \ln \PETI)$. This log log prior can handle a very wide range of $\PETI$ values, from $1$ to $10^{-10^{122}}$, while remaining responsive to evidence about extraterrestrial societies. It is motivated by our uncertainty in the number of conditions that must be fulfilled for intelligence to arise, and it is related to considerations of information, entropy, and state space dimensionality. After setting a lower limit to $\PETI$ from the number of possible genome sequences, I calculate a Bayesian confidence of $18\%$ that aliens exist within the observable Universe. With different assumptions about the minimum $\PETI$ and the number of times intelligence can appear on a planet, this value falls between $1.4\%$ and $47\%$. Overall, the prior leans towards our being isolated from extraterrestrial intelligences, but indicates that we should not be confident of this conclusion. I discuss the implications of the prior for the Search for Extraterrestrial Intelligence, concluding that searches for interstellar probes from nearby societies seem relatively effective. I also discuss the possibility of very small probabilities allowed by the prior for the origin of life and the Fermi Paradox, and note that similar priors might be constructed for interesting complex phenomena in general.
\label{sec:Intro} \emph{Of course} we are not alone. We now know the Earth is but one of billions of planets in the Galaxy, and the Milky Way is but one of billions of galaxies \citep{Johnson10,Cassan12,Petigura13,Zackrisson16}. Intelligent life arose on Earth through natural processes. Since the laws of physics and astrophysical environments of galaxies are basically uniform throughout the observable Universe, there's no reason why analogous processes could not happen on other planets. Self-organization is a very general phenomenon capable of generating the complexity required by life \citep{Kauffman95}, and life appeared early in Earth's history, suggesting its ubiquity \citep{Lineweaver02,Ward04}. Intelligence may also be common: our biosphere is filled with examples of convergent evolution \citep{ConwayMorris03}, and several clades of animals demonstrate cognition and even tool use \citep[e.g.,][]{Marino02,Emery06,Hochner06}. Even if the odds of it happening on a particular planet are one in a trillion, billions of other societies have evolved over the Universe's history \citep{Frank16}. \emph{Of course} we are alone. With a benign astronomical environment and rare geological processes, the Earth is a far better long-term home for life than a typical terrestrial planet \citep{Gonzalez01,ConwayMorris03,Ward04}. Even simple proteins or self-replicating nucleic acids are enormously complex; the odds of a planet generating functional molecules that assemble themselves into a working cell may be beyond astronomically tiny \citep{Yockey00,ConwayMorris03}. Even if life did appear on another planet, human-like intelligence is a very specific adaptation to the very specific pressures that the ancestors of \emph{Homo sapiens} experienced. So many adaptations occurred along the way that it is unlikely that this sequence of influences would recur if things were slightly different \citep{Simpson64,Gould89,Mayr01}. And in fact, most organisms get along fine without intelligence; prokaryotes form the great majority of living things \citep[see][]{Whitman98}. There's no good reason to believe that our intelligence is at all common if our own biosphere is anything to go by \citep{Simpson64,Mayr01,Lineweaver09}. These are, in broad strokes, \emph{a priori} arguments for and against there being extraterrestrial intelligence (ETI) in the observable Universe.\footnote{I am purposefully vague about what intelligence is, but I basically mean an organism that is biologically capable of building technology that can send signals across interstellar distances. $\PETI$ is proportional to $f_l f_i$ in Drake's equation \citep[described in][among others]{Sagan63,Tarter01}, but also depends on the number of ``birthsites'' per planet. A species that actually develops this technology is called a technological society in this work. Throughout this paper, I largely ignore cultural evolution (the $f_c$ factor of the Drake equation), but in principle it could be the limiting factor in whether SETI will find anything \citep{Ashkenazi95,Davies10}. Even then, one could use a log log prior for $f_c$ since there are a finite number of distinct societies according to the Bekenstein Bound.} It is clear that there is an astronomical number of rocky, terrestrial planets in our past light cone where life and intelligence could develop, roughly $\sim 10^{21}$ \citep{Zackrisson16}. 
The number of planets is known to around an order of magnitude, but it is unfortunately useless unless we know the odds that intelligence actually does arise on a planet --- for all we know, it could be $10^{-100}$ or smaller, in which case we are effectively alone. On the other hand, there is no well-motivated estimate for that probability. Even granting that \emph{Homo sapiens} is unlikely to recur, there could be many ways for intelligence to arise \citep[as in][]{Gould87,Cirkovic14}, and a planet has millions and millions of chances to find even one. Our uniqueness in Earth's biosphere might indicate that the probability of intelligence evolving is much less than 1 \citep[e.g.,][]{Mayr01,Lineweaver09}. But our uniqueness on Earth, by itself, is about equally compatible with the probability being $0.01$, $10^{-10}$, $10^{-20}$, $10^{-100}$, and $10^{-10^{9}}$, values which would have very different implications for how populated the Universe is. Ideally, the matter could be settled empirically, which is the aim of the Search for Extraterrestrial Intelligence (SETI; \citealt{Cocconi59,Tarter01}). SETI has sought evidence for extraterrestrials through many programs and an increasing number of methods, from the traditional surveys for radio broadcasts \citep[e.g.,][]{Tarter85,Blair92,Horowitz93,Anderson02,Gray02,Siemion10}, to searches for laser light \citep{Shvartsman93,Reines02,Howard04,Hanna09,Borra12}, high energy radiation (\citealt{Harris02}; see also \citealt{Learned94,Corbet97,Lacki15}), extraterrestrial technology in the Solar System \citep{Freitas83,Steel95}, artificial ``megastructures'' the sizes of planets \citep{Arnold05,Wright16,Boyajian16} or star systems \citep{Slysh85,Timofeev00,Jugaku04,Carrigan09,Villarroel16}, and the engineering of entire galaxies \citep{Kardashev64,Annis99,Wright14-Results,Griffith15,Zackrisson15,Lacki16}. But so far, no alien societies have been found, and there is no consensus about what that means \citep{Brin83,Cirkovic09-Fermi}. Do extraterrestrials exist but remain too quiet to be observable yet \citep[e.g.,][]{Freitas85,Scheffer94,HaqqMisra09}, or would they rapidly grow until they become obvious even across cosmic distances \citep[as in][]{Hart75,Tipler80,Wright14-SF}? \subsection{The Anthropic and Copernican Principles} \label{sec:Anthropic-Copernican} In the absence of hard evidence, the debate has occasionally turned to philosophical arguments. It is indisputable that life with human-like intelligence exists in the form of humanity. As with the SETI null results, the interpretation of even this trivial positive result is disputed. The debate frequently centers around two principles: the Anthropic Principle and the Copernican Principle. The Anthropic Principle essentially says that our existence or situation is somehow inevitable, regardless of how special or improbable we are \citep{Carter74}. The most commonly invoked version is the Weak Anthropic Principle, which applies if the Universe is very large. The Weak Anthropic Principle can be formulated in terms of observations, as a statement about inference: we cannot deduce the probability of our evolution just from our existence. Our situation could be very special (but not unique) and still be consistent with observation. In statistical terms, as long as the probability of intelligent life evolving is nonzero, the likelihood of it existing in an infinite Universe is 1. 
The inferential Weak Anthropic Principle is also interpreted as a statement about selection bias \citep{Carter74,Carter83}. The Weak Anthropic Principle can also be stated in terms of theoretical predictions, as a statement about causality: in a sufficiently big Universe, the appearance of humanity somewhere is to be expected, even if the probability of that happening in a particular location is very low but non-zero. The conditions necessary for our evolution may just be one of the special, rare events that occasionally happen in an infinite Universe. There are stronger versions of the Anthropic Principle, which apply not just to the contingent circumstances within the Universe, but to the fundamental laws of physics themselves \citep{Carter74,Barrow83}. The most extreme formulations argue that conscious observers or humanity play some crucial role in the functioning of the Universe. These versions imply that the Universe is actually compelled to produce us, and it would be logically impossible for the Universe to exist without us \citep{Barrow83}. This is in contrast with the causal Weak Anthropic Principle, in which the probability of our not existing in an infinite Universe has measure $0$; the impossibility in the Weak case is merely statistical rather than logical. As such, stronger Anthropic Principles say that humanity is truly special in terms of function or role, not just in terms of being rare, and they are far more controversial. In tension with the Anthropic arguments is the Copernican Principle. The fundamental argument of the Copernican Principle is that we are not in a special location of the Universe, as demonstrated by centuries of astronomical observation \citep[e.g.,][]{Sagan94}. More generally, we should assume that our circumstances are not special (as in its controversial application by \citealt{Gott93}). As far as we can observe, the Earth, Solar System, and Galaxy are fairly typical in astrophysical terms; other bodies like them should be common throughout the Universe. After all, they are the result of natural processes that could occur anywhere. The Copernican Principle is related to the Principle of Mediocrity: in the absence of information, we are more likely to find ourselves in a typical situation than a rare one \citep[e.g.,][]{Brin83,Vilenkin95}. Even if the evolution of intelligent life is extremely rare, we can still hold to a weaker argument that I will call the Weak Copernican Principle. The Weak Copernican Principle states that our evolution is the result of physical processes that have a non-zero probability of occurring independently elsewhere in the Universe. If one accepts a naturalistic view of evolution, this may seem trivial, but it is not. It is logically possible for a class of events to have probability measure $0$: for example, one could flip a fair coin infinitely many times and get only heads. The Weak Copernican Principle is also technically distinct from the causal Weak Anthropic Principle, even if they both imply our existence is expected in a large enough Universe. The Weak Copernican Principle implies the evolution of intelligence at a suitably distant location is independent of our evolution. In contrast, one could imagine that the evolution of intelligence on one planet somehow makes it impossible for it to evolve anywhere else in the Universe. For example, the Universe might be a computer simulation which is designed to randomly place life on one and only one planet. 
This could still be consistent with the Weak Anthropic Principle, as long as the probability of life appearing the first time is large enough. On the other hand, the Weak Copernican Principle doesn't say that our evolution is inevitable; it still applies if the chance that life evolves around a star is $10^{-100}$ and there are only $10$ stars in the Universe. In some sense, it is clear that both the Anthropic and Copernican arguments are partly true. The Weak Anthropic Principle is true in that most of the Universe's volume is not filled with intelligent life forms, even if every planet is habitable. For that matter, the other planets in our Solar System and the Sun are unfit for human habitation; this was not clear a few centuries ago \citep{Crowe99}. The Copernican Principle is true in that there are planets besides the Earth, and solar systems besides our own. Just a few decades ago, it was an open question if exoplanets existed at all or if the Solar System was the result of an improbable stellar event \citep{Dick98}, and the existence of other planets was also unclear a few centuries ago. Now the problem is to figure out how to extrapolate these principles beyond the evolution of stars and planets to the evolution of life and intelligence. \subsection{The question of priors} \label{sec:Priors} The philosophical debate about the existence of aliens can be understood as a debate about priors. In Bayesian probability theory, a prior is a subjective judgment about how much one believes in a hypothesis before an observational test is done. Given a continuously varying parameter $\alpha$, the prior $dP_{\rm prior}/d\alpha$ takes the form of a probability distribution function (PDF) over the allowed values of $\alpha$ \citep{Trotta08}. When new evidence arrives, Bayes' theorem describes how the prior can be transformed into a posterior describing subjective levels of belief after considering evidence from an observation \citep{Trotta08}: \begin{multline} \label{eqn:Bayes} P_{\rm posterior} ({\rm Hypothesis} | {\rm Observation}) = \\ {\cal L} ({\rm Hypothesis} | {\rm Observation}) \frac{P_{\rm prior} ({\rm Hypothesis})}{P ({\rm Observation})}. \end{multline} Bayes' theorem requires the likelihood ${\cal L} ({\rm Hypothesis} | {\rm Observation}) = P({\rm Observation} | {\rm Hypothesis})$, which is the probability that one would make a given observation if the hypothesis is true. The likelihood can frequently be estimated theoretically for a well-characterized model and a well-understood experiment. In addition, Bayes' theorem requires an evidence factor $P({\rm Observation})$, which is a normalizing factor. It basically is the probability that one would make an observation according to a prior, including the cases where the hypothesis is true and where the hypothesis is false. For the continuous parameter $\alpha$, Bayes' theorem is phrased as \begin{equation} \label{eqn:BayesPDF} \frac{dP_{\rm posterior}}{d\alpha} = \frac{{\cal L} (\alpha | {\rm Observation}) \displaystyle \frac{dP_{\rm prior}}{d\alpha}}{\displaystyle \int {\cal L} (\alpha | {\rm Observation}) \frac{dP_{\rm prior}}{d\alpha} d\alpha}. \end{equation} Although the choice of prior is subjective, the Principle of Mediocrity is a general guiding principle. It says that, in the absence of evidence, we should assume that no particular value is special, and therefore we should favor no value over another \citep{Trotta08}. 
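As a concrete illustration of equation~(\ref{eqn:BayesPDF}), here is a minimal numerical sketch (ours, not code from the paper) that applies Bayes' theorem on a grid for a generic continuous parameter; the flat prior and the toy binomial-style likelihood are illustrative assumptions.
\begin{verbatim}
import numpy as np

def posterior_on_grid(alpha, prior, likelihood):
    """Posterior PDF on a grid: likelihood * prior, divided by the evidence integral."""
    unnorm = likelihood * prior
    evidence = np.trapz(unnorm, alpha)   # denominator of the PDF form of Bayes' theorem
    return unnorm / evidence

# Toy example: flat prior on alpha in [0, 1]; likelihood for 3 "successes" in 10 trials.
alpha = np.linspace(0.0, 1.0, 10001)
prior = np.ones_like(alpha)                       # flat, i.e. no value favored a priori
likelihood = alpha**3 * (1.0 - alpha)**7          # up to a constant factor, which cancels

posterior = posterior_on_grid(alpha, prior, likelihood)
print("posterior mean:", np.trapz(alpha * posterior, alpha))   # ~0.33 for this toy case
\end{verbatim}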
Otherwise, if all of the prior weight is concentrated into a few hypotheses, we effectively assume whichever hypothesis we wish to prove. Then even if evidence strongly points towards an alternate hypothesis, we essentially ignore it and cling to the old theory (for example, the ``Presumptuous Philosopher'' thought experiment recounted in \citealt{Bostrom03}). For a continuous parameter, the prior should not be too strongly weighted towards one value, which is fulfilled if it is flat. Note that a flat prior for a parameter $\alpha$ does not remain flat if the variable $\alpha$ is transformed, such as if we then consider $\ln \alpha$. If $\alpha$ might have values that vary over orders of magnitude, a logarithmic prior that is uniform in $\ln \alpha$ (flat log prior) seems like a reasonable choice, since it has no scale \citep[e.g.,][]{Trotta08,Spiegel12,Tegmark14}. Different conclusions are reached if different facts of our evolution are emphasized as representative. The timescales for our evolution are a common source of speculative reasoning. The idea is that a habitable planet has some unchanging chance of producing intelligent life in a unit of time, $\Gamma_{\rm ETI}$. This is appropriate if the appearance of intelligence depends on a process that is independent of history, like an evolutionary process generating certain key traits through a random walk \citep{Carter83}. Life appeared quite early in our planet's history, which would not be typical if life arose through such random processes ($\Gamma_{\rm life} \ll 10^{-10}\ \yr^{-1}$). \citet{Lineweaver02} interprets this observation as evidence that life arises quickly on planets and is common \citep[as in][]{Ward04}, but others argue that intelligence can only arise if life appears early and this atypicality could be an anthropic bias \citep{Hanson98}. The choice of a prior on $\Gamma_{\rm life}$ also affects whether meaningful constraints are then set on $\Gamma_{\rm life}$ \citep{Spiegel12}. \citet{Behroozi15} applies an analogous argument on cosmological scales, arguing for a large $\Gamma_{\rm ETI}$ because the Earth formed before most of the Universe's virialized gas had a chance to collapse into planets. On the other hand, \citet{Carter83} noted that the timescale for humanity's evolution is close to the habitable lifespan of the Earth. He essentially interprets this observation according to a flat log prior in the evolutionary timescale. Of all the many orders of magnitude this timescale could have been, it is unlikely to have matched the Earth's lifespan so closely, so he interprets the coincidence as the result of anthropic selection; the expected timescale is much longer, and intelligent life is very rare. Furthermore, using a simple model of evolution, he argues that the timing is related to the number of unlikely steps that occurred along the way of our evolution (\citealt{Carter83}; see also \citealt{Hanson98,Carter08,Davies10}). But this argument too has been disputed; \citet{Cirkovic09-Reset} argues that astrophysical extinction events like gamma ray bursts can slow down the actual time it takes for intelligent life to evolve even if the unimpeded timescale is fairly short. It's also possible that a critical evolutionary step is directly tied to the Sun's properties \citep{Chyba05}. 
\citet{Livio99} suggests the critical step is the development of an ozone layer, which is related to the photodissociation of atmospheric water vapor into oxygen; this process is only efficient for blue stars no more long-lived than the Sun. It's also possible that the evolution timescale is not the relevant factor, because the evolution of life is constrained by earlier arbitrary events. For example, it is unlikely that a life form will greatly change the genetic code mapping amino acids to DNA nucleotide sequences; the cost is too great as it risks turning all of the genes into gibberish \citep{Crick68}. But this code was set very early in the development of life. If the appearance of intelligence depended on having a particular genetic code \citep[compare with][]{ConwayMorris03}, then whether it evolves on a particular planet could have little to do with the time available. By itself, this wouldn't explain the coincidence between the Sun's lifetime and the time for humanity to evolve, but one could imagine that events very early in life's evolution launch it on a nearly fixed course that predetermines whether and how long it takes until intelligence appears. If there's a small chance that this trajectory leads to intelligence appearing in 10 Gyr, a much smaller chance that intelligence appears in 1 Gyr, a much smaller chance that it appears in 100 Myr, and so on, most intelligent life would appear near the end of its planet's lifespan, without depending on the presence of discrete evolutionary barriers along the way. In a recent book, \citet{Tegmark14} presented a relatively simple argument that suggests that we are alone. The probability $\PETI$ that intelligent life arises on a given planet is unknown, even at an order of magnitude level, so we can adopt a prior that is uniform in $\log_{10} \PETI$. If $\log_{10} \PETI$ is between $-21$ and $0$, then we are not alone. But we have no reason to set $-21$ as a lower limit; given our ignorance, it could easily extend to $-100$ or further. Because of the huge range in $\log_{10} \PETI$ allowed by our ignorance, relatively little weight is left to be spread over the range of $-21 < \log_{10} \PETI < 0$, so this reasonable prior indicates that we are likely alone in the observable Universe \citep{Tegmark14}.\footnote{This telling is slightly altered from its presentation in \citet{Tegmark14}. The first difference is that the book focuses on a flat log prior in the distance to the nearest alien society, although a flat log prior in $\PETI$ is given as the motivation. The fundamental quantity is the probability that intelligence evolves on a world, and the number of worlds tracks comoving volume. The logarithm of the comoving volume is not proportional to the logarithm of the comoving distance if space is slightly curved. Second, \citet{Tegmark14} applies the Fermi Paradox \citep[described in][]{Hart75,Tipler80} to rule out $-10 \la \log_{10} \PETI < 0$, closing the window for which aliens could exist in the observable Universe. But systematic uncertainties can blunt null results, and the prior weight in this window could be so low for the log prior that applying the Fermi Paradox has insignificant value.} But this argument has a problem --- the lower bound is left undefined, which can lead to strange results. Fundamental physics implies that there are at most $\sim e^{3 \times 10^{122}}$ configurations for the observable Universe \citep{Egan10}, so we might take $\log_{10} \PETI = -10^{122}$ as a lower bound. 
Unfortunately, this prior then makes it essentially impossible to convince its holder that aliens exist, since the prior probability for their presence in the observable Universe is $\sim 10^{-122}$. Besides requiring an extremely high level of statistical confidence before concluding a positive result is correct, it is essentially impossible to rule out the possibility of systematic errors to that degree. Even for a well-characterized experiment, you could always decide that the evidence is fraudulent, or that you are hallucinating. While such possibilities are unlikely, can you really be sure that they are less likely than $1$ part in $10^{122}$? \subsection{Introducing the log log prior} \label{sec:IntroLogLog} A fundamental disagreement in estimates of $\PETI$ is the number of conditions that are required for intelligent life to evolve. Suppose there are $N$ conditions that must be fulfilled for aliens to appear, and for simplicity, each is independent of the others and the probability of each holding is $1/2$. Then $\PETI = 2^{-N}$: so if $N = 1$, $\PETI = 1/2$; if $N = 10$, $\PETI = 1/1,024$; if $N = 100$, $\PETI \approx 10^{-30}$ and so on. If $N$ is uncertain at the order-of-magnitude level, then the order of magnitude of $\PETI$ is itself uncertain at the order-of-magnitude level. This accounts for the vast disagreements about $\PETI$: an optimist who believes that only a few conditions are relevant can end up thinking that $\PETI \approx 1$, while a pessimist who believes that thousands of conditions need to be fulfilled can find combinatorially small estimates of $\PETI \ll 10^{-1,000}$. The notion of an uninformative prior thus suggests that we should use a prior for the number of conditions that is constant in $\log N$ \citep[e.g.,][]{Trotta08}. This translates to a prior that is constant in $\log |\log \PETI|$. The advantage of this prior is that it can handle scenarios where $N$ is allowed to range up to $\sim 3 \times 10^{122}$, the entropy of the Universe (with $\PETI \approx e^{-3 \times 10^{122}}$), while remaining responsive to any future evidence that aliens exist. \subsection{Outline and conventions} The loose motivation for the log log prior is developed further in more quantitative terms in Section~\ref{sec:Basis}. The concepts of entropy, information, and state space dimensionality play key roles. I discuss some problems that arise when trying to formulate a log log prior and apply it. I also provide a simple model of a SETI experiment to demonstrate the prior's response. The Bayesian credibility that ETIs exist in our past light cone is calculated in Section~\ref{sec:Results}. I use various estimates of the entropy of biological systems and their environments to establish a lower limit to $\PETI$ for a planet. I also describe what happens if we consider smaller birthsites, to allow for the possibility that intelligence evolves off of planets or can evolve many times on a planet. In Section~\ref{sec:SETIReach}, I evaluate SETI surveys according to how much of the prior's weight they might constrain. Then I discuss some additional problems and implications of the log log prior in Section~\ref{sec:Discussion}: (1) In a small Universe, one can construct a joint prior on the Universe's size and $\PETI$, complicating the weighting. (2) The log log prior suggests that the diversity of intelligent species is beyond astronomically vast. 
(3) The small probabilities considered for $\PETI$ raise the issue if there's a similarly small probability that intelligent life is starfaring, which would neutralize the Fermi Paradox. (4) A log log prior might be useful for estimating credibility in the rates of any complex phenomenon, including life itself. I conclude the paper with a summary (Section~\ref{sec:Summary}). I use the values of the fundamental constants and cosmological parameters listed in Table~\ref{table:Constants} throughout this paper. \begin{deluxetable}{lll} \tablecolumns{3} \tablecaption{Constants used in this paper\label{table:Constants}} \tablehead{\colhead{Name} & \colhead{Value} & \colhead{Description}} \startdata $c$ & $2.998 \times 10^{10}\ \cm\ \sec^{-1}$ & Speed of light\\ $h$ & $6.626 \times 10^{-27}\ \erg\ \sec$ & Planck's constant\\ $G$ & \parbox[c]{3cm}{$6.674 \times 10^{-8}$\\ \hphantom{1.5cm}$\times \dyn\ \cm^2\ \gram^{-2}$} & Newton's constant\\ $k_B$ & $1.381 \times 10^{-16}\ \erg\ \Kelv^{-1}$ & Boltzmann's constant\\ $N_A$ & $6.022 \times 10^{23}$ & Avogadro's number\\ $\amu$ & $1.661 \times 10^{-24}\ \gram$ & Atomic mass unit\\ $\Msun$ & $1.989 \times 10^{33}\ \gram$ & Solar mass\\ $\Mpc$ & $3.0857 \times 10^{24}\ \cm$ & Megaparsec\\ $H_0$ & $67.74\ \km\ \sec^{-1}\ \Mpc^{-1}$ & Hubble's constant\\ $\Omega_b$ & $0.04866$ & Cosmic baryon density\\ $\Omega_r$ & $5.385 \times 10^{-5}$ & Cosmic photon density\\ $\Omega_m$ & $0.3089$ & Cosmic matter density\\ $\Omega_{\Lambda}$ & $0.6911$ & Dark energy density \enddata \tablecomments{The values of the fundamental constants and units are taken from \citet{Olive15}. I use the $H_0$, $\Omega_b$, and $\Omega_m$ from \citet{Ade15-Params} (the ``TT, TE, EE+lowP+lensing+ext'' column of Table 4). The value of $\Omega_r$ is calculated under the assumption that only the Cosmic Microwave Background contributes to the cosmic radiation density, and that it has a temperature $2.725\ \Kelv$ \citep{Fixsen09}. I assume $\Lambda$CDM cosmology with $\Omega_{\Lambda} = 1 - \Omega_m - \Omega_r$.} \end{deluxetable} This paper discusses both Bayesian probability, our confidence in a hypothesis, and frequentist probability, an inherent property of stochastic processes in the Universe. Although both can appear together in Bayes' equation, they are very different in meaning. To help distinguish them, I will use the symbol $P$ for Bayesian probabilities and ${\cal P}$ for frequentist probabilities. The most common of each symbol is $P_{\rm crowded}$, the Bayesian credibility assigned to the hypothesis that there are aliens in our past light cone, and $\PETI$, the frequentist fraction of birthsites that evolve intelligent life. This paper also makes combinatorial arguments about the multitudes of ways of combining a number of objects or traits. I use $N$ or $\ell$ to enumerate physical things or properties which are actually present in the Universe, like the number of planets in our past light cone or the number of amino acids in a protein. For the number of possible combinations of these objects, almost all of which will never be realized within the observable Universe, I use ${\cal N}$. Finally, I use $S$ for physical, measurable entropy and ${\cal S}$ for unitless Boltzmann-like entropies.
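As a minimal numerical illustration of how the log log prior behaves (this sketch is not part of the formal analysis; the planet count and the entropy floor below are assumed round numbers of the order of those discussed in the text), the credibility of a crowded Universe can be computed in a few lines of Python:
\begin{verbatim}
import numpy as np

def Pi(ln_p):
    # Auxiliary variable Pi = ln(1 - ln P), written in terms of ln P so
    # that beyond-astronomically small probabilities never underflow.
    return np.log(1.0 - ln_p)

# Assumed round numbers (illustrative only):
N_planets = 1e21     # planets (birthsites) in our past light cone
ln_P_min = -4e9      # ~ -(genome entropy in nats), the floor on ln P_ETI

# "Crowded" means P_ETI >= 1/N_planets, i.e. ln P_ETI >= -ln N_planets.
ln_P_threshold = -np.log(N_planets)

# Prior uniform in Pi between Pi(P = 1) = 0 and Pi(P_min):
P_crowded_loglog = Pi(ln_P_threshold) / Pi(ln_P_min)

# For comparison, a flat log prior (uniform in ln P_ETI over the same range):
P_crowded_flatlog = ln_P_threshold / ln_P_min

print("log log prior : P_crowded ~ %.2f" % P_crowded_loglog)   # ~0.18
print("flat log prior: P_crowded ~ %.0e" % P_crowded_flatlog)  # ~1e-08
\end{verbatim}
With these inputs the log log prior leaves a credibility of roughly $18\%$ that we are not isolated, close to the genome-entropy estimates quoted later in the paper, whereas a flat log prior over the same range leaves only $\sim 10^{-8}$.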
\label{sec:Discussion} \subsection{What if the Universe is not infinite?} \label{sec:SmallUniverse} I have assumed that the Universe is essentially infinite, so that the likelihood of our existence is $1$. However, multiverse scenarios have their own problems. Aside from the difficulty in testing them, there is the measure problem, which is an uncertainty about how to assign probabilities in an infinite Universe. Naive extrapolations of the current $\Lambda$CDM cosmology imply that most observers, even those with our exact memories, are Boltzmann brains produced by thermal fluctuations of the cosmological event horizon \citep[e.g.,][]{Dyson02,Albrecht04,deSimone10}. A short-lived Universe can end before many Boltzmann brains appear, though \citep{Page08}. \begin{figure} \centerline{\includegraphics[width=9cm]{BasicPriorExplanation.eps}} \figcaption{A joint prior on $\ln \ln N_{\star}$ and $\Pi$ is constrained by observations of the Universe's size, thermodynamic lower bounds on the probability of life evolving (Weak Copernican Principle), and our own existence (Weak Anthropic Principle). We are isolated if the true values of $\PETI$ and $N_{\star}$ lie above and to the left of the heavy dash-dotted line.\label{fig:SmallUniversePrior}} \end{figure} If the Universe is small, though, we have to contend with at least two unknown parameters: the true number of birthsites in the universe $N_{\star}$ and $\PETI$. We could then codify our uncertainty with a joint prior $d^2 P_{\rm prior}/(dN_{\star} d\PETI)$. The joint prior can be integrated to find the marginal prior describing our prior belief in $\PETI$ alone: \begin{equation} \frac{dP_{\rm prior}}{d\PETI} = \int_0^{\infty} \frac{d^2 P_{\rm prior}}{dN_{\star} d\PETI^{\prime}}\Big|_{\PETI^{\prime} = \PETI} dN_{\star}. \end{equation} Likewise, the marginal prior on $N_{\star}$ is \begin{equation} \frac{dP_{\rm prior}}{dN_{\star}} = \int_0^1 \frac{d^2 P_{\rm prior}}{dN_{\star}^{\prime} d\PETI}\Big|_{N_{\star}^{\prime} = N_{\star}} d\PETI. \end{equation} There would be a few obvious bounds on these parameters, as indicated in Figure~\ref{fig:SmallUniversePrior}. The minimum size of the Universe is constrained by observation, and there are lower limits on $\PETI$ from bounds on the entropy of living systems. If we presume that we are the result of a stochastic process, the inferential Weak Anthropic Principle becomes the observation that the probability of our own existence is ${\cal L} (\ge 1~{\rm society} | \PETI) \approx 1 - \exp(-\PETI N_{\star})$. Using this likelihood in Bayes' Rule essentially removes the weight from possibilities where $\PETI N_{\star} \ll 1$, while leaving intact the weight where $\PETI N_{\star} \gg 1$. How one would implement such a joint prior is unclear, however. The simplest method might be to start out with a constant probability density everywhere, after transforming to the variables $\Pi$ and $\ln \ln N_{\star}$ (Figure~\ref{fig:UniformJointPrior}, left panel). After applying the inferential Weak Anthropic Principle, the joint density remains constant for values where $\PETI N_{\star} \gg 1$. Unfortunately, the resulting marginalized PDFs are no longer flat in $\Pi$ and $\ln \ln N_{\star}$ --- the log log prior described in this paper no longer applies to $\PETI$. Instead, we would favor scenarios where the Universe is large. Independently, we would favor scenarios where $\PETI$ is big. We would not favor scenarios where both $N_{\star}$ and $\PETI$ are big simultaneously, though. 
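To see this skew concretely, the following sketch (again purely illustrative; the bounds on $N_{\star}$ and the floor on $\ln \PETI$ are assumed round numbers, not values taken from the paper) builds a uniform joint density in $(\Pi, \ln \ln N_{\star})$, multiplies it by the anthropic likelihood $1 - \exp(-\PETI N_{\star})$, and marginalizes numerically:
\begin{verbatim}
import numpy as np

# Assumed, illustrative bounds:
ln_P_floor = -4e9                         # floor on ln P_ETI
N_star_lo, N_star_hi = 1e11, 1e24         # allowed range of N_star

Pi_grid = np.linspace(0.0, np.log(1.0 - ln_P_floor), 400)
lnlnN_grid = np.linspace(np.log(np.log(N_star_lo)),
                         np.log(np.log(N_star_hi)), 400)
PI, LLN = np.meshgrid(Pi_grid, lnlnN_grid, indexing="ij")

ln_PETI = 1.0 - np.exp(PI)    # invert Pi = ln(1 - ln P_ETI)
ln_N = np.exp(LLN)            # invert ln ln N_star

# Uniform joint prior times the anthropic likelihood 1 - exp(-P_ETI * N_star),
# evaluated through logs so that tiny P_ETI simply contributes zero weight:
weight = -np.expm1(-np.exp(ln_PETI + ln_N))
weight /= weight.sum()

marg_Pi = weight.sum(axis=1)  # marginal density in Pi
marg_N = weight.sum(axis=0)   # marginal density in ln ln N_star

print("mean Pi: posterior %.1f vs flat %.1f"
      % ((Pi_grid * marg_Pi).sum(), Pi_grid.mean()))
print("marginal in ln ln N_star rises by a factor of %.2f across the grid"
      % (marg_N[-1] / marg_N[0]))
\end{verbatim}
With these assumptions the posterior-weighted mean of $\Pi$ falls well below its flat-prior value (favoring large $\PETI$), while the marginal density in $\ln \ln N_{\star}$ tilts modestly towards larger Universes, as described above.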
If we ever do discover aliens (red lines in Figure~\ref{fig:UniformJointPrior}), the marginalized PDF on $\ln \ln N_{\star}$ would suddenly become flat, because all values of $N_{\star}$ are compatible with abundant aliens. Proof that we are isolated (dashed gray lines) moderately increases our belief in big Universes. \begin{figure} \centerline{\includegraphics[width=9cm]{UniformJointPrior.eps}} \figcaption{A uniform joint prior and the marginalized priors on $\Pi$ and $\ln \ln N_{\star}$ derived from it. A joint prior with constant density favors belief in large $\PETI$ or large $N_{\star}$ after marginalization. The red line demonstrates the effect of a detection by SETI, while the dashed grey lines demonstrate the effect of proof that we are isolated.\label{fig:UniformJointPrior}} \end{figure} Another possibility would be to decree that the log log prior on $\PETI$ alone must be correct. The joint prior would be more heavily weighted for small $\PETI$ (darker shading in Figure~\ref{fig:JointVsMarginalizedPriors}, left panel) so that the marginalized prior for $\PETI$ matches the log log prior. This scheme heavily weights scenarios where the Universe is large, since only a large Universe is consistent with our existence if $\PETI$ is small and our evolution is random. As with the uniform joint prior, discovering aliens flattens the marginal prior for $\ln \ln N_{\star}$, while proving our isolation increases our belief that $\ln \ln N_{\star}$ is large. \begin{figure*} \centerline{\includegraphics[width=9cm]{ConstPETIJointPrior.eps}\includegraphics[width=9cm]{ConstNStarJointPrior.eps}} \figcaption{Comparison of joint priors with different weights, and their derived marginalized priors on $\Pi$ and $\ln \ln N_{\star}$. Darker blue shading indicates a heavier joint prior density. If the joint prior density is scaled so that the marginalized $\Pi$ prior is flat in $\Pi$ (left), then a large Universe is strongly favored. If the joint prior density is instead scaled so that the marginalized $\ln \ln N_{\star}$ prior is uniform in $\ln \ln N_{\star}$ (right), a large $\PETI$ is strongly favored. The red line demonstrates the effect of a detection by SETI, while the dashed grey lines demonstrate the effect of proof that we are isolated.\label{fig:JointVsMarginalizedPriors}} \end{figure*} Or we could decree that the marginal prior for $\ln \ln N_{\star}$ is constant, and place more weight on scenarios where $\ln \ln N_{\star}$ is small (Figure~\ref{fig:JointVsMarginalizedPriors}, right panel). Now the marginalized prior on $\Pi$ is skewed to favor large $\PETI$. SETI surveys would have interesting effects on the marginalized PDF for the Universe's size. If we discovered aliens, the marginalized PDF for $\ln \ln N_{\star}$ would have a sharp peak near its lower limit because of the prior's weighting. Proving we are isolated leads to a sharp fall-off near the lower limit, while slightly increasing our confidence in a big Universe. Hence, implementing the joint prior involves somewhat arbitrary decisions about how to weight it. One could probably consider more complicated schemes, like having a flat prior in the number of intelligent species in the Universe. Unless the marginalized prior on $\PETI$ was specifically forced to be the log log prior, the log log prior used in the other sections no longer applies. Aside from these practical difficulties, there is the philosophical issue of what counts as the Universe. 
Some alternatives to inflationary cosmology are effectively multiverses in that they have an infinite number of places to live \citep{Rubenstein14}. For example, ekpyrotic cosmologies posit that the Universe's evolution is basically cyclic, lasting an infinite time but being occasionally reset by some process \citep{Steinhardt02}. Because birthsites can be defined temporally, the endless lifespans of the Universe in these scenarios still provide an endless number of chances for life and intelligence to evolve, as long as $\PETI$ does not change between cycles. Even if we had proof that the Universe was finite in space and time, the many-worlds interpretation of quantum mechanics could still guarantee our existence if it is true, since each branch of the Universe's wavefunction is as real as the others, and would seem real to its inhabitants \citep{Tegmark14}. As long as we evolve on any branch of the wavefunction, the Weak Anthropic Principle applies. The correct interpretation of quantum mechanics may never be proven experimentally, so the question of whether the Universe is small or big may always be metaphysical. \subsection{The diversity of ETIs} \label{sec:Diversity} A curious aspect of the log log prior is that it suggests that there is a combinatorially high number of possible intelligent species, of order $\sim 10^{10^9}$. This should be true as long as $\ln \PETI \ga -{\cal S}_{\rm genome}$, and if the possible genome sequences are even remotely equiprobable. Then the number of possible intelligent species ${\cal N}_{\rm ETI}$ is roughly given by \begin{equation} \ln {\cal N}_{\rm ETI} \approx {\cal S}_{\rm genome} - \ln \PETI. \end{equation} I estimated ${\cal S}_{\rm genome} \approx 4 \times 10^9$ in Section~\ref{sec:GenomeEntropy}. According to the log log prior, using the genome entropy bound, we generally expect $|\ln \PETI|$ to be several orders of magnitude smaller than $4 \times 10^9$, so $\ln {\cal N}_{\rm ETI} \approx 4 \times 10^9$. This conclusion can be avoided if there are a few species (presumably including \emph{Homo sapiens}) that are extreme attractors in genome space. The odds would have to be heavily skewed in favor of these species, with them being $\sim 10^{10^9}$ times more likely than the mean probability over genome space, to affect the estimate of $\ln {\cal N}_{\rm ETI}$, though. If lifeforms are distinguished only by their proteomes, then we can use the proteome entropy to estimate the number of distinct types of intelligent life: \begin{equation} \ln {\cal N}_{\rm ETI} \approx {\cal S}_{\rm proteome} - \ln \PETI. \end{equation} Using the proteome entropy in the log log prior implies $\sim 10^{10^7}$ kinds of intelligent life. While much smaller than the number of distinct species, this is still a vast number. Most ``types'' would then consist of $\exp(\ln {\cal N}_{\rm ETI}^{\rm species} - \ln {\cal N}_{\rm ETI}^{\rm proteome}) \approx 10^{10^9}$ species of intelligent life, distinguished by their non-coding DNA and the order in which the genes are coded. These enormous numbers are a natural result if not all of the information in the genome or proteome is relevant for the development of intelligence. Even if intelligent species must be basically humanoid, would their development depend on the presence of hair, the number of fingers and vertebrae, having the same taste receptors as humans, much less the structure of every enzyme? If not, then there are an exponentially large number of species possible from all the combinations of non-vital traits. 
If evolution is contingent, then the odds that intelligence evolved on Earth might be like the odds that a tornado passes through a given location on a given day. The weather is a highly chaotic and contingent system; a single stray gust of wind just a few weeks before would completely change the weather. It doesn't follow that every last eddy in the planet's history is necessary for there to be a tornado at that location. If history were changed, new opportunities could arise; a breeze that in our history would have inhibited the tornado could have helped create a different one had that stray gust of wind occurred. While it's probably true, as Gould famously said, that ``\emph{Homo sapiens} is an entity, not a tendency'' \citep{Gould89}, intelligence is probably a panoply, not an entity. Whether or not it's also a tendency is an empirical question.\footnote{Gould himself made this point in \citet{Gould87}.} These estimates say nothing about the phenotypes of possible alien intelligences. They could have radically different biochemistries, or they could look identical to humans while remaining a completely different species genetically. \subsection{Small probabilities and the Fermi Paradox} \label{sec:HumanFuture} The power of the Fermi Paradox is that it bends the argument from large numbers --- usually taken to be the strongest argument in favor of aliens --- against the existence of aliens. Our being alone among the trillions of planets in the observable Universe requires an incredibly small $\PETI$. But the existence of trillions of technological societies in the observable Universe requires that the probability that they spread into space is incredibly small. The beyond astronomically small probabilities considered in this paper might undermine both arguments, though. Just as a log log prior encompasses tiny $f_l f_i$, a modified version could accommodate tiny $f_c$. One possible ``filter'' between developing technology and achieving starflight is a standoff involving nuclear weapons \citep[as in][]{Sagan83}. Although we survived the Cold War, the Anthropic Principle reminds us that our vantage point is biased \citep{Cirkovic10} --- perhaps in virtually all histories we really did annihilate ourselves. There were several incidents in which global nuclear war was avoided only due to the actions of a few people \citep[e.g.,][]{UoCS15}.\footnote{See also https://en.wikipedia.org/wiki/List\_of\_nuclear\_close\_calls (accessed 18 September 2016).} Maybe those actions were themselves flukes --- a rare fluctuation in the thermal noise of someone's brain might be amplified into an otherwise unlikely stray thought that in turn stays someone's hand during a nuclear crisis. If that was what happened, then the nuclear filter could be essentially absolute. There are other possible filters. In analogy with there being an unknown number $N$ of conditions necessary for intelligent life to arise, we may face an unknown number $N$ of future crises before we attain starflight and cosmic engineering. If those crises are independent, and if the probability that a society survives each is of order $1/2$, then the odds of a society achieving starflight could easily be smaller than $10^{-21}$ if $N \ga 70$. A philosophical observation known as the Doomsday Argument appears to support the hypothesis that the odds are against anyone attaining starflight. 
The basic idea, a kind of temporal Copernican Principle, is that it's unlikely that we are among the very first humans to have ever lived, so the total number of humans who will ever live must not be many trillions \citep{Gott93}. It is a counterweight to \citet{Hart75}'s formulation of the Fermi Paradox: an interstellar society could expand across its home galaxy at least, embracing billions of stars and many trillions of people. \citet{Knobe06} used similar reasoning to make a Universal Doomsday Argument: because interstellar societies are so big, they would dominate the population of all sapient observers unless they were extremely rare. Unless we are incredibly atypical, the population of the Universe cannot be concentrated into starfaring societies; thus, the fraction of technological societies that spread across interstellar space is negligible \citep{Knobe06}. The Doomsday Argument itself is extremely contentious, though. Obviously it is wrong for some people in the history of humanity, so it's not inconceivable that it's wrong for us \citep{Tarter07}. Another response is the Self-Indication Assumption, which states that we should favor hypotheses that predict a larger number of observers \citep{Olum02}. It is a bit like the causal Weak Anthropic Principle: the probability that we exist is larger if the Universe has more opportunities for us to exist. This assumption has its own problems, as it assigns essentially zero prior weight to the idea that the Universe is small \citep[e.g.,][]{Bostrom03}. Overall, the debate around both the Doomsday Argument and the Self-Indication Assumption arises from problems similar to those of the flat log prior for $\PETI$. Each suppresses the prior weight for some entirely reasonable-sounding hypothesis --- star travel being possible, our not living in a vast multiverse, or our not being isolated --- by factors of billions at least, so that no actual evidence could ever persuade us otherwise \citep[c.f.,][]{Olum02,Bostrom03}. Our own future evolution, at least, has some important differences from the evolution of life and intelligence. Most importantly, we have goals, whereas natural evolution does not. We can also anticipate future crises. Future crises are not necessarily independent of each other, either; many of the crises may involve fundamentally similar problems, like scarcity. Nor is it clear that there always are bottlenecks. For example, the nuclear standoff during the Cold War may not have been inevitable in our own history \citep{Rhodes86}, much less in alien societies. Furthermore, like life in general, societies can be robust. If one didn't know the history of life on Earth, one might conclude its survival for billions of years is nigh impossible given how many crises could arise. But it has survived that long simply by being so resilient, and while this may be a fluke that we observe due to the Anthropic Principle \citep{Cirkovic10}, this is not generally thought to be the case.\footnote{If one does accept the Doomsday Argument, it also predicts that we are very unlikely to be the very last people who ever live \citep{Gott93}, suggesting that technological societies do not collapse at the slightest provocation.} On the other hand, if one does accept the thesis behind the Fermi Paradox, the log log prior actually strengthens its power. As noted in Section~\ref{sec:SETIReach}, the Milky Way encompasses $80\%$ of the prior's weight for crowded Universe scenarios, and about half is for $\PETI \ga 10^{-4}$. 
Definitive evidence for our being alone in the Galaxy is then good evidence that we are alone in the observable Universe, with $\PCrowded$ cut to $\sim 0.2 \times 0.2 \approx 4\%$. The null results from searches for Type III societies reduce the $\PCrowded$ estimates further, subject to systematic uncertainties about these surveys' grasp. \subsection{The prior applied to exolife and other complex phenomena} \label{sec:LifePrior} The main ingredients plugged into the log log prior, the number of birthsites and the bound on entropy, are very generic. Similar priors could be constructed for any potentially rare, complex phenomenon, including life itself. Indeed, the origin of life has been proposed to be the limiting factor for $\PETI$ \citep{ConwayMorris03}, in which case the log log prior for intelligence is the log log prior for life. The protocell chemical entropy provides a plausible but very conservative lower bound on the probability that life arises. As shown in Table~\ref{table:PCrowdedEstimates}, the log log prior then implies a $15\%$ credibility that life has arisen on another planet in the Universe. In coming decades, it may be possible to identify signs of life on the nearest exoplanets \citep{Seager14}, suggesting a test of the log log prior: we should never find any signs of independent life on any exoplanet. This is a very weak test, though, as the credibility for lifeless neighbor planets is only $\la 85\%$. On the other hand, favorable probabilities for life's appearance may come from applying the timing of its origin on Earth to the log log prior \citep{Lineweaver02}. Suppose the early Earth had many birthsites, appearing at a rate of $\Gamma_{\rm birth}$. Assuming that there was a duration $\Delta t_1$ between when life could have started and when it did \citep{Spiegel12}, the Earth had $N_{\rm birth} = \Gamma_{\rm birth} \Delta t_1$ birthsites before life got started. The expected rate that life appears during the window is $\Gamma_{\rm life} = \PLife \Gamma_{\rm birth}$, where $\PLife$ is the probability that a birthsite generates life \citep[as in][]{Scharf16}. For an upper limit on $\Gamma_{\rm birth}$, I take the mass of the Earth's hydrosphere and apply the ML98 bound. The specific internal energy of liquid water at $300\ \Kelv$ and standard pressure\footnote{The difference in specific enthalpy for ice at melting and ice at absolute zero is $300\ \Joule\ \gram^{-1}$, and the specific enthalpy of melting is another $330\ \Joule\ \gram^{-1}$, all at standard pressure \citep{Feistel06}. Heating liquid water, with a specific heat of $4.2\ \Joule\ \gram^{-1}\ \Kelv^{-1}$, from the melting point to $300\ \Kelv$ requires $110\ \Joule\ \gram^{-1}$. I ignore the (minor) correction from enthalpy to internal energy.} is $740\ \Joule\ \gram^{-1}$, for a total $\Delta E$ of $1.3 \times 10^{34}\ \erg$, a maximum $\Gamma_{\rm birth}$ of $4 \times 10^{60}\ \sec^{-1}$, and a maximum $N_{\rm birth}$ of $1.2 \times 10^{77} (\Delta t_1 / \Gyr)$. \citet{Spiegel12} demonstrate how a prior is affected by the timing of life's appearance. Roughly speaking, we can group $\Gamma_{\rm life}$ into two categories: a fast case when $\Gamma_{\rm life} \ga (\Delta t_1)^{-1}$ and a slow case $\Gamma_{\rm life} \la (\Delta t_2)^{-1}$. 
In the slow case, $\Delta t_2$ describes the window that life could have appeared while still allowing intelligent life to evolve by now; because of the very weak dependence of $\Pi$ on $N_{\rm birth}$ from the double logarithm, I replace it with $\Delta t_1$ for simplicity. Then, the posterior probability that life appears quickly is \begin{equation} P_{\rm fast}^{\rm posterior} \approx \left(1 + \frac{1 - P_{\rm fast}^{\rm prior}}{{\cal B} P_{\rm fast}^{\rm prior}}\right)^{-1}, \end{equation} where ${\cal B}$ is the Bayes' factor\footnote{Denoted ${\cal R}$ in \citet{Spiegel12}.}, the ratio of likelihoods for the slow and fast cases. In their ``optimistic'' case, ${\cal B} = 15$ and $\Delta t_1 = 0.2\ \Gyr$. The log log prior, however, starts out disfavoring the fast case, which has a prior probability $\ln[1 + \ln N_{\rm birth}]/\ln[1 - \ln \PMin]$. For the protocell entropy $\PMin$ and the maximal birthsite rate calculated above, this is $19\%$. The optimistic case of \citet{Spiegel12} then gives $P_{\rm fast}^{\rm posterior} \approx 78\%$. That value weakly supports the conclusion of \citet{Lineweaver02} that life emerges quickly on planets. Yet the ``conservative'' cases of \citet{Spiegel12} merely increase $P_{\rm fast}$ to $20\%$. Furthermore, $\Gamma_{\rm birth}$ is probably much lower than the maximum value. In principle, $N_{\rm birth} \approx 1$, for which $P_{\rm fast}^{\rm prior} \approx 2\%$ and $P_{\rm fast}^{\rm posterior} \approx 20\%$ in the optimistic case. The key improvement is that the log log prior has well-defined bounds --- if ${\cal B}$ definitely exceeds $100$ then it \emph{does} favor rapidly appearing life even for $\ln N_{\rm birth} \approx 1$, unlike the logarithmic prior, which can have an arbitrarily small normalization \citep{Spiegel12}. One potential problem with applying the log log prior to different complex phenomena indiscriminately is that these phenomena may be dependent on one another. The evolution of intelligent life on a planet happens only if life appears on the planet first. One can posit a whole chain of dependent phenomena of increasing rarity: life, complex multicellular life, intelligent life, humanoid intelligences, \emph{Homo sapiens}, humans that share your exact memories, and chemically identical versions of you. Properly constraining the probability of each step would require the construction of a joint prior on the probability of each step happening. The marginalized prior density for each step would then no longer be a log log prior. Generally, the Bayesian expectation for the probabilities of earlier steps would be higher than with a simple log log prior (c.f. Section~\ref{sec:SmallUniverse}). A stronger test of the log log prior than exolife alone may be to check whether known astrophysical phenomena are evenly distributed in $\Pi$. For example, if our past light cone contains $N$ kinds of stellar phenomena, about $N/2$ should be expressed in a random sample of $\sim 10^3$ stars over their lifetimes. Distinct phenomena might be classified according to criteria proposed by \citet{Harwit81}. The log log prior is a plausible framework for evaluating evidence for or against alien societies. It can be justified from the uncertainty in the number of constraints that need to be fulfilled for intelligence to evolve, and it can be phrased in terms of entropy differences, information, or state space dimensionality. 
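For reference, the arithmetic behind the exolife numbers quoted above is compact enough to reproduce directly; in the sketch below the only input not stated explicitly in this section is the protocell entropy, for which an assumed round value of $7 \times 10^{11}$ is used (chosen to be of the right order to reproduce the quoted $19\%$ prior, not a figure taken from the paper):
\begin{verbatim}
import numpy as np

Gyr = 3.156e16               # seconds per Gyr

# Quantities quoted in the text:
Gamma_birth = 4e60           # s^-1, ML98 bound applied to the hydrosphere
dt1 = 0.2 * Gyr              # the "optimistic" window quoted above
B = 15.0                     # the "optimistic" Bayes factor quoted above

# Assumed round value standing in for the protocell chemical entropy:
neg_ln_Pmin = 7e11

N_birth = Gamma_birth * dt1                   # ~2.4e76 birthsites
P_fast_prior = np.log(1.0 + np.log(N_birth)) / np.log(1.0 + neg_ln_Pmin)
P_fast_post = 1.0 / (1.0 + (1.0 - P_fast_prior) / (B * P_fast_prior))

print("prior     P_fast ~ %.2f" % P_fast_prior)   # ~0.19
print("posterior P_fast ~ %.2f" % P_fast_post)    # ~0.78
\end{verbatim}
Swapping in the ``conservative'' Bayes factors of \citet{Spiegel12}, or shrinking $N_{\rm birth}$ towards unity, drops the posterior to the $\sim 20\%$ level quoted above.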
The main advantage of the log log prior is that it can accommodate a great range of $\PETI$, from $e^{-3 \times 10^{122}}$ to $1$. Unlike a flat log prior, it responds to observations even in the face of possible systematic errors. The potential for systematic errors is inevitable for any realistic experiment. The log log prior can provide a guide for measuring the relative power of SETI surveys. Essentially by design, the log log prior is not all that profound in its content. It basically is just the statement that, for all we know, $\PETI^{-1}$ could be $1$, $10^{1}$, $10^{10}$, $10^{100}$, $10^{1,000}$, $10^{10,000}$, $10^{100,000}$, $10^{1,000,000}$, $10^{10,000,000}$, $10^{100,000,000}$, or $10^{1,000,000,000}$, so we might as well consider any of those values as an equally valid possibility (using $\PMin = 10^{-10^9}$ here as an example). The calculations of $P_{\rm crowded}$ just amount to the observation that for the first three of those values, we're not isolated, while for the others, we are, so the odds that we're isolated are a few to one. The prior can even be summarized in non-Bayesian terms: we are very uncertain about the number of factors that contribute to intelligence's evolution, so we are very, \emph{very} uncertain about the probability that it happens. What is new is simply the emphasis placed on each value. A flat log prior amounts to the statement that, for all we know, $\PETI^{-1}$ could be $10^0$, $10^1$, $10^2$, and so on, through $10^{999,999,998}$, $10^{999,999,999}$, or $10^{1,000,000,000}$, so we might as well consider any of \emph{those} values as an equally valid possibility. But this inherently implies a belief that $\log_{10} \PETI \sim -10^{9}$ with near certainty. It tacitly implies that we are virtually certain that we are isolated. Or, in non-Bayesian terms, we are a bit uncertain about the number of factors that contribute to intelligence's evolution, and we are very uncertain about its probability, but we are quite sure it's small.\footnote{We could do much worse still, though, with a flat inverse prior \citep{Spiegel12}. This would be the statement that $\PETI^{-1}$ could be $1$, $2$, $3$, ..., $10^{1,000,000,000} - 2$, $10^{1,000,000,000} - 1$, or $10^{1,000,000,000}$, so we might as well consider any of \emph{these} values as an equally valid possibility. Now one would be nearly certain that $10^{-1,000,000,000} \le \PETI \la 10^{-999,999,998}$, far too precise to be realistic.} It is important to remember that the prior is a \emph{Bayesian} probability, measuring our confidence about the abundance of ETIs, whereas $\PETI$ is a \emph{frequentist} probability, an intrinsic feature of birthsites. While I estimate a $\sim 15 \text{--} 20\%$ (Bayesian) credibility that there are other intelligent species in the observable Universe, for almost any given value of $\PETI$, the frequentist probability ${\cal P} ({\rm isolated} | \PETI)$ that we're isolated is either $\sim 0$ or $\sim 1$. Conversely, the fact that this frequentist probability is probably $\sim 0$ or $\sim 1$ does not mean we should be confident about whether or not we can contact aliens. Just because there are $\sim 10^{21}$ planets in the observable Universe, it does not follow that there must be ETIs among them, because we are not totally confident that $\PETI \ga 10^{-21}$. And even if the evolution of intelligence depends on a vast number of contingent events, it does not follow that we must be alone, because we are not confident that there are just a few ways for intelligence to evolve. 
According to the log log prior, we should approach SETI with a degree of agnosticism about whether there are aliens in the observable Universe. Despite that, the prior consistently leans towards mild skepticism about their presence. I find that the most realistic bound on $\PMin$, the genome entropy, implies that $P_{\rm crowded} \approx 18\%$. Even with the most optimistic assumptions, with the maximal number of birthsites possible and using the protein shape entropy to bound $\PMin$, $P_{\rm crowded} \approx 47\%$. Of course, $P_{\rm isolated}$ is quite far from the traditional $95\%$ credibility threshold for a conclusion. A positive detection would be one of the most profound discoveries ever. This epochal potential more than offsets the moderate risks associated with the relatively low spending on SETI. Still, it does mean the null results from SETI are not surprising. The ``Great Silence'' is an expected result of the log log prior. The log log prior is not without its own issues. Most importantly, it's not clear how to define a ``birthsite''. Should it refer to an entire planet, or maybe something smaller, like a speciation event? If we use a large body as a birthsite, we should account for the possibility that life or intelligence arises an unknown, possibly large number of times. If we use common events as birthsites, which is appropriate if $\PETI$ depends on the time a habitat is hospitable, then they almost certainly interact with one another, making it difficult to calculate the number of ETIs expected. The issue is related to the subjective decision about how to handle values of $\PETI$ near $1$, where $\ln |\ln \PETI|$ itself diverges. I chose to use the auxiliary variable $\Pi = \ln (1 - \ln \PETI)$, but this is not the only possible choice. The prior is also slightly affected by the choice of the base of the inner logarithm. The other main issue is which value of $\PMin$ to use. The cosmological entropies ideally should be absolute bounds. Yet they are not totally robust. The entropy within the particle or event horizons increases without bound in cosmologies without dark energy or if $w > -1$, although it takes $\sim 10^{40}$ years for this to seriously affect the estimates of $P_{\rm crowded}$. Within the $\Lambda$CDM cosmology, observers living more than $\sim 100$ billion years in the future may not be aware there is a cosmological event horizon \citep{Krauss07}. Furthermore, while well motivated, the entropy of the cosmological horizons and Bekenstein-like bounds in general are not empirically proven. On smaller scales, we could use the chemical entropy of an organism or a planetary biosphere to limit $\PMin$. However, $\PETI$ values as small as these $\PMin$ essentially imply that we are Boltzmann brains, in which case none of our reasoning can be justified. Even the larger $\PMin$ associated with the protocell entropy is compatible with strange evolutionary histories, such as those where life is juggled across the worlds of the Solar System, or even different star systems entirely. Values of $\PMin$ as small as those considered in this paper implicitly require that the Universe is very large if our evolution is a stochastic process. Multiverse theories face the measure problem \citep[e.g.,][]{Albrecht04} and may not be testable. If one favors a small Universe, one can set up a joint prior on the size of the Universe and $\PETI$, but then the marginalized prior on $\PETI$ alone is not the log log prior anymore unless the weighting is uneven. 
Additionally, the Universe would have to be small in time as well as space, disallowing cyclic cosmologies. Furthermore, the many-worlds interpretation of quantum mechanics cannot hold if the Universe is small \citep{Tegmark14}, but we may never know whether or not it is true. As frequently acknowledged, $\PETI$ itself is not the only factor determining whether we will ever find an alien society \citep{Sagan63,Sagan73,Bates78,Forgan11}. SETI surveys are only constraining if we look for traces that are physically possible, commonly produced by technological societies, and last a long time. The weighted reach of current SETI surveys, the number of targets they look at, is quite good according to the log log prior (Table~\ref{table:SurveyReaches}). Their grasps, however, the amount of prior weight they can constrain after considering the effectiveness of the survey method, are debatable. Current methods are haunted by the potentially short lifespan of radio or optical broadcasting, or by systematic uncertainties about whether megastructures are physically and socially plausible. Searches for small probes in the Solar System may be an effective way to proceed, because they last so long and seem fairly feasible. The log log prior suggests that the probes do not have to be self-replicating, sweeping through the Galaxy ravenously, for this to be an effective tracer.
16
9
1609.05931
1609
1609.00725_arXiv.txt
We continue our investigations of the development and importance of the one-arm spiral instability in long-lived hypermassive neutron stars (HMNSs) formed in dynamical capture binary neutron star mergers. Employing hydrodynamic simulations in full general relativity, we find that the one-arm instability is generic in that it can develop in HMNSs within a few tens of milliseconds after merger for all equations of state in our survey. We find that mergers with stiffer equations of state tend to produce HMNSs with stronger $m=2$ azimuthal mode density deformations, and weaker $m=1$ components, relative to softer equations of state. We also find that for equations of state that can give rise to double-core HMNSs, large $m=1$ density modes can already be present due to asymmetries in the two cores. This results in the generation of $l=2$, $m=1$ gravitational wave modes even before the dominance of a one-arm mode that ultimately arises following merger of the two cores. Our results further suggest that stiffer equations of state give rise to HMNSs generating lower $m=1$ gravitational wave frequencies. Thus, if gravitational waves from the one-arm instability are detected, they could in principle constrain the neutron star equation of state. We estimate that, depending on the equation of state, the one-arm mode could potentially be detectable by second generation gravitational wave detectors at $\sim 10$ Mpc and by third generation ones at $\sim 100$ Mpc. Finally, we provide estimates of the properties of dynamical ejecta, as well as the accompanying kilonovae signatures.
The LIGO and Virgo collaboration recently ushered in the era of gravitational wave (GW) astronomy with the observation of two high confidence signals that are both consistent with the inspiral and merger of binary black hole systems \cite{LIGO_first_direct_GW,LIGOseconddirect}. This important milestone provides spectacular confirmation of the predictions of general relativity in the dynamical strong-field regime, the cleanest evidence yet for the existence of black holes, and gives important clues to the evolution of massive stars from inferred masses and limits on spins of the constituent black holes involved in the mergers. Over the next few years, it is anticipated that GWs will be observed not only from additional black hole binary mergers, but also from neutron star--neutron star (NSNS) and black hole--neutron star (BHNS) binary mergers. Coalescing NSNSs and BHNSs are not only important sources of GWs, but may also be the progenitor systems that power short gamma-ray bursts~\cite{EiLiPiSc,NaPaPi,MoHeIsMa,Berger2014,PaschalidisJet2015}. In addition, these systems could generate other detectable electromagnetic (EM) signals, prior to~\cite{Hansen:2000am,McWilliams:2011zi,Paschalidis:2013jsa, PalenzuelaLehner2013,2014PhRvD..90d4007P,Metzger2016MNRAS.461.4435M} and following~\cite{MetzgerBerger2012,2015MNRAS.446.1115M} merger. Combining GW and EM signals observed from such events could provide further precision tests of relativistic gravity, constrain the NS equation of state (EOS), and may help explain where the r-process elements in the Universe come from~\cite{Rosswog:1998gc}. However, the interpretation of such ``multimessenger'' signals from compact binaries will be limited by our theoretical understanding of the processes involved. Gaining the requisite knowledge requires simulations in full general relativity (GR) because of the crucial role relativistic gravitation plays in such systems. There have been multiple studies of compact binary mergers in full GR. For NSNSs, see~\cite{faber_review,Baiotti_Rezzolla_review2016} for reviews and~\cite{Paschalidis2012,gold,East2012NSNS,Bernuzzi:2013rza, Neilsen2014,2014PhRvD..90d1502K,2015arXiv150707100D,Dionysopoulou2015,Dietrich2015,2015arXiv151206397B, 2015PhRvD..91f4059S,2015PhRvD..92l4034K,Palenzuela2015,Kastaun2015,2015arXiv151206397B,2016PhRvD..93d4019F,2016arXiv160400782H,2016PhRvD..93f4082H,RLPSNSNSjet2016,2016arXiv160403445E,2016MNRAS.tmp..894R,Dietrich2016arXiv160706636D,2016arXiv160301918S} for some more recent results. These earlier works have advanced our knowledge of EOS effects, neutrino transport, ejecta properties, jet outflows and magnetospheric phenomena. In addition, several studies (see e.g.,~\cite{Stergioulas:2011gd,Bauswein2015a,Bauswein2015b,Bauswein2015c,Takami2014,Takami2015}) have investigated how GWs generated by oscillations of a hypermassive neutron star (HMNS) formed post-merger can be used to infer the nuclear EOS. These studies have focused primarily on the $l=2$, $m=2$ GW mode, which in terms of energy flux is the dominant component of the GW emission. However, the frequency of this mode is of order 2-3 kHz, outside the range where the advanced gravitational wave detectors have the most sensitivity. In a recent work of ours~\cite{PEPS2015}, we discovered that a so-called one-arm spiral instability can develop in HMNSs formed in high-eccentricity dynamical capture binary neutron star mergers. 
Heuristically speaking, once saturated the one-arm instability manifests as a dense core offset from the center of mass about which it rotates at roughly constant frequency, and the $m=1$ azimuthal deformation of the rest-mass density distribution dominates over all other non-zero $m$ modes. For more details on the history and features of the one-arm spiral instability, and a discussion of dynamical capture versus field binaries, we refer the reader to~\cite{PEPS2015,EPPS2016} and the references therein. This $m=1$ deformation of the rest-mass density generates $l=2$, $m=1$ GW modes. These $(2,\ 1)$ GW modes from the instability are not initially as strong as the $(2,\ 2)$ modes, but they have almost constant frequency and amplitude compared to the decaying $(2,\ 2)$ modes. Moreover, the $m=1$ nature of these modes means that their GW frequency is half that of the corresponding $m=2$ modes, lying in a band where ground-based gravitational wave detectors are more sensitive. Hence if the HMNS and conditions driving the instability survive for long enough, the $(2,\ 1)$ modes could eventually dominate the GW signal-to-noise. In~\cite{EPPS2016} we considered a range of NS spins and impact parameters, all corresponding to initially marginally unbound ($e\approx 1$) equal mass ($1.35M_\odot$ each) systems, and chose a single EOS. For this range of parameters we found that the one-arm instability develops when the total angular momentum at merger divided by the total mass squared is $J/M^2 \sim 0.9-1.0$. This range is relevant for quasi-circular NSNS binaries, and so we conjectured that the one-arm instability should also manifest in quasi-circular NSNS binaries if a HMNS forms post-merger. This was recently confirmed first by \cite{2016arXiv160305726R}, and subsequently by \cite{2016arXiv160502369L}, who studied quasi-circular mergers of equal-mass and unequal-mass irrotational NSNSs, respectively. In this work, we continue our investigation of the relevance of the one-arm instability in HMNS remnants following eccentric NS mergers by expanding the parameter space of initial conditions and NS properties. In particular, we consider different equations of state, as well as unequal mass stars and different total masses for the binary. Our survey indicates that the one-arm spiral instability is generic in that it can be excited for all of the equations of state we consider which form sufficiently long-lived HMNSs (i.e., which do not promptly collapse to black holes)\footnote{Though note that here we focus on impact parameters and initial NS spins that satisfy the criterion $J/M^2 \sim 0.9-1.0$ prior to merger, identified in~\cite{EPPS2016} as seeming to indicate when the remnant would be subject to the instability. Presumably similar criteria hold here for the different EOSs and mass ratios.}. At this point it is important to clarify that while the term ``instability'' would normally imply a mode whose amplitude grows from some initial small seed, throughout we will use the term ``one-arm spiral instability'' to also imply the dominance of a ``one-arm mode'' (a nearly constant amplitude $m=1$ density mode) in the final HMNS, even when the initial $m=1$ ``seed'' is large, e.g., when the binary mass ratio deviates significantly from unity. We find that mergers with stiffer EOSs tend to produce HMNSs with stronger $m=2$ density deformations, and weaker $m=1$ components relative to softer EOSs. 
However, even for the stiffest EOS considered here, the HMNS can be dominated by a one-arm mode within a few tens of ms after merger. We also show that the EOS dependency of the size and oscillations of the resulting HMNS is imprinted in the frequency of the resulting GWs, which are sharply peaked in frequency space. In addition, we discover that the one-arm mode can dominate not only for equations of state that lead to toroidal and ellipsoidal HMNSs, but also for equations of state in which a double-core HMNS forms following merger. In this case, the one-arm mode dominates only after the two cores merge into one. Prior to merger of the two cores, the $m=2$ density mode dominates in the cases considered here. However, small $m=1$ asymmetries at merger give rise to asymmetric cores in the double-core remnant which, like the one-arm mode, may act as quasi-stationary sources of $l=2$, $m=1$ GW modes. As a side product we also estimate the properties of dynamically ejected matter, and find that for a given periapse distance unequal mass ratios eject more matter and the associated kilonovae signatures are brighter. The remainder of the paper is organized as follows. In Section~\ref{Methods} we describe the equations of state we survey, the initial data, and the methods we adopt for evolving the Einstein and general relativistic hydrodynamic equations; in Sec.~\ref{Results} we present the results from our simulations; and in Sec.~\ref{Conclusions} we conclude with a summary of our main findings and description of future work. Unless otherwise specified, below we adopt geometrized units where $G=c=1$.
\label{Conclusions} In this work we considered eccentric mergers of binary neutron stars, concentrating on how the equation of state and mass ratio affect the evolution of the resulting HMNS, and in particular the onset of the one-arm instability. Similarly to~\cite{2016arXiv160305726R,2016arXiv160502369L}, we find that the one-arm instability can occur in stars with stiff and soft EOSs, and that the location of the narrowly peaked signal in frequency encodes information about the NS EOS. We also find that binaries with disparate mass ratios can more readily seed larger one-arm modes in the resulting HMNS, in agreement with \cite{2016arXiv160502369L}. Furthermore, we discovered that in addition to toroidal and ellipsoidal HMNSs, double-core HMNSs can develop the $m=1$ instability. In this latter HMNS structure, strong $m=1$ density perturbations can already be present in the double-core phase, due to asymmetries in the two cores. Thus, correspondingly large $l=2$, $m=1$ gravitational-wave modes can arise even before the dominance of a one-arm mode. In addition, our results suggest that stiffer equations of state produce HMNSs whose $m=1$ gravitational wave frequencies are lower than those of HMNSs formed with softer equations of state. This suggests that one should be able to find correlations between the frequency of the $l=2$, $m=1$ GW mode and the EOS, in the same spirit as the correlations that have been discovered for the $l=2$, $m=2$ mode~\cite{Stergioulas:2011gd,Takami2014,Takami2015,Bauswein2015a,Bauswein2015b,Bauswein2015c}. Thus, if gravitational waves from the one-arm instability are detected, they could in principle constrain the neutron star EOS. Making robust, quantitative predictions will require the study of a larger suite of simulations with different equations of state, total masses, and mass ratios, all of which will be pursued in future work. Through some simple estimates of the signal-to-noise ratio for aLIGO and the future ET we concluded that, depending on the equation of state, the one-arm mode could potentially be detectable by aLIGO at $\sim 10$ Mpc and by ET at $\sim 100$ Mpc. However, these estimates must be checked against long, high-resolution simulations that account for correct microphysics and magnetic fields, the impacts of which will be the subjects of forthcoming works. Finally, we have computed the properties of dynamically ejected matter and find that at fixed periapse distance unequal-mass mergers eject more matter and the associated kilonovae signatures are brighter than in the case of equal-mass mergers (this is similar to the trends with mass ratio seen in quasi-circular merger simulations). The upcoming LSST survey with its exquisite sensitivity should be able to discern factors of a few difference in luminosity of kilonovae and hence constrain the properties of the ejected matter, and so potentially constrain the parameters of the NSNS system that merged. \ack We thank Stuart Shapiro for access to the equilibrium rotating NS code. This work was supported by NSF grant PHY-1607449, the Simons Foundation, NASA grant NNX16AR67G (Fermi), and by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. 
Computational resources were provided by XSEDE/TACC under grants TG-PHY100053 and TG-MCA99S008, and by the Orbital cluster at Princeton University.
16
9
1609.00725
1609
1609.07619_arXiv.txt
{Intermediate-Mass Black Holes (IMBHs) are thought to be the seeds of early Supermassive Black Holes (SMBHs). While $\gtrsim$100 IMBH and small SMBH candidates have been identified in recent years, few have been robustly confirmed to date, leaving their number density in considerable doubt. Placing firmer constraints on the methods used to identify and confirm IMBHs/SMBHs, as well as characterizing the range of host environments that IMBHs/SMBHs likely inhabit, is therefore of considerable interest and importance. Additionally, finding significant numbers of IMBHs in metal-poor systems would be particularly intriguing, since such systems may represent local analogs of primordial galaxies, and therefore could provide clues about early accretion processes.} {Here we study in detail several candidate Active Galactic Nuclei (AGN) found in metal-poor hosts.} {We utilize new X-ray and optical observations to characterize these metal-poor AGN candidates and compare them against known AGN luminosity relations and well-characterized IMBH/SMBH samples.} {Despite having clear broad optical emission lines that are long-lived ($\gtrsim$10--13\,yr), these candidate AGN appear to lack associated strong X-ray and hard UV emission, lying at least 1--2 dex off the known AGN correlations. If they are IMBHs/SMBHs, our constraints imply that either they are not actively accreting, their accretion disks are fully obscured along our line-of-sight, or their accretion disks are not producing characteristic high-energy emission. Alternatively, if they are not AGN, then their luminous broad emission lines imply production by extreme stellar processes. The latter would have profound implications for the applicability of broad lines for mass estimates of massive black holes.} {}
Active Galactic Nuclei (AGN) are usually found in massive, bulge-dominated galaxies that have already converted most of their gas into stars by the present epoch, and consequently tend to have high metallicities. Observations of AGN have borne this out, showing that AGN hosts usually possess metallicities ranging from solar to supersolar \citep[e.g.,][]{storchi-bergmann,hamann}. This raises the question as to whether low-metallicity AGN exist, and if so, in what types of galaxies? We can glean some insight from the black hole (BH) mass to bulge luminosity relation \citep{kormendy,magorrian,bentz} and the BH mass to bulge velocity dispersion relation \citep[$M_{\rm BH}-\sigma$;][]{haehnelt,gebhardt,ferrarese,tremaine,greene06}, both of which relate BH mass to galaxy growth. These relations have largely been established only in nearby massive galaxies \citep[with $M_{\rm BH}$$\sim$$10^{6}$--$10^{10}$\,$M_{\odot}$; e.g.,][]{McConnell2013}, where a BH's gravitational influence can be resolved and studied, but we expect that they should extend to higher and lower galaxy mass regimes. Although there is still much debate about how these relations behave in the low-mass regime \citep[e.g.,][]{barth05,greene10,jiang,graham13,sartori}, these relations naively imply that intermediate-mass BHs (IMBHs), which are thought to be the missing link between stellar-mass and supermassive BHs (SMBHs) and potential seeds of SMBHs in the early universe \citep[e.g.,][]{volonteri}, should occur in low-mass dwarf galaxies. The discovery and characterization of IMBHs are thus of particular interest. The most robust way to confirm the presence of a BH is via spatially resolved dynamical estimates. However, this method is difficult to employ in practice due to the small regions of influence around IMBHs. Thus it is often necessary to resort to indirect estimates. One method is to search for the telltale signs of broad-line emission associated with AGN activity due to photoionized gas within the gravitational influence of the BH. \citet{greene04, greene06}, \citet[hereafter I07]{izotov07}, \citet{reines13} and others, for instance, have used the SDSS spectroscopic archives to systematically search for such broad-line tracers, finding 10's to 100's of candidates. A second method is to use diagnostic line ratios that assess the underlying UV spectrum to separate out AGN activity from star formation and shocks \citep[e.g.,][]{groves,reines13,moran}. Finally, a third method is to search for X-ray, mid-IR, and/or compact, low-luminosity radio emission, which can sometimes be interpreted as unambiguous signs of AGN activity \citep[e.g.,][]{greene06,reines14,Hainline2016}; notably, 70\% of the \citet[][hereafter GH04]{greene04} sample have clear X-ray emission, strengthening their identification \citep[][hereafter D09]{desroches}. Despite the discovery of numerous AGN candidates in dwarf galaxies with these methods, few IMBHs with masses below 10$^{5}$ $M_{\odot}$ have been conclusively identified (e.g., NGC 4395, RGG118 and HLX-1) and few AGN overall have been found in metal-poor ($Z$$<$$0.2Z_{\odot}$) systems which might be considered analogs of early AGN hosts \citep{groves}.\\ In this work, we study several metal-poor systems which have been argued to host candidate AGN based on luminous, constant, broad emission lines. We place new constraints on these objects using X-ray observations from the {\it Chandra} Observatory and additional ground-based optical spectroscopy. The outline of the paper is as follows. 
In $\S$2, we describe the X-ray and optical observations and data reduction procedures. In $\S$3, we present the results and compare these to other known AGN, as well as possible alternative interpretations. In $\S$4, we summarize our work and discuss broader implications.\\ \begin{table*} \begin{center} {\small \caption[]{General Properties of Sample} \label{general} \begin{tabular}{ccccccccccc} \hline \noalign{\smallskip} Object & R.A. & Dec. & $z$ & $L_{\rm H\alpha, br}$ & $L_{\rm H\alpha, br}$/$L_{\rm H\alpha, nar}$ & $L_{\rm [O{\sc III}]}$ & $g$ & $M_{g}$ & $12 +\log{\rm O/H}$ & $M_{\rm BH}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} J0045+1339 & 00 45 29.2 & +13 39 09 & 0.29522 & 2.7$\times$10$^{41}$ & 0.42 & 3.0$\times$10$^{42}$ &21.8 & -18.6 & 7.9 & 2.4$\times$10$^{6}$ \\ J1025+1402 & 10 25 30.3 & +14 02 07 & 0.10067 & 3.2$\times$10$^{41}$ & 3.38 & 2.7$\times$10$^{41}$ & 20.4 & -17.7 & 7.4 & 5.1$\times$10$^{5}$ \\ J1047+0739 & 10 47 55.9 & +07 39 51 & 0.16828 & 1.6$\times$10$^{42}$ & 1.22 & 3.6$\times$10$^{42}$ & 19.9 & -19.2 & 8.0 & 3.1$\times$10$^{6}$ \\ J1222+3602 & 12 22 45.7 & +36 02 18 & 0.30112 & 2.8$\times$10$^{41}$ & 0.72 & 2.0$\times$10$^{42}$ & 21.3 & -19.1 & 7.9 & 6.3$\times$10$^{5}$ \\ J1536+3122 & 15 36 56.5 & +31 22 48 & 0.05619 & 4.6$\times$10$^{40}$ & 0.19 & 1.8$\times$10$^{39}$ & 17.5 & -19.3 & 8.3 & 3.3$\times$10$^{5}$ \\ J0840+4707 & 08 40 29.9 & +47 07 10 & 0.04219 & 8.5$\times$10$^{40}$ & 0.17 & 8.5$\times$10$^{39}$ & 17.6 & -18.5 & 7.6 & 3.0$\times$10$^{5}$ \\ J1404+5423 & 14 04 28.6 & +54 23 53 & 0.00117 & 1.9$\times$10$^{38}$ & 0.15 & 7.4$\times$10$^{36}$ & 16.7 & -12.4 & 7.9 & 4.2$\times$10$^{4}$? \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{ {\it Col 1}: Shortened SDSS object name, from I07. {\it Col 2}: Right ascension in epoch J2000.0. {\it Col 3}: Declination in epoch J2000.0. {\it Col 4}: Redshift. {\it Col 5}: Luminosity of the broad H$\alpha$, $L_{\rm H\alpha, br}$, in erg s$^{-1}$. {\it Col 6}: Ratio of broad/narrow H$\alpha$ luminosity components, $L_{\rm H\alpha, br}$/$L_{\rm H\alpha, nar}$. {\it Col 7}: Luminosity of the [O{\sc iii}] $\lambda$5007, $L_{\rm [O{\sc III}]}$, in erg s$^{-1}$. {\it Col 8}: Apparent magnitude, $g$, from SDSS, in ABmag. {\it Col 9}: Absolute magnitude, $M_{g}$, in ABmag. {\it Col 10}: Metallicity based on oxygen abundance, $12 + \log{\rm O/H}$. {\it Col 11}: Black hole mass, $M_{\rm BH}$, estimated from H$\alpha$ relation of GH05, in $M_{\odot}$. The last object, SDSS J1404+5423, is located $\sim$11\arcmin\ ($\sim$16\,kpc) from the nucleus of M101, which likely rules out a central massive black hole. % } } \end{center} \end{table*} \begin{table} \begin{center} \caption[]{Multi-epoch Broad H$\alpha$ Fluxes for AGN Candidates} \label{flux} \begin{tabular}{cccc} \hline \noalign{\smallskip} Object & $F_{\rm H\alpha, br}$ & Obs. 
Date & Telescope \\ \noalign{\smallskip} \hline \noalign{\smallskip} J0045+1339 & 16.4$\pm$1.7 & 2000 Jan 12 & SDSS \\ & 18.0$\pm$0.7 & 2007 Nov 15 & APO \\ & 20.4$\pm$0.4 & 2010 Feb 07 & APO \\ & 25.2$\pm$0.5 & 2013 Oct 27 & LBT \\ J1025+1402 & 165.0$\pm$5.7 & 2004 Mar 11 & SDSS \\ & 192.5$\pm$0.8 & 2008 Feb 06 & APO \\ & 227.1$\pm$4.5 & 2009 Nov 20 & APO \\ & 260.3$\pm$3.3 & 2010 Feb 07 & APO \\ & 276.8$\pm$6.1 & 2015 Feb 24 & APO \\ & 185.9$\pm$3.7 & 2016 Feb 14 & Keck \\ J1047+0739 & 289.2$\pm$9.1 & 2003 Jan 31 & SDSS \\ & 224.2$\pm$2.1 & 2008 Feb 06 & APO \\ & 190.5$\pm$3.9 & 2009 Nov 20 & APO \\ & 208.0$\pm$4.3 & 2010 Feb 07 & APO \\ & 233.7$\pm$5.1 & 2015 Feb 24 & APO \\ & 215.7$\pm$4.3 & 2016 Feb 14 & Keck \\ J1222+3602 & 16.1$\pm$1.8 & 2005 Mar 13 & SDSS \\ & 22.4$\pm$1.3 & 2008 Feb 06 & APO \\ & 27.0$\pm$0.6 & 2015 May 18 & LBT \\ J1536+3122 & 78.7$\pm$4.1 & 2004 Apr 24 & SDSS \\ J0840+4707 & 244.8$\pm$7.5 & 2001 Mar 13 & SDSS \\ J1404+5423 & 350.4$\pm$10.6 & 2004 Mar 24 & SDSS \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{ {\it Col. 1}: Shortened SDSS object name. {\it Col. 2}: Flux of H$\alpha$ broad component in units of 10$^{-16}$ erg\,s$^{-1}$ cm$^{-2}$. Only statistical errors are reported. {\it Col. 3}: Observation date (UT). {\it Col. 4}: Telescope. } \end{center} \end{table}
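The BH masses in Table~\ref{general} are quoted from the GH05 broad-H$\alpha$ relation. As a rough illustration of how such an estimate is formed, the short Python sketch below evaluates a Greene \& Ho (2005)-style virial scaling from the broad-H$\alpha$ luminosity and line width. The coefficients are the commonly quoted literature values and should be checked against the exact calibration adopted in the paper, and the FWHM used in the example is a purely hypothetical placeholder, since line widths are not tabulated above.
\begin{verbatim}
import numpy as np

def mbh_from_broad_halpha(L_halpha_erg_s, fwhm_km_s):
    """Virial BH mass from the broad H-alpha line (Greene & Ho 2005-style scaling).

    M_BH ~ 2.0e6 * (L_Halpha / 1e42 erg/s)**0.55 * (FWHM / 1e3 km/s)**2.06 Msun
    Coefficients are commonly quoted literature values; treat as approximate.
    """
    return 2.0e6 * (L_halpha_erg_s / 1e42)**0.55 * (fwhm_km_s / 1.0e3)**2.06

# Example: J1025+1402, using L_Halpha,br from Table 1 and a purely
# hypothetical FWHM of 1000 km/s (line widths are not tabulated here).
print("M_BH ~ %.1e Msun" % mbh_from_broad_halpha(3.2e41, 1000.0))
\end{verbatim}
Because the FWHM here is arbitrary, the printed value is illustrative only and is not expected to reproduce Column 11 of Table~\ref{general}.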
\label{sec:discussion} We now investigate hard X-ray (2--10 keV) constraints for the four I08 low-metallicity AGN candidates and the three \textsl{Chandra}-observed galaxies from the I07 sample. Figure~\ref{Panessa-mix} compares the hard X-ray luminosity against the H$\alpha$ and [O{\sc iii}] luminosities for these objects, allowing us to examine where they lie relative to the AGN and star-forming relations of P06 and \citet{ranalli}, respectively. We also show in Figure~\ref{Panessa-mix} X-ray constraints for the P06 AGN, the GH04/D09 candidate IMBHs/SMBHs, and five additional objects: RGG 118, Pox 52, NGC 4395, Henize 2-10 and Mrk 709 (south). These last five dwarf galaxies are reported to host BHs of $\sim$10$^4$--10$^7$ \textsl{M$_{\odot}$} \citetext{RGG 118 -- \citealp{baldassare15}; Pox 52 -- \citealp{barth04}; NGC 4395 -- \citealp{filippenko}; Henize 2-10 -- \citealp{reines12}; Mrk 709 -- \citealp{reines14}} and thus provide useful comparisons for our investigation. The BH in RGG 118 is the smallest ever reported in a galaxy nucleus, while Pox 52 and NGC 4395 are of particular interest since their BH masses ($\sim$10$^{5}$ \textsl{M$_{\odot}$}) are better constrained; all lie in the IMBH regime. Pox 52 and NGC 4395 have similar properties and are classified as dwarf Seyfert 1 galaxies. It is noteworthy that NGC 4395 has no discernible bulge, and thus is not expected to host a central BH, yet its nucleus exhibits all the characteristics of Seyfert activity, including broad emission lines and X-ray variability. The P06 AGN, the GH04/D09 candidate IMBHs/SMBHs, and the five additional objects all broadly follow the expected AGN correlations, which represent various known couplings (e.g., broad and narrow line regions, X-ray corona) with the accretion disk. The only outlier appears to be the strong [O{\sc iii}] emission from Mrk 709 (south), pushing it $\sim$2 dex off the AGN relation of P06. This offset could be an X-ray deficit due to X-ray variability or obscuration, or an [O{\sc iii}] excess related to stronger AGN emission in the past. In general, the highest measured X-ray luminosity is the one plotted in Fig.~\ref{Panessa-mix}, but X-ray variability presumably still contributes substantially to the dispersion in the P06 relations. Also shown are the I08 objects, denoted by downward arrows for X-ray upper limits and by a red solid triangle for the weak detection of J1047+0739. These objects uniformly show a deficit of X-rays relative to the strength of their broad H$\alpha$ and [O{\sc iii}] emission. As a sample, this behavior is highly atypical of AGN. If the X-ray deficit were due to variability, we would expect the objects under study here to scatter around the P06 relation, which they do not. Another explanation could be strong line-of-sight obscuration ($N_{\rm H}$$\gtrsim$$10^{24}$ cm$^{-2}$), with the caveat that the few photons detected in J1047+0739 are low-energy ones. However, assuming this geometry for all four I08 objects would, under a standard AGN orientation paradigm, imply a large population of unobscured AGN in dwarf galaxies that is not currently observed. A third possibility is that these objects are intrinsically X-ray weak, whereby the characteristic strong X-ray-emitting corona is never produced. This odd behavior has been observed in a few Broad Absorption Line (BAL) quasars and luminous infrared galaxies \citep[e.g.,][]{Luo2014, Teng2015}, although the I08 objects ought to occupy a very different physical regime. Given our limited understanding of these objects, however, this possibility remains difficult to exclude. 
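For reference, offsets of this kind can be computed with a few lines of code once an AGN $L_{\rm X}$--$L_{\rm H\alpha}$ relation is adopted. The sketch below uses a placeholder linear relation; the slope and intercept are not the published P06 coefficients and must be substituted before any conclusions are drawn, so only the dex-offset bookkeeping is meant to be illustrative.
\begin{verbatim}
# Placeholder linear AGN relation: log L_X = a * log L_Halpha + b.
# The slope/intercept below are NOT the published P06 coefficients;
# substitute the published values before drawing conclusions.
a, b = 1.0, 1.0

def dex_offset(log_L_halpha, log_Lx_observed):
    """Offset (in dex) of an observed L_X, or an upper limit, from the relation."""
    log_Lx_predicted = a * log_L_halpha + b
    return log_Lx_observed - log_Lx_predicted

# Illustrative numbers only: broad H-alpha of 1e42 erg/s, X-ray limit of 1e40 erg/s.
print("offset = %.1f dex" % dex_offset(42.0, 40.0))
\end{verbatim}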
We further investigate the nature of the X-ray deficit by examining the mid-infrared (MIR) emission for the sample, benefiting from the Wide-field Infrared Survey Explorer (WISE), which imaged the entire sky in four bands centered at 3.4, 4.6, 12 and 22 $\mu$m, called $W1$, $W2$, $W3$ and $W4$, respectively. If these I08 and I07 objects are highly obscured AGN (i.e., with a large fraction of the accretion disk emission blocked by a dusty torus), we may see their re-radiated emission at MIR wavelengths \citep[e.g.,][]{efstathiou1995}, since this hot dust emission is largely unaffected by further obscuration from the torus or interstellar medium. AGN are often revealed either by their characteristic X-ray-to-MIR ratios \citep{lutz2004, gandhi2009, asmus2015, stern2015} or by red MIR colors \citep[e.g.,][]{richards2006, assef2010}. The X-ray-to-MIR ratio is well suited for detecting obscured AGN, since the X-ray emission is expected to be suppressed compared to the MIR emission \citep[e.g.,][]{alexander2008, lanzuisi2009, bauer2010}. Likewise, \citet{stern2012} found that a cut at $W1$$-$$W2$$\geq$0.8\,mag provides a very reliable indicator of AGN-heated hot dust in both obscured and unobscured sources with $W2$$<$15.05\,mag, based on a WISE-selected sample of AGN. Figure~\ref{WISE-mix} shows the aforementioned relations for the I08 and I07 objects from Table~\ref{tab3}, represented by red and green triangles, respectively, with upper limits denoted by downward arrows. On the left side, we compare the rest-frame hard 2--10 keV X-ray and 6\,$\mu$m luminosities of the objects against the standard relations from \citet{lutz2004} and \citet{stern2015}. Notably, all the galaxies lie below the AGN relations, most by factors of at least $\sim$10--60 (three sources have only X-ray upper limits and could be substantially lower still). This is consistent with our previous findings of X-ray deficits. On the right side, we show the observed MIR colors $W1$$-$$W2$ and $W2$$-$$W3$ for the I08 and I07 objects, where the dashed line denotes the $W1$$-$$W2$$\geq$0.8\,mag limit from \citet{stern2012}. Five objects lie above the line, suggesting an AGN classification. Critically, however, \citet{izotov2014} examined the MIR colors of $\sim$10,000 star-forming galaxies with strong emission lines and no obvious signs of AGN, detected in both SDSS and WISE, and found that a non-negligible number ($\sim$5\%) scattered above the fiducial AGN criterion of \citet{stern2012}, although they remained distinctly offset in $W2$$-$$W3$ from QSOs \citep[e.g., Figure~7c of][]{izotov2014}. \citet{izotov2014} found that these objects are mainly luminous galaxies with high-excitation H{\sc ii} regions, and that their unusual WISE colors are produced by hot dust associated with radiation from young star-forming regions. This means that the I08 and I07 objects could belong to this tail of the star-forming galaxy population. Thus, some objects exhibit MIR properties consistent with highly obscured AGN activity, but we lack conclusive results that can differentiate AGN from star-forming regions for our samples. Notably, most of the I08 objects have strong limits on high-ionization emission lines like He{\sc ii} and [Ne {\sc v}], implying that there is also no strong non-thermal hard ionizing UV radiation present. The exception is J1222$+$3602, which has a [Ne {\sc v}]/He{\sc ii} ratio of $\sim$1.5. This is $\sim$5 times higher than in star-forming galaxies and is more consistent with the value for Seyfert 2 galaxies. 
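As an aside, the \citet{stern2012} colour criterion quoted above is simple to apply in practice. The sketch below encodes the $W1-W2\geq0.8$\,mag and $W2<15.05$\,mag cuts, with the caveat (discussed above) that star-forming dwarfs with hot dust can also satisfy them; the photometry in the example is invented for illustration and is not the measured I08/I07 photometry.
\begin{verbatim}
def wise_agn_candidate(w1_mag, w2_mag):
    """Stern et al. (2012) colour cut quoted in the text:
    W1 - W2 >= 0.8 mag, applicable for W2 < 15.05 mag.
    Star-forming dwarfs with hot dust can also pass this cut,
    so it is indicative rather than conclusive."""
    return (w1_mag - w2_mag >= 0.8) and (w2_mag < 15.05)

# Invented photometry for illustration (not the measured I08/I07 colours):
for name, w1, w2 in [("object A", 14.2, 13.1), ("object B", 15.0, 14.8)]:
    print(name, "AGN-like colours:", wise_agn_candidate(w1, w2))
\end{verbatim}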
In general, this is consistent with the observed deficit of X-ray emission, and implies that the BHs in the I08 objects may not be strongly accreting at all, or as I08 argued, that there is a high covering factor from the accretion disk itself that absorbs the hard UV (and now X-ray) emission. One final interesting scenario is for the circumnuclear gas to be influenced by the gravitational potential of a dormant IMBH/SMBH but excited by stellar processes. Strong narrow emission lines are present in all I08 objects, implying that stellar processes are able to generate sufficient photon or shock excitation. It could be possible that these processes are energetic enough to power the broad line excitation too. In this scenario, most of the ionizing stars or shocks would presumably need to reside close to the broad line gas in order to satisfy the observed UV continuum constraints. This seems like a physically implausible scenario. \begin{figure*} \centering \includegraphics[width=16cm]{SNe.png} \caption[SNe]{Time evolution of the broad H$\alpha$ luminosity for the I08 objects, as well as several luminous type IIn SNe \citep{kuncarayakti, aretxaga, stritzinger, smith, jencson} and the transient event SDSS1133 \citep{koss}. The I08 objects are roughly constant over periods of 10--13 yr; there is some variability between epochs at the $\sim$50\% level, but this could be due to calibration uncertainties. All error bars are smaller than the symbols.} \label{SNe} \end{figure*} One of the mechanisms proposed by I08 to explain the observed offset in broad H$\alpha$ is the presence of type IIn SNe, since this type of SNe can produce relatively long-lived, high broad H$\alpha$ luminosities. Figure~\ref{SNe} shows the time evolution for several luminous type IIn SNe compared to the I08 objects. The I08 objects show no significant variation in broad H$\alpha$ emission at least over periods of $\sim$10--13 years. While a few individual SNe have been observed to have broad H$\alpha$ luminosities as high as the I08 objects \citep[e.g., SN 2005ip and SN 2006jd;][]{stritzinger}, Figure~\ref{SNe} shows that such powerful SNe can only generate such luminosities for $\lesssim$3\,yr. SN 1978K stands out for having a broad H$\alpha$ luminosity that has shown little variation over the course of $\sim$25 years; however, its luminosity is 2--3 dex less than those of the I08 objects. Thus, the I08 objects would require large numbers of such SNe to maintain the bright, roughly constant broad H$\alpha$ luminosities that are observed. Because such SNe are thought to be produced by high mass stars, this scenario seems unlikely based on the star formation rates, initial mass function, and general lack of transients observed in such dwarfs. \citet{koss} reported on an unusually persistent transient, SDSS J113323.97$+$550415.8 (hereafter SDSS1133), in the nearby blue compact dwarf galaxy Mrk 177, which shares some similarities with the I08 objects. Unlike the I08 objects, the transient lies 5\farcs8 away from the apparent nucleus (a projected offset of 0.8\,kpc) and has slowly varying broad Balmer line emission with a velocity offset of $\sim$200--800 km\,s$^{-1}$ from the host nucleus and a peak luminosity of $\sim 10^{40}$ erg\,s$^{-1}$. Such properties imply that SDSS1133 could be a black hole recoil candidate, although some traits can also be explained by a luminous blue variable star that was erupting for decades before exploding as a type IIn SN. With respect to the latter, Koss et al. 
argue that this event would represent one of the most extreme episodes of pre-SN mass-loss ever discovered. Mrk 177 shows signs of clumpy star formation, analogous to the extended emission observed for two I08 galaxies. The emission lines from the Mrk 177 host nucleus ($Z$$\approx$$Z_{\odot}$) and SDSS1133 generally place both in the star-forming galaxy region of the diagnostic plots, although at its centroid SDSS1133 shifts somewhat into the Seyfert galaxy regime. SDSS1133 also shows 1--3 magnitudes of optical variability on timescales of $\sim$1--60 years, unlike the I08 objects. It is interesting to consider that if SDSS1133 were coincident with the nucleus, it would be more strongly contaminated by the nuclear star formation and vary less, and therefore be even more comparable to the I08 objects. Intriguingly, SDSS1133 is marginally detected by \textsl{Swift} XRT with 7.6 $\pm$ 3.4 background-subtracted 0.3--10 keV counts, corresponding to a 0.3--10 keV luminosity of 1.5$\times$10$^{39}$ erg s$^{-1}$ (adopting a power-law spectrum with fixed $\Gamma=1.9$). With this limited amount of X-ray emission, Koss et al. cannot distinguish between the AGN and SN scenarios, similar to the case of J1047+0739. The broad H$\alpha$ (from 2003) and X-ray (from 2013) luminosities of SDSS1133 place it $\sim$1.5 dex off the P06 AGN relation, similar to the I08 objects, although these were not taken contemporaneously and the X-ray emission could have been significantly higher 10 years beforehand.
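For completeness, the broad-H$\alpha$ fluxes of Table~\ref{flux} can be converted to the luminosities discussed above with the standard $L = 4\pi d_{\rm L}^{2} F$ calculation. The sketch below does this for J1025+1402 with astropy and quantifies the epoch-to-epoch scatter; the cosmological parameters are assumptions (they are not specified in the text above), so small differences from the tabulated luminosities are expected.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Assumed cosmology (H0 and Omega_m are not specified in the text).
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def halpha_luminosity(flux_cgs, z):
    """L = 4 pi d_L^2 F, for a flux in erg/s/cm^2."""
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    return (4.0 * np.pi * d_l**2 * flux_cgs * u.erg / u.s / u.cm**2).to(u.erg / u.s)

# J1025+1402 (z = 0.10067): multi-epoch broad H-alpha fluxes from Table 2,
# tabulated there in units of 1e-16 erg/s/cm^2.
fluxes = np.array([165.0, 192.5, 227.1, 260.3, 276.8, 185.9]) * 1e-16
L = np.array([halpha_luminosity(f, 0.10067).value for f in fluxes])
print("mean L_Halpha,br = %.2e erg/s" % L.mean())
print("peak-to-peak variation = %.0f%% of the mean" % (100.0 * (L.max() - L.min()) / L.mean()))
\end{verbatim}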
16
9
1609.07619
1609
1609.09097_arXiv.txt
Extensive observations from the \emph{Kepler} spacecraft have recently revealed a new outburst phenomenon operating in cool pulsating DA (hydrogen atmosphere) white dwarfs (DAVs). With the introduction of two new outbursting DAVs from \emph{K2} Fields 7 (EPIC\,229228364) and 8 (EPIC\,220453225) in these proceedings, we presently know of six total members of this class of object. We present the observational commonalities of the outbursting DAVs: (1) outbursts that increase the mean stellar flux by up to $\approx$15\%, last many hours, and recur irregularly on timescales of days; (2) effective temperatures that locate them near the cool edge of the DAV instability strip; and (3) rich pulsation spectra with modes that are observed to wander in amplitude/frequency.
Over 97\% of stars in the Milky Way ultimately end their lives as compact white dwarf stars. Without appreciable nuclear fusion, white dwarfs evolve further by cooling. When hydrogen-atmosphere (DA) white dwarfs near the average surface gravity of $\log{g}\approx 8$ cool through the range $12{,}500$\,K $> {T}_{\mathrm{eff}} > 10{,}600$\,K, partial ionization of atmospheric hydrogen induces stellar pulsations. We observe these DAs as photometric variables (DAVs), and the measured frequencies of variability are eigenfrequencies of the stars. The tools of asteroseismology enable us to constrain the interior structures of DAVs from these measurements. The \emph{Kepler} spacecraft observed one field of view nearly continuously for over 4 years in its original mission. For a maximum of 512 pre-selected targets, time series photometry was collected at short cadence---roughly every 1\,min rather than every 30\,min. With pulsation periods of $\sim 10$ minutes, short cadence \emph{Kepler} observations promised to capture by far the most complete record of DAV behavior. \citet{Hermes2011} identified the first DAV in the original \emph{Kepler} mission field. WD\,J1916+3938 was observed by \emph{Kepler} at short cadence as KIC\,4552982 for over 1.5 years with an 86\% duty cycle. Besides revealing a rich pulsation spectrum with 20 significant modes, these data captured a new outburst-like phenomenon operating in this star \citep{Bell2015}. A total of 178 flux enhancements reaching peaks of 2--17\% above the quiescent value and lasting 4--25\,hr were detected. These outbursts carry a total energy of order $10^{33}$\,erg and have an average recurrence timescale of 2.7 days. The observed time distribution of the outbursts favors delays longer than 2 days, beyond which their occurrences are consistent with Poisson statistics. With a spectroscopic ${T}_{\mathrm{eff}} = 10{,}860 \pm 120$\,K at $\log{g} = 8.16\pm 0.06$, KIC\,4552982 is one of the coolest DAVs known. After the \emph{Kepler} spacecraft's second reaction wheel failure in May 2013, the new \emph{K2} mission was devised for continued science operations in new fields in the ecliptic plane every $\approx$80 days \citep{Howell2014}. \citet{Hermes2015} discovered another cool DAV---PG\,1149+057 (EPIC\,201806008) with ${T}_{\mathrm{eff}} = 11{,}060 \pm 170$\,K and $\log{g} = 8.06\pm 0.05$---to exhibit 10 outbursts in 78.8 days. These outbursts caused instantaneous flux enhancements as high as 45\% that lasted 9--36\,hr. With a \emph{Kepler} magnitude of $K_p = 15.0$, this is the brightest outbursting DAV, and the corresponding high signal-to-noise ratio of the light curve enabled the authors to establish that the outbursts affect the pulsations---with pulsations generally having higher amplitudes and shorter periods during outbursts---definitively proving that these outbursts are occurring on the pulsating target star. \citet{Bell2016} inspected the light curves of over 300 spectroscopically confirmed DA white dwarfs that were submitted for \emph{Kepler} observations through \emph{K2} Campaign 6 and discovered two additional outbursters in Fields 5 and 6: EPIC\,211629697 and EPIC\,229227292. Both are also cool DAVs, demonstrating that only those within 500\,K of the empirical cool edge of the DAV instability strip are observed to undergo outbursts. EPIC\,211629697 ($\log{g} = 7.94\pm 0.08$, ${T}_{\mathrm{eff}} = 10{,}780 \pm 140$\,K) showed 15 outbursts in 74.8 days of Campaign 5 data, with an average spacing of 5.0 days. 
These outbursts reached peaks of 8--15\% and lasted 6--38 hours. EPIC\,229227292 ($\log{g} = 8.02\pm 0.05$, ${T}_{\mathrm{eff}} = 11{,}191 \pm 170$) is the most frequent outburster, exhibiting 33 outbursts every 2.4 days on average over 78.9 days of Campaign 6 observations. These reached peak fluxes of 4--9\% with 3--21 hour durations. Alongside these observational developments, a possible physical mechanism has been proposed. J.~J.\ Hermes described the potential for nonlinear mode coupling to cause outbursts in his talk at this conference, borrowing from the theoretical work of \citet{Wu2001}. In this model, a resonant coupling can transfer energy from a driven parent mode into two daughter modes. If these daughter modes are damped at the base of the convection zone, they will deposit their energy there, heating the surface of the star. This mechanism for dumping pulsational energy may explain the empirical location of the cool edge of the DAV instability strip, which theoretical calculations predict to be thousands of degrees cooler than observed \citep[e.g.,][]{VanGrootel2012}. In these proceedings, we present two more outbursting DAVs: EPIC\,229228364 and EPIC\,220453225, from \emph{K2} Fields 7 and 8. We analyze the short cadence \emph{K2} data for these targets in Section~\ref{sec:anal}. We take a brief comparative look at the observational properties of the six known members of the new outbursting class of DAV in Section~\ref{sec:class}. Finally, we discuss future observational prospects for these objects in Section~\ref{sec:future}.
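As a simple illustration of the statistical statement quoted above, that outburst occurrences beyond a $\sim$2-day delay are consistent with Poisson statistics, the sketch below applies a Kolmogorov--Smirnov test to outburst waiting times. The outburst epochs used here are synthetic placeholders, and estimating the exponential scale from the data itself biases the p-value, so this is only a schematic check rather than the analysis actually performed in the cited work.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic outburst epochs (days): a ~2-day refractory delay plus an
# exponential waiting time, mimicking the behaviour described above for
# KIC 4552982.  Real outburst times would replace these.
waits = 2.0 + rng.exponential(scale=0.7, size=178)
t_outburst = np.cumsum(waits)

# Waiting times in excess of the 2-day delay; if occurrences are Poisson-like
# beyond the delay, these should be exponentially distributed.
excess = np.diff(t_outburst) - 2.0

# Note: estimating the exponential scale from the same data biases the
# p-value (Lilliefors effect); this is only a schematic check.
ks = stats.kstest(excess, "expon", args=(0.0, excess.mean()))
print("KS statistic = %.3f, p-value = %.3f" % (ks.statistic, ks.pvalue))
\end{verbatim}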
16
9
1609.09097
1609
1609.00455_arXiv.txt
Based on the LAMOST survey and the Sloan Digital Sky Survey (SDSS), we use low-resolution spectra of 130,043 F/G-type dwarf stars to study the kinematic and metallicity properties of the Galactic disk. Our study shows that stars with lower metallicity and larger vertical distance from the Galactic plane tend to have larger orbital eccentricity and velocity dispersion. After separating the sample into likely thin-disk and thick-disk sub-samples, we find a negative gradient of rotation velocity $V_{\phi}$ with metallicity [Fe/H] for the likely thin-disk sub-sample, while the thick-disk sub-sample exhibits a larger, positive gradient of rotation velocity with metallicity. By comparing with model predictions, we conclude that the radial migration of stars appears to have influenced the formation of the thin disk. In addition, our results show that the observed thick-disk stellar orbital eccentricity distribution peaks at low eccentricity ($e \sim 0.2$) and extends to high eccentricity ($e \sim 0.8$). We compare this result with four simulated thick-disk formation models, and find that it is most consistent with the gas-rich merger model.
The formation and evolution of the Galaxy are very important issues in modern astrophysics. Since \citet{Gilmore83} first introduced the thick disk, the basic components of the Galactic disk have been regarded as the thin-disk and thick-disk populations. The two components differ not only in their spatial distribution profiles but also in their kinematics and metallicity \citep{majewski93,Ojha96,Freeman02}. In terms of spatial distribution, the scale height of the thin disk ranges from 220 to 320 pc, while that of the thick disk ranges from 600 to 1100 pc \citep{Du03,Du06,Jia14}. The scale length of the disk (both thin and thick) ranges from 2 to 4 kpc, although some evidence for a shorter scale length for the thick disk has also been presented \citep[e.g.][]{Bensby11,Cheng12,Hayden15}. Compared with the thin disk, the thick disk generally has a lower rotational velocity and larger velocity dispersion, a lower average metallicity ([Fe/H] $\sim -0.7$ dex) \citep{Gilmore85}, and enhanced $\alpha$-element abundances \citep{Gratton96, Fuhrmann98, Prochaska00, Bensby03, Reddy06, Bensby07, Fuhrmann08}. The origin of the thick disk has been investigated by many authors \citep[e.g.][]{Quinn86, Freeman87, Abadi03, Brook04, Brook05, Brook07, Schonrich09} but has not been resolved. The currently discussed formation mechanisms for the thick disk predict different trends between the kinematic properties and metallicity of disk stars, as well as between their kinematics and spatial distributions. For example, gas-rich merger models predict a rotational velocity gradient with Galactocentric distance for disk stars near the solar radius \citep[][]{Brook07}. Models of disk heating via satellite mergers or a growing thin disk can induce a notable increase in the mean rotation and velocity dispersions of thick-disk stars \citep[][]{Villalobos10}. \citet{Sales09} also showed that the distribution of orbital eccentricities of nearby thick-disk stars could provide constraints on these proposed formation models. Comparing the predictions of these models with the observed kinematic properties of the Galactic disk is therefore helpful for understanding the formation and evolution of the Galaxy. To understand the formation and chemical evolution of the Galactic components, we need chemical and kinematic information for a large number of stars over larger areas, which will greatly increase the spatial coverage of the Galaxy. Large-scale spectroscopic surveys make this possible by providing ideal databases of radial velocities and stellar atmospheric parameters ($T_{\rm eff}$, $\log g$, [Fe/H], etc.). A number of papers have employed kinematic properties and chemical abundances to study Galactic structure and formation based on spectroscopic survey data. For instance, \citet{Bond10}, \citet{Carollo10}, \citet{Lee11} and \citet{Smith09} have characterized the halo and disk based on the Sloan Digital Sky Survey \citep[SDSS;][]{York00} and its sub-survey, the Sloan Extension for Galactic Understanding and Exploration \citep[SEGUE;][]{Aihara11,Yanny09, Beers06}. SDSS-III's Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE;][]{Eisenstein11} has higher spectral resolution than SEGUE and has also been used to explore the kinematics of the disk \citep[e.g.][]{Bovy12,Bovy15, Ness16}. Compared to SEGUE, the Radial Velocity Experiment (RAVE) provides a bright complement to the SEGUE sample \citep{Siebert11}, and many works on the kinematics of the Galactic disk \citep[e.g.][]{Binney14,CasettiDinescu11,Siebert08} have been based on RAVE data. 
There are also some works based on the Gaia-ESO internal data releases \citep[e.g.][]{RecioBlanco14,Kordopatis15}. The ongoing Large Sky Area Multi-Object Fiber Spectroscopic Telescope survey \citep[LAMOST, also called the Guoshoujing Telescope;][]{Cui12, Deng12, Zhao12, Luo12} has released more than two million stellar spectra with stellar parameters in its DR2 catalog. This data set provides a vast resource for studying the details of the velocity distribution and for constraining the dynamical structure and evolution of the Galactic disk. In this study, we make use of F/G dwarf stars selected from the LAMOST survey to explore the observed correlations of space velocity and orbital eccentricity with metallicity and distance from the Galactic plane. We also compare the observational results with the predictions of different models, with the aim of obtaining clues about the formation of the disk. The outline of this paper is as follows. In Section 2, we give a brief overview of LAMOST and the observational data, and derive the individual three-dimensional velocities. Section 3 presents the observational results. The discussion is given in Section 4. A summary and conclusions are given in Section 5.
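As a minimal sketch of the three-dimensional velocity derivation mentioned above, the following Python example uses astropy to combine a radial velocity with proper motions and a distance and to transform the result into a Galactocentric frame. The input values are placeholders rather than LAMOST measurements, and astropy's default solar position and motion may differ from the values adopted in this work.
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

# One illustrative star; the values below are placeholders, not LAMOST data.
star = SkyCoord(ra=150.0 * u.deg, dec=20.0 * u.deg,
                distance=1.2 * u.kpc,
                pm_ra_cosdec=-5.0 * u.mas / u.yr,
                pm_dec=3.0 * u.mas / u.yr,
                radial_velocity=-30.0 * u.km / u.s)

# Transform to a Galactocentric frame.  Astropy's default solar position and
# motion may differ from the values adopted in the paper, so for a faithful
# reproduction they should be set explicitly via Galactocentric(...).
gc = star.transform_to(Galactocentric())
print("(x, y, z)    = (%.2f, %.2f, %.2f) kpc"
      % (gc.x.to_value(u.kpc), gc.y.to_value(u.kpc), gc.z.to_value(u.kpc)))
print("(vx, vy, vz) = (%.1f, %.1f, %.1f) km/s"
      % (gc.v_x.to_value(u.km / u.s), gc.v_y.to_value(u.km / u.s),
         gc.v_z.to_value(u.km / u.s)))
\end{verbatim}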
In this paper, we use 130,043 F/G-type dwarf stars from the LAMOST DR2 data to investigate the kinematics and metallicity distribution of the Galactic disk. Our sample comprises stars with $6.5<R< 9.5$ kpc, $0.1< |Z|< 3$ kpc, $\log g > 3.5$, $-1.2<$ [Fe/H] $<0.6$, and $\rm S/N >$ 15, which ensures that the sample stars are mainly drawn from the disk system. In the intermediate-metallicity range $-1.2<$ [Fe/H] $<-0.5$, the orbital eccentricity generally decreases with increasing metallicity, with a relatively large gradient. In the metal-rich range $-0.2<$ [Fe/H] $<0.6$, there is little or no correlation between orbital eccentricity and metallicity [Fe/H] for stars at $0.1< |Z|< 0.5$ kpc and $0.5< |Z|< 1.0$ kpc; however, for stars farther from the Galactic plane ($1.0<|Z|<3.0$ kpc), the eccentricity increases slightly with increasing metallicity. Similar trends are found in the correlations of $\sigma_{U}$ and $\sigma_{V}$ with metallicity. Moreover, the observed thick-disk stellar orbital eccentricity distribution peaks at low eccentricity ($e \sim 0.2$) and extends to high eccentricity ($e \sim 0.8$). We compare this result with four thick-disk formation models, and the observed distribution appears consistent with the gas-rich merger model and disfavors the accretion model. We examine the rotational velocity as a function of $|Z|$ and $R$ in the high-metallicity ([Fe/H] $>-0.1$) and intermediate-metallicity ($-0.8 <$ [Fe/H] $<-0.6$) ranges. There exists a clear gradient of $V_{\phi}$ with $|Z|$, but only a negligible rotational velocity gradient with Galactocentric radius $R$, for both metallicity ranges. In addition, the rotation velocity increases with increasing metallicity [Fe/H] in the range $-1.2<$ [Fe/H] $<-0.2$, peaks at [Fe/H] $\sim-0.2$, and then decreases slightly when [Fe/H] $>-0.2$. After separating the sample stars into likely thin-disk and thick-disk sub-samples, we find a negative gradient of rotation velocity $V_{\phi}$ with metallicity [Fe/H] for the likely thin-disk sub-sample, while the thick-disk sub-sample exhibits a larger, positive gradient of rotation velocity with metallicity. The gradient for the likely thin-disk sub-sample qualitatively agrees with the predictions of radial migration models \citep{Loebman11}, so we conclude that the radial migration of stars appears to have influenced the formation of the thin disk. The gradient for the thick-disk sub-sample is consistent with the result of the pure N-body models of \citet{Curir12}. However, detailed quantitative comparisons with these observational results will require more physically realistic models and simulations.
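The rotation-velocity--metallicity gradients quoted above can be estimated, at the simplest level, with a linear least-squares fit per sub-sample. The sketch below shows such a fit with a crude bootstrap uncertainty; the arrays are synthetic placeholders standing in for the LAMOST [Fe/H] and $V_{\phi}$ values, and the fitting procedure actually used in this work may differ.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def vphi_feh_gradient(feh, vphi, n_boot=500):
    """Linear least-squares slope dV_phi/d[Fe/H] (km/s per dex) with a simple
    bootstrap uncertainty estimate."""
    slope = np.polyfit(feh, vphi, 1)[0]
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, feh.size, feh.size)
        boot[i] = np.polyfit(feh[idx], vphi[idx], 1)[0]
    return slope, boot.std()

# Synthetic placeholder arrays standing in for a thin-disk-like sub-sample;
# real LAMOST [Fe/H] and V_phi values would replace these.
feh = rng.uniform(-0.7, 0.4, 2000)
vphi = 220.0 - 20.0 * feh + rng.normal(0.0, 25.0, feh.size)  # built-in negative gradient
slope, err = vphi_feh_gradient(feh, vphi)
print("dV_phi/d[Fe/H] = %.1f +/- %.1f km/s per dex" % (slope, err))
\end{verbatim}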
16
9
1609.00455
1609
1609.00380_arXiv.txt
{{\CppTransport} is a numerical platform that can automatically generate and solve the evolution equations for the 2- and 3-point correlation functions (in field space and for the curvature perturbation $\zeta$) for any inflationary model with canonical kinetic terms. It makes no approximations beyond the applicability of tree-level perturbation theory. Given an input Lagrangian, {\CppTransport} performs symbolic calculations to determine the `Feynman rules' of the model and generates efficient {\CC} to integrate the correlation functions of interest. It includes a visualization suite that automates extraction of observable quantities from the raw $n$-point functions and generates high quality plots with minimal manual intervention. It is intended to be used as a collaborative platform, promoting the rapid investigation of models and systematizing their comparison with observation. This guide describes how to install and use the system, and illustrates its use through some simple examples. \par\vspace{2cm} \mbox{}\hfill \includegraphics[scale=0.1]{LOGO-ERC} \quad \includegraphics[scale=0.5]{LOGO-STFC}}
There is now broad agreement that the inflationary scenario~\cite{Guth:1980zm,Starobinsky:1980te,Albrecht:1982wi, Hawking:1981fz,Linde:1981mu,Linde:1983gd} provides an acceptable framework within which to interpret observations of the very early universe. In this scenario, all structure arises from a primordial distribution of gravitational potential wells laid down by quantum fluctuations during an early phase of accelerated expansion. After inflation the universe is refilled with a sea of cooling radiation, and matter begins to condense within these potential wells. This generates a network of structure inheriting its statistical properties from those of the seed quantum fluctuations. By measuring these properties we can hope to infer something about the microphysical modes whose fluctuations were responsible. Information about the pattern of correlations visible within our Hubble patch can be extracted from any observable that traces the condensed matter distribution. To determine the viability of some particular inflationary model we must compare these observations with predictions that carefully account for the precise character of quantum fluctuations given the field content, mass scales and coupling constants of the model. Unfortunately these calculations are challenging. Various approximate schemes to compute a general $n$-point function are known, but even where these are available they require numerical methods except in special cases. Such schemes typically break the calculation into two pieces: a `hard' contribution---that is, involving comparatively large momenta---characterized by wavenumbers near the scale of the sound horizon $k/a \sim H / c_s$, and a `soft' contribution involving comparatively low wavenumbers $k/a \ll H / c_s$~\cite{Dias:2012qy}. The soft contribution can be computed using the classical equations of motion and is normally the only one to be handled exactly. The hard contribution is estimated by assuming all relevant degrees of freedom are massless and non-interacting. A typical example is the $\delta N$ formula for the equal-time two-point function of the curvature perturbation $\zeta$, \begin{equation} \langle \zeta(\vect{k}_1) \zeta(\vect{k}_2) \rangle_t = \aunderbrace[l1r]{N_\alpha(t, t_\ast) N_\beta(t, t_\ast)}_{\text{soft part}} \aoverbrace[L1R]{\langle \delta\phi^\alpha(\vect{k}_1) \delta\phi^\beta(\vect{k}_2) \rangle_{t_\ast}}^{\text{hard part}} . \label{eq:deltaN-approx} \end{equation} The indices $\alpha$, $\beta$, \ldots, label species of scalar field (with summation implied over repeated indices) and the subscript attached to each correlation function denotes its time of evaluation. The times are ordered so that $t \geq t_\ast$. Taking $t_\ast$ to label an epoch when $k_1 / a = k_2 / a \sim H / c_s$ makes $N_\alpha N_\beta$ correspond to the soft piece and the $t_\ast$ correlation function correspond to the hard piece. This division is entirely analogous to the factorization of hadronic scattering amplitudes into a hard subprocess followed by soft hadronization. Any scheme of this type will break down if the hard initial condition can not be approximated by the `universal' massless, non-interacting case. In recent years it has been understood that there is a rich phenomenology associated with this possibility, including `gelaton-like'~\cite{Tolley:2009fg} or `QSFI-like'~\cite{Chen:2009we,Chen:2009zp,Chen:2012ge} effects. 
With sufficient care these effects can be captured in an approximation scheme such as~\eqref{eq:deltaN-approx}, but the approach becomes more complex---and even if this is possible we have only exchanged the problem for computation of the hard component $\langle \cdots \rangle_{t_\ast}$. If this hard component is not universal, then the problem is no easier than calculation with which we started. A different way in which~\eqref{eq:deltaN-approx} loses its simplicity occurs when there is not a single hard scale, but a number of widely separated scales. For example, this can occur in an $n$-point function with $n \geq 3$ where the external wavenumbers $\vect{k}_i$ divide into groups characterized by typical magnitudes $\mu_1$, $\mu_2$, \ldots, $\mu_N$ and $\mu_1 \ll \mu_2 \ll \cdots \ll \mu_N$. In this case the factorization in~\eqref{eq:deltaN-approx} becomes more involved~\cite{Kenton:2015lxa}, and must be modified in a way depending on the precise hierarchy of groups. Taken together, these difficulties generate a significant overhead for any analysis where accurate predictions of $n$-point functions are important. The form of this overhead varies from model to model, and even on the range of wavenumbers under consideration. If we choose to pay the overhead and pursue this approach, we encounter three significant obstacles. First, a sizeable investment may be required---due to field-theory calculations of the hard component---% before analysis can commence for each new model. Second, because each hard component is model-specific, there may be limited opportunities for economy by re-use. Third, if we implement the hard component of each model individually (perhaps using a range of different analytic or numerical methods), we must painfully test and validate the calculation in each case. \subsection{Automated calculation of inflationary correlation functions} To do better we would prefer a completely general method that could be used to obtain accurate predictions for each $n$-point function, no matter what mass spectrum is involved or what physical processes contribute to the hard component. Such a method could be used to compute each correlation function directly, without imposing any form of approximation. The same problem is encountered in any area of physics for which observable predictions depend on the methods of quantum field theory. The paradigmatic example is collider phenomenology, where the goal is to compare theories of beyond-the-Standard-Model physics to collision events recorded at the Large Hadron Collider. In both early universe cosmology and collider phenomenology the challenge is to obtain sufficiently accurate predictions from a diverse and growing range of physical models---and, in principle, the obstacles listed above apply equally in each case. But, in collider phenomenology, the availability of sophisticated tools to \emph{automate} the prediction process has allowed models to be developed and investigated at a remarkable rate. 
Examples of such tools include \href{http://theory.sinp.msu.ru/~pukhov/calchep.html}{{\CompHEP}/{\CalcHEP}}~\cite{Pukhov:1999gg,Boos:2004kh,Pukhov:2004ca,Belyaev:2012qa}, \href{http://www.feynarts.de/formcalc/}{\FormCalc}~\cite{Hahn:1998yk,Hahn:2000kx,Hahn:2006qw,Agrawal:2011tm}, \href{http://helac-phegas.web.cern.ch/helac-phegas/helac-phegas.html}{\HELAC}~\cite{Kanaki:2000ey,Cafarella:2007pc}, \href{http://madgraph.hep.uiuc.edu}{\MadGraph}~\cite{Maltoni:2002qb,Alwall:2007st,Alwall:2011uj,Alwall:2014hca}, \href{https://sherpa.hepforge.org/trac/wiki}{\SHERPA}~\cite{Gleisberg:2003xi,Gleisberg:2008ta} and \href{https://whizard.hepforge.org}{\Whizard}~\cite{Moretti:2001zz,Kilian:2007gr}. (For an early review of the field, see the \emph{Les Houches Guide to MC Generators}~\cite{Dobbs:2004qw}.) Their common feature is support for automatic generation of LHC event rates directly from a Lagrangian by mixing three components: (1) symbolic calculations to construct suitable Feynman rules, (2) automatically-generated compiled code to compute individual matrix elements, and (3) Monte Carlo event generators to convert these matrix elements into measurable event rates. This strategy of automation has successfully overcome the difficulties encountered in developing cheap, reliable, model-dependent predictions. In addition, reusable tools bring obvious advantages of simplicity and reproducibility. There have also been indirect benefits. For example, the existence of widely-deployed tools has provided a common language in which to express not only the models but also the elements of their analysis. In early universe cosmology the available toolbox is substantially less developed. A number of public codes are available to assist computation of the two-point function, including \href{http://theory.physics.unige.ch/~ringeval/fieldinf.html}{\FieldInf} \cite{Ringeval:2005yn,Martin:2006rs,Ringeval:2007am}, \href{http://modecode.org}{\ModeCode} and \href{http://modecode.org}{\MultiModeCode} \cite{Mortonson:2010er,Easther:2011yq,Norena:2012rs,Price:2014xpa}, and \href{http://pyflation.ianhuston.net}{\PyFlation}~\cite{Huston:2009ac, Huston:2011vt,Huston:2011fr}. But although these codes are \emph{generic}---they can handle any model within a suitable class---% they are not \emph{automated} in the sense described above, because expressions for the potential and its derivatives must be obtained by hand and supplied as subroutines. For the three-point function the situation is more restrictive. Currently the only public code is \href{https://sites.google.com/site/codecosmo/bingo}{\BINGO}~\cite{Hazra:2012yn, Sreenath:2014nca} which is limited to single-field canonical models. Partially, this difference in availability of solvers for the two- and three-point functions has arisen because a direct implementation of the Feynman calculus is not straightforward for $n$-point functions with $n \geq 3$. In these cases, conversion of formal Feynman integrals into concrete numerical results usually depends on techniques such as Wick rotation that are difficult to implement without an analytic expression for the integrand. Such expressions are seldom available for the time-dependent backgrounds required by cosmology, making integration over the time variable more demanding than for Minkowski-space scattering amplitudes. For this reason it would be considerably more convenient to work with an explicitly real-time formulation. Recently, Dias et al. described a formulation of field theory with these properties. 
It can be applied to time-dependent backgrounds more straightforwardly than the traditional machinery of Feynman diagrams~\cite{DiasFrazerMulryneSeery}. This formulation is based on direct computation of the $n$-point functions by an evolution or `transport' equation, allowing most of the complexities of field theory to be absorbed into calculation of suitable initial conditions. In inflation these initial conditions \emph{are} universal, \emph{provided} the calculation is started at sufficiently early times where all scalar fields can be approximated as massless. Therefore, obtaining suitable initial conditions becomes a one-time cost, the results of which are easy to compute numerically. The remaining challenge is to implement the evolution equations that bring these initial conditions to the final time of interest. These also have a universal form, parametrized by coefficient matrices that depend only algebraically on the model at hand. Using this scheme it becomes possible to implement automated calculation of inflationary correlation functions in the same sense as the tools used in collider phenomenology. By performing suitable symbolic calculations we can compute the necessary coefficient matrices for any model, and given knowledge of these matrices it is straightforward to generate specialized code that implements the necessary evolution equations. When compiled this code will take advantage of any opportunities for optimization detected by the compiler, making evaluation of each correlation function as rapid as possible. Finally, by mapping each correlation function over a range of wavenumbers we place ourselves in a position to determine any late-universe observable of interest. \subsection{The {\CppTransport} platform} {\CppTransport} is a platform that implements this prescription. It is the result of three years of development, amounting to roughly 60,000 lines of {\CC}, and consists of three major components: \begin{enumerate} \item A \emph{translator} (17,000 lines) converts `model description files' into custom {\CC} implementing suitable evolution equations and initial conditions. The model description file enumerates the field content of the model, lists any parameters required by the Lagrangian, and specifies the inflationary potential. It is also possible to document the model by providing a rich range of metadata. \item Once this specialized {\CC} code is available it must be compiled together with a runtime support system to produce a finished product capable of integrating the transport equations and producing correlation functions. The \emph{management system} (29,000 lines) is the largest component in the runtime support and has responsibility for coordinating integration jobs and handling parallelization. It also provides database services. \item The remaining component is a \emph{visualization and reporting suite} (15,000 lines) that can process the raw integration data to produce observable quantities and present the results as plots or tables. The reporting component generates interactive HTML documents containing these outputs for easy reading or sharing with collaborators. \end{enumerate} \begin{figure} \begin{center} \includegraphics[scale=0.75]{Diagrams/organization} \end{center} \caption{\label{fig:organization}Block diagram showing relation of {\CppTransport} components.} \end{figure} A block diagram showing the interaction among {\CppTransport} components is given in Fig.~\ref{fig:organization}. 
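To give a concrete, if highly simplified, picture of the evolution equations referred to above, the following Python sketch integrates a toy version of the two-point-function transport equation, schematically $\mathrm{d}\Sigma/\mathrm{d}N = u\,\Sigma + \Sigma\,u^{\rm T}$. The $2\times 2$ coefficient matrix used here is an arbitrary placeholder and is not something produced by the {\CppTransport} translator.
\begin{minted}{python}
import numpy as np
from scipy.integrate import solve_ivp

# Schematic transport of a 2x2 field-space two-point function Sigma(N):
#     dSigma/dN = u Sigma + Sigma u^T
# The coefficient matrix u is model-dependent (and time-dependent in general);
# the constant matrix below is a toy placeholder, not translator output.
u = np.array([[-0.02, 0.01],
              [0.00, -0.05]])

def rhs(N, sigma_flat):
    sigma = sigma_flat.reshape(2, 2)
    return (u @ sigma + sigma @ u.T).ravel()

sigma0 = np.eye(2).ravel()               # toy initial condition
sol = solve_ivp(rhs, (0.0, 60.0), sigma0, rtol=1e-8, atol=1e-12)
print("Sigma at N = 60:\n", sol.y[:, -1].reshape(2, 2))
\end{minted}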
To investigate some particular model normally requires the following steps: \begin{enumerate} \item Produce a suitable model-description file and process it using the {\CppTransport} translator. \item Produce a short {\CC} code that couples the runtime system to some number of model implementations produced by the translator---at least one, but up to as many as required. Each implementation is pulled in as a header file via the \mintinline{c++}{#include} directive. The code can define any number of \emph{integration tasks}, \emph{post-processing tasks} and \emph{output tasks} that describe the work to be done: \begin{itemize} \item \emph{Integration tasks} associate a single model with a fixed choice for the parameters required by its Lagrangian, and initial conditions for the fields and their derivatives. The task specifies a set of times and configurations (assignments for the wavenumbers $\vect{k}_i$ characterizing each correlation function) at which samples should be stored. \item \emph{Post-processing tasks} act on the output from an integration task or other post-processing task. They are typically used to convert the field-space correlation functions generated by integration tasks into observable quantities, such as the correlation functions of the curvature perturbation $\zeta$. Further post-processing tasks can compute inner products of the $\zeta$ three-point function with commonly-defined templates. \item \emph{Output tasks} draw on the data produced by integration and post-processing tasks to produce summary plots and tables. \end{itemize} \item When compiled and executed, the code writes all details of its tasks into a \emph{repository}---an on-disk database that is used to aggregate information about the tasks and the numerical results they produce. \item The runtime system uses the information stored in a repository to produce output for each task on demand. The results are stored in the repository and information about them is collected in its database. \item Once predictions for the required correlation functions have been obtained, they can be converted into science outputs: \begin{itemize} \item If relatively simple observables are required, such as a prediction of the amplitude or spectral indices for the $\zeta$ spectrum or bispectrum, then it may be sufficient to set up an output task to compute these observables directly. The result can be written as a set of publication-ready plots in some suitable format such as PDF, SVG or PNG, or as ASCII-format tables listing numerical values. \item Output tasks support a limited range of observables. For more complex cases, or to produce plots by hand, the required data can be exported from the databases stored inside the repository. {\CppTransport} does not itself provide this functionality, but because its databases are of the industry-standard SQL type there is a wide selection of powerful tools to choose from. Many of these are freely available. \item To share information about the results that have been generated, {\CppTransport} can produce a report in HTML format suitable for exchanging with collaborators. These reports include a summary analysis of content generated by integration tasks. They also embed the plots and tables produced by output tasks. \end{itemize} \end{enumerate} \subsection{Summary of features} The remainder of this paper will describe these steps in more detail and illustrate how the numerical results they generate can be used to study inflationary models. 
Acting together, the components of {\CppTransport} provide much more than a bare implementation of the evolution equations for each $n$-point function. The main features of the platform are: \begin{itemize} \item Numerical results including \semibold{all relevant field-theory effects at tree-level}. The method correctly accounts for a hierarchy of mass scales, interactions among different field species, and correlation or interference effects around the time of horizon exit. It makes no use of approximations such as the separate universe method or the slow-roll expansion.% \footnote{In order to obtain accurate estimates of the initial conditions, the slow-roll approximation should be approximately satisfied at the initial time. For more details, see the accompanying technical paper~\cite{DiasFrazerMulryneSeery}. This also contains a discussion of the validity of the tree-level approximation.} \item An \semibold{SQL-based workflow} based on the \href{http://sqlite.org}{\SQLite} database management system,% \footnote{`SQL' is the \emph{Structured Query Language}, a set-based language used nearly universally to express queries acting on the most common `relational' type of database. It is useful because, to extract some subset from a large database, one need only \emph{describe} the subset rather than give an explicit algorithm to search for it; it is the responsibility of the database management system to devise a strategy to read and collate the required records. This is very convenient for scientific purposes because it allows a dataset to be analysed in many different ways, by many different tools, with only modest effort.} which {\CppTransport} uses for its data storage. Because SQL is an industry-standard technology there is a rich ecosystem of existing tools that can be used to read SQLite databases and perform real-time SQL queries. This enables powerful GUI-based workflows that allow \semibold{scientific exploitation and analysis without extensive programming}. \item A \semibold{fully parallelized} {\MPI}-based implementation that scales from laptop-class hardware up to many cores, using \semibold{adaptive load-balancing} to keep all cores fed with work. A \semibold{transactional design} means it is safe to run multiple jobs simultaneously, and automated \semibold{checkpointing and recovery} prevent work being lost in the event of a crash. If modifications are required then the messaging implementation is \semibold{automatically instrumented} to assist with debugging and performance optimization. \item Manages the \semibold{data lifecycle} by linking each dataset to a repository storing all information about the parameters, initial conditions and sampling points used for the calculation. The repository also collects \semibold{metadata about the integration}, such as the type of stepper used and the tolerances applied. Together, this information ensures that each dataset is properly documented and has long-term archival value. (All repository data is stored in human-readable JSON documents in order that this information is accessible, if necessary, without requiring the {\CppTransport} platform.) \item The repository system supports \semibold{reproducible research} by providing an unambiguous means to regenerate each dataset, including any products derived from it. This already provides clear benefits at the analysis stage, because it is not possible to confuse when or how each output was generated. 
But if shared with the community, the information stored in a repository enables every step of an analysis to be audited. \item When derived products such as plots or tables are produced, their dependence on existing datasets is recorded. This means that the platform can provide \semibold{a detailed provenance for any data product tracked by the repository}. The reporting suite generates HTML documents containing a hyperlinked audit trail summarizing this provenance. Notes can be attached to each repository record, meaning that the report functions as a type of \semibold{electronic laboratory notebook}. \item \semibold{Leverages standard libraries}, including the \href{http://www.boost.org}{\Boost} {\CC} library. Integrations are performed using high-quality steppers taken from \href{http://www.boost.org/doc/libs/release/libs/numeric/odeint}{{\Boost}.{\odeint}}~\cite{2011AIPC.1389.1586A}. These steppers are interchangeable, meaning that they can be customized to suit the model in question. For difficult integrations, very high-order adaptive steppers are available. \item The translator is a full-featured tool in its own right, capable of customizing arbitrary template code for each model using sophisticated replacement rules. It understands a form of Einstein summation convention, making generation of \semibold{specialized template code} rapid and convenient. \end{itemize} \subsection{Notation and conventions} This document includes examples of computer code written in a variety of languages. To assist in understanding the context of each code block, its background is colour-coded according to the language: \begin{itemize} \item Shell input or output, blue background: \mintinline{bash}{export PATH=/usr/local/bin:$PATH} \item Configuration files, green background: \mintinline{text}{input = /usr/local/share/cpptransport} \item {\CC} source code, yellow background: \mintinline{c++}{class dquad_mpi;} \item Python source code, red background: \mintinline{python}{def plot():} \item {\CMake} scripts, olive background: \mintinline{cmake}{TARGET_LINK_LIBRARIES()} \item SQL code, magenta background: \mintinline{sql}{SELECT * FROM} \end{itemize} {\CppTransport} uses units where $c=\hbar=1$ but the reduced Planck mass $\Mp = (8\pi G)^{-1/2}$ can be set to an arbitrary value. Each inflationary model can have an arbitrary number of scalar fields. These are all taken to be singlets labelled by indices $\alpha$, $\beta$, \dots, and are written $\phi^\alpha$; their perturbations are $\delta\phi^\alpha$. {\CppTransport} does not use the slow-roll approximation, and therefore it is necessary to deal separately with the scalar field derivatives $\dot{\phi}^\alpha$ and $\delta\dot{\phi}^\alpha$. We often write these generically as $\pi^\alpha$ and collect them into a larger set of fields indexed by labels $a$, $b$, \ldots: \begin{equation} X^a = (\phi^\alpha, \pi^\alpha) \qquad \text{or} \qquad \delta X^a = (\delta\phi^\alpha, \delta\pi^\alpha) . \end{equation}
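Because the data containers referred to in the feature list above are ordinary {\SQLite} files, they can also be inspected entirely outside {\CppTransport}. The following sketch uses only Python's standard-library \mintinline{python}{sqlite3} module to open a container, discover its tables through {\SQLite}'s own catalogue, and preview a few rows; the file path is hypothetical, and no knowledge of the {\CppTransport} schema is assumed.
\begin{minted}{python}
import sqlite3

# Hypothetical path; point this at a data container produced by an
# integration task inside a CppTransport repository.
db_path = "output/data.sqlite"

with sqlite3.connect(db_path) as conn:
    cur = conn.cursor()
    # sqlite_master is SQLite's own catalogue, so no knowledge of the
    # CppTransport schema is required to discover what is stored.
    tables = [row[0] for row in
              cur.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    print("tables:", tables)
    for name in tables[:3]:
        # String formatting of identifiers is acceptable here only because
        # the names come from the database itself.
        rows = cur.execute("SELECT * FROM {} LIMIT 5".format(name)).fetchall()
        print(name, "->", rows)
\end{minted}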
This section summarizes the command-line options recognized by {\CppTransport} executables. \vspace{3mm} \noindent Housekeeping functions: \begin{itemize} \item \option{{-}{-}help} \\ Display brief usage information and a list of all available options. \item \option{{-}{-}version} \\ Show version of {\CppTransport} used to build the model headers, and the version of the runtime system. These need not be the same, although {\CppTransport} requires the runtime system to be at least as recent as the version used to build the headers. \item \option{{-}{-}license} \\ Display licensing information. \item \option{{-}{-}models} \\ Show list of models understood by this executable. \item \option{{-}{-}no-colour} or \option{{-}{-}no-color} \\ Do not produce colourized output. Normally {\CppTransport} will detect whether the terminal in which it is running can support colour. However, if you are redirecting {\CppTransport}'s output to a file then you may wish to manually suppress the use of colour. \item \option{{-}{-}include}, or abbreviate to \option{-I} \\ Adds the following path to the list of paths searched for resources. Currently, the only resources needed by {\CppTransport} are those used by the HTML report generator. \end{itemize} \noindent Configuration options: \begin{itemize} \item \option{{-}{-}verbose}, or abbreviate to \option{-v} \\ Display extra status and update messages. \item \option{{-}{-}repo}, or abbreviate to \option{-r} \\ Should be followed by a path identifying the repository to be used. If the repository does not exist then new, blank repository is created. \item \option{{-}{-}caches} \\ \option{{-}{-}batch-cache} \\ \option{{-}{-}datapipe-cache} \\ Followed by a cache size in Mb. Sets the corresponding cache size (or both caches, if the option \option{{-}{-}caches} is used). The \emph{batching cache} is used to temporarily hold the data products from integration in memory before flushing them to disk; see~\S\ref{sec:what-happens}. The \emph{datapipe cache} stores data used to generate derived products. This normally requires database access, which can be time consuming on a slow filing system. Storing data in memory can give a significant performance boost if the same data is re-used. \item \option{{-}{-}network-mode} \\ Disable use of the {\SQLite} write-ahead log. Must be used if the repository is stored on a network filing system such as NFS or Lustre, but should otherwise be omitted. \end{itemize} \noindent Job specification: \begin{itemize} \item \option{{-}{-}create} \\ Write records held by this executable into the repository (\S\ref{sec:examine-k-database}). \item \option{{-}{-}task} \\ Followed by the name of task. Adds the named task to the list of work. \item \option{{-}{-}tag} \\ Specify a tag to be attached to any content groups generated by this {\CppTransport} job. For postintegration or output tasks, filters the available content groups to those that share the specified tag. Can be repeated multiple times to specify more than one tag. \item \option{{-}{-}checkpoint} \\ Set the checkpoint interval, measured in minutes. Overrides any default checkpoints set by individual tasks. \item \option{{-}{-}seed} \\ Seed jobs using the specified content group. \end{itemize} \noindent Repository actions: \begin{itemize} \item \option{{-}{-}object} \\ Select objects to be modified. Regular expressions can be used between curly braces \mintinline{bash}+{+ $\cdots$ \mintinline{bash}+}+. 
\item \option{{-}{-}lock} \\ Lock repository records (preventing modification or deletion) for content groups matching the object specification list. \item \option{{-}{-}unlock} \\ Unlock repository records matching the object specification list. \item \option{{-}{-}add-tag} \\ Add the specified tag to content groups matching the object specification list. Can be repeated multiple times to add more than one tag. \item \option{{-}{-}delete-tag} \\ Remove the specified tag from content groups matching the object specification list. Can be repeated to delete multiple tags. \item \option{{-}{-}add-note} \\ Add the specified note (which should be quoted if it contains spaces) to any content groups matching the object specification list. Can be repeated to add multiple notes. \item \option{{-}{-}delete-note} \\ Specifies a note to remove by number (check the repository record using \option{{-}{-}info} to obtain a list of notes). Can be applied to multiple content groups, but will remove the same-numbered note from each list. \item \option{{-}{-}delete} \\ Remove content groups matching the object specification list, provided no other content groups depend on them. Notice that operations can be chained. For example, \option{{-}{-}unlock} and \option{{-}{-}delete} can be specified at the same time, in which case unlocking is performed \emph{before} deletion. The same applies to other operations such as adding or removing tags and notes. If \option{{-}{-}lock} is specified then the record is locked only after all other operations have been processed. \end{itemize} \noindent Repository reporting and status: \begin{itemize} \item \option{{-}{-}record} \\ Perform recovery on the repository; see~\S\ref{sec:checkpointing}. \item \option{{-}{-}status} \\ Print brief report showing repository status. Includes available tasks and the number of content groups attached to each task, in addition to the details of any in-flight jobs. \item \option{{-}{-}inflight} \\ Similar to \option{{-}{-}status}, but shows details of in-flight jobs only. \item \option{{-}{-}info} \\ Report on a specified repository record. Matches any objects whose names begin with the specified string, so it is not necessary to write the name out exactly. Alternatively, a regular expression can be provided by wrapping it in curly braces \mintinline{bash}+{+ $\cdots$ \mintinline{bash}+}+. \item \option{{-}{-}provenance} \\ Report on the provenance of a specified output content group. The provenance report shows all content groups that contributed to each derived product generated as part of the group. Name matching is as for \option{{-}{-}info}. \item \option{{-}{-}html} \\ Write an HTML-format report on the contents of the repository to the specified folder. \end{itemize} \noindent Plotting options: \begin{itemize} \item \option{{-}{-}plot-style}, or abbreviate to \option{-p} \\ Select a plotting style. See the discussion in~\S\ref{sec:environment}. \item \option{{-}{-}mpl-backend} \\ Force {\CppTransport} to use a specified {\Matplotlib} backend. See the discussion in~\S\ref{sec:environment}. \end{itemize} \noindent Journaling options: \begin{itemize} \item \option{{-}{-}gantt} \\ Write a process Gantt chart, showing the activities of each process in a multiprocess MPI job, to the specified file. Any output format supported by {\Matplotlib} may be used, selected by its extension. Alternatively the extension \file{.py} may be specified to obtain the Python script suitable for generating the plot.
\item \option{{-}{-}journal} \\ Write a (very detailed) JSON-format journal showing the MPI communication between workers. Mostly of value when debugging. \end{itemize}
16
9
1609.00380
1609
1609.09268_arXiv.txt
Isolated hot subdwarfs might be formed by the merging of two helium-core white dwarfs. Before merging, helium-core white dwarfs have hydrogen-rich envelopes and some of this hydrogen may survive the merger. We calculate the mass of hydrogen that is present at the start of such mergers and, with the assumption that hydrogen is mixed throughout the disrupted white dwarf in the merger process, estimate how much can survive. We find a hydrogen mass of up to about $2 \times 10^{-3}\,\Msol$ in merger remnants. We make model merger remnants that include the hydrogen mass appropriate to their total mass and compare their atmospheric parameters with a sample of apparently isolated hot subdwarfs, hydrogen-rich sdBs. The majority of these stars can be explained as the remnants of double helium white dwarf mergers.
\label{sec:intro} A hot subdwarf is a member of one of the subdwarf B (sdB), helium sdB (He-sdB), subdwarf O (sdO) or helium subdwarf O (He-sdO) classes. These stars are recognized and classified by properties of their spectra, such as the profiles and relative strength of hydrogen and helium spectral lines, or by detailed spectral analysis and comparison with model atmospheres \citep{2009ARA&A..47..211H, 2016PASP..128h2001H, 2013A&A...551A..31D}. Such atmospheric modelling shows that these stars have effective temperatures, $T_{\rmn{eff}}$, between about $20$ and $80\,\rmn{kK}$ and logarithmic surface gravities, $\log_{10}(g/\rmn{cm}\,\rmn{s}^{-2}) \equiv \log g$, between about $5$ and $6.5$. The sdB and He-sdB stars have smaller $T_{\rmn{eff}}$ than the sdO and He-sdO stars. The helium abundance is often parametrized by the ratio of helium to hydrogen photospheric number densities, $y=n_{\rmn{He}}/n_{\rmn{H}}$. Generally, sdBs and sdOs have $y \la 0.1$, and He-sdBs and He-sdOs have $y \ga 10$ \citetext{\citealp{2005A&A...430..223L}; \citealp{2007A&A...462..269S}; \citealp*{2012MNRAS.427.2180N}}. The bulk of hot subdwarfs are believed to be stars with burning helium cores and inert hydrogen envelopes of low mass, their immediate progenitors or their shell-helium-burning progeny. In particular, the sdBs are thought to be largely extended (or extreme) horizontal branch (EHB) stars: stars with helium cores of about $0.47\,\Msol$ and hydrogen envelopes of low mass, $\la 20 \times 10^{-3}\,\Msol$ \citep{1986A&A...155...33H}. In the effective temperature--gravity plane the sdBs are concentrated in the region expected for the long-lived core-He burning phase between the ignition of central He (zero-age EHB, ZAEHB) and core-He exhaustion (terminal-age EHB). Core-He-burning stars with more or less massive He-rich cores ($\ga 0.33\,\Msol$) and varying H-rich envelope masses can also have the right surface properties to be hot subdwarfs \citep{2002MNRAS.336..449H}. Not all hot subdwarfs are shell- or core-He-burning stars. For example, some hot subdwarfs are pre-helium white dwarfs, shell-H-burning stars with inert degenerate helium cores and low-mass hydrogen envelopes \citep{2003A&A...411L.477H}, and others are pre-carbon--oxygen white dwarfs \citep{2003ARA&A..41..391V}. How are stars with these structures formed? A large fraction, about $2/3$, of hot subdwarfs have companions \citep{2001MNRAS.326.1391M} on short-period orbits (orbital period $P \la 10\,\rmn{d}$). Systems of this type are explained with canonical models of the evolution of binary star systems: the hot subdwarf is a star which had nearly all of its envelope removed in a common-envelope (CE) phase and also ignited helium in its core. Similarly, a few hot subdwarfs are found to have companions on long-period orbits \citep{2012MNRAS.421.2798D,2013A&A...559A..54V}. These systems are also explained by the normal evolution of binary systems, but with the hot subdwarfs having had most of their envelopes removed in a phase of Roche lobe overflow \citetext{\citealp*{1976ApJ...204..488M}; \citealp{2002MNRAS.336..449H}; \citealp{2009A&A...503..151Y}}; although their eccentricity may be a puzzle. Our understanding of the formation of the isolated hot subdwarfs, those without apparent companions, is more uncertain. There have been several suggested channels through which such stars could form, so the problem is to find the extent to which each of these contributes -- if at all -- to the observed population.
A hot subdwarf phase is not part of the evolution of a single star, unless one assumes that mass-loss is enhanced during its first red giant phase \citep{1993ApJ...407..649C, 1996ApJ...466..359D}. On the other hand, standard evolutionary models of binary systems may produce isolated hot subdwarfs through various types of mergers of the two stars. We discuss other proposed channels in Section~\ref{sec:discussion}, but our focus in this paper is on the channel involving the merging of two helium-core white dwarfs \citep{1984ApJ...277..355W, 1986ApJ...311..753I}. The formation of a hot subdwarf through this channel begins with the formation of a short-period detached pair of helium-core white dwarfs. Such systems were predicted theoretically \citep{1984ApJ...277..355W} before they were first observed \citep*{1988ApJ...334..947S} and several tens of candidates have since been discovered \citetext{\citealp*{1995MNRAS.275..828M}; \citealp{2005A&A...440.1087N}; \citealp{2011ApJ...727....3K, 2012ApJ...751..141K}; \citealp{2015ApJ...812..167G}}. Some of the observed He+He systems have sufficiently short-period orbits ($P_{\rmn{orb}} \la 6\,\rmn{h}$) that, within a Hubble time, radiation of gravitational waves will decrease the separation to the point at which the lower mass white dwarf fills its Roche lobe and begins unstable mass transfer, ultimately resulting in the merging of the two white dwarfs to form a single He-rich star. If this merger remnant is massive enough then helium is ignited, initiating a relatively long-lived ($\sim 100\,\rmn{Myr}$) phase as a core-He-burning object \citep{1990ApJ...353..215I}. It is during this phase that the merger remnant would be a hot subdwarf. Calculations of the remnants of double-white-dwarf (DWD) mergers show that this channel works from a theoretical perspective. In fact, modelling of the outcome of DWD mergers, particularly of the mergers of two carbon--oxygen (CO) white dwarfs, has become a very active research area. The activity is driven by the possibility that these CO+CO mergers are the progenitors of Type Ia supernovae \citep{1984ApJS...54..335I, 1984ApJ...277..355W}, so few studies have focused on the He+He DWD mergers of interest to this paper. Generally, for mergers of white dwarfs of all core compositions, hydrodynamical simulations show that if detonation is avoided in the initial phases of a merger then one white dwarf is disrupted to form a disc-like structure around a cold core and hot envelope \citep{2014MNRAS.438...14D}. However, the evolution after this point, the immediate post-disruption phase, is uncertain (see Section~\ref{sec:method}). There have therefore been various methods and models used to calculate the evolution of merged white dwarfs \citep{1990ApJ...353..215I, 2000MNRAS.313..671S, 2012MNRAS.419..452Z}. While these models confirm that a He+He DWD merger can produce a He-dominated object that burns helium in its core for about $100\,\rmn{Myr}$, none of these studies has explicitly included the hydrogen which can dominate the envelopes (with masses $\la 10 \times 10^{-3}\,\Msol$) of the two white dwarfs in the start-of-merger configuration. Thus these previous models have He+He DWD mergers producing He-rich hot subdwarfs: He-sdB and He-sdO stars. The mass of hydrogen in the merger remnant is important in dictating its surface properties: its effective temperature, surface gravity and whether the surface is H- or He-rich. 
Generally, the addition of a hydrogen envelope to a He main-sequence (MS) star decreases its effective temperature and surface gravity \citep{1961ApJ...133..764C, 1967ZA.....65..226G}. Previous work has ignored hydrogen because the high temperatures during the merger could be expected to lead to its destruction. However, it is our intention in this paper to investigate the implications of the possibility that the hydrogen is distributed throughout the disrupted white dwarf and thus not heated to sufficiently high temperatures for burning \citep{1986ApJ...311..753I, 1990ApJ...353..215I}. In the post-merger phase diffusion would then act to bring any surviving hydrogen to the surface of the remnant and form a subdwarf with a H-rich envelope. In modelling the hot subdwarf population, \citet{2002MNRAS.336..449H, 2003MNRAS.341..669H} did include H-rich envelopes in their DWD merger remnants, although this was artificially added to H-free remnants. They assumed a uniform distribution between $0$ and $10^{-3}\,\Msol$ for the H-rich envelope mass. It was necessary to make this assumption for their model populations to show the spread in effective temperature observed. However, their choice of an upper limit of $10^{-3}\,\Msol$ for the H-rich envelope mass was not justified. It is not clear, for example, if this is reasonable for mergers of the highest mass He white dwarfs, which are expected to have lower mass envelopes \citep{1998A&A...339..123D}. There is thus ample motivation to more carefully investigate hydrogen in He+He DWD mergers, and particularly to make use of recent simulations of the disruption phase of the merger \citep{2014MNRAS.438...14D}. In this work we ask how much hydrogen can survive a He+He DWD merger to the point of core He burning. By focusing on realistic configurations at the start of a merger, we estimate the mass of hydrogen and other nuclides that are present and survive the merger. We use published simulations of the disruption of the less massive WD to estimate the mass of hydrogen remaining after this phase of the merger. With some assumptions we use these estimates to predict the range of atmospheric properties -- the effective temperature, surface gravity and composition -- of the remnants of He+He DWD mergers in long-lived burning phases. By doing so we consider the question of to what extent hot subdwarfs with H-rich surfaces can be explained as the progeny of DWD mergers. In Section~\ref{sec:tools} we discuss the 1D stellar-evolution code we use, \textsc{mesa/star}. In Section~\ref{sec:method} we describe how we model merger remnants, starting with the discussion of a particular case and ending with a general method. In Section~\ref{sec:sample} we choose a sample of isolated hot subdwarfs to compare to our models. In Section~\ref{sec:results} we present the results of applying our method to a large set of start-of-merger mass combinations. In Section~\ref{sec:discussion} we discuss the relevance of these results to the question of how isolated hot subdwarfs are formed. In Section~\ref{sec:conclusion} we give our conclusions.
\label{sec:conclusion} We have estimated the mass of hydrogen that is present in the remnants of He+He DWD mergers and thus found the region occupied by these stars in the effective temperature--surface gravity plane during the core-He-burning phase. By comparing to a sample of apparently isolated hot subdwarfs, we find that the majority of these stars can be explained as the remnants of He+He DWD mergers on this basis. We have identified several uncertainties that could either increase or decrease the extent of this region and change this conclusion. Reduction of these uncertainties will come through consideration of the following key questions. What is the mass of hydrogen in He WDs in the start-of-merger configuration? Asteroseismology of low-mass WDs and pre-WDs could constrain the envelope mass of these stars and check the models computed here. How much hydrogen survives the merger? Because hydrogen is at the surface of the WDs, the answer depends crucially on the evolution in the early stages of the merger. We hope that future work can check our assumption that hydrogen is distributed through the hot envelope and disc in the remnant; such mixing could occur by the formation of a CE in the early, currently numerically unresolved, stages of mergers. What happens to hydrogen in the post-merger phase? Which of the disc accretion or viscous view is the more appropriate description? An answer to one of the main outstanding questions on hot subdwarfs -- how are the isolated examples formed? -- would be greatly helped by a sample of such stars that have been checked for companions across a wide range of parameter space.
16
9
1609.09268
1609
1609.00349_arXiv.txt
Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional Fast Fourier Transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this \textsf{FFT-PT} method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
Observations of the large-scale structure (LSS) of the universe are becoming increasingly precise and abundant, with many large surveys planned in the near future, including e.g.~DES \cite{DESwhitepaper}, eBOSS \cite{eBOSSDawson}, DESI \cite{DESIwhitepaper}, Euclid \cite{EuclidWhitePaper}, WFIRST \cite{WFIRST1503}, LSST \cite{LSSTDESC}, and SPHEREx \cite{spherex1412}. It is exciting to use this observational window to study fundamental physics and the evolution and composition of the universe. This is possible because properties of the constituents of the universe leave characteristic fingerprints in the observed distribution of LSS, enabling detailed studies of e.g.~dark energy, the initial conditions from the big bang, neutrino-like particles, or modifications of general relativity. The accuracy with which we can study these fingerprints is set by the number of independent three-dimensional modes that we can model and include in data analyses. This is in turn determined by the smallest scale that we can still model. Therefore, an important aspect of large-scale structure research is to extend the validity of models to smaller, more nonlinear scales. Given the immense effort put into future surveys and the strong dependence of their science output on the smallest scale that can be modeled, any idea for improving LSS models on small scales is worth pursuing. This has therefore been an area of intense study in the literature. The two main perturbative modeling approaches are Eulerian standard perturbation theory (SPT) (e.g.~\cite{Goroff:1986ep, Jain:1993jh, 1996ApJS..105...37S, 1996ApJ...473..620S, Blas:2013bpa}) and Lagrangian perturbation theory (LPT) (e.g.~\cite{Zeldovich:1969sb, Bouchet:1994xp, Matsubara:2007wj}); see~\cite{bernardeauReview} for a review and e.g.~\cite{2006PhRvD..73f3519C,2008JCAP...10..036P,2008PhRvD..78j3521B,2009JCAP...06..017L,Taruya2010,2012JCAP...07..051B,2012JHEP...09..082C,2012JCAP...01..019P,2012PhRvD..85l3519B,2012MNRAS.427.2537C,2013JCAP...08..037P,2014PhRvD..90d3537M,2014JCAP...07..056C,2014JCAP...03..006M,2014JCAP...07..057C,2014PhRvD..90b3518C,2014JCAP...05..022P,2014JCAP...05..022P,2014JCAP...09..047M,2015JCAP...02..013S,zvonimir1410,2015PhRvD..91l3516S,2015JCAP...09..014V,2016JCAP...03..057V,2016JCAP...01..043M} for a selection of more recent developments. Higher-order perturbative corrections to these models push their validity to smaller scales. However these corrections involve high-dimensional, computationally expensive loop integrals. For example, the 2-loop power spectrum in SPT involves five-dimensional integrals at every wavenumber of interest. Accurate numerical evaluation of the 2-loop power spectrum can therefore take several CPU hours for a single set of cosmological parameter values. Reducing the computational complexity can make these 2-loop integrals more practicable for the LSS community, and simplify their use for constraining cosmological parameters from LSS surveys with Monte-Carlo chains, which often require evaluating model predictions for thousands of cosmological parameter values. Motivated by this, we recently proposed a fast method to evaluate the 1-loop, next-to-leading-order matter power spectrum from an arbitrary linear input power spectrum \cite{MarcelZvonimirPat1603}. Ref.~\cite{OSUloops1603} presented the same method for 2-2 contributions and an alternative method for 1-3 couplings. 
Related work that separates high-dimensional integrals into products of lower dimensional integrals includes \cite{Mccollough1202,sherwinZaldarriaga,fergusson1008,Marcel1108,Marcel1207,zvonimir1410,slepian1411,MarcelTobiasUros1411,Slepian1607} for LSS and e.g.~\cite{KSW,fergusson0912,kendrickTrispectrum1502,BoehmN32} for the CMB. Our method in \cite{MarcelZvonimirPat1603} executes 20 one-dimensional FFTs to return the 1-loop power spectrum over several decades in wavenumber at once at machine-level precision. This exploits the spherical symmetry of large-scale structure formation in real space by analytically integrating over orientations. The linear input power spectrum can thereby have an arbitrary functional form as long as it can be represented on a high-resolution, one-dimensional grid that is used for one-dimensional FFTs. In particular, the method can easily resolve the imprint of baryonic acoustic oscillations, BAO, on the initial power spectrum (see Section~\ref{se:Pkshape}). This is crucial for providing state-of-the-art model predictions for the nonlinear evolution of BAO features in $\Lambda$CDM models and extensions thereof. Our goal in this paper is to generalize the \FFPT~approach introduced in \cite{MarcelZvonimirPat1603} to higher order in large-scale structure perturbation theory, specifically to the 2-loop power spectrum, corresponding to next-to-next-to-leading order in the linear mass density. This generalization is important to test the applicability of the fast \FFPT~framework of \cite{MarcelZvonimirPat1603} beyond 1-loop power spectrum integrals. It should also help to make 2-loop perturbation theory more practically usable, for example to constrain cosmological parameters from a given dataset at little computational cost. While \FFPT~relies on exact analytical reformulations of the relevant 2-loop integrals, a viable alternative to reduce computational cost is to evaluate approximations of those integrals. As demonstrated by Refs.~\cite{RegPT1208,Foreman1606}, this can be achieved by Taylor expanding around a fiducial cosmological model, or by pre-computing integrals for a fiducial cosmology with high precision and then computing corrections for another cosmology with lower precision. The accuracy level and robustness of such approximate methods need to be checked for every application, e.g.~when accounting for halo biasing, redshift space distortions or extensions of the basic $\Lambda$CDM model. Although we share the same motivation and goals with Refs.~\cite{RegPT1208,Foreman1606}, our exact \FFPT~method is technically completely different and therefore complementary in practice, providing a useful path for cross-checks. It would also be interesting to combine the ideas of \cite{RegPT1208,Foreman1606} and our method in the future, particularly if the goal is to compute the 2-loop power spectrum robustly for different cosmological parameters at the sub-percent level of precision that is needed to realize the full scientific potential of future LSS surveys. For clarity we will focus on the standard 2-loop integrals for the matter power spectrum in SPT. However, our formalism can also handle halo bias, redshift space distortions (RSD), effects from the relative velocity between dark matter and baryons \cite{RelVelo1005}, or corrections from the effective field theory of large-scale structure \cite{2012JCAP...07..051B,2012JHEP...09..082C}, because the relevant integrals have the same form as the ones we consider here.
For example, halo bias can be included simply by modifying the perturbative $F_n$ kernels that enter the loop integrals (see Section~\ref{se:bias}), while RSD effects amount to including additional velocity correlators involving velocity kernels $G_n$ (see Section~\ref{se:rsd}). In principle it should also be possible to generalize the formalism to higher-order statistics beyond the power spectrum. Our method should also work for cosmological models beyond $\Lambda$CDM as long as analytical expressions for perturbative kernels exist (see \cite{FasielloVlah1604} for recent progress in this direction). For models that do not allow for analytical perturbative kernels one instead has to resort to alternative approaches, for example computing kernels fully numerically. While this is possible for subsets of 2-loop contributions by storing kernels on grids \cite{Taruya1606}, it is not clear if fourth or fifth order kernels could be included efficiently in such an approach. Our paper is organized as follows. To get intuition, we first introduce higher-order corrections to 2-point statistics in a simple perturbative toy model in Section~\ref{se:background}. In Section~\ref{se:P2LoopNoInvLaplace} we generalize this to a sub-class of simple 2-loop SPT power spectrum corrections that do not involve inverse Laplacians. We then generalize this to account for a single inverse Laplacian in Section~\ref{se:P2LoopWITHInvLaplace}, and multiple inverse Laplacians in Section~\ref{se:P2LoopManyInvLaplacians}. In Section~\ref{se:Comments} we comment on the applicability of the method, and extensions to e.g.~biased tracers. Finally, we conclude in Section~\ref{se:conclusions}. Appendices provide background material, derivations, and show how some of the general results simplify further for the special case of scaling universes with power law initial power spectrum. \subsection*{Conventions and notation} Throughout our paper, $\vk$ and $\vq$ refer to Fourier space, whereas $\vr$ and $\vx$ refer to position space. We use the following shorthand notation for Fourier space integrals: \begin{align} \label{eq:28} \int_{\vq} \equiv \int\frac{\d^3\vq}{(2\pi)^3}. \end{align} Hats denote unit vectors, e.g.~$\hat\vq=\vq/q$, where $q=|\vq|$. $P_\lin$ denotes the linear matter density power spectrum, whereas $\mathsf{P}_\ell$ refers to Legendre polynomials. We sometimes abbreviate indices of spherical harmonics as $\vl=(\ell,m)$ and use the shorthand notation $\sum_{\vl}^{l_\mathrm{max}}=\sum_{\ell=0}^{\ell_\mathrm{max}}\sum_{m=-\ell}^\ell$. Spherical harmonics are normalized so that $\int\d\Omega_{\hat\vq}Y_{\ell m}(\hat\vq)Y^*_{\ell'm'}(\hat\vq)=\delta_{\ell\ell'}\delta_{mm'}$ and $Y_{00}(\hat\vq)=(4\pi)^{-1/2}$. We highlight the most important results of our paper in boxed equations.
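For reference, the statement that orientation integrals can be performed analytically rests on the standard Rayleigh plane-wave expansion, $e^{i\vk\cdot\vr}=4\pi\sum_{\ell m} i^\ell j_\ell(kr) Y_{\ell m}(\hat\vk) Y^*_{\ell m}(\hat\vr)$; with the normalization $Y_{00}=(4\pi)^{-1/2}$ stated above, the $\ell=0$ projection is immediate and, for example, the linear correlation function reduces to a single radial integral,
\begin{align}
\xi(r) = \int_{\vk} P_\lin(k)\, e^{i\vk\cdot\vr} = \frac{1}{2\pi^2}\int_0^\infty \d k\, k^2\, P_\lin(k)\, j_0(kr) ,
\end{align}
which is the type of one-dimensional Hankel transform that can be evaluated with a single FFT on a logarithmic grid.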
\label{se:conclusions} Pushing models of the large-scale structure of the universe to nonlinear scales is a challenging problem in cosmology. An extensively studied approach to this is perturbation theory. Unfortunately, perturbative corrections come in the form of high-dimensional loop integrals that are cumbersome to evaluate. For example, the 2-loop power spectrum involves five-dimensional integrals at every wavenumber of interest. Generalizing previous work on the 1-loop matter power spectrum \cite{MarcelZvonimirPat1603,OSUloops1603}, we show in this paper how 2-loop corrections to the density power spectrum in Eulerian standard perturbation theory can be rewritten so that they involve only low-dimensional radial integrals. In the absence of multiple inverse Laplacians, these take the form of one-dimensional Hankel transforms that can be evaluated very efficiently with one-dimensional FFTs using \textsf{FFTLog} \cite{hamiltonfftlog}. Contributions arising from multiple inverse Laplacians seem to require a sequence of low-dimensional radial integrals, which are computationally more challenging but may still be faster than five-dimensional integrations (see Section~\ref{se:P2LoopManyInvLaplacians}). One specific use case of this \FFPT~method is the possibility of speeding up Monte-Carlo chains when fitting cosmological parameters from LSS observations. More generally, the fast expressions can be useful for anyone working with the 2-loop power spectrum or higher-order loop integrals in general. Our reformulation of 2-loop power spectrum integrals is based on avoiding convolution integrals by repeatedly changing between Fourier and position space, integrating over orientations, and performing the remaining radial integrals using one-dimensional FFTs. This is very general in the sense that it does not assume a specific shape for the linear input power spectrum. This, in turn, is important to accurately model the imprint of baryonic acoustic oscillations on LSS 2-point statistics, which is arguably the most pristine cosmological signal measured with high precision from modern surveys. The result that three-dimensional loop integrals can be reduced to one-dimensional radial integrals is not a coincidence, but can be understood from the fact that structure formation only depends on distances between objects if we assume statistical isotropy and homogeneity and the standard fluid equations of motion with their standard perturbative solution (also see \cite{MarcelZvonimirPat1603}). We show how the same method can be applied to the 2-loop power spectrum of halos or any other biased tracer of the dark matter with a known bias relation. Redshift space distortions can also be handled with this method. This is straightforward to see for the distribution function approach to model redshift space distortions but also applies to many other RSD modeling approaches (see Section~\ref{se:rsd}). Our method should also apply to Lagrangian space models as shown for the 1-loop case in \cite{MarcelZvonimirPat1603}. For the special, presumably only academically interesting case of scaling universes with a perfect power-law initial power spectrum, the one-dimensional FFTs can be evaluated analytically so that all 2-loop power spectrum contributions reduce to simple power laws (see Appendix~\ref{se:ScalingUni}). In the future, it would be interesting to numerically implement the fast 2-loop expressions presented in our paper, extending the 1-loop implementations of \cite{MarcelZvonimirPat1603,OSUloops1603}.
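As a minimal illustration of the core trick described above (a convolution in Fourier space becomes a product in position space, and isotropy reduces the remaining transforms to radial integrals), the following Python sketch evaluates the simplest mode-coupling integral $\int \d^3\vq/(2\pi)^3\, P(q)\, P(|\vk-\vq|)$ with two radial quadratures. It is only a sketch under stated assumptions: it uses a toy input spectrum and plain trapezoidal integration rather than \textsf{FFTLog}, so it illustrates the structure of the calculation, not the speed or accuracy of the full method, and it omits the perturbation-theory kernels entirely.
\begin{verbatim}
import numpy as np

def P_lin(k):
    # toy linear power spectrum: rises like k, damped at high k (illustrative only)
    return k * np.exp(-(k / 0.3)**2)

def trapz(y, x):
    # simple trapezoidal rule, kept explicit for clarity
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def j0(x):
    # spherical Bessel function j_0(x) = sin(x)/x; numpy's sinc is sin(pi x)/(pi x)
    return np.sinc(x / np.pi)

k = np.logspace(-3.0, 1.0, 1500)    # wavenumbers
r = np.logspace(-1.0, 2.5, 1500)    # separations

# Step 1: go to position space,  xi(r) = (1/2 pi^2) \int dk k^2 P(k) j0(k r).
xi = np.array([trapz(k**2 * P_lin(k) * j0(k * ri), k) for ri in r]) / (2.0 * np.pi**2)

# Step 2: a convolution in Fourier space is a product in position space, so
#   \int d^3q/(2 pi)^3 P(q) P(|k-q|) = 4 pi \int dr r^2 xi(r)^2 j0(k r).
PP_conv = np.array([4.0 * np.pi * trapz(r**2 * xi**2 * j0(ki * r), r) for ki in k])

print(PP_conv[:3])   # crude estimate of the convolution at the first few wavenumbers
\end{verbatim}
A realistic implementation would replace the trapezoidal sums by log-periodic FFTs and carry the angular structure of the kernels with the spherical-harmonic machinery discussed in the main text.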
It would also be useful to include effective field theory corrections and generalize the method to higher-order statistics like the bispectrum or trispectrum. These possible directions of future investigation seem worth pursuing given the impressive amount of data expected from a number of planned LSS surveys in the near future and the need to analyze and model these observations beyond the linear regime to maximize their science returns.
16
9
1609.00349
1609
1609.04739_arXiv.txt
In the hope of avoiding model dependence of the cosmological observables, phenomenological parametrizations of Cosmic Inflation have recently been proposed. Typically, they are expressed in terms of two parameters associated with an expansion of the inflationary quantities matching the belief that inflation is characterized by two numbers only, the tensor-to-scalar ratio and the scalar spectral index. We give different arguments and examples showing that these new approaches are either not generic or insufficient to make predictions at the accuracy level needed by the cosmological data. We conclude that disconnecting inflation from high energy physics and gravity might not be the most promising way to learn about the physics of the early Universe.
\label{sec:intro} With the advent of precision cosmology, it is now possible to observationally probe the early Universe and its front-runner paradigm, Cosmic Inflation~\cite{Starobinsky:1979ty, Starobinsky:1980te, Guth:1980zm, Linde:1981mu, Albrecht:1982wi, Linde:1983gd, Mukhanov:1981xt, Mukhanov:1982nu, Starobinsky:1982ee, Guth:1982ec, Hawking:1982cz, Bardeen:1983qw}. When the mechanism of inflation was discovered, only a few models~\cite{Starobinsky:1980te,Albrecht:1982wi,Linde:1983gd}, making simple predictions, were proposed. However, over time, many more scenarios, often complex, were devised. This has resulted in a situation where literally hundreds of inflationary models are a priori possible. This should not come as a surprise given that, in order to build an inflationary model, one has to extrapolate high energy physics, or gravity, by many orders of magnitude, in a regime where nothing is experimentally known. In some sense, the profusion of proposed models is due to our lack of knowledge of physics beyond the electroweak scale and not to a lack of predictability of inflation. However, with the recent release of the Planck 2013 \& 2015 data~\cite{Ade:2013ktc, Adam:2015rua, Ade:2015xua, Ade:2015lrj}, the Augean stables have started to be cleaned up. Indeed, models of inflation generating non-negligible isocurvature perturbations, large non-Gaussianities and/or significant features in the power spectrum are, for the moment, disfavored by observations. Single-field slow-roll models of inflation with a minimal kinetic term therefore appear to be preferred~\cite{Martin:2010hh, Easther:2011yq, Martin:2013tda, Martin:2013nzq, Price:2015qqb, Martin:2015dha}, even if a large number of other scenarios still remain compatible with the data~\cite{Chen:2012ja, Vennin:2015vfa, Vennin:2015egh, Chen:2016vvw}. An alternative approach to systematic model comparison consists in considering model independent parametrizations of inflation. Such parametrizations aim at embracing all models at once while avoiding difficult questions related to specifying a potential $V(\phi)$, as for instance discussing the physical values of its parameters, possible quantum corrections or interaction of the inflaton field with other sectors. Such a proposal was first implemented within the slow-roll formalism~\cite{Mukhanov:1985rz, Mukhanov:1988jd, Stewart:1993bc, Gong:2001he, Schwarz:2001vv, Leach:2002ar}. It has been successfully applied to models with non-minimal kinetic terms~\cite{Kinney:2007ag, Tzirakis:2008qy, Lorenz:2008et, Agarwal:2008ah, Martin:2013uma, Jimenez:2013xwa}, multifield inflation and modified gravity~\cite{Nakamura:1996da, Easther:2005nh, DiMarco:2005nq, Battefeld:2006sz, Chiba:2008rp, DeFelice:2011bh} while being used for non-Gaussianities~\cite{Chen:2006nt, Yokoyama:2007uu, Ichikawa:2008iq, Langlois:2008qf, Chen:2010xka} as well. Classes of inflationary models could also be devised owing to slow roll, as for instance the Schwarz and Terrero-Escalante (STE) classification~\cite{Schwarz:2004tz} where only one of the three classes survived the Planck measurements~\cite{Martin:2013nzq}. If the microphysics is considered instead, the effective theory of inflation~\cite{Creminelli:2004yq, Cheung:2007st} can also be a way to parametrize deviations from the simplest physical setups. These parametrizations yield a vast range of observable predictions, precisely because they are intended to be model independent and designed to describe many possible scenarios. 
However, knowing a preferred range for the tensor-to-scalar ratio $r$ would greatly help the design of future missions aiming at measuring the $B$-polarization of the Cosmic Microwave Background (CMB) radiation. Similarly, the amount of non-Gaussianities expected within various classes of inflationary models is valuable information for future galactic surveys such as Euclid~\cite{Amendola:2016saw}. For these reasons and despite the existence of the slow-roll formalism, new ``simple'' parametrizations have recently been proposed that aim at narrowing down inflationary predictions. Moreover, it has been suggested that, at the observational level, inflation can be reduced to two numbers only (the scalar power spectrum spectral index and the tensor-to-scalar ratio), which was argued to further motivate the introduction of these new frameworks. These new parametrizations include, among others, the truncated horizon-flow formalism~\cite{Hoffman:2000ue, Kinney:2002qn, Ramirez:2005cy, Chongchitnan:2005pf}, the ``universality classes''~\cite{Huang:2007qz, Roest:2013fha, Garcia-Bellido:2014gna, Binetruy:2014zya, Huang:2015cke}, and a simple hydrodynamical description of inflation~\cite{Mukhanov:2013tua, Creminelli:2014nqa}. In this short article, we investigate whether these new approaches can be further used to constrain the physics of the early Universe (for issues with the truncated horizon-flow approach, see Refs.~\cite{Liddle:2003py, Vennin:2014xta, Coone:2015fha}). The paper is organized as follows. In \Sec{sec:nottrueapprox}, we explain why expanding inflationary observables in the so-called ``large $N$ limit'' ($N$ being the number of e-folds) is not always consistent. We also show that the number of ``universality classes'' becomes large beyond leading order, where they thus provide a more complex classification. In \Sec{sec:notsufficient}, we show that the large $N$ limit gives insufficiently accurate predictions for the spectral index $\nS$ and the tensor-to-scalar ratio $r$. With respect to the Planck 2015 confidence intervals, these inaccuracies range from one to two sigma or more, depending on the underlying inflationary scenario. In \Sec{sec:reheat}, it is shown that this approach does not allow one to consistently incorporate reheating, nor to derive constraints on its expansion history. \Sec{sec:eosi} is dedicated to the alternative parametrization of inflation in which one specifies the equation of state parameter $w(N)$ as a function of the number of e-folds~\cite{Mukhanov:2013tua}. Such a parametrization is shown to be free of the above-mentioned issues for the simple reason that, at the background level, it ends up being equivalent to choosing a specific potential for a single scalar field. At the perturbative level, it is either incomplete because the speed of sound and the non-adiabatic pressure have to be specified (see also Ref.~\cite{Chen:2013kta}), or implicitly equivalent to a perturbed single scalar field. In \Sec{sec:stat}, we stress the fact that all these alternative approaches, independently of their internal consistencies, are not well suited to performing Bayesian statistical analysis of the cosmological data. Finally, in the conclusion, we argue that these frameworks do not allow one to connect inflation and high energy physics (modified gravity included).
\label{sec:conclusions} In this short article, we have argued that it is often too simplistic to view inflation as a framework that can be ``described by two numbers''. The goal of a model is not to predict the values of $\nS$ and $r$ only. In fact, it should first predict the amplitude of the cosmological perturbations, as some models actually do ($\si$, $\cwi$, $\osti$, $\di$). Then, even if inflation is featureless, single field, slowly rolling, with minimal kinetic terms, one can still reasonably hope to measure other numbers, such as the running $\alphaS$. But more importantly, inflation does not only consist in a phase of accelerated expansion. The mechanism that ends inflation is also of crucial importance and, as a matter of fact, can be constrained by CMB data~\cite{Martin:2010kz, Martin:2014nya, Martin:2016oyk}. The new parametrizations miss this opportunity. They can never be as informative as an approach rooted in field theory, or some specific modified gravity framework~\cite{DeFelice:2010aj, DeFelice:2011uc}, when it comes to a phenomenon that could have taken place at an energy scale as high as $10^{16}\,\GeV$~\cite{Bezrukov:2014ipa}. Finally, specifying a model in the hope of comparing it with some data also means giving the priors on its free parameters to ensure its internal consistency. This is usually much more than specifying two numbers. The price to pay is that some predictions do depend on the underlying model, but not all of them. For instance, a generic prediction of inflation is the presence of Doppler peaks in the CMB, which makes inflation a falsifiable scenario. On the other hand, there is no generic prediction for $r$, except that it must be such that the energy scale of inflation is higher than that of BBN, leading to a ridiculously small lower bound, $r \gtrsim 10^{-75}$, a value which is unobservable since it is smaller than backreaction effects~\cite{Martineau:2007dj}. But this does not necessarily mean that the situation is not interesting: models do predict different ranges of tensor-to-scalar ratio values and measuring $r$ provides information about the underlying inflationary scenario. One of the goals of phenomenological parametrizations was to narrow down these ranges and yield ``typical'' inflationary predictions. For instance, it is often argued that while $\epsstar{1}=\order{1}/\Delta\Nstar$ (yielding $r\simeq 0.26$ for $\Delta\Nstar = 60$) is now excluded by the data, the next target according to \Eq{eq:expeps1} would be to try and detect the next order in $1/\Delta\Nstar$, namely $\epsstar{1}=\order{1}/\Delta\Nstar^2$ (yielding $r\simeq 0.004$ for $\Delta\Nstar = 60$). However, nothing guarantees that the overall constant is indeed of order one. For instance, as can be seen in \Eq{eq:epsLI}, this is the case for the model $\li$ since $\alphali \ll 1$. In fact, a value less than $0.25$ is already sufficient to reestablish the agreement between the prediction $\epsstar{1}=\order{1}/\Nstar$ and the data. In conclusion, it seems to us that even if the phenomenological parametrizations discussed in the present work may provide useful rule-of-thumb classifications, the most promising method to learn about the physics of inflation is to build models based on high energy physics and (modified) gravity, since this is a priori the way Nature has realized inflation in practice.
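For reference, the numerical values of $r$ quoted in the previous paragraph follow from the standard single-field slow-roll relation $r \simeq 16\epsstar{1}$ (assumed here, with order-one prefactors not reproduced), which gives
\begin{equation}
r \simeq 16\, \epsstar{1} \sim \frac{16}{\Delta\Nstar} \simeq 0.27
\qquad \text{or, at next order,} \qquad
r \sim \frac{16}{\Delta\Nstar^2} \simeq 4\times 10^{-3}
\qquad \text{for } \Delta\Nstar = 60 ,
\end{equation}
consistent with the values $r\simeq 0.26$ and $r\simeq 0.004$ quoted above once order-one constants are included.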
At a time when the Planck data tell us that the Higgs field of Particle Physics, some low energy String compactifications, or the $R^2$ corrections to General Relativity~\cite{Martin:2013nzq}, could explain the large scale structure of the Universe, it seems that phenomenological parametrizations are not sufficient to tackle the physical questions we now have to address. The fact that some predictions are model dependent is not a shortcoming but actually a virtue of inflation since it can be used to learn about Physics in a regime hardly achievable with current technology.
16
9
1609.04739
1609
1609.04814_arXiv.txt
We present results on the variation of 7.7\,{\um} Polycyclic Aromatic Hydrocarbon (PAH) emission in galaxies spanning a wide range in metallicity at $z\sim 2$. For this analysis, we use rest-frame optical spectra of 476 galaxies at $1.37\leq z\leq 2.61$ from the MOSFIRE Deep Evolution Field (MOSDEF) survey to infer metallicities and ionization states. {\em Spitzer}/MIPS 24{\um} and {\em Herschel}/PACS 100 and 160\,{\um} observations are used to derive rest-frame 7.7\,{\um} luminosities ($L_{7.7}$) and total IR luminosities ({\lir}), respectively. We find significant trends between the ratio of $L_{7.7}$ to {\lir} (and to dust-corrected SFR) and both metallicity and [O{\sc III}]/[O{\sc II}] ({\o32}) emission-line ratio. The latter is an empirical proxy for the ionization parameter. These trends indicate a paucity of PAH emission in low metallicity environments with harder and more intense radiation fields. Additionally, {\ratio} is significantly lower in the youngest quartile of our sample (ages of $\lesssim 500$\,Myr) compared to older galaxies, which may be a result of the delayed production of PAHs by AGB stars. The relative strength of $L_{7.7}$ to {\lir} is also lower by a factor of $\sim 2$ for galaxies with masses $M_*<10^{10}\msun$, compared to the more massive ones. We demonstrate that commonly-used conversions of $L_{7.7}$ (or 24\,{\um} flux density; $f_{24}$) to {\lir} underestimate the IR luminosity by more than a factor of 2 at $M_*\sim 10^{9.6-10.0}\,\msun$. We adopt a mass-dependent conversion of $L_{7.7}$ to {\lir} with {\ratio}$=0.09$ and 0.22 for $M_*\leq 10^{10}$ and $> 10^{10}\,\msun$, respectively. Based on the new scaling, the SFR-$M_*$ relation has a shallower slope than previously derived. Our results also suggest a higher IR luminosity density at $z\sim 2$ than previously measured, corresponding to a $\sim 30\%$ increase in the SFR density.
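A minimal sketch of the mass-dependent scaling quoted above, assuming that {\ratio} denotes the ratio $L_{7.7}/${\lir} and that both luminosities are expressed in the same units; the function name and interface are illustrative only:
\begin{verbatim}
def L_IR_from_L77(L77, Mstar):
    """Convert a rest-frame 7.7 micron luminosity to total IR luminosity
    (8-1000 micron) using the mass-dependent L7.7/LIR values quoted above
    (0.09 and 0.22). L77 and the return value share the same units;
    Mstar is in solar masses."""
    ratio = 0.09 if Mstar <= 1e10 else 0.22
    return L77 / ratio

# e.g. a galaxy with L7.7 = 1e10 (solar luminosities) and Mstar = 5e9 solar masses:
# L_IR_from_L77(1e10, 5e9) -> about 1.1e11
\end{verbatim}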
Emission from single-photon, stochastically-heated Polycyclic Aromatic Hydrocarbon (PAH) molecules dominates the mid-infrared (mid-IR) spectra of star-forming galaxies. The origin and properties of these molecules have been the subject of many studies in the literature, most of which have focused on local galaxies \citep[][and references therein]{tielens08}. It is crucial to fully understand how PAH emission depends on the physical conditions of the interstellar media (ISM) as the PAH emission contribution to the total IR emission of a galaxy may vary significantly from $\sim 1$ to 20\% in different environments \citep[][among many others]{smith07,dale09}. There is evidence that PAHs are less abundant in metal-poor environments in the local universe \citep[e.g.,][]{normand95,calzetti07,draine07b,smith07,engelbracht05,hunt10,cook14}. The physical explanation of this observation is a subject of much debate -- whether the decrease in the PAH emission at low metallicity is directly driven by metallicity or some other property of such environments is still unknown. The most favored explanation is destruction of PAH molecules by hard UV radiation in low-metallicity environments due to reduced shielding by dust grains \citep[e.g.,][]{voit92,madden06,hunt10,sales10,khramtsova13,magdis13}. Other possibilities have also been discussed in the literature: that small PAH carriers are destroyed in low-metallicity environments through sputtering \citep[][and references therein]{hunt11}, that PAH formation and destruction mechanisms depend on dust masses, which in turn correlate with metallicity \citep{seok14}; and that the PAH-metallicity trend is a consequence of the PAH-age correlation \citep{galliano08}. The latter scenario is offered based on the assumption that the contribution of AGB stars -- as the purported primary origin of PAH molecules -- to ISM chemical enrichment increases with age. The PAH emission features span from 3\,{\um} to 17\,{\um}, with the one at 7.7{\um} being the strongest \citep[contributing $\sim$ 40--50\% of the total PAH luminosity;][]{tielens08,hunt10}. At $z\sim2$, the 24\,{\um} filter of the {\em Spitzer}/MIPS instrument traces this feature. Due to the high sensitivity of MIPS, many high-redshift studies adopt the 24\,{\um} flux as an indicator of total IR luminosity ($L(8-1000\mu{\rm m})\equiv L_{\rm IR}$) and star-formation rate \citep[SFRs; e.g.,][]{ce01,reddy06a,daddi07a,wuyts08,reddy10,shivaei15a}. However, the metallicity and ionization state dependence of the PAH-to-{\lir} ratio of distant galaxies has not been studied in detail. In high-redshift studies, a single conversion from 24\,{\um} flux (or rest-8\,{\um} luminosity) to {\lir} is typically assumed for galaxies with a range of different metallicities and stellar masses \citep[e.g.,][]{wuyts08,wuyts11a,elbaz11,reddy12b,whitaker14b}. Possible variations of the relative strength of PAH emission to {\lir} with metallicity (and as a consequence with stellar mass) can potentially alter the results of studies that rely on 24\,{\um} flux to infer {\lir} or SFR -- for example, those that investigate dust attenuation parameterized by {\lir}$/L_{{\rm UV}}$ \citep[e.g.,][]{reddy12b,whitaker14b}, or those that utilize bolometric SFRs (i.e., SFR$_{\rm IR}+$SFR$_{\rm UV}$) to explore relations such as the SFR-$M_*$ relation \citep[e.g.,][]{daddi07a,wuyts11b,fumagalli14,tomczak16}.
With the large and representative dataset of the MOSFIRE Deep Evolution Field (MOSDEF) survey \citep{kriek15}, we are in a unique position to investigate, for the first time, the dependence of PAH intensity (defined as the ratio of 7.7\,{\um} luminosity to SFR or {\lir}) on the ISM properties of high-redshift galaxies. The MOSDEF survey provides us with near-IR spectra of galaxies at $1.37\leq z\leq 2.61$, from which we calculate spectroscopic redshifts and estimate gas-phase metallicities and ionization states. We use mid- and far-IR photometric data from {\em Spitzer}/MIPS 24\,{\um} and {\em Herschel}/PACS 100 and 160\,{\um} to measure PAH emission and total IR luminosities, respectively. Our study includes galaxies over a broad range of stellar masses ($M_*\sim 10^9-10^{11.5}\,\msun$), SFRs ($\sim 1-200\,\msun\,{\rm yr^{-1}}$), and metallicities ($\sim 0.2-1\,Z_{\odot}$). Ultimately, we quantify how the conversions between rest-frame 7.7\,{\um} and both SFR and {\lir} depend on metallicity and stellar mass. These scaling relations are important for deriving unbiased estimates of total IR luminosities and obscured SFRs based on observations of the PAH emission in distant galaxies -- such observations will be possible for larger numbers of high-redshift galaxies with {\em JWST}/MIRI \citep{shipley16}. The outline of this paper is as follows. In Section~\ref{sec:data}, we introduce the MOSDEF survey and describe our measurements including line fluxes, stellar masses, SFRs, IR photometry, and the IR stacking method. In Section~\ref{sec:pah_ism}, we constrain the dependence of $L_{7.7}/{\rm SFR}$ and {\ratio} on metallicity and ionization state. The PAH intensity as a function of age is explored in Section~\ref{sec:pah_age}. Implications of our results for the studies of the SFR-$M_*$ relation and the IR luminosity density at $z\sim 2$ are discussed in Section~\ref{sec:implications}. In Section~\ref{sec:discussion}, we briefly discuss the possible physical mechanisms driving the PAH-metallicity correlation. Finally, the results are summarized in Section~\ref{sec:conclusion}. Throughout this paper, line wavelengths are in vacuum and we assume a \citet{chabrier03} initial mass function (IMF). A cosmology with $H_0=70\,{\rm km\,s^{-1}\,Mpc^{-1}}, \Omega_{\Lambda}=0.7, \Omega_{{\rm m}}=0.3$ is adopted.
The correlation between PAH intensity and metallicity has been observed and studied extensively in the local universe (\citealt{engelbracht05} and \citealt{madden06} were among the first). Several scenarios have been proposed to explain the observed trend, which involve the production mechanisms of PAH molecules or various ways of destroying them. The origin of PAHs in galaxies is not completely understood, but evolved carbon stars are undoubtedly one of the main sources \citep{latter91,tielens08}. The presence of PAHs in the outflows of carbon-rich AGB stars is confirmed both in theoretical \citep{frenklach89,cau02,cherchneff06} and observational \citep{beintema96,boersma06} studies. As a consequence, it is expected that chemically young systems should have reduced PAH abundances \citep{dwek05,galliano08}. Also, PAH molecules are not as efficiently produced in low metallicity environments because fewer carbon atoms are available in the ISM. On the other hand, PAHs can be destroyed in hostile environments such as supernova shocks \citep{ohalloran06}. PAHs can also be effectively destroyed in low-metallicity environments with hard ionization fields, due to reduced shielding by dust grains \citep{madden06,smith07,hunt10}. Some authors have explored the size distribution of PAHs and suggested that there is a deficit of {\em small} PAH molecules at low metallicities \citep{hunt10,sandstrom10}. The nature of the PAH-metallicity correlation is likely a combination of the production and destruction scenarios mentioned above. In our sample of galaxies at $z\sim 2$, we find strong trends between the PAH intensity and metallicity, {\o32}, and age, in agreement with both the formation and destruction scenarios. However, the significant increase in the fraction of undetected 24\,{\um} sources with increasing {\o32} (from 38\% to 91\%, Figure~\ref{fig:pah_ism}c) suggests that the destruction of PAHs in intense radiation fields may be the dominant physical mechanism. The low PAH emission could also be caused by the absorption of stellar UV photons by dust in the HII region before reaching the PAHs in the photodissociation region (PDR, \citealt{peeters04}). In this case, PAHs exist, but they are not exposed to the UV photons, and hence, do not emit in the mid-IR. However, in the case of a substantially dusty HII region, the recombination lines would also be systematically suppressed, thus resulting in systematic discrepancies between the UV- and {\halpha}-inferred SFRs. Such discrepancies are not observed in our sample \citep[see][]{reddy15,shivaei16a}. Nevertheless, it is not trivial to completely differentiate between the various scenarios with our current data set. A potentially important trend observed in Figure~\ref{fig:pah_mass} is that above $M_*\sim 10^{10}\,\msun$ the {\ratio} (or $L_{7.7}$/SFR) reaches an almost constant value. This ``flattening'' is also seen in {\ratio} below ${\rm O}_{32}\sim 2.2$ (above $12+\log({\rm O/H)_{O_{32}}}\sim 8.2$, Figure~\ref{fig:pah_ism}c) and at ages above $\sim 900$\,Myr. We speculate that the observed metallicity (and mass) threshold reflects the point above which enough dust particles are produced to shield the ionizing photons and prevent PAH destruction. This suggests that the constant value of {\ratio} at high metallicities is the ``equilibrium'' {\ratio} ratio. The suppressed {\ratio} at lower metallicities is indicative of preferential destruction of PAHs in environments with lower dust opacity, and hence, with harder and more intense radiation fields.
The paucity of PAH emission in young galaxies can also be attributed to the delayed production of PAHs by AGB stars and/or to the higher intensity of the radiation field in these young systems with high sSFR. The threshold at $12+\log({\rm O/H)_{O_{32}}}\sim 8.2$ is the same as the value found by \citet{engelbracht05} and close to the value 8.1 found by \citet{draine07b} at $z\sim 0$.
16
9
1609.04814
1609
1609.04025_arXiv.txt
We demonstrate that Chromo-Natural Inflation can be made consistent with observational data if the SU(2) gauge symmetry is spontaneously broken. Working in the Stueckelberg limit, we show that isocurvature is negligible, and the resulting adiabatic fluctuations can match current observational constraints. Observable levels of chirally-polarized gravitational radiation ($r\sim 10^{-3}$) can be produced while the evolution of all background fields is sub-Planckian. The gravitational wave spectrum is amplified via linear mixing with the gauge field fluctuations, and its amplitude is not simply set by the Hubble rate during inflation. This allows observable gravitational waves to be produced for an inflationary energy scale below the GUT scale. The tilt of the resulting gravitational wave spectrum can be either blue or red.
Inflation \cite{Guth:1980zm, Linde:1981mu, Albrecht:1982wi} remains a remarkably successful paradigm for describing the initial conditions of our Universe. As well as solving the flatness and horizon problems, inflation provides a mechanism for generating primordial fluctuations with the right amplitude and scale dependence to seed structure formation \cite{Mukhanov:1981xt,Chibisov:1982nx}, as well as possibly producing primordial gravitational waves \cite{Starobinsky:1979ty}. While there exist many models of inflation in the current literature, and many that fit the data well, most rely on scalar fields slowly rolling on flat potentials to drive the inflationary epoch. Chromo-Natural Inflation \cite{Adshead:2012kp} is a model for the inflationary epoch where non-Abelian gauge fields in classical, color-locked configurations generate an attractor solution which decouples the motion of the inflaton, in this case a pseudo-scalar, from the gradient flow of the potential. While Chromo-Natural Inflation can successfully generate long periods of inflation at the classical level, it fails to provide the seeds for structure formation consistent with current observations \cite{Dimastrogiovanni:2012ew, Adshead:2013qp, Adshead:2013nka}. Furthermore, related models such as Gauge-flation \cite{Maleknejad:2011jw, Maleknejad:2011sq} are also inconsistent with current observations once the fluctuations are taken into account \cite{Namba:2013kia}. In this paper we demonstrate that Chromo-Natural Inflation \cite{Adshead:2012kp} can potentially be made a viable candidate for the generation of primordial curvature perturbations by introducing an additional mass for the gauge field fluctuations via spontaneous symmetry breaking. While we dub the resulting theory Higgsed Chromo-Natural Inflation, in this work we restrict consideration to the Goldstone sector of the resulting broken gauge symmetry, and work with the action in Stueckelberg form. We ignore the possible existence of a Higgs boson, and assume its mass is large compared with the Hubble rate during inflation. The resulting theory can generate large levels of gravitational radiation of a single (helical) polarization only, while all background fields roll over sub-Planckian distances. Further, the amplitude of the resulting gravitational wave spectrum is not simply set by the Hubble rate, and as a result observable gravitational waves can be produced while the inflationary energy density is somewhat below the energy scale associated with grand unification. Although the model consists of several fields, isocurvature perturbations are suppressed relative to adiabatic modes. Admittedly, the addition of more fields is a little distasteful, an epicycle on an already speculative idea. However, it is worth pointing out that the only SU(2) gauge theory which appears to describe nature, that associated with the electroweak sector, exists in a broken phase \cite{Yang:1954ek,Anderson:1963pc,Englert:1964et,Higgs:1964ia,Guralnik:1964eu,Migdal:1966tq,Weinberg:1967tq,Glashow:1961tr}. Further, the model we describe in this work provides an explicit counter-example to the standard inflationary lore, that the detection of tensor modes implies that inflation happened near the GUT scale and requires super-Planckian field excursions \cite{Lyth:1996im}.
For a more sophisticated analysis of the field range bound see ref.\ \cite{Baumann:2011ws}, and for a more general discussion of the relation between the energy scale of inflation and the gravitational wave spectra see ref.\ \cite{Mirbabayi:2014jqa}. Classical non-Abelian gauge fields lead to striking phenomenology in cosmological settings, most notably chiral gravitational waves \cite{Adshead:2013nka, Maleknejad:2014wsa, Obata:2014loa, Bielefeld:2014nza, Bielefeld:2015daa, Obata:2016tmo, Maleknejad:2016qjz, Caldwell:2016sut,Alexander:2016moy, Obata:2016xcr, Dimastrogiovanni:2016fuu}. These chiral gravitational waves may be responsible for the matter-antimatter asymmetry via the gravitational anomaly \cite{Alexander:2004us, Noorbala:2012fh, Maleknejad:2014wsa, Maleknejad:2016dci}. Chiral gravitational waves also arise in other models involving axially coupled gauge fields \cite{Anber:2012du, Barnaby:2012xt} and axially coupled fermions \cite{Anber:2016yqr}. For a recent review of axion inflation see ref.\ \cite{Pajer:2013fsa}, and for a recent review of gauge fields and inflation see ref.\ \cite{Maleknejad:2012fw}. Throughout this work we use natural units in which the speed of light and the reduced Planck constant are set to unity, $c = \hbar = 1$, as is the reduced Planck mass, $M_{\rm pl} = 1/\sqrt{8\pi G} = 1$.
\label{sec:conclusions} In this work we have shown that Chromo-Natural Inflation can potentially be made compatible with existing limits from Planck data by introducing an additional mass term for the gauge field fluctuations. We assume that the symmetry is spontaneously broken by a Higgs sector and that the resulting Higgs boson is much heavier than the Hubble scale, and is thus irrelevant. We therefore work with the theory in the Stueckelberg form. While the addition of the Stueckelberg symmetry breaking sector was initially motivated to provide a stabilization mechanism for the spin-2 modes of the gauge field by giving it an additional mass, this does not in fact happen. The reason is that such a mass term also contributes to the equations of motion at the background level, leading to larger values of the axion velocity which sources the tensor instability. However, the Goldstone modes contribute additional scalar and vector degrees of freedom at the level of the fluctuations. The interaction of the additional scalar degree of freedom boosts the curvature fluctuation relative to the tensor fluctuations. This consequently lowers the tensor-to-scalar ratio into the region allowed by BICEP, the Keck Array, and the Planck satellite \cite{Ade:2015tva, Ade:2015lrj}. Observable gravitational waves ($r \gtrsim 10^{-3}$) may be produced in this model, despite inflation occurring below the GUT scale, and all fields evolving over sub-Planckian distances in field space. The model therefore violates some formulations of the Lyth bound. The gravitational waves in this model predominantly arise from linear mixing with the gauge field fluctuations. These gauge field modes are enhanced by their interactions with the rolling axion and subsequently oscillate into gravitational waves. The form of the gravitational wave spectra produced in this model is therefore significantly altered from the usual form assumed in formulations of the Lyth bound. In contrast to standard inflationary scenarios which uniformly predict red-tilted gravitational wave spectra (see, however, \cite{Baumann:2015xxa}), these gravitational waves can have either red- or blue-tilted spectra on CMB scales. Furthermore, these gravitational waves have the distinct characteristic that they are chirally polarized and, to a very good approximation, consist only of a single helicity. Unfortunately, it seems that future CMB experiments will be unable to distinguish between unpolarized and chirally polarized gravitational waves \cite{Gerbino:2016mqb}. The equations of motion for the field fluctuations that result in this system are complicated, but are fairly simple to solve numerically. Of the four normal modes of the system, only the mode with the smallest frequency (the slow, or magnetic drift mode) results in fluctuations which attain significant superhorizon amplitude. At first glance, one may worry that the presence of multiple large-amplitude scalar modes on superhorizon scales may lead to pathological effects, such as isocurvature or entropy fluctuations which cause the curvature perturbation to evolve. However, we have shown that entropy or isocurvature fluctuations are suppressed relative to adiabatic curvature fluctuations, and contribute only at the sub-percent level. The dominant contribution to the perturbed stress-energy tensor is due to the axion's fluctuations along its potential, and since this term dominates the evolution of the background, the non-adiabatic pressure is small.
We have demonstrated that the parameters of the theory can be chosen to produce fluctuations that, near horizon crossing, match the required amplitude and tilt of the scalar spectrum as determined by the Planck satellite \cite{Ade:2015lrj}. While we have neglected the contributions of the metric fluctuations (in the form of the perturbed lapse and shift) in this work, we expect that including these will alter our results at the level of slow-roll corrections. The fluctuations in the gauge field and axion, as well as the Goldstone modes, depend exponentially on the Higgs VEV, which makes some level of fine-tuning necessary in order to match observations. For parameters leading to large values of the tensor-to-scalar ratio, $r > 0.1$, our estimates suggest that the linear approximation used in deriving the equations of motion for the fluctuations likely fails. For $r\gtrsim 10^{-2}$ and $r\gtrsim 10^{-3}$, we estimate that the linear approximation fails at the level of $10\%$ and $1\%$, respectively. A potentially significant restriction on this model comes from the running of the spectral index. While this running is negligible for parameters leading to $r\gtrsim 10^{-3}$, as $r$ decreases we have found that the running increases. Furthermore, the running of the tilt in this model is positive, in contrast to many single field models that predict negative running at the $\mathcal{O}\bigl((n_s \!-\!1)^2\bigr)$ level (see, e.g.\ \cite{Adshead:2010mc}). This is also in contrast to the slight preference for negative running observed in the CMB data \cite{Ade:2015lrj}. For $r\simeq 10^{-5}$ the running of the spectral index remains within the observational bounds set by the Planck mission; however, for very low values of $r$ this model will likely be ruled out. Throughout this work, we have neglected the contribution of metric fluctuations as well as slow-roll corrections to the equations of motion. For an initial investigation this is most likely a good approximation, at least until after horizon crossing where we evaluate the spectra. However, computation of the full evolution of the modes outside the horizon requires a more careful analysis that includes the contributions from the gravitational constraints and the slow-roll corrections due to the evolution of the background. We leave this, as well as detailed investigations of non-Gaussianity, to future work. Finally, given that adding a Higgs sector to Chromo-Natural Inflation potentially yields viable cosmologies, it would be interesting to check whether the related model of Gauge-flation can be made viable in the same fashion. {\bf Acknowledgements:} This work was supported in part by DOE grants DE-FG02-90ER-40560, DE-SC0009924, DE-SC0015655 and by the Kavli Institute for Cosmological Physics at the University of Chicago through grants NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. P.A. gratefully acknowledges support from a Starting Grant of the European Research Council (ERC STG grant 279617), and the hospitality of DAMTP and the University of Cambridge where some of this work was completed. EIS gratefully acknowledges support from a Fortner Fellowship at the University of Illinois at Urbana-Champaign.
16
9
1609.04025
1609
1609.03269_arXiv.txt
{We present a high-sensitivity radio continuum survey at 6 and 1.3$\,$cm using the Karl G. Jansky Very Large Array towards a sample of 58 high-mass star forming regions. Our sample was chosen from dust clumps within infrared dark clouds with and without IR sources (CMC--IRs, CMCs, respectively), and hot molecular cores (HMCs), with no previous, or relatively weak, radio continuum detection at the $1\,$mJy level. Due to the improvement in the continuum sensitivity of the VLA, this survey achieved map rms levels of $\sim$ 3--10 $\mu$Jy beam$^{-1}$ at sub-arcsecond angular resolution. We extracted 70 centimeter-continuum sources associated with $1.2\,$mm dust clumps. Most sources are weak, compact, and are prime candidates for high-mass protostars. Detection rates of radio sources associated with the mm dust clumps for CMCs, CMC--IRs and HMCs are 6$\%$, 53$\%$ and 100$\%$, respectively. This result is consistent with increasing high-mass star formation activity from CMCs to HMCs. The radio sources located within HMCs and CMC--IRs occur close to the dust clump centers, with median offsets from the clump center of 12,000\,AU and $4,000\,$AU, respectively. We calculated $5 - 25\,$GHz spectral indices using power law fits and obtained a median value of 0.5 (i.e., flux increasing with frequency), suggestive of thermal emission from ionized jets. In this paper we describe the sample, observations, and detections. The analysis and discussion will be presented in Paper II.}
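As a hedged illustration of the spectral-index measurement quoted above, the short sketch below estimates a power-law index for $S_\nu \propto \nu^{\alpha}$ between two bands. The function name and the example flux densities are placeholders invented for the illustration; they are not measurements from the survey.
\begin{verbatim}
import numpy as np

def two_point_spectral_index(nu1, s1, e1, nu2, s2, e2):
    """Two-point spectral index alpha for S_nu ~ nu**alpha, with simple
    error propagation assuming independent flux-density uncertainties."""
    alpha = np.log(s2 / s1) / np.log(nu2 / nu1)
    err = np.sqrt((e1 / s1) ** 2 + (e2 / s2) ** 2) / abs(np.log(nu2 / nu1))
    return alpha, err

# Illustrative numbers only (frequencies in GHz, flux densities in microJy):
alpha, err = two_point_spectral_index(5.0, 60.0, 6.0, 25.0, 150.0, 15.0)
print("alpha = %.2f +/- %.2f" % (alpha, err))   # ~0.6, i.e. rising with frequency
\end{verbatim}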
High-mass stars (M ${\scriptstyle\gtrsim}\,$ 8 M$_\odot$) are rare, evolve on short timescales, and are usually born in clusters located in highly obscured and distant (typically ${\scriptstyle>}$\,1 kpc) regions. These observational challenges have made the study of their earliest evolutionary phases difficult. The population of high-mass stars in the Galaxy has been probed by cm-wavelength radio continuum emission at least since the early single-dish compact H{\small II} region surveys \citep[e.g.,][]{1967ApJ...147..471M}. Subsequent interferometric surveys \citep[e.g.,][]{1989ApJ...340..265W, 1993ApJ...418..368G, 1994ApJS...91..659K} discovered even more compact emission (the so-called ultracompact, or UCH{\small II} regions), or yet denser regions of ionized gas called hypercompact H{\small II} regions (HCH{\small II}, e.g., \citealp{2003ApJ...597..434W, 2011ApJ...739L...9S}). All these regions detected in the past -- whether compact or ultra/hypercompact -- share two important characteristics. First, they are fairly bright at radio wavelengths, with cm flux densities ranging from a few mJy to a few Jy. Second, they arise from a relatively late stage in the star formation process when nuclear burning likely has already begun, thus producing copious amounts of UV radiation that photoionize the gas surrounding the young star. Subsequently, with the goal of finding candidates for earlier evolutionary phases of high-mass star formation, researchers turned to the study of dense condensations in molecular clouds. In this paper we will refer to molecular (or dust) {\it clumps} as structures of $\sim 1\,$pc, which are typically probed by mm single-dish studies \citep[e.g.,][]{2002ApJ...566..945B, 2006ApJ...641..389R}. In addition, we will refer to {\it cores} as substructures of size $\sim$0.1 pc that are found within clumps and are typically probed by radio/mm interferometers \citep[e.g.,][]{2000prpl.conf..299K, 2010A&A...509A..50C}. Several studies of such condensations in the 1990s led to the identification of hot molecular cores (HMCs). HMCs have a large amount of hot molecular gas ($\sim$ 10$^{2}$\,M$_\odot$, T $>$ 100$\,$K; see Section \ref{Sample_desc}). HMCs are presumably heated by one or more embedded high-mass protostars, and in general have weak or previously undetectable radio continuum emission at sensitivities of tens of $\mu$Jy \citep[e.g.,][]{1986ApJ...303L..11G, 1991A&A...252..278C, 1992A&A...256..618C, 1993A&A...276..489O, 2000prpl.conf..299K, 2005IAUS..227...59C, 2010A&A...509A..50C}. They are thus thought to represent an evolutionary phase prior to UC/HC H{\small II} regions. Candidates for an even earlier, possibly pre-stellar, phase were discovered by mm and submm continuum studies of the so-called infrared dark clouds (IRDCs). IRDCs harbor molecular structures with masses and densities similar to those of HMCs (see Table 1 of \citealp{2006ApJ...641..389R}). However, the temperatures are much lower (T$\sim$10--20 K; \citealp{2006A&A...450..569P, 2010A&A...518L..98P}), which suggests an earlier evolutionary phase than HMCs. While clearly not all IRDCs are presently forming stars, the most massive and opaque condensations within IRDCs might form OB stars \citep{2010ApJ...723L...7K}. Several studies have attempted to understand the molecular condensations where high-mass stars form and to establish an evolutionary sequence for them \citep[e.g.,][]{1996A&A...308..573M, 1998A&A...336..339M, 2000A&A...355..617M, 2008A&A...487.1119M}.
Based on recent VLA NH$_3$ \citep{2013MNRAS.432.3288S}, and ATCA H$_2$O maser and cm continuum \citep{2013A&A...550A..21S} observations of a large number of high-mass star forming candidates, these authors have suggested an evolutionary sequence from quiescent starless cores, with relatively narrow NH$_3$ lines and low temperatures to protostellar cores which already contain IR point sources, and show larger linewidths and temperatures, as well as the presence of ionized gas. While this classification is reasonable, we still lack an evolutionary scheme for the earliest phases of high-mass star formation. The present work is an attempt to characterize the earliest phases (prior to UC/HC H{\small II} regions) of high-mass star formation making use of the high continuum sensitivity of the Karl G. Jansky Very Large Array (VLA)\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}. A number of physical processes which cause cm continuum emission have been suggested to be present during early high-mass star formation. Most of these relate to the disk/flow systems which are expected around high-mass protostars. For instance, \citet{2007ApJ...664..950R} proposed an ionized disk around the Orion I protostar, \citet{1996ApJ...471L..45N} predict weak cm continuum emission from disk accretion shocks, and \citet{1995ApJ...443..238R} detected a synchrotron jet in the W3(H$_{2}$O) protostar. Most low-mass protostars show molecular flows which are driven by ionized -- mostly thermal -- jets (see \citealp{2010ApJ...725..734G} for a summary), and since molecular outflows are also prevalent in regions where high-mass protostellar candidates are found \citep[e.g.,][]{1996ApJ...457..267S, 2001ApJ...552L.167Z, 2002A&A...383..892B}, ionized jets associated with high-mass protostars are also expected. Furthermore, \citet{2007MNRAS.380..246G} interpreted weak and compact continuum sources in S106 and S140 as equatorial ionized winds from high-mass protostars, and \citet{2016ApJ...818...52T} have predicted the existence of weak outflow-confined H{\small II} regions in the earliest phases of high-mass star formation. At the typical distances of several kpc for high-mass star forming regions, these processes predict flux densities in the $\mu$Jy range and are now accessible to observations with the upgraded VLA. We have thus carried out a high sensitivity (rms$\,\sim$ 3--10 $\mu$Jy beam$^{-1}$) VLA survey to search for radio continuum emission at 6 and 1.3 cm with sub-arcsecond angular resolution towards candidates of high-mass star forming sites at evolutionary phases earlier than UC and HCH{\small II} regions. In this paper we present our VLA radio continuum survey of 58 regions selected from the literature which had no previous, or relatively weak radio continuum detection at the 1 mJy level. Below we describe the sample and the observations, present the detections and describe their physical properties. The analysis, interpretation and conclusions of this survey will be presented in Rosero et al. (in preparation; hereafter Paper II). \\
16
9
1609.03269
1609
1609.03575_arXiv.txt
{\it Gaia}'s Radial Velocity Spectrometer (RVS) has been operating in routine phase for over one year since initial commissioning. RVS continues to work well but the higher than expected levels of straylight reduce the limiting magnitude. The end-of-mission radial-velocity (RV) performance requirement for G2V stars was 15 km~s$^{-1}$ at $V = 16.5$ mag. Instead, 15 km~s$^{-1}$ precision is achieved at $15 < V < 16$ mag, consistent with simulations that predict a loss of 1.4 mag. Simulations also suggest that changes to {\it Gaia}'s onboard software could recover $\sim$0.14 mag of this loss. Consequently {\it Gaia}'s onboard software was upgraded in April 2015. The status of this new commissioning period is presented, as well as the latest scientific performance of the on-ground processing of RVS spectra. We illustrate the implications of the RVS limiting magnitude on {\it Gaia}'s view of the Milky Way's halo in 6D using the {\it Gaia} Universe Model Snapshot (GUMS).
\label{sec1} {\it Gaia}'s Radial Velocity Spectrometer \cite{cropper2011} data are processed on-ground by the Data Processing and Analysis Consortium (DPAC) Co-ordination Unit (CU) 6 Spectroscopic Processing pipeline \cite{katz2011}. The pipeline formally runs at the CU6 Data Processing Centre. We present offline tests of the CU6 pipeline running at the Mullard Space Science Laboratory. As already reported in \cite{cropper2014}, Fig. \ref{fig1} (left) presents tentative evidence that the CU6 pipeline is able to achieve the end-of-mission RV precision predicted by simulations that include the straylight. The majority of the stars in Fig. \ref{fig1} (left) are G dwarfs, so the measured $\sigma_{RV} \sim 15$ km~s$^{-1}$ at $15 < V < 16$ mag is consistent with simulations that predict a loss of 1.4 mag.\footnote{\url{http://www.cosmos.esa.int/web/gaia/science-performance}} \vspace*{-0.5 cm}
\begin{figure}[b] \begin{center} \includegraphics[width=0.49\textwidth]{mta_performance.eps} \includegraphics[width=0.41\textwidth]{gums_halo_only_grvs_lt16_2_x_y_erv_K1IIMP_proc_cro.eps} \caption{{\it Left}: Measured end-of-mission (40 transits) RV performance as a function of magnitude. {\it Right}: GUMS simulation of Milky Way halo stars with $G_{RVS} < 16.2$ mag looking face on: X and Y are in the Galactic plane with the Sun at the origin (square), where the cross is the Galactic centre. Each star is colour-coded according to its predicted end-of-mission $\sigma_{RV}$, assuming every star is a metal-poor K1 giant: $\sigma_{RV} < 15$ km~s$^{-1}$ (red/dark grey), $15 < \sigma_{RV} < 30$ km~s$^{-1}$ (green/light grey), $\sigma_{RV} > 30$ km~s$^{-1}$ (black).} \label{fig1} \end{center} \end{figure} {\it Gaia}-RVS is already the largest ever spectroscopic survey (5.4 billion spectra observed in its first year). This means it will also be the largest ever survey of the Milky Way's halo. It will provide $\sim$1 km~s$^{-1}$ precision radial velocities for $V < 12$ mag, which is the planned CU6 contribution to the second {\it Gaia} data release (2017, to be confirmed). With {\it Gaia} astrometry, this will provide $\sim$10,000 {\it local field} halo stars (all Galactic components $\sim$2 million) in 6D. RVS spectra will also be used by DPAC CU8 to derive abundances (Fe, Ca, Ti, Si), $T_{eff}$, log $g$, [M/H] and $A_{0}$ for $V < 12$ mag, ready for later data releases. CU6-measured $v$sin$i$ for these stars means that {\it Gaia} will measure these $\sim$10,000 local field halo stars and the $\sim$2 million all-Galactic-component stars in a total of 15 dimensions. We present preliminary evidence that {\it Gaia}-RVS can provide $\sim$15 km~s$^{-1}$ precision end-of-mission radial velocities at $15 < V < 16$ mag. At this precision and with {\it Gaia} astrometry, {\it Gaia} will provide $\sim$800,000-900,000 halo stars (all Galactic components $\sim$75-100 million) in 6D in the final {\it Gaia} catalogue (2022, to be decided). When analysed with CU8-derived [M/H] from {\it Gaia}'s BP/RP spectra of these stars, {\it Gaia}'s chemo-6D-kinematic mapping out to $\sim$30-50 kpc from the Sun will revolutionise our understanding of the Milky Way halo's structure, origin and evolution. \vspace*{-0.6 cm}
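As a rough numerical illustration of how per-transit precision maps onto the end-of-mission numbers quoted above, the sketch below simply assumes that independent transit RVs average down as $1/\sqrt{N}$ over the nominal 40 transits; the per-transit values are invented for the example and are not RVS calibration figures. The $\sigma_{RV}$ bands mirror the colour coding of Fig. \ref{fig1} (right).
\begin{verbatim}
import numpy as np

def end_of_mission_rv_error(sigma_per_transit_kms, n_transits=40):
    """End-of-mission RV uncertainty, assuming N independent transit
    measurements that combine as sigma/sqrt(N) (a simplification of the
    real error budget, which is straylight- and background-dependent)."""
    return sigma_per_transit_kms / np.sqrt(n_transits)

# Invented per-transit precisions (km/s) for three hypothetical faint stars
for sigma_transit in (40.0, 120.0, 250.0):
    sigma_eom = end_of_mission_rv_error(sigma_transit)
    if sigma_eom < 15.0:
        band = "sigma_RV < 15 km/s"
    elif sigma_eom < 30.0:
        band = "15 < sigma_RV < 30 km/s"
    else:
        band = "sigma_RV > 30 km/s"
    print("%6.1f km/s per transit -> %5.1f km/s end of mission (%s)"
          % (sigma_transit, sigma_eom, band))
\end{verbatim}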
16
9
1609.03575
1609
1609.08792_arXiv.txt
{Various lines of evidence suggest that the cores of a large portion of early-type galaxies (ETGs) are virtually evacuated of warm ionised gas. This implies that the Lyman-continuum (LyC) radiation produced by an assumed active galactic nucleus (AGN) can escape from the nuclei of these systems without being locally reprocessed into nebular emission, which would prevent their reliable spectroscopic classification as Seyfert galaxies with standard diagnostic emission-line ratios. The spectral energy distribution (SED) of these ETGs would then lack nebular emission and be essentially composed of an old stellar component and the featureless power-law (PL) continuum from the AGN. A question that arises in this context is whether the AGN component can be detected with current spectral population synthesis in the optical, specifically, whether these techniques effectively place an AGN detection threshold in LyC-leaking galaxies. To quantitatively address this question, we took a combined approach that involves spectral fitting with \Starlight\ of synthetic SEDs composed of stellar emission that characterises a 10 Gyr old ETG and an AGN power-law component that contributes a fraction $0\leq x_{\mathrm{AGN}} < 1$ of the monochromatic luminosity at $\lambda_0=$ 4020 \AA. In addition to a set of fits for PL distributions $F_{\nu} \propto \nu^{-\alpha}$ with the canonical $\alpha=1.5$, we used a base of multiple PLs with $0.5 \leq \alpha \leq 2$ for a grid of synthetic SEDs with a signal-to-noise ratio of 5--$10^3$. Our analysis indicates an effective AGN detection threshold at $x_{\mathrm{AGN}}\simeq 0.26$, which suggests that a considerable fraction of ETGs hosting significant accretion-powered nuclear activity may be missing in the AGN demographics. }
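A minimal sketch of how such a composite SED can be assembled is given below. Only the normalisation convention follows the description above ($F_{\nu} \propto \nu^{-\alpha}$, contributing a fraction $x_{\mathrm{AGN}}$ of the monochromatic flux at $\lambda_0 = 4020$ \AA); the smooth continuum is a placeholder standing in for the 10 Gyr old stellar spectrum, not an actual template.
\begin{verbatim}
import numpy as np

def agn_powerlaw_flambda(wave_aa, alpha=1.5):
    """Unnormalised AGN continuum: F_nu ~ nu**(-alpha),
    i.e. F_lambda ~ lambda**(alpha - 2)."""
    return wave_aa ** (alpha - 2.0)

def composite_sed(wave_aa, stellar_flambda, x_agn, alpha=1.5, lam0=4020.0):
    """Stellar SED plus an AGN power law supplying a fraction x_agn of the
    total monochromatic flux at lam0 (Angstrom)."""
    f_star0 = np.interp(lam0, wave_aa, stellar_flambda)
    pl = agn_powerlaw_flambda(wave_aa, alpha)
    scale = (x_agn / (1.0 - x_agn)) * f_star0 / np.interp(lam0, wave_aa, pl)
    return stellar_flambda + scale * pl

# Placeholder smooth continuum standing in for the old stellar spectrum
wave = np.linspace(3400.0, 8900.0, 1000)
stellar = 1.0 - 2.0e-5 * (wave - 4020.0)
sed = composite_sed(wave, stellar, x_agn=0.26)   # near the quoted threshold
frac = 1.0 - np.interp(4020.0, wave, stellar) / np.interp(4020.0, wave, sed)
print("AGN fraction at 4020 A: %.2f" % frac)     # recovers 0.26 by construction
\end{verbatim}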
\label{Sec:Introduction} The build-up of super-massive black holes (SMBHs) and co-evolution with their galaxy hosts is a central question in extragalactic astronomy. In this regard, comprehensive multi-wavelength studies of matter accretion onto SMBHs and the associated active galactic nuclei (AGN) phenomenon are fundamental for elucidating the interplay between SMBH growth and galaxy assembly history. Accretion-powered activity in galaxies is mostly being addressed in the optical through emission-line ratio diagnostics, which allow distinguishing between, for instance, gas photoionisation by an AGN and OB stars (e.g. \citealt{Baldwin_Phillips_Terlevich_1981, Veilleux_Osterbrock_1987}). The diagnostic power of such methods presumably vanishes, however, when galactic nuclei are mostly evacuated of gas (e.g. permeated by hot and tenuous plasma), thereby allowing the bulk of the Lyman continuum (LyC) output from an AGN to escape without being locally reprocessed as nebular emission. In those cases the optical spectral energy distribution (SED) is expected to essentially consist of stellar emission and a featureless AGN continuum. In the limiting case of a dominant AGN and high LyC-photon escape fraction, the optical SED of a galaxy is therefore expected to entirely lack both nebular (continuum and line) emission and stellar absorption features and superficially somewhat resemble that of a BL Lac (e.g. \citealt{Oke_Gunn_1974, Marcha_etal_1996, Kugler_etal_2014}). The situation of an AGN leaking LyC radiation envisaged here may apply to many early-type galaxy (ETG) nuclei. \cite{Papaderos_etal_2013} have first estimated the LyC escape fraction from ETG nuclei to be 70--95\% (see also \citealt{Gomes_etal_2016}) and pointed out that LyC photon escape may constitute a key element in understanding why many of these systems, despite evidence of a strong AGN energy source in radio or X-ray wavelengths, show only weak optical emission-lines typical of low-ionization nuclear emission-line regions (LINERs, \citealt{Heckman_1980}). Several questions naturally arise from such considerations. For instance, to which extent can an AGN be recovered from the optical SED of such LyC-leaking ETGs using currently available spectral fitting techniques? More specifically, do these techniques effectively place a detection threshold on the featureless AGN continuum, and are they capable of recovering its main characteristics (e.g. luminosity contribution, spectral slope) simultaneously with those of the stellar component in the host galaxy? Spectral population synthesis (SPS, also referred to as the inverse or semi-empirical approach; \citealt{Faber_1972}) has been applied over the past decades to optical Seyfert galaxy spectra with the goal of evaluating the AGN power-law (PL) featureless continuum and how this might affect the retrieved star formation history (e.g. \citealt{Koski_1978, Schmitt_StorchiBergmann_CidFernandes_1999, CidFernandes_etal_2004, Benitez_etal_2013}). However, none of these works have attempted to quantitatively infer an AGN detectability threshold for LyC-leaking ETGs, which is the goal of this study.
\label{Sec:Summary} The aim of this study was to examine whether there is an AGN detectability threshold in optical spectral synthesis studies of LyC-leaking early-type galaxies (ETGs) hosting accretion-powered nuclear activity. To address this question, we took a combined approach that involved spectral fitting with \Starlight\ of synthetic SEDs that are composed of an instantaneously formed 10 Gyr old stellar component \emph{plus} an AGN power-law component providing a fraction $0\leq x_{\mathrm{AGN}} < 1$ of the monochromatic luminosity at $\lambda_0=$ 4020 \AA. For this set of models, we find that \Starlight\ recovers the stellar mass to within $\sim$10\%, but systematically overestimates the intrinsic extinction and underestimates the luminosity-weighted stellar age and $x_{\mathrm{AGN}}$. Our analysis indicates an effective AGN detection threshold at $x_{\mathrm{AGN}}\simeq 0.26$, which suggests that a considerable fraction of ETGs hosting significant accretion-powered nuclear activity may be missing in the AGN demographics. \begin{figure} [t!] \begin{center} \includegraphics[width=0.47\textwidth]{Figures/NN_Tau1_AGN_DETECT_PMPL_-_M=100_-_tL_diff_vs_xagn_regression} \caption{Difference between the output and input luminosity-weighted mean stellar age $\langle t_{\star} \rangle_L$ as a function of $x_{\mathrm{AGN}}^{\rm in}$ for \spl\ and \mpl\ models (upper and lower panels, respectively). Thick black lines have the same meaning as in Fig.~\ref{Fig:M_ratio_as_a_function_of_xagn}. } \label{Fig:age_L_difference_as_a_function_of_xagn} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{Figures/NN_Tau1_AGN_DETECT_PMPL_-_M=100_-_Av_vs_xagn_regression} \caption{Inferred intrinsic extinction $A_V$ (mag) in the $V$ band as a function of $x_{\mathrm{AGN}}^{\rm in}$ for \spl\ and \mpl\ fits (upper and lower panels, respectively). The input SEDs are computed for an $A_V=0$. Thick black lines have the same meaning as in Fig.~\ref{Fig:M_ratio_as_a_function_of_xagn}.} \label{Fig:extinction_as_a_function_of_xagn} \end{center} \end{figure}
16
9
1609.08792
1609
1609.08271_arXiv.txt
We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic (MHD) equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations, for example by velocity shear and pickup ions, is included. Numerical solutions to the new model are obtained using the \textsc{Cronos} framework, and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate \emph{ab initio} diffusion mean free paths and drift lengthscales for the transport of cosmic rays in the turbulent solar wind.
The explicit consideration and self-consistent implementation of the evolution of turbulence in expanding plasma flows is a focus of contemporary modeling of astrophysical flow phenomena. This is particularly so for the solar wind; see the review-like introductions in \citet{Usmanov-etal-2011,Usmanov-etal-2014}, \citet{Zank-etal-2012}, and \citet{Wiengarten-etal-2015}. This considerable improvement, relative to non-self-consistent modeling, is, on the one hand, necessary in order to fully understand the transport of charged energetic particles in the heliosphere \citep[e.g.,][]{Engelbrecht-Burger-2013}, and via this to explore the physics of their interactions with the plasma turbulence \citep[e.g.,][]{Schlickeiser-2002,Shalchi-2009}. On the other hand, the correct description of the transport of cosmic rays in other astrophysical systems is also of great interest. For example, in astrospheres, i.e., circumstellar regions occupied by stellar winds, it is of high relevance in the context of exoplanet research \citep[e.g.,][]{Scalo-etal-2007, Grenfell-etal-2012, Griessmeier-etal-2015} and potentially for an understanding of cosmic ray anisotropy at high energy \citep{Scherer-etal-2015}. Another example is the, at least partly diffusive, cosmic ray transport in galactic halos \citep[e.g.,][]{Heesen-etal-2009, Mao-etal-2015}. Modeling of the transport of solar wind turbulence has advanced considerably since the early model of \citet{Tu-etal-1984}, which was itself a major step forward from WKB transport theory \citep[e.g.,][]{Parker65-wkb,Hollweg73-wkb}. Improved inertial range models \citep[e.g.,][]{Zhou-Matthaeus-1990, MarschTu90a} and energy-containing range models \citep[e.g.,][]{Matthaeus-etal-1994, Matthaeus-etal-1996, Zank-etal-1996, Zank-etal-2012} have been presented. These have often included additional effects, such as heating of the solar wind \citep[e.g.,][]{Zank-etal-1996, Matthaeus-etal-1999}, non-zero cross helicity \citep[e.g.,][]{MattEA04-Hc, BreechEA05, Breech-etal-2008}, non-constant difference in velocity $\vv$ and magnetic field $\vb$ fluctuation energy \citep[sometimes called residual energy,][]{Matthaeus-etal-1994,Zank-etal-2012, Adhikari-etal-2015}, and different correlation lengths for $\vv$ and $\vb$ as well as for the Elsasser fluctuations \citep{Zank-etal-2012,Dosch-etal-2013,Adhikari-etal-2015}. See \citet{Zank-etal-2012} and \citet{Zank-2014} for reviews of this progress. Another extension concerns the nature of the fluctuations. Models like those mentioned above typically treat the fluctuations as being of a single kind, typically either waves or some form of turbulence. \citet{Oughton-etal-2011} developed a model where propagating high-frequency wave-like fluctuations and low-frequency, perpendicularly cascading, thus quasi-two-dimensional (quasi-2D) turbulent fluctuations are both supported \citep[see also][]{Oughton-etal-2006,Isenberg-etal-2010}. This approach, referred to as \emph{two-component} turbulence modeling, explicitly acknowledges the presence of both turbulence and wave-like fluctuations and has distinct advantages compared to the `traditional' one-component modeling. First, it is commonly agreed that there are at least two turbulence drivers, namely stream shear at low frequencies and unstable pick-up ion velocity distributions at high frequencies. Clearly, the separation of the turbulence into two corresponding frequency components allows for a more `natural' quantitative formulation and modeling of the distinct driving processes. 
Second, this decomposition permits a fairly detailed treatment of nonlinear interactions of wave-like and quasi-2D components with each other and amongst themselves \citep{Oughton-etal-2006,Oughton-etal-2011}. And, third, assuming these two components to determine with sufficient accuracy the slab and 2D turbulence quantities required in contemporary cosmic ray transport theory, they form the basis of so-called \emph{ab initio} modeling of cosmic ray modulation \citep{Engelbrecht-Burger-2013}. In order to self-consistently couple turbulence transport models to those of the large-scale structure of the heliosphere \citep[e.g.,][]{Zank-2015} or astrospheres \citep[e.g.,][]{Scherer-etal-2015} the former must be formulated in three spatial dimensions. This has been done for the one-component model by \citet{Usmanov-etal-2011}. Another generalization concerns the removal of the limitation of the model's validity for the super-Alfv\'enic solar/stellar wind regimes, which---again for the one-component model---has been achieved recently in a non-self-consistent fashion by \citet{Adhikari-etal-2015} and fully self-consistently by \citet{Wiengarten-etal-2015}. Naturally, it is desirable to make both extensions also for the two-component turbulence model. This is the objective of the present paper, whose structure we now outline. We formulate the basic equations of the two-component phenomenology and its coupling to the large-scale MHD equations in Section~\ref{sec:model}. The implementation in the \textsc{Cronos} numerical framework is presented in Section~\ref{sec:results}, along with numerical results. These include a computational validation with respect to the simpler \citet{Oughton-etal-2011} model, and results from the new two-component model with its more realistic background solar wind. A comparison with spacecraft data is also presented. Then, in Section~\ref{sec:mfps}, the findings are used to calculate diffusion and drift coefficients for the transport of cosmic rays in the heliosphere. We conclude with a summary and an outlook on future improvements in Section~\ref{sec:conclusions}.
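Since several of the fluctuation measures referred to above recur throughout the model, we recall the conventional definitions (standard notation, with the magnetic fluctuation in Alfv\'en units; not necessarily the specific notation of the new evolution equations):
\begin{equation}
\vec{z}^{\pm} = \vv \pm \vb, \qquad
\sigma_{\rm c} = \frac{\langle|\vec{z}^{+}|^2\rangle - \langle|\vec{z}^{-}|^2\rangle}
{\langle|\vec{z}^{+}|^2\rangle + \langle|\vec{z}^{-}|^2\rangle}
= \frac{2\,\langle\vv\cdot\vb\rangle}{\langle|\vv|^2\rangle + \langle|\vb|^2\rangle}, \qquad
\sigma_{\rm D} = \frac{\langle|\vv|^2\rangle - \langle|\vb|^2\rangle}
{\langle|\vv|^2\rangle + \langle|\vb|^2\rangle},
\end{equation}
where $\vec{z}^{\pm}$ are the Elsasser fluctuations, $\sigma_{\rm c}$ is the normalized cross helicity, and $\sigma_{\rm D}$ is the normalized residual energy.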
\label{sec:conclusions} We have generalized the two-component turbulence model developed by \citet{Oughton-etal-2006} and \citet{Oughton-etal-2011} to a self-consistent treatment with respect to the solar wind plasma. This generalization consists, first, in a fully three-dimensional formulation of the evolution equations of the two-component phenomenology, i.e., the high-frequency parallel propagating wave-like and the low-frequency perpendicularly cascading quasi-2D turbulent fluctuations. This includes both a discussion of the most suitable way to formulate the evolution equations for the corresponding correlation lengthscales in order to obtain a closed system for all large-scale and small-scale quantities, and a discussion of the correct choice for the structural similarity parameters, which implies the occurrence of additional mixing terms \citep[in comparison to earlier work, see, e.g.,][]{Zank-etal-2012} in the equations for the energies (per unit mass) and cross helicities. Second, we have extended the previous modeling by (i) not neglecting transport and mixing terms involving the Alfv\'en velocity, (ii) taking into account the solar wind stream shear, and (iii) using a state-of-the-art formulation of the efficiency of the so-called pick-up ion driving \citep{Isenberg-2005}. After an implementation in the MHD modeling framework \textsc{Cronos} \citep[e.g.,][]{Wiengarten-etal-2015}, the new model, consisting of the generalized turbulence evolution equations self-consistently coupled with those for the large-scale expansion of the solar wind, was validated against the spherically symmetric results obtained earlier by \citet{Oughton-etal-2011} for a prescribed background solar wind. As a first application we have compared the new three-dimensional, self-consistent simulation data with turbulence quantities derived from measurements made with different spacecraft and demonstrated an improvement with respect to earlier models. These improvements comprise the inclusion and improved reproduction of off-ecliptic Ulysses results and, due to the additional Alfv\'en velocity terms, a better agreement of the computed energy-weighted cross helicity with that derived from observations. As a second application we have used the new results for the wave-like and quasi-2D fluctuations to calculate \emph{ab initio} diffusion mean free paths and drift lengthscales of energetic particles in the turbulent solar wind. Using a well-established result for the quasi-linear parallel mean free path \citep{Teufel-Schlickeiser-2003, Burger-etal-2008} and a novel expression for the proton perpendicular mean free path \citep{Ruff2012} derived from the random ballistic decorrelation (RBD) interpretation of the nonlinear guiding center (NLGC) theory \citep{Matthaeus-etal-2003}, we computed values for both quantities that are above the famous Palmer consensus \citep{Palmer-1982,Bieber-etal-1994}. Given that the simulations were carried out for solar minimum conditions, this result is in accordance with earlier findings \citep[e.g.,][]{Chen-Bieber-1993}. With respect to the particle drifts we employed state-of-the-art expressions derived by \citet{bv2010} and \citet{ts2012} for turbulence-reduced drift scales via fits to simulations of the drift coefficient for various turbulence conditions. While, interestingly, both drift scenarios predict larger scales above the Sun's poles than in the ecliptic plane, they yield rather different results, in general.
On the one hand, the drift scale of \citet{ts2012} is considerably larger than that of \citet{bv2010}. On the other hand, the latter exhibits a comparatively complex spatial dependence as a consequence of its additional dependences on the various correlation lengthscales. In view of the sensitivity of the solution of the cosmic ray transport equation to the diffusion and drift coefficients, the modeling of their dependence on the underlying turbulence as studied in the present work can be expected to lead to new insights in the field of cosmic ray modulation, both within and beyond the termination shock. With the new, generalized two-component model of solar wind turbulence we have demonstrated the feasibility of self-consistently taking into account all terms containing the Alfv\'en velocity. The explicit incorporation of the latter not only allowed the extension of the model to all heliographic latitudes and longitudes, but will in particular allow quantitative studies of the sub-Alfv\'enic solar wind regions in the inner heliosphere close to the Sun \citep[as in][]{Wiengarten-etal-2015}, and is also a prerequisite for applications to the heliosheath, whose turbulent structure is as yet unmodelled.
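For orientation, the conventions implicit in quoting `mean free paths' and `drift scales' above are the standard ones (not specific to the present model): the diffusion coefficients and the weak-scattering drift coefficient are related to the corresponding lengthscales and to the particle speed $v$ by
\begin{equation}
\kappa_{\parallel} = \frac{v\,\lambda_{\parallel}}{3}, \qquad
\kappa_{\perp} = \frac{v\,\lambda_{\perp}}{3}, \qquad
\kappa_{\rm A}^{\rm ws} = \frac{v\,r_{\rm L}}{3},
\end{equation}
with $r_{\rm L}$ the (maximal) Larmor radius. The turbulence-reduced drift scales discussed above are suppressed below $r_{\rm L}$ by factors that depend on the local fluctuation amplitudes and correlation lengths.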
16
9
1609.08271
1609
1609.04743_arXiv.txt
{Type III bursts and hard X-rays are both produced by flare energetic electron beams. The link between both emissions has been investigated in many previous studies, but no statistical studies have compared both coronal and interplanetary type III bursts with X-ray flares.} {Using events where the coronal radio emission above 100~MHz is exclusively from type III bursts, we revisited some long-standing questions regarding the relation between type III bursts and X-ray flares: Do all coronal type III bursts have X-ray counterparts? What correlation, if any, occurs between radio and X-ray intensities? What X-ray and radio signatures above 100~MHz occur in connection with interplanetary type III bursts below 14 MHz?} {We analysed ten years of data from 2002 to 2011, starting with a selection of coronal type III bursts above 100~MHz. We used X-ray flare information from RHESSI $>6$~keV to make a list of 321 events that have associated type III bursts and X-ray flares, encompassing at least 28\% of the initial sample of type III events. We then examined the timings, intensities, associated GOES class, and whether there was an associated interplanetary radio signature in both radio and X-rays.} {For our 321 events with radio and X-ray signatures, the X-ray emission at 6~keV usually lasted much longer than the groups of type III bursts at frequencies $> 100$~MHz. The selected events were mostly associated with GOES B and C class flares. A weak correlation was found between the type III radio flux at frequencies below 327 MHz and the X-ray intensity at 25-50~keV, with an absence of events at high X-ray intensity and low type III radio flux. The weakness of the correlation is related to the coherent emission mechanism of radio type IIIs, which can produce high radio fluxes even from low-density electron beams. Interplanetary type III bursts ($< 14$~MHz) were observed for 54\% of the events. The percentage of association increased when events were observed with 25-50~keV X-rays. A stronger interplanetary association was present when $25-50$~keV RHESSI count rates were above 250~counts/s or when radio fluxes at around 170 MHz were large ($> 10^3$~SFU), relating to electron beams with more energetic electrons above 25~keV and events where magnetic flux tubes extend into the high corona. We also find that whilst on average type III bursts increase in flux with decreasing frequency, the rate of this increase varies from event to event.} {}
\label{sec:intro} During solar flares, the Sun releases magnetic energy stored in non-potential structures into plasma heating, bulk motions, and particle acceleration. Energetic particles play a key role in the energy release of solar flares since a significant fraction of the released energy is thought to go into such particles \citep[e.g][]{Emslie:2012aa}. Energetic particles are also a major driver of solar-terrestrial physics and space weather applications. In particular, radio emission from electron beams produced in association with solar flares provides crucial information on the relationship and connections between energetic electrons in the corona and electrons measured in situ. The most direct and quantitative signature of energetic electrons interacting at the Sun is provided by hard X-ray (HXR) emissions, which allow an estimation of electron energy spectra and number density. Although studied for many years, the connections between HXRs and radio type IIIs are not fully understood. We statistically investigated the connection between events that show type III bursts in the corona and X-ray flares to further understand how the two are related. We also examined the occurrence of the interplanetary counterparts of the `coronal' type III bursts to explore what electron beam properties and coronal conditions are favourable for continued radio emission in the heliosphere. Type III radio emissions can be observed over decades of frequency ($1$~GHz to $10$~kHz). They are characterized by fast frequency drifts (around 100 MHz/s in the metric range) and are believed to be produced by high-energy ($0.05$c-$0.3$c) electron beams streaming through the corona and potentially through the interplanetary space \citep[see e.g.][for reviews]{Suzuki:1985aa,Reid:2014ab}. Type III bursts are one of the most frequent forms of solar system radio emission. They are used to diagnose electron acceleration during flares and to get information on the magnetic field configuration along which the electron beams propagate. The bump-in-tail instability produced during the propagation of energetic electrons induces high levels of Langmuir waves in the background plasma \citep{Ginzburg:1958aa}. Non-linear wave-wave interaction then converts some of the energy contained in the Langmuir waves into electromagnetic emission near the local plasma frequency or at its harmonic \citep[e.g.][]{Melrose:1980aa,Li:2014aa,Ratcliffe:2014aa}, producing radio emission that drifts from high to low frequencies as the electrons propagate through the corona and into interplanetary space. In recent years, several numerical simulations have been performed to model the coronal radio type III emissions from energetic electrons and to investigate the effects of beam and coronal parameters on the emission \citep[e.g.][]{Li:2008ab,Li:2009aa,Li:2011ab,Tsiklauri:2011aa,Li:2014aa,Ratcliffe:2014aa}. Since the discovery of type III bursts and the advent of continuous and regular HXR observations, many studies have analysed the relationship between type III bursts and hard X-ray emissions. The first studies of the temporal correlations between metric (coronal) type III bursts and HXRs above 10 keV were carried out by \citet{Kane:1972aa,Kane:1981aa}. It was found that while about 20\% of the impulsive HXR bursts were correlated with type III radio bursts, only 3\% of the reported type III bursts were associated with HXR emissions above 10 keV.
This showed that groups of metric type III bursts were more frequently detected than HXR emissions above 10 keV. For 70\% of the correlated bursts, it was also shown that the times of X-ray and radio maxima agree within $\pm9$~s. Moreover, the association rate was found to increase when the type III bursts were more intense or had a higher starting frequency. On the other hand, it was also found that HXR emissions are more often associated with type III bursts when their flux above 20 keV is larger and their spectra are harder. In a further study based on a larger number of events, \citet{Hamilton:1990aa} confirmed that the association of hard X-ray and type III bursts slightly increases for harder X-ray spectra. They also found that the intensity distribution of hard X-ray bursts associated with type III bursts is significantly different from the distribution of all hard X-ray bursts, showing that both kinds of emissions are statistically dependent. They confirmed, on a larger selection of events than \citet{Kane:1981aa}, that higher-flux radio bursts are more likely to be associated with hard X-ray bursts and vice versa. They also examined whether there is a correlation between the peak count rate of the hard X-ray burst and the peak flux density at 237 MHz of the associated type III burst. They found no apparent correlation between these two quantities and a large dispersion in the ratio of peak X-ray to radio intensities. Other studies have confirmed that type III generating electrons can be part of the same population as the HXR generating electrons. Indeed, it was shown in different papers that there is a correlation between the characteristics of the HXR emitting electrons (non-thermal spectral index or electron temperature) and the starting frequencies of type III bursts \citep[see][for recent studies]{Benz:1983aa,Raoult:1985aa,Reid:2011aa,Reid:2014aa}. Correlations on sub-second timescales found between HXR pulses and type III radio bursts also strongly support attributing the causal relationship between HXR and radio type III emissions to a common acceleration mechanism \citep[see][]{Kane:1982aa,Dennis:1984aa,Aschwanden:1990aa,Aschwanden:1995aa}. Further statistical studies were performed more recently using RHESSI X-ray observations and radio observations from Phoenix-2 in the $100$~MHz to $4$~GHz range. The association between X-ray emissions for flares larger than GOES C5.0 and radio emission (all types of events) was investigated for 201 events \citep{Benz:2005aa,Benz:2007aa}. At the peak phase of hard X-rays, different types of decimetric emissions (type IIIs but also pulsations, continua and narrowband spikes) were found in a large proportion of the events. For only 17\% of the HXR flares, no coherent emission in the decimetric/metric band was found, and all these flares had either radio emission at lower frequencies or were limb events. Classic meter-wave type III bursts were found in 33\% of all X-ray flares, but they were the only radio emission in just 4\% of the events. The strong association but loose correlation between HXR and radio coherent emission could be explained in the context of multiple reconnection sites that are connected by common magnetic field lines. While reconnection sites at high altitudes could produce the energetic electrons responsible for type III bursts, the reconnection sites at lower altitude could be linked to the production of HXR emitting electrons and some high frequency radio emissions.
The correlation between hard X-ray and type III emissions can also be examined by combining spatially resolved observations. In some cases, such observations support the idea that type III generating electrons are produced by the same acceleration mechanism as HXR emitting electrons. A close link has been found between the evolution of X-ray and radio sources on a timescale of a few seconds \citep[e.g.][]{Vilmer:2002aa}. However, in other cases the link between HXR emissions and radio decimetric/metric emissions is more complicated. The radio emissions can originate from several locations, one very close to the X-ray emitting site and the other further away from the active region and more linked to the radio burst at lower frequencies \citep[e.g.][]{Vilmer:2003aa}. Such observations are more consistent with the cartoon discussed in \citet{Benz:2005aa}, in which the electrons responsible for type III emissions at low frequencies are produced in a secondary reconnection site at higher altitudes than the main site responsible for X-rays. In a recent study, \citet{Reid:2014aa} investigated the proportion of events that are consistent with the simple scenario in which the type III generating electrons originate through the same acceleration mechanism as the HXR producing electrons, compared to a more complicated scenario of multiple acceleration regions. In order to accomplish this, they compared the evolution of the type III starting frequency and the HXR spectral index, signatures of the acceleration region characteristics. They found that, in a sample of 30 events, 50\% showed a significant anti-correlation. This was interpreted as evidence that, for these events, there is a strong link between type III emitting electrons and hard X-ray emitting electrons. Such a close relationship was furthermore used to deduce the spatial characteristics of the electron acceleration sites. Radio emissions from energetic electron beams propagating in the interplanetary medium have been observed at frequencies below 10 MHz with satellite-based experiments since the 1970s \citep[e.g.][]{Fainberg:1972aa,Fainberg:1974aa}. The most detailed study of the association between coronal (metric) and interplanetary (IP) bursts was performed by \citet{Poquerusse:1996aa}, who studied the association between type III groups observed by the ARTEMIS spectrograph on the ground (100-500 MHz) and the URAP radio receiver on the Ulysses spacecraft (1-940 kHz). They found that when there is an association, one single type III burst at low frequencies usually comes from a group of 10 to 100 type III bursts at higher frequencies. Based on 200 events, they found that 50\% of the events produced both strong coronal and interplanetary type III emissions. They found, however, that not every coronal type III burst (even if strong) produces an IP type III burst. In this paper, we address again, using ten years of data from 2002 to 2011, the long-standing questions concerning the link between type III emitting electrons in the corona and HXR emitting electrons, and the probability that coronal type III bursts have interplanetary counterparts. Section \ref{sec:eventlist} presents the observations and the methodology used in the study. Section \ref{sec:T3XR} presents results on the link between coronal type III bursts and hard X-ray emissions, and Section \ref{sec:T3I3T} discusses the link between coronal and interplanetary type III bursts. Section \ref{sec:discussion} discusses the results in the context of previous as well as future observations.
Comparisons with predictions of numerical simulations of type III emissions are also tentatively presented.
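For reference, the standard relation linking the (fundamental) plasma emission frequency used throughout this paper to the local electron density is
\begin{equation}
f_{\rm pe} \simeq 8.98\,{\rm kHz}\,\sqrt{n_{\rm e}\,[{\rm cm^{-3}}]},
\end{equation}
so that the 100 MHz lower bound adopted for `coronal' bursts corresponds to $n_{\rm e} \approx 1.2\times10^{8}$~cm$^{-3}$, while the 14~MHz upper bound of the WIND/WAVES RAD2 band corresponds to $n_{\rm e} \approx 2.4\times10^{6}$~cm$^{-3}$ (for harmonic emission the corresponding densities are a factor of four lower).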
\label{sec:discussion} In this paper, we performed a new statistical analysis of the link between coronal type III bursts and X-ray flares, and of the occurrence of the interplanetary counterparts of coronal type III bursts. The study was based on ten years of observations from 2002 to 2011 using radio spectra in the 4000-100 MHz range for coronal bursts, X-ray data $>6$~keV and RAD2 observations on WIND for the interplanetary type III counterparts. Based on the RHESSI catalogue of X-ray flares above 6 keV and radio catalogues of pure coronal type III bursts (without other types of radio bursts), we investigated the connection between groups of type III bursts and X-ray flares in ten years of data and the link with interplanetary bursts. We summarize the important results of our study below: \begin{itemize} \item The automatic search provided one order of magnitude more X-ray flares at 6 keV than coronal type III bursts. \item The large majority of the events in our sample are associated with C and B class flares. \item Type III bursts above 100 MHz usually start after the X-ray flare is detected at 6 keV and end before the 6 keV emission ceases. \item A lower bound of 28\% of radio events with only coronal type III bursts had associated X-ray emissions detected with RHESSI above 6 keV. \item High 25-50 keV X-ray intensities were correlated with high radio peak fluxes below 327 MHz, but the opposite was not true. \item The peak radio flux tends to increase from 450~MHz to 150~MHz, but the amount varies significantly from event to event. \item Interplanetary type III bursts are observed with WIND/WAVES below 14 MHz for 54\% of the coronal type III bursts in our sample. \item Events with $>250$~counts/s at 25-50~keV and/or $>1000$~SFU at 170 MHz had a high chance of the coronal type III becoming interplanetary. \end{itemize} The finding that one order of magnitude more X-ray flares were detected by RHESSI above 6 keV than pure coronal type III bursts in the same period can be understood from the emission mechanisms. X-ray emission in the 6-12 keV range is largely produced by electrons with a thermal distribution that is different from the non-thermal electrons that are believed to be the primary cause of type III bursts. The strict selection of `type III flags' in the catalogue also reduces the number of radio events since it excludes events in which coronal type III bursts would be observed in association with other kinds of coherent radio emissions at decimetric/metric wavelengths. The thermal origin of the 6 keV X-rays naturally explains the shorter duration of coronal type IIIs, as thermal emission is usually observed during the rise and decay of flares, in contrast to the non-thermal emission that is localized to the impulsive phase of the flare. The very small proportion of large GOES class flares in our sample is an effect of the data selection of `pure' type III events. Indeed, as was shown by \citet{Benz:2005aa,Benz:2007aa}, large GOES class flares (i.e. higher than C5) are almost systematically associated with coherent radio emissions above 100~MHz, but classical type III bursts are the only radio signature in only 4\% of these events. \subsection{Coronal type III and hard X-ray flare connection} Our results reinforce the strong connection already found between electrons that drive coronal type III emission and X-ray flare emission.
We found an association rate of at least 28\% between coronal type III bursts and X-ray flares, which is larger than the 3\% of \citet{Kane:1981aa} and 15\% of \citet{Hamilton:1990aa}. The larger association rate that we found could be related to our selection of type IIIs as the only radio emission present, to our use of 6~keV X-rays rather than 10~keV, and to instrument sensitivity, particularly in X-rays. The association rate of at least 28\% is a further indication that the non-thermal electrons responsible for radio and X-rays are part of a common acceleration process. However, not all coronal type IIIs have associated X-ray flares. The lack of simultaneous observations can be due to instrument sensitivity preventing detection of weak simultaneous X-ray emissions associated with type III production, or to a lack of magnetic connectivity preventing electron transport either up or down in the solar corona from the acceleration site. The weak log-log correlation between the peak flux of radio emissions at 327-164~MHz and the X-ray count rate at 25-50~keV arises from the notable absence of events with a high X-ray intensity and a low type III radio flux. This implies that when flares with high X-ray count rates above 25~keV produce coronal type III bursts, the radio fluxes are likely to be high. However, the correlation is only weak because the converse is not true. `Coronal' type III bursts with high radio fluxes are also often associated with low X-ray count rates above 25 keV. This results in a large scatter between X-ray and radio fluxes; this large scatter was already found by \citet{Kane:1981aa,Hamilton:1990aa}. The large scatter is explained by the increased efficiency of producing a type III radio burst even by a low-density electron beam via the amplification of coherent waves. Conversely, the incoherent nature of bremsstrahlung makes X-ray production less efficient, and the number of high-energy electrons is the primary parameter for producing high rates of X-ray emission \citep[e.g.][]{Holman:2011aa,Kontar:2011aa}. The dependency of hard X-rays on the number of high-energy electrons naturally explains the notable absence of events with high X-ray intensity and low type III radio flux. Of the X-ray flares found in association with coronal type IIIs ($>100$~MHz), 56\% had $>25$~keV X-rays, a much higher ratio than is present for all flares \citep[e.g.][]{Hannah:2011aa}. A similar result was found by \citet{Kane:1981aa}, who showed an increase in the X-ray to type III correlation with the peak flux of X-ray emission. `Coronal' type III bursts observed in our study need to have a starting frequency well above 100 MHz. The result reinforces a previously observed property that starting frequencies of type III bursts are linked to the spectral index of the accelerated electron beams \citep{Reid:2011aa,Reid:2013aa,Reid:2014aa}. These studies showed that, for an electron beam with an injection function of the following form: \begin{equation}\label{eqn:source} S(v,r,t) = A v^{-\alpha}\exp\left(-\frac{|r|}{d}\right)\exp\left(-\frac{|t-t_{inj}|}{\tau}\right), \end{equation} with an initial characteristic size $d$, injection time $\tau$, and velocity spectral index $\alpha$, the initial height of type III emission depended upon \begin{equation} h_{typeIII}=(d+v_{av}\tau)\alpha +h_{acc}, \end{equation} with $v_{av}$ as the average significant velocity of the electron beam and $h_{acc}$~($r=0$) the starting height.
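A rough numerical illustration of this scaling is sketched below; the coronal density model (an exponential atmosphere with an assumed base density and scale height) and all beam parameters are arbitrary placeholders, used only to show how a harder spectrum (smaller $\alpha$) translates into a smaller starting height and hence a higher starting frequency.
\begin{verbatim}
import numpy as np

def starting_height_km(alpha, d_km=2.0e3, tau_s=0.1, v_av_kms=7.5e4,
                       h_acc_km=2.0e4):
    """h_typeIII = (d + v_av*tau)*alpha + h_acc  (all inputs are placeholders)."""
    return (d_km + v_av_kms * tau_s) * alpha + h_acc_km

def plasma_frequency_mhz(h_km, n0_cm3=2.0e9, scale_height_km=5.0e4):
    """f_pe from an assumed exponential density model n(h) = n0*exp(-h/H)."""
    n_e = n0_cm3 * np.exp(-h_km / scale_height_km)
    return 8.98e-3 * np.sqrt(n_e)   # MHz

for alpha in (4.0, 6.0, 8.0):       # harder (4) to softer (8) beam spectra
    h = starting_height_km(alpha)
    print("alpha = %.0f : h_start ~ %.0f Mm, f_start ~ %.0f MHz"
          % (alpha, h / 1.0e3, plasma_frequency_mhz(h)))
\end{verbatim}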
In the case of a harder electron spectrum (lower spectral index), the electron beam becomes unstable to Langmuir wave production after a shorter propagation distance. The starting frequency of the type III emission is thus found to be higher. `Coronal' type IIIs usually occurred during the impulsive phase of the flare, starting after the 6~keV X-ray rise and ending before the 6~keV X-rays decay. A similar result was found by \citet{Aschwanden:1985aa} for decimetric emission between 300-1000~MHz, many of which included type III bursts. Additionally, the X-ray and type III peak fluxes are usually closely related in time. For the events of our sample for which $>25$~keV emission was detected, around two-thirds have peak fluxes within 40 seconds of each other, decreasing to around half for peak fluxes within 15 seconds, which is just above the time cadence used in our analysis. \subsection{Peak radio flux versus frequency} We investigated the evolution in frequency of the type III log-mean peak radio flux. Although there is a very large dispersion in the spectral index found (see Figure \ref{fig:nrh_peak_flux}c), the log-mean peak radio flux is found to increase with decreasing frequency, with a mean slope of -1.78. The evolution of the log-mean radio flux with frequency furthermore depends on the association of the event with either significant hard X-ray emission above 25 keV (slope of -1.57) or an interplanetary burst (slope of -1.97). The large scatter we observed shows that the frequency evolution of type III bursts varies from event to event. The slopes are significantly different from the slope of -2.9 found in a survey of 10,000 type III bursts observed with the Nan\c{c}ay Radioheliograph \citep{Saint-Hilaire:2013aa}. The discrepancy could be related to the fact that our sample only includes bursts that emit at all NRH frequencies, as opposed to a statistical average over all bursts. Our results can be taken as further evidence that, in general, type III bursts are more numerous and stronger at low frequencies than at high frequencies. Physical reasons for the increase include the onset and increase of the bump-in-tail instability with velocity dispersion, the lower background plasma density reducing collisional damping and increasing the Langmuir wave growth rate, and the lower frequency radio waves escaping the corona more easily. The increase in radio flux with decreasing frequency is consistent with what was observed by \citet{Dulk:1998aa}, who found that the spectrum of a type III burst in the $3-50$~MHz range has a negative spectral index. More recently, the statistical study performed by \citet{Krupar:2014aa} on 152 type III bursts at long wavelengths observed by STEREO/SWAVES also showed that the mean type III radio flux increases significantly from 10 MHz to 1 MHz. \subsection{Interplanetary bursts and $>25$ keV electrons} As discussed in Section \ref{sec:T3I3T} and summarized in Table \ref{tab:HXRIT3}, the coronal type III bursts in our sample are more often associated with IP type III bursts when they are also associated with detectable 25-50 keV emission. Figure \ref{fig:xray_hist} furthermore shows that for the HXR events in this study that produce more than 250 counts/s in the 25-50 keV channel, there is a much higher chance of detecting an IP burst in connection with the coronal burst. The production of high X-ray count rates at 25-50~keV requires a high number of injected electrons above 25 keV.
Increasing the number of injected electrons above 25~keV can be achieved by increasing the number density of electrons at all energies or by hardening the energy distribution (smaller spectral index) of the accelerated electron beam. A larger number of high-energy electrons increases the Langmuir wave energy obtained from the unstable electron beam. From quasilinear theory \citep{Vedenov:1963aa,Drummond:1964aa}, the growth rate of Langmuir waves is \begin{equation}\label{eqn:growthrate} \gamma_{ql}=\frac{\pi\omega_{pe}v^2}{n_e}\frac{\partial f}{\partial v}, \end{equation} for an electron distribution $f(v)$. An increase in the density of beam electrons or an increase in the velocity of the electrons increases $\gamma_{ql}$. Increasing the Langmuir wave energy density likely increases the type III radio flux \citep[e.g.][]{Melrose:1986aa} and increases the probability of generating a detectable type III burst. Other factors can inhibit interplanetary type III production (e.g. magnetic connectivity), so we do not expect all beams with a large electron flux above 25~keV to produce interplanetary type III bursts. The increase in type III flux when the number of electrons above 25~keV is increased was demonstrated numerically by \citet{Li:2008ac,Li:2009aa,Li:2011ab} using a hot, propagating Maxwellian. They showed that increasing the temperature of the initial Maxwellian beam increases the type III radio flux and increases the bandwidth; the burst starts at higher frequencies and stops at lower frequencies. The simulations were restricted to frequencies above 150~MHz and the type III flux peaked above 200 MHz, which is inconsistent with the general trend of increasing flux with decreasing frequency reported in this and other studies. A further study by \citet{Li:2013aa} demonstrated, using an initial power-law electron beam, that decreasing the power-law spectral index increased the fundamental radio flux emitted at high and low radio frequencies. Both sets of simulations can explain why a coronal burst produced by an electron beam with a smaller (harder) power-law spectral index, i.e. associated with a larger 25-50 keV count rate, would have a higher probability of being associated with an interplanetary type III burst than if the electron beam had a larger (softer) initial power-law spectral index. Electron beams can also be diluted as the cross-section of the guiding magnetic flux tube increases with height. A high density of electrons above 25 keV is more likely to still produce detectable type III radio flux at altitudes corresponding to 14 MHz, even in the case of a diverging magnetic flux tube. The effect of flux tube radial expansion on type III stopping frequencies has been shown recently by \citet{Reid:2015aa}. The numerical simulations of beam electrons, and their corresponding resonant interaction with Langmuir waves in diverging magnetic flux tubes, are used to compute the type III stopping frequency; this is defined as the frequency at which the beam is no longer able to produce a sufficient level of Langmuir wave energy density as compared to the thermal level. Denser electron beams and harder electron beams (low spectral index) are more likely to produce significant Langmuir wave energy densities further away from the injection site than sparse or softer electron beams, and therefore produce type III bursts with lower stopping frequencies. This result provides further understanding as to why coronal bursts associated with stronger HXR bursts are more likely to be associated with IP type III bursts.
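As a hedged numerical illustration of the growth-rate expression above (and not of the full simulations cited), the short Python sketch below evaluates $\gamma_{ql}$ for a Gaussian beam superposed on the background plasma; the background density, beam velocity, beam width and beam densities are assumed, order-of-magnitude values rather than measurements.
\begin{verbatim}
import numpy as np

# Sketch of the quasilinear growth rate gamma_ql = pi*omega_pe*v^2/n_e * df/dv
# for a Gaussian electron beam. All numbers below are assumed illustrative values.

n_e = 3.6e8                       # background electron density (cm^-3), fundamental near 170 MHz
omega_pe = 5.64e4 * np.sqrt(n_e)  # plasma frequency (rad/s)

c = 3.0e10                        # speed of light (cm/s)
v_beam = 0.3 * c                  # beam velocity (cm/s), assumed
dv = 0.05 * c                     # beam velocity spread (cm/s), assumed

def growth_rate(v, n_b):
    """gamma_ql(v) in s^-1 for a Gaussian beam of density n_b centred on v_beam."""
    f = n_b / (np.sqrt(2 * np.pi) * dv) * np.exp(-(v - v_beam) ** 2 / (2 * dv ** 2))
    dfdv = -(v - v_beam) / dv ** 2 * f        # analytic derivative of the Gaussian
    return np.pi * omega_pe * v ** 2 / n_e * dfdv

v = np.linspace(0.1 * c, 0.5 * c, 2000)
for n_b in (1.0, 10.0, 100.0):                # beam densities (cm^-3), assumed
    gamma_max = growth_rate(v, n_b).max()     # maximum positive growth over the grid
    print(f"n_b = {n_b:6.1f} cm^-3 -> max gamma_ql = {gamma_max:.2e} s^-1")
\end{verbatim}
As expected from the linear dependence on $\partial f/\partial v$, the maximum growth rate scales directly with the beam density in this simple picture.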
\subsection{Interplanetary bursts and magnetic connectivity} We found the detection of a strong radio flux at frequencies around 170 MHz (Figure \ref{fig:radio_hist}) to be a very good indication that the type III burst will become an interplanetary type III burst at lower frequencies. Previously, it was not clear whether the absence of radio emission below 14 MHz results from a weak beam or from unfavourable magnetic connectivity. When electron beams are dense enough to produce high-flux type IIIs around 170 MHz, we often observe them below 14 MHz. We deduce that magnetic connectivity plays less of a role in the transport of electrons from the high corona into interplanetary space. A strong radio flux at frequencies around 170 MHz indicates that the electrons \emph{do} have access to the high corona and subsequently are more likely to access the interplanetary medium. We did not find the same trend, that strong radio flux is a good indication of interplanetary type III bursts, at frequencies at and above 237~MHz. The magnetic connectivity of bursts with a large radio flux at high frequencies, which are produced low in the solar atmosphere, is not necessarily favourable for the electron beam exciter to escape into the upper corona and interplanetary space (see e.g.\ the example in \citet{Vilmer:2003aa}), even if the radio flux is high. The access of particles to open field lines, and hence the possible association between coronal and interplanetary type III bursts, may evolve during flares due to, for example, processes of interchange reconnection in which newly emerging flux tubes can reconnect with previously open field lines \citep[see e.g.][]{Masson:2012aa,Krucker:2011aa}. The statistical results presented in this paper were based on radio spectra and flux time profiles but did not include spatially resolved observations. This latter aspect will be considered in a following study, which will examine in detail the combination of HXR images provided by RHESSI with the multi-frequency images of the radio bursts produced at decimeter/meter wavelengths by the Nan\c{c}ay Radioheliograph. Tracing the magnetic connectivity between the solar surface, the corona and the interplanetary medium will be one of the key questions of the Solar Orbiter mission. As shown in the present paper, X-ray and radio emissions from energetic electron beams can be used to trace the electron acceleration and propagation sites from the solar surface to the interplanetary medium. The combination of ground-based radio spectrographs and imagers with the radio, X-ray, and in-situ electron measurements aboard Solar Orbiter will undoubtedly contribute largely in the next decade to a better understanding of the magnetic connectivity between the Sun and the interplanetary medium, and of the release and distribution in space and time of energetic particles from the Sun.
16
9
1609.04743
1609
1609.04269_arXiv.txt
{The European Space Agency spacecraft {\it Gaia} is expected to observe about 10,000 Galactic Cepheids and over 100,000 Milky Way RR Lyrae stars (a large fraction of which will be new discoveries), during the five-year nominal lifetime spent scanning the whole sky to a faint limit of $G$ = 20.7 mag, sampling their light variation on average about 70 times.} {We present an overview of the Specific Objects Study (SOS) pipeline developed within the Coordination Unit 7 (CU7) of the Data Processing and Analysis Consortium (DPAC), the coordination unit charged with the processing and analysis of variable sources observed by {\it Gaia}, to validate and fully characterise Cepheids and RR Lyrae stars observed by the spacecraft. The algorithms developed to classify and extract information such as the pulsation period, mode of pulsation, mean magnitude, peak-to-peak amplitude of the light variation, sub-classification in type, multiplicity, secondary periodicities, light curve Fourier decomposition parameters, as well as physical parameters such as mass, metallicity, reddening and, for classical Cepheids, age, are briefly described.} {The full chain of the CU7 pipeline was run on the time-series photometry collected by {\it Gaia} during 28 days of Ecliptic Pole Scanning Law (EPSL) and over a year of Nominal Scanning Law (NSL), starting from the general Variability Detection, general Characterisation, proceeding through the global Classification and ending with the detailed checks and typecasting of the SOS for Cepheids and RR Lyrae stars (SOS Cep\&RRL). We describe in more detail how the SOS Cep\&RRL pipeline was specifically tailored to analyse {\it Gaia}'s $G$-band photometric time-series with a South Ecliptic Pole (SEP) footprint, which covers an external region of the Large Magellanic Cloud (LMC), and to produce results for confirmed RR Lyrae stars and Cepheids to be published in {\it Gaia} Data Release 1 ({\it Gaia} DR1).} {$G$-band time-series photometry and characterization by the SOS Cep\&RRL pipeline (mean magnitude and pulsation characteristics) are published in {\it Gaia} DR1 for a total sample of 3,194 variable stars, 599 Cepheids and 2,595 RR Lyrae stars, of which 386 (43 Cepheids and 343 RR Lyrae stars) are new discoveries by {\it Gaia}. All 3,194 stars are distributed over an area extending 38 degrees on either side from a point offset from the centre of the LMC by about 3 degrees to the north and 4 degrees to the east. The vast majority, but not all, are located within the LMC. The published sample also includes a few bright RR Lyrae stars that trace the outer halo of the Milky Way in front of the LMC.} {}
Easy to recognise thanks to their characteristic light variation, Cepheids and RR Lyrae stars are radially pulsating variables that trace stellar populations with different age and chemical composition: classical Cepheids (hereinafter, DCEPs) trace a young ($t \lesssim 300$ Myr) stellar component; anomalous Cepheids (ACEPs) can trace stars of intermediate age ($t \sim$1-5 Gyr) and metal poor content ([Fe/H]$< -1.5$ dex), although it is still a matter of debate whether they might also arise from coalescence of binary stars as old as about 10 Gyr; finally, the RR Lyrae stars and the Type II Cepheids (T2CEPs) trace an old ($t >$10 Gyr) stellar population. They are primary standard candles in establishing the cosmic distance ladder, because Cepheids conform to period--luminosity ($PL$), period--luminosity--colour ($PLC$) and period--Wesenheit ($PW$) relations, whereas the RR Lyrae stars follow a luminosity--metallicity relation in the visual band ($M_V$--[Fe/H]) and a period--luminosity--metallicity ($PLZ$) relation in the infrared. With its multi-epoch monitoring of the full sky, {\it Gaia} will discover and measure position, parallax, proper motion and time-series photometry of thousands of Cepheids and RR~Lyrae stars in the Milky Way (MW) and its surroundings, down to a faint magnitude limit of $G\sim$ 20.7 mag. The spacecraft is expected to observe from 2,000 to 9,000 MW Cepheids, about 70,000 RR~Lyrae stars in the Galactic halo, and from 15,000 to 40,000 RR~Lyrae in the MW bulge (see table~3 in \citealt{eyer12}, and references therein), and according to the most recent estimates (\citealt{soszynski15b, soszynski16}) over 45,500 RR Lyrae stars and 9,500 Cepheids in the Magellanic Clouds. {\it Gaia} will revise these statistics upwards, as ongoing surveys such as OGLE-IV (\citealt{soszynski15a,soszynski15b,soszynski15c,soszynski16}), the Catalina Sky Survey (CSS, \citealt{drake13,torrealba15}), Pan-STARRS (\citealt{hernitschek16}), LINEAR (\citealt{sesar13}) and PTF (\citealt{cohen15}) are constantly reporting new discoveries and increased numbers of RR Lyrae stars and Cepheids both in the MW and in its neighbour companions. {\it Gaia}'s complete census of the Galactic Cepheids and RR~Lyrae will allow a breakthrough in our understanding of the MW structure by tracing young and old variable stars all the way through from the Galactic bulge, to the disk, to the halo, and likely revealing new streams and faint satellites that bear witness to the MW hierarchical build-up (see e.g. \citealt{clementini16}). But most importantly, {\it Gaia} will measure the parallax of tens of thousands of Galactic Cepheids and RR Lyrae stars, along with milli-mag optical spectrophotometry ($G$ broad-band white-light magnitude, blue and red spectro-photometry) and radial velocities and chemistry for those within reach of the Radial Velocity Spectrometer (RVS; $G\lesssim$17 mag). The unprecedented accuracy of {\it Gaia} measurements for local Cepheids and RR~Lyrae stars will allow the absolute calibration via parallax of the Cepheid $PL$, $PLC$, $PW$ relations and of the $M_{V} -$[Fe/H] and infrared $PLZ$ relations for RR Lyrae stars, along with a test of the metallicity effects through simultaneous abundance measurements. This will enable re-calibration of ``secondary'' distance indicators probing distances far into the unperturbed Hubble flow and a total re-assessment of the whole cosmic distance ladder, from local to cosmological distances, in turn significantly improving our knowledge of the Hubble constant.
The physical parameters of Cepheids and RR Lyrae stars will be constrained by {\it Gaia} photometry, parallax, metallicity and radial velocity (RV) measurements, which will also constrain the input physics of theoretical pulsation models. This will further improve the use of Cepheids and RR Lyrae stars as standard candles and stellar population tracers. In this paper we describe the Specific Objects Study (SOS) pipeline developed within the Coordination Unit 7 (CU7) of the Data Processing and Analysis Consortium (DPAC), the coordination unit in charge of the processing and analysis of variable sources observed by {\it Gaia}, to validate and fully characterise Cepheids and RR Lyrae stars observed by the spacecraft. A detailed description of the {\it Gaia} mission, its scientific goals and performance, as well as a comprehensive illustration of the {\it Gaia} DPAC structure and activities, can be found in \citet{gaiacol-prusti}. A summary of the astrometric, photometric and survey properties of {\it Gaia} Data Release 1 ({\it Gaia} DR1) and a description of the scientific quality and limitations of this first data release are provided in \citet{gaiacol-brown}. The photometric data set and the processing of the $G$-band photometry released in {\it Gaia} DR1 are thoroughly discussed in \citet{vanleeuwen16}, \citet{carrasco16}, \citet{riello16}, and \citet{evans16}. We note that a rather strict policy is adopted within DPAC concerning the processing and dissemination of {\it Gaia} data. Specifically, it was decided to be consistent in how DPAC does the processing and in what is published in {\it Gaia} releases. That is, we do not release results which are based on {\it Gaia} data that are not published. For instance, since no $G_\mathrm{BP}$, $G_\mathrm{RP}$ photometry is released in {\it Gaia} DR1, only the $G$-band time series photometry was used for processing and classification of the variable sources released in {\it Gaia} DR1. Furthermore, characterisation and classification of {\it Gaia} variable sources rely only on {\it Gaia} data. That is, we do not complement {\it Gaia}'s time series with external non-{\it Gaia} data to increase the number of data points or the time-span of the {\it Gaia} observations. Published literature data are used once the processing is completed, and only to validate results (i.e. characterisation and classification of the variable sources), which are, however, purely and exclusively based on {\it Gaia} data. If and how this may have limited the efficiency of the {\it Gaia} pipeline for Cepheids and RR Lyrae stars is extensively discussed in the paper and most specifically in Section~\ref{tailoring}. On the other hand, we are also confident that the next {\it Gaia} data releases will significantly improve both the census and the results for variable stars, and will also amend misclassifications if/where they have occurred. The SOS pipeline for Cepheids and RR Lyrae stars, hereinafter referred to as the SOS Cep\&RRL pipeline, is one of the latest stages of the general variable star analysis pipeline. Steps of the processing prior to the SOS Cep\&RRL pipeline are fully described in \citet{eyer16}, to which the reader is referred for details. Validation of the classification provided by the general variable star analysis pipeline is necessary, since Cepheids and RR Lyrae stars overlap in period with other types of variables (e.g. binary systems, long period variables, etc.).
SOS Cep\&RRL uses specific features such as the parameters of the light curve Fourier decomposition and diagnostic tools like the {\it Gaia} colour-magnitude diagram (CMD), the amplitude ratios, the period-amplitude ($PA$), $PL$, $PLC$ and $PW$ relations, and the Petersen diagram \citep{petersen73}, to check the classification, to derive periods and pulsation modes, and to identify multimode pulsators. RV measurements obtained by the RVS are also planned for use as soon as they become available, to identify binary/multiple systems. The main tasks of the SOS Cep\&RRL pipeline are: i) to validate and refine the detection and classification of Cepheids and RR Lyrae stars in the {\it Gaia} data base, provided by the general variable star analysis pipeline, by cleaning the sample from contaminating objects, i.e. other types of variables falling into the same period domain; ii) to check and improve the period determination and the light curve modelling; iii) to identify the pulsation modes and the objects with secondary and multiple periodicities; iv) to classify RR Lyrae stars and Cepheids into sub-types (fundamental mode -- RRab, first overtone -- RRc, and double mode -- RRd) according to the pulsation mode for the RR Lyrae stars, and DCEPs, ACEPs and T2CEPs for the Cepheids, along with identification of pulsation modes for the former two, and sub-classification into W Virginis (WVIR), BL Herculis (BLHER) and RV Tauri (RVTAU) types for the latter; v) to identify and flag variables showing modulations of the light curve, due to a binary companion or to the Blazhko effect \citep{blazhko}, that may falsify both the star brightness and the derived trigonometric parallax; vi) to use the pulsation properties and derive physical parameters (luminosity, mass, radius, effective temperature, metallicity, reddening, etc.) of confirmed {\it bona fide} RR Lyrae stars and Cepheids to be ingested into the {\it Gaia} main data base, by means of a variety of methods specifically tailored to these types of variables. The paper is organised as follows: Section~\ref{s2} provides a description of the whole architecture of the SOS Cep\&RRL pipeline, its diagnostic tools and their definition in the {\it Gaia} pass-bands. Section~\ref{s3new} presents the dataset and source selection on which the SOS Cep\&RRL pipeline was run and describes how the pipeline was specifically tailored and simplified to analyse the {\it Gaia} SEP $G$-band time series data of candidate Cepheids and RR Lyrae stars. Section~\ref{results2} presents results of the SOS Cep\&RRL analysis that are published in {\it Gaia} DR1. Finally, the main results and future developments of the pipeline are summarised in Section~\ref{conclusions}.
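For readers unfamiliar with the Fourier-decomposition quantities referred to above, the following Python sketch illustrates, in a simplified way, how a folded light curve can be fit with a truncated Fourier series and how the parameters $R_{21}$ and $\phi_{21}$ follow from the fit; it is a toy example on synthetic data and is not the DPAC/SOS Cep\&RRL implementation.
\begin{verbatim}
import numpy as np

def fourier_decompose(t, mag, period, n_harmonics=3):
    """Toy Fourier decomposition of a folded light curve (not the DPAC/SOS code).

    Model: mag(phase) = A0 + sum_i A_i cos(2 pi i phase + phi_i).
    Returns the amplitude ratios R_i1 and phase differences phi_i1 = phi_i - i*phi_1.
    """
    phase = (t / period) % 1.0
    cols = [np.ones_like(phase)]
    for i in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * i * phase), np.sin(2 * np.pi * i * phase)]
    design = np.column_stack(cols)
    coeff, *_ = np.linalg.lstsq(design, mag, rcond=None)

    amp, phi = [], []
    for i in range(1, n_harmonics + 1):
        a, b = coeff[2 * i - 1], coeff[2 * i]   # a = A_i cos(phi_i), b = -A_i sin(phi_i)
        amp.append(np.hypot(a, b))
        phi.append(np.arctan2(-b, a))
    ratios = {"R%d1" % (i + 1): amp[i] / amp[0] for i in range(1, n_harmonics)}
    phases = {"phi%d1" % (i + 1): (phi[i] - (i + 1) * phi[0]) % (2 * np.pi)
              for i in range(1, n_harmonics)}
    return ratios, phases

# Synthetic example: an assumed RRab-like light curve sampled ~70 times over 400 days.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 400.0, 70))
period = 0.56
ph = (t / period) % 1.0
mag = 19.0 + 0.5 * np.cos(2 * np.pi * ph) + 0.2 * np.cos(4 * np.pi * ph + 1.0)
mag += rng.normal(0.0, 0.02, t.size)

ratios, phases = fourier_decompose(t, mag, period)
print("R21 = %.2f, phi21 = %.2f rad" % (ratios["R21"], phases["phi21"]))
\end{verbatim}
In this sketch the period is assumed known; in practice it must first be recovered from a period search, and uncertainties on $R_{21}$ and $\phi_{21}$ can be estimated, for instance, by resampling the magnitudes within their photometric errors and repeating the fit.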
We have presented an overview of the Specific Objects Study (SOS) pipeline SOS Cep\&RRL, developed in the context of Coordination Unit 7 (CU7) of the {\it Gaia} Data Processing and Analysis Consortium (DPAC), to validate and fully characterise Cepheids and RR Lyrae variables observed by the spacecraft. The SOS Cep\&RRL pipeline was specifically tailored to analyse the {\it Gaia} $G$-band time-series photometry of sources in the South Ecliptic Pole (SEP) footprint, which covers an external region of the LMC, and to produce results for confirmed RR Lyrae stars and Cepheids to be released with {\it Gaia} Data Release 1 ({\it Gaia} DR1). Results presented in this paper have been obtained by applying the whole variable star analysis pipeline to the time-series photometry collected by {\it Gaia} during 28 days of Ecliptic Pole Scanning Law (EPSL) and 13 months of Nominal Scanning Law (NSL). In addition to positions and $G$-band time-series photometry, for confirmed Cepheids and RR Lyrae stars in the {\it Gaia} SEP, {\it Gaia} DR1 contains the following outputs of the SOS Cep\&RRL pipeline: period of pulsation, classification in type and pulsation mode, intensity-averaged mean magnitude, peak-to-peak amplitude and Fourier decomposition parameters $R_{21}$ and $\phi_{21}$. All quantities are provided with related uncertainties. The variable star inventory of {\it Gaia} DR1 includes 3,194 variables, which comprise 599 Cepheids and 2,595 RR Lyrae stars; 386 of them (43 Cepheids and 343 RR Lyrae stars) are new discoveries by {\it Gaia}. The published sources are distributed over an area extending 38 degrees on either side from a point offset from the centre of the LMC by about 3 degrees to the north and 4 degrees to the east. The vast majority, but not all, are located within the LMC. The sample also includes 63 bright RR Lyrae stars that belong to the MW halo, of which 24 are new {\it Gaia} discoveries. A number of improvements of the SOS Cep\&RRL pipeline are planned in view of forthcoming {\it Gaia} releases, of which the next one, {\it Gaia} DR2, is foreseen in 2017. They include: \begin{enumerate} \item Check of the pass-band transformations. In preparation for {\it Gaia} DR2, all tools and relations used by SOS Cep\&RRL to classify and characterise the {\it Gaia} sources will be re-derived directly from Cepheids and RR Lyrae stars released in DR1, overcoming the need for transforming to {\it Gaia} pass-bands quantities that are generally known in the Johnson-Cousins system. This will allow the $PL$ relationships adopted to identify and classify different types of Cepheids to be refined. On the other hand, as shown by Fig.~\ref{color-equation} in Section~\ref{con-for}, the $G$-band transformation (eq. (A.1)) worked rather well for the colour range spanned by the Cepheids and RR Lyrae stars in {\it Gaia} DR1. \item {\it Gaia} $G_\mathrm{BP}$, $G_\mathrm{RP}$ colours will become available with {\it Gaia} DR2. This will allow the use of $PW$ relations, whose reduced scatter compared to the $PL$ relations will further improve the classification of Cepheids. \item Double-mode RR Lyrae stars and multimode classical Cepheids (F/1O, 1O/2O, etc.) will be identified and fully characterised for {\it Gaia} DR2 by improving the detection algorithm to properly take into account the scatter in the folded light curve, thus reducing the number of false positives. \item Estimates of the errors in period, mean magnitude, peak-to-peak amplitude, etc. will be refined.
In particular, errors in the Fourier parameters $\phi_{ij}$ and $R_{ij}$, which are currently computed by propagation of the errors in $\phi_i$, $\phi_j$, $R_i$ and $R_j$, will be entirely computed via Monte Carlo simulations. \item A classifier will be developed to optimise the type and subtype classification of Cepheids and RR Lyrae stars performed by the SOS Cep\&RRL pipeline. \end{enumerate} The results for Cepheids and RR Lyrae stars shown in this paper demonstrate the excellent quality of {\it Gaia} photometry notwithstanding all limitations in the dataset and processing for {\it Gaia} Data Release 1. They nicely showcase the potential of {\it Gaia} and the promise of {\it Gaia} Cepheids and RR Lyrae stars for all areas of the sky in which an appropriate light curve sampling will be achieved.
16
9
1609.04269
1609
1609.03480_arXiv.txt
\noindent It has been conjectured that the speed of sound in holographic models with UV fixed points has an upper bound set by the value of the quantity in conformal field theory. If true, this would set stringent constraints for the presence of strongly coupled quark matter in the cores of physical neutron stars, as the existence of two-solar-mass stars appears to demand a very stiff Equation of State. In this article, we present a family of counterexamples to the speed of sound conjecture, consisting of strongly coupled theories at finite density. The theories we consider include $\cN=4$ super Yang-Mills at finite R-charge density and non-zero gaugino masses, while the holographic duals are Einstein-Maxwell theories with a minimally coupled scalar in a charged black hole geometry. We show that for a small breaking of conformal invariance, the speed of sound approaches the conformal value from above at large chemical potentials.
Quantitatively understanding the properties of strongly interacting matter in the cold and extremely dense region realized in the cores of neutron stars constitutes a longstanding problem in nuclear physics \cite{Lattimer:2004pg,Brambilla:2014jmp}. The situation is complicated by a lack of first principles field theory tools: perturbative QCD is only applicable at extremely high densities \cite{Kurkela:2009gj,Kurkela:2016was}, while lattice Monte Carlo simulations are altogether prohibited due to the sign problem \cite{deForcrand:2010ys}. Adding to this the fact that robust nuclear physics methods---including their modern formulations, such as the Chiral Effective Theory \cite{Machleidt:2011zz}---are only reliable below the nuclear saturation density $n_s\approx 0.16/\text{fm}^3$ \cite{Tews:2012fj}, it becomes clear that fundamentally new approaches to the problem are urgently needed. In this context, a highly promising avenue is the application of the holographic duality \cite{Maldacena:1997re}, which has indeed lately been applied to the description of both the nuclear \cite{Bergman:2007wp,Rozali:2007rx,Kim:2007vd,Kim:2011da,Kaplunovsky:2012gb,Ghoroku:2013gja,Li:2015uea,Elliot-Ripley:2016uwb} and quark matter \cite{Burikham:2010sw,Kim:2014pva,Hoyos:2016zke} phases inside a neutron star. The most fundamental quantity that governs the thermodynamic behavior of neutron star matter is its equation of state (EoS), i.e.~the functional dependence of its energy density $\varepsilon$ on the pressure $p$. It is, however, often more illuminating to inspect the derivative $\partial p/\partial\varepsilon$, which equals the speed of sound squared in the system, $v_s^2$. This quantity describes the stiffness of the matter---a property needed to build massive stars capable of resisting gravitational collapse into a black hole. Causality restricts this parameter to obey the relation $v_s^2<1$ (and thermodynamic stability guarantees that $v_s^2>0$), but it has been widely speculated that a more restrictive bound might exist as well. In particular, the lack of known physical systems in a deconfined phase with a speed of sound exceeding the conformal value $v_s^2=1/3$ has prompted a conjecture that this might represent a theoretical upper limit for the quantity \cite{Hohler:2009tv,Cherman:2009tw}, in the same spirit that $\eta/s=1/(4\pi)$ was initially thought to represent a lower limit for the shear viscosity to entropy ratio in any strongly coupled fluid \cite{Kovtun:2004de}. Some support for this argument comes from the fact that both the inclusion of a nonzero mass in a conformal system and the introduction of perturbatively weak interactions in an asymptotically free theory are known to lead to a speed of sound below the conformal limit. The speed of sound conjecture has been widely discussed in the context of neutron star physics, and it has been shown to be in rather strong tension with the known existence of two-solar-mass neutron stars \cite{Demorest:2010bx,Antoniadis:2013pzd}, which requires a very stiff EoS \cite{Bedaque:2014sqa}. This points toward a highly nontrivial behavior of $v_s$ as a function of the baryon chemical potential. Namely, at low densities the speed of sound is known to have a very small value, while its behavior at asymptotically large $\mu_B$ is a logarithmic \textit{rise} toward $v_s^2=1/3$.
This implies that should the speed of sound bound be violated somewhere, $v_s$ needs to possess at least two extrema, a maximum and a minimum, between which the quantity may either behave continuously or jump from the maximum to the minimum value. Recalling the success of holographic methods in the description of strongly coupled quark gluon plasma produced in heavy ion collisions \cite{CasalderreySolana:2011us,Brambilla:2014jmp}, it is clearly worthwhile to study the behavior of the speed of sound in holographic models of quark matter. Here, one, however, quickly realizes that the speed of sound bound is not easily violated: all known examples of asymptotically AdS${}_5$ geometries predict $v_s^2\leq 1/3$ \cite{Cherman:2009tw}.\footnote{For two exceptions to this that are, however, dynamically unstable, see \cite{Buchel:2009ge,Buchel:2010wk}.} The known violations of the bound occur in theories that do not flow to a four-dimensional conformal field theory (CFT) in the UV, and thus do not correspond to ordinary renormalizable field theories in four dimensions. Such examples include the $3+1$-dimensional brane intersections $D4-D6$, $D5-D5$, and $D4-D8$ (the Sakai-Sugimoto model \cite{Sakai:2004cn}), corresponding to the respective speeds of sound $v_s^2=1/2,1,2/5$ \cite{Jokela:2015aha,Itsios:2016ffv,Kulaxizi:2008jx}. It is well known that even after a compactification to $3+1$ dimensions, it is not possible to disentangle four-dimensional dynamics from the additional degrees of freedom that live on the higher-dimensional color branes, and thus the thermodynamic properties may be very different from a {\em bona fide} four-dimensional theory. Another class of examples that violate the bound are nonrelativistic geometries, such as the Lifshitz ones \cite{Kachru:2008yh,HoyosBadajoz:2010kd,Jokela:2016nsv,Taylor:2015glc}, for which a scaling symmetry fixes $\partial p/\partial \varepsilon=z/3$, with $z$ the dynamical exponent of the dual nonrelativistic theory. For any $z>1$, the EoS of the nonrelativistic theory is stiffer than the conformal one. In this case, the violation of the bound is in some sense trivial, since the dual theory is nonrelativistic. For the reasons listed above, it is very challenging to build a holographic description for dense strongly interacting quark matter that would allow for the existence of deconfined matter inside even the heaviest neutron stars observed. Indeed, in a recent study of hybrid neutron stars by the present authors \cite{Hoyos:2016zke}, where a holographic EoS was constructed for the quark matter phase, it was discovered that the stars became unstable as soon as even a microscopic amount of quark matter was present in their cores. This was attributed to a very strong first order deconfining phase transition in the model, which was ultimately due to the relatively low stiffness of the conformal quark matter EoS, corresponding to $v_s^2=1/3$. These findings are in line with the analysis of \cite{Kurkela:2014vha}, where it was seen that large speeds of sound were necessary to obtain stars containing nonzero amounts of quark matter in their cores. One possible resolution to the speed of sound puzzle is clearly that the quantity rises to a value $v_s>1/\sqrt{3}$ in the nuclear matter phase, then discontinuously jumps to a low value at a first order deconfinement phase transition, and finally slowly rises toward the conformal limit in the deconfined phase. 
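As a concrete illustration of the stiffness criterion discussed here, the following minimal Python sketch evaluates $v_s^2=\partial p/\partial\varepsilon$ by finite differences on a tabulated EoS and checks it against the conformal value $1/3$; the polytrope-like table used in the sketch is a made-up stand-in and not an EoS derived in this work.
\begin{verbatim}
import numpy as np

# Minimal sketch: numerical speed of sound v_s^2 = dp/d(epsilon) from a tabulated EoS.
# The table below is a made-up polytrope-like stand-in, not the holographic EoS of this work.

eps = np.linspace(150.0, 1500.0, 200)     # energy density grid (MeV/fm^3), assumed
p = 2.0e-4 * eps ** 2 + 10.0              # toy pressure relation p(eps) (MeV/fm^3), assumed

vs2 = np.gradient(p, eps)                 # finite-difference dp/deps
assert np.all((vs2 > 0.0) & (vs2 < 1.0)), "EoS must be thermodynamically stable and causal"

above_conformal = eps[vs2 > 1.0 / 3.0]
if above_conformal.size:
    print(f"v_s^2 exceeds 1/3 for eps > {above_conformal[0]:.0f} MeV/fm^3")
else:
    print("v_s^2 stays below the conformal value everywhere on this table")
\end{verbatim}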
In the paper at hand, we propose another viable scenario, involving a violation of the speed of sound bound in the deconfined phase and thereby paving the way to the existence of quark matter in neutron star cores. We do this by constructing a holographic EoS for dense deconfined matter that not only exhibits a speed of sound above the conformal limit, but in addition involves an asymptotically anti-de Sitter (AdS) spacetime. This provides an explicit counterexample to the common lore that asymptotically AdS spacetimes necessitate $v_s<1/\sqrt{3}$,\footnote{Note, however, that there is no apparent reason on the field theory side to suspect that the speeds of sound in theories that are UV complete would exhibit a universal upper bound.} and suggests that there may exist a large class of realistic holographic models for dense deconfined QCD matter that involve speeds of sound significantly above this bound. Our paper is organized as follows. In Sec. \ref{sec:speed}, we consider a class of models involving an Einstein-Maxwell action with a minimally coupled charged bulk scalar, and show that the speed of sound bound is violated in it for different values of the charge and the mass of the scalar. In Sec. \ref{sec:Rcharge}, we consider a top-down string theory setup in this class, dual to ${\mathcal N}=4$ super Yang-Mills theory at non-zero $R$-charge density, and confirm that the speed of sound is larger than the conformal value before the system becomes unstable toward the formation of a homogeneous condensate. In Sec. \ref{sec:conclusions}, we finally discuss the implications of our findings, while Appendixes \ref{app:nearext}, \ref{app:formulas}, and \ref{app:holoren} are devoted to a closer look at some technical details of our computation.
\label{sec:conclusions} According to current understanding, it appears likely that the speed of sound in neutron star matter exceeds the conformal value $v_s=1/\sqrt{3}$ in the dense nuclear matter phase \cite{Bedaque:2014sqa}. Knowing that for ultradense quark matter the speed of sound approaches this value from below, we are in practice left with two possibilities: either the quantity exhibits a discontinuous jump in a first order phase transition, or it must continuously first decrease and then increase its value in the quark matter phase. The latter scenario has, however, been disfavored due to the lack of first principles calculations exhibiting speeds of sound larger than the conformal value in deconfined matter. In addition to perturbative calculations, this statement holds true for all known holographic setups that flow to a four-dimensional CFT in the UV, which has prompted speculation of a more fundamental speed of sound bound \cite{Hohler:2009tv,Cherman:2009tw}. In the paper at hand, we have shown that the conjectured bound on the speed of sound in holographic models with UV fixed points is violated for a simple class of models involving RG flows triggered by relevant scalar operators charged under a global Abelian symmetry. Within this class, we were able to find a string theory example: a charged black hole dual to $\cN=4$ theory at finite $R$-charge density, deformed by a gaugino mass term. Since at very large densities an instability toward the formation of a homogeneous condensate will develop, we made sure that the violation occurs in the stable regime. We may conclude that \textit{there is no universal bound for the speed of sound in holographic models dual to ordinary four-dimensional relativistic field theories}. This comes as good news for everyone wishing to build realistic holographic models for high-density nuclear or quark matter. A natural extension of our work is clearly to go beyond the approximation of a small breaking of conformal invariance and to study how large the speed of sound can maximally become. Of particular interest is to investigate whether speeds large enough to allow the building of stable hybrid stars with holographic quark matter in their cores can be obtained. Continuing along these lines, it might be interesting to study the behavior of bottom-up models designed to match the properties of QCD at zero \cite{Gursoy:2008bu,Gursoy:2009jd,DeWolfe:2010he,Rougemont:2015wca,Rougemont:2015ona} and finite density \cite{Alho:2013hsa} and to see if they give phenomenologically sensible results at very small temperatures. A different but equally interesting challenge to pursue would be to find a top-down model with finite baryon (rather than $R$-charge) density. Quark matter is typically introduced by embedding probe branes in the geometry, and in the known examples where the theory is truly ($3+1$)-dimensional the bound is satisfied even at finite density. This might change upon considering the backreaction of the branes. Solutions with backreacted flavors at finite density have been recently constructed in \cite{Bigazzi:2011it,Bigazzi:2013jqa,Faedo:2014ana,Faedo:2015urf}. Another possibility is to take an alternative large-$N$ limit where (anti) fundamental fields are extrapolated to two-index antisymmetric representations \cite{HoyosBadajoz:2009hb} and where operators with baryon charge map to gravitational modes. We leave these investigations for future work.
16
9
1609.03480
1609
1609.07740_arXiv.txt
We present an analysis of late-time Hubble Space Telescope Wide Field Camera 3 and Wide Field Planetary Camera 2 observations of the site of the Type Ic SN 2007gr in NGC 1058. The SN is barely recovered in the late-time WFPC2 observations, while a possible detection in the later WFC3 data is debatable. These observations were used to conduct a multiwavelength study of the surrounding stellar population. We fit spatial profiles to a nearby bright source that was previously proposed to be a host cluster. We find that, rather than being an extended cluster, it is consistent with a single point-like object. Fitting stellar models to the observed spectral energy distribution of this source, we conclude that it is an A1-A3 Yellow Supergiant, possibly corresponding to a star with $M_{ZAMS} = 40M_{\odot}$. SN 2007gr is situated in a massive star association, with a diameter of $\approx 300\,\mathrm{pc}$. We present a Bayesian scheme to determine the properties of the surrounding massive star population, in conjunction with the Padova isochrones. We find that the stellar population, as observed in either the WFC3 or WFPC2 observations, can be well fit by two age distributions with mean ages of $\sim 6.3\,$Myr and $\sim 50\,$Myr. The stellar population is clearly dominated by the younger age solution (by factors of 3.5 and 5.7 from the WFPC2 and WFC3 observations, respectively), which corresponds to the lifetime of a star with $M_{ZAMS} \sim 30M_{\odot}$. This is strong evidence in favour of the hypothesis that SN 2007gr arose from a massive progenitor star, possibly capable of becoming a Wolf-Rayet star.
\label{sec:intro} All stars with initial masses $>8M_{\odot}$ are expected to end their lives as core-collapse Supernovae (SNe) or, possibly, to disappear due to direct infall onto black holes during core-collapse \citep{2008ApJ...684.1336K}. Recent efforts to detect SN progenitors have been successful for Type IIP SNe, confirming the progenitors of these particular SNe to be red supergiants (RSGs) with initial masses $8<M_{init}<16M_{\odot}$ \citep{2008arXiv0809.0403S, 2009Sci...324..486M}. The progenitors of the H-deficient Type Ibc SNe have, however, proved difficult to detect. Stellar evolution models predict that the progenitors for these types of SNe are either very massive single stars ($M_{init} \gtrsim 30-40M_{\odot}$), which undergo extreme levels of mass loss during phases as Luminous Blue Variables and Wolf-Rayet (WR) stars (removing their outer hydrogen layers), or lower mass stars in binaries, which are stripped of their outer hydrogen layers by mass transfer onto a binary companion \citep[see e.g.][]{eld04, 2013A&A...558A.131G,izzgrb,2004ApJ...612.1044P}. The progenitors for these types of SNe are expected to be blue, compact stars with high effective temperatures. Such stars lie on the blue side of the Hertzsprung-Russell (HR) diagram, whereas the RSG progenitors of Type IIP SNe occupy the low effective temperature, red side of the HR diagram. The review of pre-explosion observations of 12 Type Ibc SNe presented by \citet{2013MNRAS.436..774E} concluded that, statistically, it was unlikely that all could have arisen from single massive WR stars, suggesting a significant contribution to the Type Ibc SN rate from the binary progenitor channel. For the study by \citeauthor{2013MNRAS.436..774E}, the case of SN~2002ap was particularly important, with the deep pre-explosion images effectively ruling out all massive single stars \citep{2007MNRAS.381..835C}. Similar conclusions, based on the ratio of Type Ibc SNe to H-rich SNe and the slope of the Initial Mass Function (IMF), suggested that Type Ibc SNe were too common to arise solely from single massive stars \citep{2011MNRAS.412.1522S}. More recently, the progenitor of the Type Ibc SN iPTF 13bvn has been authoritatively detected in pre-explosion observations. While some analyses of the properties of the pre-explosion progenitor candidate suggest a possible single massive progenitor star, other studies of the pre-explosion source and models of the properties of the SN light curve are all consistent with a lower mass progenitor that was stripped through interaction with a binary companion \citep{2013ApJ...775L...7C,2013A&A...558L...1G,2014A&A...565A.114F,2014AJ....148...68B,2015MNRAS.446.2689E}. There have been some suggestions of bright, massive progenitor stars detected in pre-explosion observations of interacting Type IIn SNe, such as 2005gl \citep{2007ApJ...656..372G, galyam05gl}, 2010jl \citep{2011ApJ...732...63S} and the possible SN 2009ip \citep{2011ApJ...732...32F}. Limitations in the nature of the pre-explosion imaging for these SNe, however, make it difficult to draw firm conclusions about the nature of their progenitors. Such Type IIn SNe are, however, rare, and the fate of the majority of massive stars with $M_{init} > 20M_{\odot}$ remains uncertain. Here we report the results of late-time observations ($\sim 2.4$ years post-explosion) of the site of the Type Ic SN 2007gr in the galaxy NGC 1058, in an effort to gather further information about the nature of the progenitor star.
Pre-explosion observations of the site of SN 2007gr, in two Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC2) bands, were reported by \citet{2008ApJ...672L..99C}. \citeauthor{2008ApJ...672L..99C} did not find a source coincident with the SN position, but instead identified a nearby bright source as a possible host cluster and determined a mass estimate for the progenitor based on the age of such a cluster in comparison with the lifetimes of possible progenitors. The properties of this candidate host cluster were further analysed with additional HST WFPC2 observations by \citet{2014ApJ...790..120C}. We adopt a distance modulus to NGC 1058 of $\mu = 30.13\pm0.35$ \citep{1994ApJ...432...42S}, which was also used in the previous analyses of \citet{2008ApJ...672L..99C} and \citet{2014ApJ...790..120C}, and an explosion epoch of 2007 Aug 13 (JD $2\,454\,325.5 \pm 2.5$; \citealt{2009A&A...508..371H}). Applying the O3N2 metallicity measure \citep{2004MNRAS.348L..59P} to the measured strengths of emission lines of {\sc H ii} regions in NGC 1058 \citep{1998AJ....116..673F}, we estimate the oxygen abundance at the deprojected radius of SN 2007gr of $R/R_{25} = 0.487$ to be approximately solar \citep{2004A&A...417..751A}.
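As a quick, hedged illustration of the two conversions quoted above, the snippet below turns the adopted distance modulus into a linear distance and evaluates the \citet{2004MNRAS.348L..59P} O3N2 calibration; the emission-line ratios used are arbitrary placeholders rather than the measured values for NGC 1058.
\begin{verbatim}
import math

# Hedged illustration of the conversions used above; the line ratios are placeholders.

def distance_from_modulus(mu):
    """Distance in Mpc from a distance modulus mu = 5*log10(d / 10 pc)."""
    return 10 ** (mu / 5.0 + 1.0) / 1.0e6

def oxygen_abundance_o3n2(oiii_hb, nii_ha):
    """12 + log(O/H) from the Pettini & Pagel (2004) O3N2 calibration."""
    o3n2 = math.log10(oiii_hb / nii_ha)
    return 8.73 - 0.32 * o3n2

mu = 30.13
print(f"mu = {mu} -> D = {distance_from_modulus(mu):.1f} Mpc")

# Placeholder line ratios chosen only to yield a roughly solar abundance (~8.7):
print(f"12 + log(O/H) = {oxygen_abundance_o3n2(oiii_hb=1.1, nii_ha=1.0):.2f}")
\end{verbatim}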
\label{sec:disc} Despite the lack of a detection of a progenitor in pre-explosion images, the nature of the progenitor of SN 2007gr is particularly intriguing given the classification of the SN as a Type Ic SN and its location at almost the centre of a dense, young, massive star association, in close proximity to a star as massive as $40M_{\odot}$. From the WFC3 and WFPC2 observations there is immediate support for the hypothesis that SN 2007gr resulted from a Wolf-Rayet progenitor that was initially a massive star. SN 2007gr is an important Type Ic SN, principally because of the proximity of its host galaxy, which resulted in a detailed photometric and spectroscopic monitoring campaign \citep{2008ApJ...673L.155V, 2009A&A...508..371H}. Studies by \citet{2010Natur.463..516P} and \citet{2011ApJ...735....3X} reported that SN 2007gr also hosted a relativistic outflow, which might imply that SN 2007gr had a connection with the Gamma Ray Burst phenomenon. Using further X-ray and radio observations of SN 2007gr, \citet{2010ApJ...725..922S} robustly disagree with the previous conclusions and find SN 2007gr to only be expanding at trans-relativistic ($\approx 0.2$c) velocities. SN 2007gr is extraordinary for its very prompt peak in radio luminosity which, in comparison with the compilation of observations of Type II (including IIP, IIn and IIL) and Type Ib/c SNe compiled by \citet{2015arXiv150407988K}, implies high blast wave speeds that are only compatible with compact progenitors such as WR stars. Based on previous HST observations of the site of SN 2007gr, \citet{2008ApJ...672L..99C} and \citet{2014ApJ...790..120C} have presented mass estimates based on ages determined from the SED of Source A under the assumption that it was a cluster. Based on the high-resolution HST images presented here, we find Source A is consistent with a point-like stellar source, and not a cluster; hence, the previously derived masses for the progenitor are based on an assumption that we propose to be flawed. From a census of the ejecta, conducted using nebular phase spectroscopy of SN 2007gr, \citet{2010MNRAS.408...87M} calculated that the supernova resulted from the explosion of a relatively low mass star with $M_{ZAMS} \approx 15M_{\odot}$. This is at the lower limit for the masses we estimate for the progenitor from the age of the surrounding stellar population. Given the radically different nature of the two approaches, the source of the discrepancy between our mass estimate for the progenitor and the estimate presented by \citet{2010MNRAS.408...87M} is unclear. \citet{2013AJ....146...30K} used integral field spectroscopy of knots of $H\alpha$ emission surrounding the site of SN 2007gr to estimate an age for the progenitor in the range of $\tau = 6.3-7.8$, using the equivalent width of $H\alpha$ as a proxy for the age. The reported age range is consistent with the age for the stellar population found here. \citeauthor{2013AJ....146...30K} note, however, that given the limitations of the seeing conditions of the observations, individual sources in the vicinity of the SN were not resolved. Previously, \citet{2008ApJ...672L..99C} discussed a continuum-subtracted $H\alpha$ image of the site of SN 2007gr acquired on 2005 January 13 with the Wide Field Camera (WFC) of the Isaac Newton Telescope (INT). In Figure \ref{fig:disc:halpha}, we show contours corresponding to $H\alpha$ emission over the late-time WFC3/UVIS F336W observation.
Despite the significant difference in spatial resolution between the two images, it is clear that the site of SN 2007gr and the centre of the massive star association are not associated with significant $H\alpha$ emission. This has important consequences for the use of the $H\alpha$ equivalent width as an age proxy, since the SN position may itself not be associated with any $H\alpha$ emission. The use of pixel statistics to assess the proximity of SN positions to tracers of star formation (e.g. $H\alpha$ emission; for a review see \citealt{2015PASA...32...19A}) may also be undermined by the use of low-resolution ground-based imaging which, as in the case of SN 2007gr, might correlate $H\alpha$ emission with a SN where there is actually no association, further reinforcing the conclusions of \citet{2013MNRAS.428.1927C}. This highlights the fact that weak association with $H\alpha$ emission does not immediately imply a low mass progenitor.\\ Here, we have used observations of the surrounding stellar population covering a large wavelength range from the ultraviolet to the near-infrared. The observations, as presented in Figs. \ref{fig:cmd:wfc3} and \ref{fig:cmd:wfpc2}, probe a large section of the main sequence of massive stars, as well as a small number of stars that have evolved off the main sequence. The only requirement of this analysis is that the unseen progenitor is drawn from the same underlying distributions as the observed population of stars. Previously, studies of the ages of resolved stellar populations associated with the sites of CCSNe have concentrated on the nearby population within a radius of $\mathrm{\sim 50\,pc}$, under the assumption that stars in such proximity will be effectively co-eval with the progenitor, while at the same time limiting contamination from the background population \citep{2009ApJ...703..300G}. We have not placed a strict radius constraint on the size of the region associated with SN 2007gr, except for constraining, by eye, the size of the massive star association to $\sim 300\,\mathrm{pc}$. The Bayesian analysis presented here permits us to include as many additional populations as required to fit the observed data. In addition, a background stellar population could also be treated as an additional mixture component, described by stars away from the central region of interest. We have approximately estimated the spatial extent of the massive stellar population with which SN 2007gr is associated to have a diameter of $\sim 300\,\mathrm{pc}$. The ages we have derived from the WFPC2 and WFC3 observations are significantly shorter than the expected sound crossing time for a single cloud of this size \citep[$\sim 1 \,\mathrm{Myr\,pc^{-1}}$][]{2000ApJ...530..277E}. There are, however, massive star complexes in the Galaxy that cover large spatial scales and have small age spreads that are also incompatible with the expected sound crossing times. By comparison, \citet[][and references therein]{2002AJ....124..404P} studied the OB association U Sco, which has an approximate spatial extent of $\sim \mathrm{50 - 70\,pc}$, and found the stellar population could be described by a single age of 5 Myr, without any significant age spread, implying a single burst of star formation. \citeauthor{2002AJ....124..404P} suggested that star formation was triggered by a SN shock wave. Although U Sco has smaller dimensions than the massive star association in NGC 1058, it does illustrate that such large structures with small age spreads can exist.
Conversely, if the constituent stars of the association were formed in a single location, velocity dispersions of only $10 - 20\,\mathrm{km\,s^{-1}}$ would be required to reproduce the observed spatial distribution of stars in the lifetime of the young age solution. It is also worth contrasting the dense environment in which the apparently massive progenitor of SN~2007gr resides with the relatively sparse neighbourhood of SN 2002ap, for which deep pre-explosion images ruled out all single massive progenitor scenarios \citep{2007MNRAS.381..835C,2013MNRAS.436..774E}. An important factor not included in our Bayesian treatment is the possibility of differential extinction. We have assumed that the stars that make up the two observed populations share the same, single-valued reddening. We note that, similarly to the differences in age inferred from the WFC3 and WFPC2 observations, the difference in wavelength coverage may be behind the difference in extinction, with observations in the UV more heavily affected by reddening than the corresponding redder WFPC2 observations. If the extinction for each star arises from a distribution of extinctions, rather than a single value, this could also be implemented using a hierarchical scheme. Care, however, would need to be taken to avoid any biases introduced due to the requirement that $A_{V} \geq 0$. It is expected that the inclusion of differential reddening would make the age distributions narrower, as a single isochrone could accommodate a larger range of colours than under the assumption of a single value for the extinction. Such analysis of the host environments of SNe, as presented here, has been previously employed by \citet{2009ApJ...703..300G}, \citet{2011ApJ...742L...4M}, and \citet{2014ApJ...795..170J} for estimating the progenitor masses for CCSNe. \citet{2014ApJ...791..105W} found that for the Type IIP SNe in their sample they were able to achieve similar estimates for the median mass as those derived directly for the progenitor in pre-explosion images, as was also achieved by \citet{2011ApJ...742L...4M} from their analysis of the population surrounding the Type IIb SN 2011dh. Although, as evidenced by Figure \ref{fig:res:mass}, there are large uncertainties associated with this approach, the case of SN 2007gr does highlight that the lack of a detection of a progenitor in pre-explosion images does not imply a low mass. \citet{2015PASA...32...16S} argues that the lack of detection of progenitors of high mass stars ($\gtrsim 18M_{\odot}$) suggests that these stars may not be exploding at all as SNe, but rather collapsing to form black holes. Caution is required in drawing such a conclusion, as there are a number of major issues concerning the analysis of the pre-explosion detections of the progenitors of CCSNe that have yet to be resolved. A fundamental problem in the consideration of the ensemble of progenitor detections and non-detections in a statistical context is that it is unclear how the generally poor quality of available pre-explosion observations effectively censors the underlying distribution of SN progenitors. In the case of the pre-explosion observations of SN 2007gr, consisting of shallow WFPC2 F450W and F814W images, it was not surprising that a possible Wolf-Rayet progenitor might not be detected \citep{2008ApJ...672L..99C}.
We used the SEDs of \citet{2012A&A...540A.144S}\footnote{http://www.astro.physik.uni-potsdam.de/$\sim$wrh/PoWR/powrgrid1.html} (over a temperature range of $45\,000 \leq T \leq 200\,000\,\mathrm{K}$ and transformed radius of $-0.5\leq \log (R_{t}/R_{\odot}) \leq 1.6$) to probe the sensitivity of the pre-explosion observations (see Section \ref{sec:res:location}) to a WC progenitor. Using the extinction estimate for the host stellar population determined here, the minimum luminosity that might yield a detectable progenitor ($p(\mathrm{detect}) = 0.5$) in the pre-explosion observations is $\log (L/L_{\odot}) \sim 6.5$. This limit is significantly higher than the luminosities predicted for stars that will end their lives as WR stars \citep[$\log (L/L_{\odot}) < 6.0$;][]{eld04,2012A&A...544L..11Y} and the luminosities derived by \citeauthor{2012A&A...540A.144S} for a sample of Galactic WC stars. Based on stellar populations associated with SN remnants in M31 and M33, \citet{2014ApJ...795..170J} conclude that the maximum mass for a star to explode may be as high as $\sim 35-45M_{\odot}$. In the absence of pre-explosion data of sufficient quality to detect the progenitors of certain types of SNe, in particular those from higher mass stars, this analysis could play an important role in establishing the fate of those stars that do not lead to progenitors as easy to detect as RSGs. It is clear from analyses such as this that contextual information about the locations of SN explosions can be just as important as direct detection of progenitors. Indeed, if a progenitor is detected, a significant amount of information might be missed by ignoring the stars nearest to it. In studying the population of stars associated with a SN, we may potentially directly probe the physical conditions in which the progenitor formed and in which its entire evolution took place, conditions to which some observational proxies (such as the $H\alpha$ emission equivalent width) might be insensitive in individual cases, providing insight only for a large statistical sample. \begin{figure*} \includegraphics[height=6.5cm, angle=270]{2007gr_halpha.pdf} \caption{Late-time WFC3/UVIS observation of the site of SN 2007gr with contours overlaid corresponding to a continuum-subtracted INT WFC narrow band $H\alpha$ image.} \label{fig:disc:halpha} \end{figure*}
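The population analysis described above ultimately amounts to weighing a small number of age components against the data for each star. As a schematic, hedged illustration only (not the actual Bayesian scheme or the Padova isochrones used here), the following Python sketch estimates the mixture weights of two age components by expectation-maximisation from a synthetic matrix of per-star likelihoods.
\begin{verbatim}
import numpy as np

# Schematic two-component mixture-weight estimate; the likelihood matrix is synthetic
# and stands in for per-star likelihoods computed from isochrones and photometry.

rng = np.random.default_rng(0)
n_young, n_old = 70, 20
# L[i, k] = likelihood of star i under age component k (columns: young, old); assumed values.
L = np.vstack([
    np.column_stack([rng.uniform(0.5, 1.0, n_young), rng.uniform(0.0, 0.2, n_young)]),
    np.column_stack([rng.uniform(0.0, 0.2, n_old),  rng.uniform(0.5, 1.0, n_old)]),
])

w = np.array([0.5, 0.5])                     # initial mixture weights
for _ in range(200):                         # EM iterations for the weights only
    resp = w * L
    resp /= resp.sum(axis=1, keepdims=True)  # responsibilities per star
    w = resp.mean(axis=0)                    # M-step: updated weights

print(f"weight(young) = {w[0]:.2f}, weight(old) = {w[1]:.2f}, ratio = {w[0] / w[1]:.1f}")
\end{verbatim}
In a full treatment the component likelihoods would themselves be marginalised over age, extinction and the isochrone parameters, but the weighting step illustrated here is what produces the dominance factors quoted in the abstract.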
16
9
1609.07740
1609
1609.02023_arXiv.txt
{}{We present and compare the distribution of two shock tracers, SiO and HNCO, in the Circumnuclear Disk (CND) of NGC 1068. We aim to determine the causes of the variation in emission across the CND.}{SiO$(3-2)$ and HNCO$(6-5)$ emission has been imaged in NGC 1068 with the Plateau de Bure Interferometer (PdBI). We perform an LTE and RADEX analysis to determine the column densities and physical characteristics of the gas emitting these two lines. We then use a chemical model to determine the origin of the emission.}{There is a strong SiO peak to the East of the AGN, with weak detections to the West. This distribution contrasts with that of HNCO, which is detected more strongly to the West. The SiO emission peak in the East is similar to the peak of the molecular gas mass traced by CO. HNCO emission is offset from this peak by as much as $\sim$80 pc ($\leqslant$1$''$). We compare velocity-integrated line ratios in the East and West. We confirm that SiO emission strongly dominates in the East, while the reverse is true in the West. We use RADEX to analyse the possible gas conditions that could produce such emission. We find that, in both East and West, we cannot constrain a single temperature for the gas. We run a grid of chemical models of potential shock processes in the CND and find that SiO is significantly enhanced during a fast (60 km s$^{-1}$) shock but not during a slow (20 km s$^{-1}$) shock, nor in gas not subjected to shocks at all. We find the inverse for HNCO, whose abundance increases during slow shocks and in warm non-shocked gas.}{High SiO and low HNCO indicate a fast shock, while high HNCO and low SiO indicate either a slow shock or warm, dense, non-shocked gas. The East Knot is therefore likely to contain gas that is heavily shocked. From chemical modelling, gas in the West Knot may be non-shocked, or may be undergoing a much milder shock event. When the RADEX results are taken into account, the milder shock event is the more likely of the two scenarios.}
Molecular emission is now routinely used to probe and trace the physical and chemical processes in external galaxies. Over the past 20 years or so, different molecules have been found to trace different gas components within a galaxy (for example HCO, HOC$^+$ in PDRs (e.g. \citealt{2004ApJ...616..966S, 2002ApJ...575L..55G}), HCN and CS for dense gas (e.g. \citealp{2004ApJ...606..271G, 2008ApJ...685L..35B})). However, it is seldom possible to identify one molecular species with one gas component only, as often energetics play a key role in shaping the spectral energy distribution of the molecular ladders. Of particular interest to this study are the molecules SiO and HNCO, which are both well known tracers of shocks \citep{1997ApJ...482L..45M, 2010A&A...516A..98R}. Both these molecules have been observed and used as shock tracers in external galaxies \citep{2005ApJ...618..259M, 2006A&A...448..457U, 2009ApJ...694..610M, 2010A&A...519A...2G}. HNCO may be formed mainly on dust grain mantles \citep{2015MNRAS.446..439F} or possibly in the gas phase, followed by freeze out to the icy mantles \citep{2015MNRAS.449.2438L}. In either case, its location in the outer regions of the dust grain means that it is easily sublimated even in weakly shocked regions; hence HNCO may be a particularly good tracer of low-velocity shocks. Silicon, on the other hand, is partially depleted from the gas to make up the dust grain itself. This silicon is only released back into the gas phase in higher-velocity shocked regions through sputtering. Once it is in the gas phase, it can react with molecular oxygen or a hydroxyl radical to form SiO \citep{1997A&A...321..293S}, which can then be used to trace more heavily shocked regions. Thus the concomitant detection of HNCO and SiO in a galaxy where shocks are believed to take place may give us a fuller picture of the shock history of the gas. NGC 1068 is a well-studied nearby (D = 14 Mpc, \citealt{1997Ap&SS.248....9B}; 1$''$ $\approx$ 70 pc) Seyfert 2 galaxy. Its molecular gas is distributed over three regions \citep{2000ApJ...533..850S}: a starburst ring with a radius of $\sim$1.5 kpc, a $\sim$2 kpc stellar bar running north-east from a circumnuclear disk (CND) of radius $\sim$200 pc. {\citet{2010A&A...519A...2G} used the Plateau de Bure Interferometer (PdBI) to map the galaxy and found strong detections of SiO$(2-1)$ in the East and West of the CND. The SiO kinematics of the CND point to an overall rotating structure, distorted by non-circular and/or non-coplanar motions. The authors concluded that this could be due to large scale shocks through cloud-cloud collisions, or through a jet-ISM interaction. However, due to strong detections of CN not easily explained by shocks, they also suggest that the CND could be one large X-ray dominated region (XDR)}. More recently, the CND has been mapped at very high resolution with ALMA in several molecular transitions \citep{2014A&A...567A.125G, 2015ASPC..499..109N, 2016ApJ...823L..12G}. In Garcia-Burillo et al. (2014), five chemically distinct regions were found to be present within the CND: the AGN, the East Knot, the West Knot, and regions to the north and south of the AGN (CND-N and CND-S). Viti et al. (2014) combined these ALMA data with PdBI data and determined the physical and chemical properties of each region using a combination of CO rotation diagrams, LVG models and chemical modelling.
It was found that a pronounced chemical differentiation is present across the CND and that each sub-region could be characterised by a three-phase interstellar medium: one of these components comprises shocked gas and seems to be traced by {a high-$J$ (7--6) CS} line. We now resolve the CND in both SiO$(3-2)$ and HNCO$(6-5)$ and couple these with previous lower-$J$ observations. In Section 2 we describe the observations, while in Section 3 we present the molecular maps. In Section 4 we present the spectra at each location. In Section 5 we perform an LTE and RADEX analysis in order to constrain the physical conditions of the gas, as well as chemical modelling to determine its origin. We briefly summarise our findings in Section 6.
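For orientation, the LTE part of such an analysis reduces, in the optically thin Rayleigh-Jeans limit, to the standard column-density expression $N = \frac{8\pi k \nu^{2}}{h c^{3} A_{ul}}\,\frac{Q(T_{\rm ex})}{g_{u}}\,e^{E_{u}/kT_{\rm ex}} \int T_{\rm B}\,dv$. The sketch below implements this formula in Python; the SiO$(3-2)$ constants and the integrated intensity are placeholder values that should be checked against CDMS/LAMDA, and the snippet is not the analysis code used in this paper.

\begin{verbatim}
import numpy as np
from astropy import units as u, constants as const

def lte_column_density(W, nu, A_ul, g_u, E_u, T_ex, Q):
    """Optically thin, Rayleigh-Jeans LTE column density from an integrated
    intensity W = int T_B dv:
    N = (8 pi k nu^2)/(h c^3 A_ul) * Q(T_ex)/g_u * exp(E_u/(k T_ex)) * W."""
    prefac = 8.0 * np.pi * const.k_B * nu**2 / (const.h * const.c**3 * A_ul)
    return (prefac * (Q / g_u) * np.exp((E_u / T_ex).decompose()) * W).to(u.cm**-2)

# Placeholder SiO(3-2) inputs -- the frequency, A-coefficient, E_u and Q should
# be taken from CDMS/LAMDA, and W from the actual spectra.
N_SiO = lte_column_density(W=5.0 * u.K * u.km / u.s,
                           nu=130.269 * u.GHz,
                           A_ul=1.06e-4 / u.s,
                           g_u=7,              # 2J+1 for J=3
                           E_u=12.5 * u.K,     # upper-level energy / k
                           T_ex=20.0 * u.K,
                           Q=19.5)             # partition function at T_ex
print(N_SiO)
\end{verbatim}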
We have used the Plateau de Bure Interferometer to map two shock tracers, SiO and HNCO. SiO$(3-2)$ is detected strongly to the East of the AGN and to some extent to the West. HNCO$(6-5)$ is detected more strongly to the West, but is also detected in the East. The emission of the two lines is slightly offset from one another. We extracted spectra for analysis from the four peak emission locations of both lines. This allowed us to complete a RADEX radiative transfer modelling using our observations and SiO$(2-1)$ and HNCO$(5-4)$ from the literature. We used parameters for the East and West Knot obtained by \citet{2014A&A...570A..28V} through a modelling of HCN, CS, CO and HCO$^{+}$. We found that, in order to obtain a fit to our observations, the gas density n(H$_2$) must be higher than \num{e4} cm$^{-3}$. We also found that, in general over the four locations, it was very hard to constrain a temperature. This may indicate that the gas as traced by HNCO and SiO is not at a constant temperature, consistent with the varying temperature of a shocked region, although it could also be due to the gas temperature exceeding the upper energy levels of our transitions. In order to further investigate the origin of the SiO and HNCO emission we completed chemical modelling. We modelled a representative fast shock (60 km s$^{-1}$), slow shock (20 km s$^{-1}$) and no shock. We found that SiO is significantly enhanced during the fast shocks, due to grain core sputtering of Si. It was slightly enhanced during the slow shock and was also produced to some extent in the no-shock models. We found that HNCO actually decreased in the fast shock models due to the destruction of its precursor, NO. This occurs through reaction with atomic hydrogen and only proceeds at the very high temperatures found during the fast shock. Consistent with this, the HNCO abundance increases significantly during the slow shock. HNCO also increased in abundance without the need for a shock, in warm dense gas. This leads us to conclude that a high SiO but low HNCO abundance is indicative of a fast shock, whereas a low SiO and high HNCO abundance may indicate the presence of a slow shock, or of warm, dense, non-shocked gas. Observations of the East Knot therefore suggest that gas in the region is heavily shocked. The offset of the HNCO peak from the SiO peak suggests that there may be regions in the East Knot away from the main shock that are undergoing a milder shock (particularly around our East Knot 2). The weak SiO emission and stronger HNCO emission in the West Knot suggest that no fast shocks are occurring. There may be slower shocks, or the gas may be warm, dense and non-shocked. The results of our RADEX analysis, where we struggle to constrain temperature in the West Knot, point to the milder shocks as the more likely solution. \FloatBarrier
16
9
1609.02023
1609
1609.02215_arXiv.txt
We report follow-up studies of 35 recently-discovered cataclysmic variables (CVs), 32 of which were found in large, automated synoptic sky surveys. The objects were selected for observational tractability. For 34 of the objects we present mean spectra and spectroscopic orbital periods, and for one more we give an eclipse-based period. Thirty-two of the period determinations are new, and three of these refine published estimates based on superhump periods. The remaining three of our determinations confirm previously published periods. Twenty of the stars are confirmed or suspected dwarf novae with periods shorter than 3 hours, but we also find three apparent polars (AM Her stars), and six systems with $P > 5$ h, five of which have secondary stars visible in their spectra, from which we estimate distances when possible. The orbital period distribution of this sample is very similar to that of previously discovered CVs.
Cataclysmic variable stars (CVs) are a broad class of binaries that include a white dwarf primary star accreting via Roche lobe overflow from a close companion, which usually resembles a cool main-sequence star; \citet{warner95} gives a useful introduction. CVs are compact, low-luminosity systems with typical orbital periods $P_{\rm orb}$ of only a few hours. Their evolution is driven largely by angular momentum loss, which over most of a CV's lifetime causes the orbit to gradually shrink. However, at a period near 75 min, the slope of the secondary's mass-radius relation reverses (at least for hydrogen-burning secondaries) which causes the orbital period to increase as angular momentum is lost. Single white dwarfs evolve from red giants, which attain radii much larger than typical CV systems. Unless a white dwarf is formed elsewhere and later captured -- an unlikely event in the field -- a CV must therefore have passed through common-envelope evolution at some point, during which the secondary was engulfed by the primary, leading to rapid loss of angular momentum. Modeling the common-envelope stage with {\it a priori} physics is extremely difficult, so the usual practice is to use simple parameterizations to describe the process, and match model outputs to the observed population (see, e.g., \citealt{goliaschnelson15}). Constraining the calculations requires an accurate accounting of the CV population. CVs are rather rare, low-luminosity systems, so this requires consideration of the channels through which they are discovered. The first CVs were discovered because they were variable stars, most conspicuously classical novae, driven by thermonuclear explosions of hydrogen-rich material accreted onto the white dwarf, and also dwarf novae, in which the accretion disk around the white dwarf transitions from time to time into a much brighter state. Other CVs with less dramatic variability were found by color selection, especially ultraviolet excesses. The Palomar-Green survey found a sizable number of CVs \citep{green82, ringwald93}, and more recently the Sloan Digital Sky Survey (SDSS) obtained spectra of 285 color-selected CVs and CV candidates, most of which were new discoveries (see \citealt{szkodyviii} and previous papers in that series). Finally, many CVs emit X-rays; X-ray surveys especially tend to find CVs in which the white-dwarf primary is strongly magnetized (see, e.g., \citealt{thorhalpern13, halpernthor15}). In recent years several synoptic sky surveys have generated a torrent of new CV discoveries. The Catalina Real Time Transient Surveys (CRTS; \citealt{drakecrtts, breedt14, drake14}) have discovered over 1000 CVs on the basis of variability, most of them being discovered by the Catalina telescope and denoted as ``CSS''. The Russian MASTER project \citep{lipunov10} has also produced a large number of new discoveries, and most recently the ASAS-SN project \citep{shappeecurtain}, has come into its own. ASAS-SN uses 14-cm aperture lenses and reaches a limiting magnitude of only $\sim 16$, but is now covering $\sim 20,000$ square degrees per night under favorable conditions (K. Stanek, private communication). Many transient sources fade back to magnitudes beyond the reach of spectroscopic followup on 2-m class telescopes, but a substantial minority remain tractable. ASAS-SN's fast cadence and relatively shallow limiting magnitude make it an especially rich source of CVs amenable to further study. CVs also continue to be discovered through color selection. 
As examples, \citet{carter13} searched the SDSS for AM CVn stars (ultra-short period CVs in which both components are degenerate dwarfs), and \citet{kepler16} searched for hot white dwarfs and turned up twelve CVs as a byproduct. To place the many newly-discovered CVs in context, it is necessary to follow them up, in particular to find orbital periods. Periods can often be found from photometry, but in non-eclipsing systems radial-velocity spectroscopy gives more definitive results. Spectroscopy also provides a wealth of ancillary information. We present here follow-up observations, mostly spectroscopic, of 35 newly-discovered CVs, 32 of which were found by CRTS, ASAS-SN, and/or MASTER. For all, we determine orbital periods. Thirty-two of the 35 periods are apparently not published elsewhere, though superhump periods were available for three of the objects. For three other objects we confirm periods that have appeared in the literature. The objects studied here were selected mostly on the basis of their observational tractability, and the lack of a published orbital period (although in three cases we became aware of published periods before submitting this paper). We mostly avoided systems with known superhump periods, since their orbital periods can be deduced fairly well from this information (see e.g. \citealt{unveils}). JRT maintains a master list of CVs and updates it regularly with new discoveries from the ASAS-SN, MASTER, and CSS lists, and other discoveries as they come to his attention. Before every observing run, this list is searched and targets that might yield useful spectra -- in practice, targets that are $\sim 19.5$ mag or brighter at minimum -- are culled out. Final selection is made at the telescope using quick-look spectra and radial velocities measured in real time. In practice, the selection process favored objects with strong H$\alpha$ emission, and disfavored objects with strong continua and weak emission such as dwarf novae in outburst and some novalike variables.
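As an illustration of the radial-velocity period search implied here, a Lomb-Scargle periodogram of a sparsely sampled velocity curve can be computed with astropy; the time series below is synthetic, and the cadence, amplitude, noise level and frequency grid are arbitrary choices rather than the observing setup used for these stars.

\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical emission-line radial velocities (km/s) over part of a night;
# the values are illustrative only.
t = np.sort(np.random.uniform(0.0, 0.3, 40))                  # days
rv = 120.0 * np.sin(2.0 * np.pi * t / 0.15) + np.random.normal(0.0, 15.0, t.size)

ls = LombScargle(t, rv)
freq, power = ls.autopower(minimum_frequency=2.0,             # cycles / day
                           maximum_frequency=30.0)
P_best = 1.0 / freq[np.argmax(power)]
print(f"candidate orbital period ~ {P_best * 24.0:.2f} hr")
\end{verbatim}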
In the last column of Table~\ref{tab:star_info}, we give the subtype of each of the objects, and briefly note other notable characteristics. The majority of the stars were found in variability surveys, so it is not surprising that most are dwarf novae, or U Geminorum stars; a few are listed as `UG:' because their outburst history is not known. Twenty of the known or suspected dwarf novae have $P_{\rm orb}$ shorter than $\sim 3$ hr, the period range in which superoutbursts and superhumps are found; however, only four of the 20 have known superhump periods. A superhump period would be especially desirable for SDSS1029+48, which appears to be a low mass-transfer rate system, with an apparent white dwarf component in its spectrum, and broad, double-peaked lines that show little orbital motion. Its orbital period of 91 min, (a cycle-count alias is unlikely) is significantly longward of the $\sim 75$-min period minimum for CVs with `normal' hydrogen-rich secondaries, so it is a candidate `post-bounce' system. If this is the case, it should have a superhump period excess that is anomalously small for its orbital period. While most of the sample are either dwarf novae or resemble them spectroscopically, a few are not. Three objects appear to be polars, or AM Her stars -- CSS0357+10 (already classified by \citealt{schwope12}), DDE23, and CSS2335+12. Two others, SDSS1429+00 and CSS2319+33, appear to be novalike variables in the period range frequented by SW Sex stars. We find eclipses in four of the objects. The most unusual of these is ASAS-SN 15aa, which has a relatively long $P_{\rm orb}$ of 9.01 hr, a large secondary-star contribution, a sizeable ellipsoidal variation, and a distinctive, asymmetric, shallow eclipse. Another, ASAS-SN 15cw, eclipses deeply on a 114-min period and shows a spectacular rotational disturbance in its emission line velocities. In six of the stars we detect late-type secondaries well enough to classify them and estimate distances. In most of them the secondary's spectral type is as expected given $P_{\rm orb}$, but the secondary in ASAS-SN 15cm is anomalously warm for its 5.0-hr period. That system also shows a strong photometric modulation that appears to be mostly ellipsoidal. Two objects, ASAS-SN 14dx and ASAS-SN 14ag, show significant proper motions and appear to be relatively nearby CVs that have escaped attention until their recent discovery. \citet{unveils} showed that the sample of CVs discovered by SDSS finally revealed the long-sought `spike' in the period distribution just above the minimum period for hydrogen-burning CVs. It is interesting to ask whether, in an analogous fashion, the present sample might show any systematic difference from the sample of previously-known CVs. For the period distribution, the answer to this is evidently ``no''. Fig.~\ref{fig:cdf} shows a comparison of the cumulative distribution functions (CDFs) of the periods presented in this paper, most of which appear here for the first time, and the periods of 999 CVs and related objects with well-determined periods listed in version 7.23 of the Ritter-Kolb catalog (\citealt{rkcat}; hereafter RKcat). The distributions are remarkably similar, except for the small tail of very short period systems (mainly AM CVn systems) present in RKcat. Both CDFs show a predominance of systems below the 2-3 hr `gap', and show a distinct change of slope in the gap. 
The present sample does have three systems with $P_{\rm orb} > 8$ hr, whereas fewer might be expected given the RKcat distribution, but the differences between the two distributions are not statistically significant by the Kolmogorov-Smirnov test. It is interesting that so many new CVs are still being turned up. \citet{breedt14} argue that the Catalina survey has already discovered most of the high-accretion rate dwarf novae within its survey area. In our sample, 30 of the 35 objects were discovered by the ASAS-SN, Catalina, or MASTER surveys, and therefore have well-defined discovery dates. Of these, 5 were discovered in 2015, 8 in 2014, 6 in 2013, and 11 before that. These numbers are small, but do not show an obvious decline in the discovery rate. It may take some time before we can be confident that even the dwarf nova sample is close to complete.
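For reference, the two-sample Kolmogorov-Smirnov comparison mentioned above can be reproduced schematically as follows; the period samples here are randomly generated stand-ins for the 35 new periods and the 999 RKcat periods, so only the mechanics, not the numbers, carry over.

\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

# Stand-in period samples (days); the real comparison uses the 35 new periods
# and the 999 well-determined RKcat periods.
new_periods = np.random.lognormal(mean=np.log(0.12), sigma=0.6, size=35)
rkcat_periods = np.random.lognormal(mean=np.log(0.11), sigma=0.7, size=999)

stat, pvalue = ks_2samp(new_periods, rkcat_periods)
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.2f}")
# A large p-value means the two period distributions are statistically consistent.
\end{verbatim}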
16
9
1609.02215
1609
1609.09325_arXiv.txt
In this work we report on high-resolution IR absorption studies that provide a detailed view of how the peripheral structure of irregular polycyclic aromatic hydrocarbons (PAHs) affects the shape and position of their 3-$\mu$m absorption band. For this purpose we present mass-selected, high-resolution absorption spectra of cold and isolated phenanthrene, pyrene, benz[a]anthracene, chrysene, triphenylene, and perylene molecules in the 2950-3150 cm$^{-1}$ range. The experimental spectra are compared with standard harmonic calculations and with anharmonic calculations using a modified version of the SPECTRO program that incorporates a Fermi resonance treatment utilizing intensity redistribution. We show that the 3-$\mu$m region is dominated by the effects of anharmonicity, resulting in many more bands than would have been expected in a purely harmonic approximation. Importantly, we find that anharmonic spectra as calculated by SPECTRO are in good agreement with the experimental spectra. Together with previously reported high-resolution spectra of linear acenes, the present spectra provide us with an extensive dataset of spectra of PAHs with a varying number of aromatic rings, with geometries that range from open to highly-condensed structures, and featuring CH groups in all possible edge configurations. We discuss the astrophysical implications of the comparison of these spectra for the interpretation of the appearance of the aromatic infrared 3-$\mu$m band, and for features such as the two-component emission character of this band and the 3-$\mu$m emission plateau.
Polycyclic Aromatic Hydrocarbons (PAHs) are a family of molecules consisting of carbon and hydrogen atoms combined into fused benzenoid rings. From a chemical and physical point of view they have properties that have led to exciting applications in novel materials \citep{Wan2012, Sullivan2008}, but at the same time also to rather cautious use on account of their health-related impact \citep{Kim2013, Boffetta}. For astrophysics they play a particularly important role since PAHs have been proposed as main candidates for carriers of the so-called aromatic infrared bands (AIBs), a series of infrared emission features that are ubiquitously observed across a wide variety of interstellar objects. These emission features are thought to be nonthermal in nature and to arise from radiative cooling of isolated PAHs that have been excited by UV radiation \citep{Sellgren1984}. Since they offer such a powerful probe for carbon evolution in space, these bands have been subject to extensive experimental and theoretical research with the ultimate aim being a rigorous identification of the molecular structure of the AIB carriers. Significant progress has been made in this respect with infrared (IR) studies on PAH species deposited in a cold (10 K) rare-gas matrix (for example, \citealt{Hudgins1998,Hudgins1998b,Hudgins2000}). Although such cooling conditions allow for an increase in spectral resolution as compared to room-temperature experiments, they lead at the same time to matrix-induced effects that are not well understood and hard to predict. Gas-phase studies are much preferred but have so far predominantly been restricted to IR absorption studies of hot (1000 K) vaporized PAHs \citep{Joblin1995a,Joblin1994b} or at best under room-temperature conditions (e.g., \citealt{Pirali2009,Cane1997}). Due to their low volatility, high-resolution studies of low-temperature, isolated PAH molecules have for a long time remained out of reach, with the notable exception of the cavity ring-down spectroscopy (CRDS) studies of \citet{Huneycutt2004} on small PAHs, although contamination originating from isotopologues and other PAH species or impurities remained a point of concern. Recently, we have applied IR-UV double resonance laser spectroscopic techniques on PAHs seeded in supersonic molecular beams. In combination with mass-resolved ion detection these techniques allow for recording of mass- and conformation-selected IR absorption spectra with resonance band widths down to 1 cm$^{-1}$ \citep{Maltseva2015}. Under such high-resolution conditions, IR absorption spectra of PAHs in the 3-$\mu$m region turn out to display an unexpectedly large number of strong bands, and certainly many more than expected on the basis of a simple harmonic vibrational analysis. Such a conclusion is all the more pertinent as theoretical studies of IR spectra of PAHs are typically performed at the Density Functional Theory (DFT) level, using the harmonic approximation for vibrational frequencies and the double harmonic approximation for intensities, neglecting the effects of anharmonicity. In our previous papers we demonstrated that a proper treatment of anharmonicity and Fermi resonances indeed leads to predicted spectra that are in near-quantitative agreement with the experimental spectra, both with respect to the frequencies of vibrational bands and their intensities. The shape of the 3-$\mu$m band recorded in astronomical observations has been found to vary within the same astronomical object and between different astronomical objects.
To account for these differences, several explanations have been put forward \citep{Sellgren1990, Tokunaga1991, VanDiedenhoven2004}. One of the suggestions for classifying the shape of this band is to interpret it as being associated with emission from two components \citep{Song2003a, Candian2012} in which there are contributions from different groups of carriers at 3.28 and 3.30 $\mu$m. \citet{Song2007} propose that these two components originate from groups of PAHs with different sizes, finding support for this in the laboratory high-temperature gas-phase studies of pyrene ($C_{16}H_{10}$) and ovalene ($C_{32}H_{14}$) for which a blue shift of the 3-$\mu$m band was observed upon increasing the size of the PAH \citep{Joblin1995a}. Another factor that has been suggested to contribute to the apparent two-component appearance of the emission is differences in the peripheral structure of different PAHs, in particular steric effects occurring for hydrogen atoms at so-called bay sites \citep{Candian2012}. The influence of the edge structure has been investigated by means of harmonic DFT calculations for large species \citep{Bauschlicher2009,BauschlicherCharlesW2008}, but systematic experimental high-resolution studies are notoriously lacking. Similarly, it has been found \citep{Geballe1989} that next to the prominent emission band at 3.29 $\mu$m, a broad plateau is present that spans the 3.1-3.7 $\mu$m region and is referred to as the 3-$\mu$m plateau. It has been speculated that this plateau might in part derive from anharmonic couplings to vibrational combination levels \citep{Allamandola1989a}. However, to what extent this explanation can account for the appearance of the entire plateau is still far from clear. Our previous study aimed at recording spectra under the highest resolution conditions possible and applying the appropriate theoretical treatment including anharmonic effects and resonances. For that reason, we focused on the spectra of the linear PAHs naphthalene (C$_{10}$H$_{8}$), anthracene (C$_{14}$H$_{10}$), and tetracene (C$_{18}$H$_{12}$). As discussed above, astronomical spectra likely comprise the contributions of a much larger variety of PAHs. To advance the interpretation and characterization of these data, we therefore extend our experimental and theoretical studies to a wider variety of condensed and irregular isomers containing up to five rings (phenanthrene C$_{14}$H$_{10}$, benz[a]anthracene C$_{18}$H$_{12}$, chrysene C$_{18}$H$_{12}$, triphenylene C$_{18}$H$_{12}$, pyrene C$_{16}$H$_{10}$ and perylene C$_{20}$H$_{12}$). The goals of these studies are twofold. Firstly, we aim to understand how the effects of anharmonicity observed for the linear PAHs are affected by geometrical structure and how this in turn affects the appearance of the IR absorption spectra in the 3-$\mu$m region. Secondly, we aim to uncover general trends in band shapes that could provide spectral signatures that would allow for a much more detailed description of the contribution of different PAHs.
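As a practical note, comparing calculated (anharmonic) line lists with ion-dip spectra of this resolution amounts to convolving the stick spectrum with a Gaussian of roughly the experimental width ($\sim$1 cm$^{-1}$). The line positions and intensities below are invented for illustration and do not correspond to any of the molecules studied here.

\begin{verbatim}
import numpy as np

def convolve_sticks(freqs, intens, grid, fwhm=1.0):
    """Convolve a calculated stick spectrum (positions in cm^-1, arbitrary
    intensities) with a Gaussian of the experimental width (~1 cm^-1)."""
    sigma = fwhm / 2.3548
    return sum(I * np.exp(-0.5 * ((grid - f) / sigma) ** 2)
               for f, I in zip(freqs, intens))

# Invented CH-stretch line list, for illustration only.
freqs = np.array([3030.0, 3052.0, 3064.0, 3075.0, 3092.0])
intens = np.array([0.2, 0.5, 1.0, 0.7, 0.3])
grid = np.arange(2950.0, 3150.0, 0.1)
spectrum = convolve_sticks(freqs, intens, grid)
\end{verbatim}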
In this work we have presented molecular beam IR absorption spectra of six condensed PAHs in the 3-$\mu$m region using IR-UV ion dip spectroscopy. The present results and the results on linear acenes \citep{Maltseva2015, Mackie2015} show that anharmonicity indeed rules the 3-$\mu$m region, with the fraction of intensity not associated with fundamental transitions easily exceeding 50$\%$. Anharmonicity-induced transitions are more the rule than the exception and should explicitly be taken into account in the interpretation of astronomical data in the 3-$\mu$m region. A proper incorporation of resonances has been shown to yield predicted spectra that are in semi-quantitative agreement with the experiments. Such calculations may therefore lead the way for furthering our understanding on the influence of larger PAHs which are not amenable to similar experimental high-resolution studies. Our work shows that the observed abundance of combination bands is mostly concentrated in the low-energy part ($\leq$ 3100 cm$^{-1}$) of the CH-stretch region and originates from combinations of CC-stretch and CH in-plane bending modes. This anharmonic activity can be partly responsible for the 3-$\mu$m plateau observed by astronomers although both experiment and theory put into question a scenario in which the plateau is solely attributed to Fermi resonances between fundamental modes and such combination bands. In this respect, a more important role than assumed so far of hydrogenated and alkylated PAHs in combination with anharmonic effects appears to provide a highly-interesting alternative to pursue. Activity at the high-energy part of the 3-$\mu$m band has been demonstrated to derive from the presence of bay-hydrogens, but not solely as also anharmonicity in PAHs without bay-hydrogens induces activity in this region. Further studies on larger compact PAHs should provide further means to distinguish between the relative importance of the two effects. Such studies are presently underway. Our studies show that the vibrational activity in the CH-stretch fingerprint region is strongly linked to details of the molecular structure. It is therefore not only different for molecules with different chemical structures, but also for different isomers. We have demonstrated that the peripheral structure of the molecules plays a dominant role in the appearance of the 3-$\mu$m band. Studies in which the 3-$\mu$m region is correlated with the "periphery-sensitive" 9-15 $\mu$m region are thus of significant interest and could further elucidate the exact composition of the carriers of these bands. \\ The experimental work was supported by The Netherlands Organization for Scientific Research (NWO). AP acknowledges NWO for a VIDI grant (723.014.007). Studies of interstellar PAHs at Leiden Observatory have been supported through the advanced European Research Council Grant 246976 and a Spinoza award. Computing time has been made available by NWO Exacte Wetenschappen (project MP-270-13 and MP-264-14) and calculations were performed at the LISA Linux cluster and Cartesius supercomputer (SurfSARA, Almere, NL). AC acknowledges NWO for a VENI grant (639.041.543). XH and TJL gratefully acknowledge support from the NASA 12-APRA12-0107 grant. XH acknowledges the support from NASA/SETI Co-op Agreement NNX15AF45A. 
Some of this material is based upon work supported by the National Aeronautics and Space Administration through the NASA Astrobiology Institute under Cooperative Agreement Notice NNH13ZDA017C issued through the Science Mission Directorate.
16
9
1609.09325
1609
1609.02201_arXiv.txt
{We present the stellar parameters and elemental abundances of a set of A--F-type supergiant stars, HD\,45674, HD\,180028, HD\,194951 and HD\,224893, using high-resolution ($R$\,$\sim$\,42,000) spectra taken from the ELODIE library. We present the first results of the abundance analysis for HD\,45674 and HD\,224893. We reaffirm the abundances for HD\,180028 and HD\,194951, studied previously by Luck (2014). The alpha-element abundances indicate that the objects belong to the thin-disc population. Their abundances and locations on the Hertzsprung-Russell diagram indicate that HD\,45674, HD\,194951 and HD\,224893 are in the post-first dredge-up (post-1DUP) phase and are moving in the red-blue loop region. HD~180028, on the contrary, shows abundances typical of population I, but its evolutionary status could not be satisfactorily defined.} \resumen{Hemos efectuado un an\'alisis detallado de las abundancias qu\'{i}micas de cuatro objetos supergigantes HD\,45674, HD\,180028, HD\,194951 y HD\,224893, usando espectros de alta resoluci\'on ($R$\,$\sim$\,42,000) tomados de la librer\'{i}a ELODIE. Se presentan los primeros resultados de las abundancias para HD\,45674 y HD\,224893, y se reafirman las abundancias para HD\,180028 y HD\,194951 calculadas por Luck (2014). Los elementos alfa nos indican que todos los objetos estudiados pertenecen a la poblaci\'on del disco gal\'actico. A partir de sus abundancias y de su localizaci\'on en el diagrama Hertzsprung-Russell parece indicarnos que HD\,45674, HD\,194951 y HD\,224893 evolutivamente se encuentran en la fase posterior al primer dragado (post-1DUP) y se mueven en la regi\'on del red-blue loop. HD~180028 muestra abundancias t\'{i}picas de la poblaci\'on I pero su estado evolutivo no puede ser definido satisfactoriamente.} \addkeyword{Stars: atmospheric parameters} \addkeyword{Stars: abundances} \addkeyword{Stars: evolution} \addkeyword{Stars: supergiant} \begin{document}
\label{sec:introd} The process of chemical evolution in the Galaxy can be understood from its massive stars. These objects evolve rapidly, undergo changes through nucleosynthesis over time, and return their chemical elements to the interstellar medium through stellar winds and supernova events. It is not surprising that massive young objects exist in the galactic plane, since it is an area of star formation. These objects are visually luminous in galaxies and, in general, suitable candidates for studies of stellar and chemical evolution (Luck et al.\@ 1998; Smiljanic et al.\@ 2006; Venn et al.\@ 2000, 2001, 2003; Kaufer et al.\@ 2004). In the galactic disc, some of these massive objects have been classified as supergiant stars with masses between 5 and 20\,M$_{\odot}$ and A and F spectral types, which are moderately evolved and for which the chemical abundances of the light elements CNO have been key to discriminating their evolutionary states (Lyubimkov et al.\@ 2011; Venn 1995a, 1995b and references therein). For massive stars, once H is exhausted in the core, the subsequent He core burning phase can proceed in several ways. Stellar evolutionary models constructed at solar metallicities predict that massive supergiants ($M$\,$\geq$10\,M$_{\odot}$) are in the phase of helium core burning (Schaller et al.\@ 1992; Stothers \& Chin 1991). These objects have already left the main sequence on the Hertzsprung-Russell diagram (HRD) and ignite He in the blue supergiant region, but thermal instabilities cause a rapid expansion towards the red supergiant region. In this last phase, the A--F supergiants are able to resume thermal and radiative equilibrium through convection in the outer layers and show altered CNO abundances due to the first dredge-up (1DUP) event. On the other hand, less massive A--F supergiants ($M$\,$<$\,10\,M$_{\odot}$) have also initiated He-core burning without visiting the red giant branch; a fully convective intermediate zone is predicted, the envelope is able to establish thermal equilibrium, and the stars are kept in the blue supergiant phase. Under these conditions CNO abundances remain unchanged (Stothers \& Chin 1976; 1991). However, another scenario is possible for objects with intermediate masses (3\,$<$\,M\,$<$\,9\,M$_{\odot}$), called the blue loop. At this point the star has developed a convective envelope and is rising up the Hayashi line in the HRD. These objects have already reached the red supergiant stage but eventually evolve back into a blue supergiant phase (Walmswell et al.\@ 2015). During the red supergiant phase, the convective zone mixes material from the H-burning shell, which is subsequently brought to the surface by the 1DUP event, producing changes in the observed CNO abundance patterns. The amount of CNO-processed material in such objects allows us to discriminate between different types of supergiants. The main goal of this work is a detailed study of the chemical abundances of a set of four low-latitude A--F supergiants, HD\,45674, HD\,180028, HD\,194951 and HD\,224893, under the LTE assumption, together with the determination of their atmospheric parameters from excitation and ionization equilibrium. For this purpose we employ high-resolution spectroscopy and a set of atmospheric models constructed with plane-parallel geometry, hydrostatic equilibrium, local thermodynamic equilibrium (LTE) and the \texttt{ODFNEW} opacity distribution (Castelli \& Kurucz 2004).
It is expected that the derived abundances correspond to the typical abundances observed for supergiant stars in the galactic disc. This paper is organized as follows: \S~\ref{sec:observ} describes the sample selection. \S~\ref{sec:param} presents how the atmospheric parameters were estimated. We determine the chemical abundances of the sample stars in \S~\ref{sec:abund}. An individual analysis of the abundances of each star is presented in \S~\ref{sec:discuss}. \S~\ref{sec:results} is dedicated to discussing our results, and finally, \S~\ref{sec:concluss} gives the conclusions of the paper. \begin{figure} \centering \includegraphics[width=7.5cm,height=7.5cm]{REGION5140.eps} \caption{Representative spectra of the sample stars HD\,45674, HD\,180028, HD\,194951 and HD\,224893. The locations of lines of certain important elements are indicated by dashed lines. The stars are arranged in decreasing order of HD number.} \label{fig:figure1} \end{figure} \begin{table*} \begin{center} \begin{minipage}{170mm} \caption{Basic parameters for the sample.} \label{tab:table1} \scriptsize{\begin{tabular}{rccccccccccc} \hline \hline \multicolumn{1}{l}{No. HD}& \multicolumn{1}{l}{SpT}& \multicolumn{1}{c}{$\alpha$}& \multicolumn{1}{c}{$\delta$}& \multicolumn{1}{c}{$V$}& \multicolumn{1}{c}{$B$}& \multicolumn{1}{c}{$l$}& \multicolumn{1}{c}{$b$}& \multicolumn{1}{c}{$V_\mathrm{r}$}& \multicolumn{1}{c}{$\pi$}& \multicolumn{1}{c}{$E(B-V)$}& \multicolumn{1}{c}{$v \sin i$}\\ \multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c}{(h~m~s)}& \multicolumn{1}{c}{($^{o}$~$^{'}$~$^{"}$)}& \multicolumn{1}{c}{(mag)}& \multicolumn{1}{c}{(mag)}& \multicolumn{1}{c}{($^{o}$)}& \multicolumn{1}{c}{($^{o}$)}& \multicolumn{1}{c}{(km\,s$^{-1}$)}& \multicolumn{1}{c}{(mas)}& \multicolumn{1}{c}{(mag)}& \multicolumn{1}{c}{(km\,s$^{-1}$)}\\ \hline 45674&F1Ia&06 28 47&$-$00 34 20&6.56&7.30&210.85&$-$05.30&$+$18.4$\pm$0.9&1.31$\pm$0.51&0.382&$\cdots$\\ 180028&F6Ib&19 14 44&$+$06 02 54&6.96&7.73&040.97&$-$02.39&$-$5.1$\pm$0.7&-0.32$\pm$0.51&0.346&23.3\\ 194951&F1II&20 27 07&$+$34 19 44&6.41&6.84&073.86&$-$02.34&$-$13.5$\pm$2.0&1.00$\pm$0.41&0.230&20\\ 224893&A8II&00 01 37&$+$61 13 22&5.57&5.94&116.97&$-$01.07&$-$23.2$\pm$2.0&1.06$\pm$0.27&0.244&40\\ \hline \end{tabular}} \end{minipage} \end{center} \end{table*}
\label{sec:concluss} In this study we performed a detailed analysis of the photospheric abundances of a sample of four A--F type supergiant stars of intermediate mass ($\sim$5--9\,M$_{\odot}$) using high-resolution spectra. We have determined atmospheric parameters, masses and abundances using spectral synthesis and equivalent widths. The three stars (HD\,45674, HD\,194951 and HD\,224893) for which we were able to determine both carbon and nitrogen abundances show signs of internal mixing. A mean [N/C] ratio of $+$0.90$\pm$0.13 dex is found, and this value is in agreement with the predictions of the non-rotating models by Meynet \& Maeder (2000), which predict [N/C]\,=$+$0.72 dex. A surface nitrogen enhancement was observed in these stars, and it can only arise as a result of deep mixing during the 1DUP. These three stars show very little variability in radial velocity, allowing us to discard binarity. According to the abundance analysis, we can conclude that HD\,45674, HD\,194951 and HD\,224893 show abundances typical of supergiants of the thin-disc population and have evolved past the first dredge-up (post-1DUP) phase. These objects are located in the red-blue loop region. HD\,180028, on the contrary, also shows abundances typical of population I, but its evolutionary status could not be satisfactorily defined.
16
9
1609.02201
1609
1609.05798_arXiv.txt
We present new deep UBVRI images and high-resolution multi-object optical spectroscopy of the young ($\sim$ 6 - 10 Myr old), relatively nearby (800 pc) open cluster IC 2395. We identify nearly 300 cluster members and use the photometry to estimate their spectral types, which extend from early B to middle M. We also present an infrared imaging survey of the central region using the IRAC and MIPS instruments on board the {\em Spitzer Space Telescope}, covering the wavelength range from 3.6 to 24\,$\micron$. Our infrared observations allow us to detect dust in circumstellar disks originating over a typical range of radii $\sim$\,0.1 to $\sim$\,10\,AU from the central star. We identify 18 Class II, 8 transitional disk, and 23 debris disk candidates, respectively 6.5\%, 2.9\%, and 8.3\% of the cluster members with appropriate data. We apply the same criteria for transitional disk identification to 19 other stellar clusters and associations spanning ages from $\sim$ 1 to $\sim$ 18 Myr. We find that the number of disks in the transitional phase as a fraction of the total with strong 24 $\mu$m excesses ([8] - [24] $\ge$ 1.5) increases from $8.4 \pm 1.3$\% at $\sim$ 3 Myr to $46 \pm 5$\% at $\sim$ 10 Myr. Alternative definitions of transitional disks will yield different percentages but should show the same trend.
The {\it Spitzer Space Telescope} \cite[{\it Spitzer};][]{wer04} has significantly improved our understanding of how protoplanetary disks form, evolve, and eventually dissipate. By 15\,Myr the accretion of gas onto protostars has largely ceased and most primordial disks have dissipated \cite[e.g.][]{hai01,mam04,meng16}. By this time, planetesimals have formed and dust produced in their collisions yields planetary debris disks. These regenerated disks indirectly reveal the presence of planetary bodies required to replenish the dust and allow us to trace the evolution of planetary systems over the full range of stellar ages \citep[e.g.,][]{lag00, dom03, wya08,gaspar13, sierchio14}. The beginning of the transition from an optically-thick accretion disk to an optically-thin debris disk occurs from the inside-out \cite[e.g.][]{skr90,sic05,meg05,muzerolle10, espaillat14} and is marked by a characteristic spectral energy distribution (SED) with little excess at shorter ($< 6 \mu$m) wavelengths but still retaining strong emission at the longer ones. These ``transitional'' disks appear to represent the process of clearing ``caught in the act''. The result is a largely evacuated inner region accompanied by an optically-thick primordial disk at larger radii. This phase is crucial to our understanding of disk dissipation and planet formation because it signals the end of stellar accretion and the consumption of nearly all the gas in the disk. Given the small number of transitional disks identified relative to the number of primordial and debris disks, it has been concluded that this phase is of short duration, on the order of a few hundred thousand years \citep{skr90,ken95,sim95,wol96, muzerolle10, espaillat14}. The key time period to observe the transitional phase is from a few to about 15\,Myr. Clusters and associations are ideal laboratories for studying disk evolution as the member stars are coeval to within a few million years, of similar composition and reddening, at similar and reliably measured distance, and numerous enough to have a wide range of masses and to support drawing statistically valid conclusions. Unfortunately, there are only a few appropriately aged young clusters within a kiloparsec to support characterizing the transitional disk phase. For more distant clusters, {\em Spitzer} has insufficient sensitivity in the mid-infrared to measure the photospheres of the lowest mass members and to identify complete samples of transitional disks. The open cluster IC\,2395 can augment studies of this phase of disk evolution. The cluster is 800 pc distant \citep[Section 3.3;][]{cla03}. Despite its proximity, it has not been extensively studied. \cite{cla03} conducted the largest photometric investigation of the cluster, identifying candidate members and estimating the cluster's age, distance, extinction, and angular size. Sensitive to a limiting magnitude of $V$ $<$ 15\,mag, their survey found 78 probable and possible members through $UBV$ photometry. There have also been several proper motion studies but none combined with photometric data. The cluster age has been estimated at 6 $\pm$ 2 Myr \citep{cla03} on the traditional calibration for young cluster and association ages \citep[e.g.,][]{mam09}. A revised age calibration has been proposed \citep[e.g.,][]{pecaut12,bell13,bell15}, on which we derive in this paper an age of $\sim$ 9 Myr (Section 3.3). On either calibration, IC 2395 is in the critical range to characterize transitional disk behavior.
To increase our understanding of this cluster, we have obtained $\sim45$ square arcmin fields of deep optical ($UBVRI$), and mid-IR (3.6, 4.5, 5.8, 8.0\, and 24\,$\micron$) photometry and high-resolution optical spectroscopy of IC\,2395. We describe the new observations in Section 2 and discuss the cluster membership and age in Section 3. In Section 4, we identify and characterize the circumstellar disks in the cluster with emphasis on identifying transitional disks. We combine these results with a homogeneous treatment of transitional disks in 19 other young clusters and associations in Section 5, to probe the evolution of circumstellar disks through the transitional phase. We summarize and conclude the paper in Section 6.
The open cluster IC 2395 can add significantly to our understanding of protoplanetary and early debris disk evolution, since it is relatively close (800 pc) and at a critical age where protoplanetary disks are disappearing and debris disks begin to dominate. However, the cluster has largely been overlooked in disk studies. We report optical and infrared photometry and high-resolution optical spectroscopy of the cluster, from which we: \begin{itemize} \item Increase the list of probable members to nearly 300, spanning spectral types of early B to middle M; \item Estimate an age of $9 \pm 3$ Myr on the revised age scale, e.g. that of \citet{bell13}; this value compares with $6 \pm 2$ Myr on the traditional scale \citep{cla03}; and \item Identify 18 Class II (6.5\% of the members with full IRAC data), 8 transitional disk (2.9\%), and 23 debris disk candidates (8.3\%). \end{itemize} We have combined the transitional disk information with homogeneously defined similar objects in nineteen additional young clusters and associations to quantify the evolution of this phase, finding that: \begin{itemize} \item The dominant cause of variations in the proportion of transitional disks is age; most clusters of similar age have similar proportions of transitional disks among the systems with strong 24 $\mu$m excesses. The single possible exception is $\rho$ Oph, where transitional disks are relatively rare. \item The relative numbers of disks with different degrees of 24 $\mu$m excess do not change significantly with age, implying that the change in the proportion of transitional disks is not driven by a systematic change of disk properties, e.g., a thinning of disks that makes them more susceptible to dissipation. \item The number of disks in the transitional phase as a fraction of the total with strong 24 $\mu$m excesses ([8] - [24] $\ge$ 1.5) increases from $8.4 \pm 1.3$\% at $\sim$ 3 Myr to $46 \pm 5$\% at $\sim$ 10 Myr; alternative definitions of transitional disks will yield different percentages but should show the same trend. \item Under the conventional assumption that the lifetime of the transitional stage is fixed, and given the evidence that the nature of the individual Class II and transitional disks does not change with age, this result implies that the decay in the proportion of systems with strong 24 $\mu$m excesses cannot be exponential, but must start more slowly and finish more rapidly than the ``best fit'' exponential. \end{itemize} We have also demonstrated that IC 2395 is a rich cluster at a critical age for circumstellar disk evolution, worthy of additional study.
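The quoted disk fractions and their uncertainties follow from simple binomial counting; a minimal Python sketch is given below. The total of $\sim$276 members with full IRAC data is inferred from the quoted percentages and is therefore approximate, and the interval calculation is a generic flat-prior estimate rather than the exact procedure used in the paper.

\begin{verbatim}
import numpy as np
from scipy.stats import beta

def fraction_with_error(k, n):
    """Fraction k/n with a simple binomial error and a 68% (flat-prior) interval."""
    p = k / n
    return p, np.sqrt(p * (1.0 - p) / n), beta.interval(0.68, k + 1, n - k + 1)

# Counts from the text; n ~ 276 members with full IRAC data is inferred from
# the quoted percentages and is therefore approximate.
for label, k in [("Class II", 18), ("transitional", 8), ("debris", 23)]:
    p, err, ci = fraction_with_error(k, 276)
    print(f"{label:12s}: {100*p:4.1f} +/- {100*err:3.1f} %")
\end{verbatim}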
16
9
1609.05798
1609
1609.09749_arXiv.txt
The central molecular zone (CMZ) hosts some of the most massive and dense molecular clouds and star clusters in the Galaxy, offering an important window into star formation under extreme conditions. Star formation in this extreme environment may be closely linked to the 3-D distribution and orbital dynamics of the gas. Here I discuss how our new, accurate description of the $\{l,b,v\}$ structure of the CMZ is helping to constrain its 3-D geometry. I also present the discovery of a highly-regular, corrugated velocity field located just upstream from the dust ridge molecular clouds (which include G0.253+0.016 and Sgr B2). The extremes in this velocity field correlate with a series of massive ($\sim10^{4}$~M$_{\odot}$) cloud condensations. The corrugation wavelength ($\sim23$~pc) and cloud separation ($\sim8$~pc) closely agree with the predicted Toomre ($\sim17$~pc) and Jeans ($\sim6$~pc) lengths, respectively. I conclude that gravitational instabilities are driving the formation of molecular clouds within the Galactic Centre gas stream. Furthermore, I suggest that these seeds are the historical analogues of the dust ridge molecular clouds -- possible progenitors of some of the most massive and dense molecular clouds in the Galaxy. If our current best understanding for the 3-D geometry of this system is confirmed, these clouds may pinpoint the beginning of an evolutionary sequence that can be followed, in time, from cloud condensation to star formation.
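For orientation, the Toomre and Jeans lengths quoted above follow from $\lambda_{\rm T} = 2\pi^{2}G\Sigma/\kappa^{2}$ and $\lambda_{\rm J} = c_{\rm s}\sqrt{\pi/(G\rho)}$. The sketch below evaluates them for round-number, CMZ-like parameters; the surface density, epicyclic frequency, effective sound speed and density are chosen purely for illustration and are not the values adopted in this work.

\begin{verbatim}
import numpy as np
from astropy import units as u, constants as const

# Round-number, CMZ-like inputs chosen purely for illustration.
sigma_gas = 1.0e3 * u.Msun / u.pc**2     # gas surface density
kappa = 2.2 * u.km / u.s / u.pc          # epicyclic frequency
c_eff = 5.0 * u.km / u.s                 # effective (turbulent) sound speed
n_H2 = 1.0e4 * u.cm**-3                  # mean H2 number density
rho = 2.8 * const.m_p * n_H2             # mass density (mean mass ~2.8 m_p per H2)

lam_toomre = (2.0 * np.pi**2 * const.G * sigma_gas / kappa**2).to(u.pc)
lam_jeans = (c_eff * np.sqrt(np.pi / (const.G * rho))).to(u.pc)
print(f"Toomre length ~ {lam_toomre:.1f}, Jeans length ~ {lam_jeans:.1f}")
\end{verbatim}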
The ultimate goal of star formation research is to develop an understanding of stellar mass assembly as a function of environment. The central molecular zone of the Milky Way (CMZ; i.e. the inner few hundred parsecs) hosts some of the most extreme products of the star formation process within the Galaxy. These include massive star clusters (e.g. Arches and Quintuplet; \citealp{longmore_2014}), massive ($10^{5}-10^{6}$\,M$_{\odot}$) protocluster clouds (e.g. Sgr B2), and massive ($10^{4}-10^{5}$\,M$_{\odot}$) infrared dark clouds (e.g. G0.253+0.016; \citealp{longmore_2012,rathborne_2014,rathborne_2015}). In spite of this, the present-day star formation rate ($\sim0.05$\,M$_{\odot}$\,yr$^{-1}$; e.g. \citealp{crocker_2012}) is 1-2 orders of magnitude lower than one might naively expect when only considering the reservoir of dense ($\gtrsim10^{3}$\,cm$^{-3}$) gas (\citealp{longmore_2013a, kruijssen_2014b}). Star formation within this complex environment may be closely linked to the orbital dynamics of the gas (\citealp{longmore_2013b}). Understanding how this influences star formation necessitates an accurate, holistic understanding of the gas kinematics and distribution -- something which has been sought after for several decades (e.g. \citealp{bally_1988, sofue_1995, jones_2012}). To date, the kinematics of the CMZ have been described using a combination of position-velocity diagrams, channel maps, and moment analysis. Although such techniques are simple to implement and well-understood, their output can be subjective and misinterpreted in complex environments. With this in mind, \citet[H16]{henshaw_2016a} developed an analysis tool which efficiently and systematically fits large quantities of multi-featured spectral line profiles with multiple Gaussian components. Using {\sc scouse}\footnote{Publicly available at \url{https://github.com/jdhenshaw/SCOUSE}} H16 were able to investigate the kinematics of the CMZ on the scales of individual clouds, \emph{whilst maintaining a global perspective} -- critical in understanding how environment influences star formation in this complex region. \sidecaptionvpos{figure}{c} \begin{SCfigure} \captionsetup{format=plain} \includegraphics[trim = 10mm 5mm 10mm 10mm, clip, width = 0.40\textwidth]{Figure_pv_models.pdf} \vspace{-0.23cm} \caption{\small{Three different interpretations for the 3-D structure of the CMZ as they would appear in $\{l,\,v_{\rm LSR}\}$ space. Within each panel the inset figure represents a schematic of the top-down view of the respective interpretation. The top, central, and bottom panels refer to the spiral arm, closed elliptical, and open stream interpretations (see text), respectively. The black circle with a plus denotes the location of Sgr A*. Additionally, the locations of prominent molecular clouds are overlaid. In order of increasing Galactic longitude (from right to left): The Sgr C complex (black plus); the 20~km\,s$^{-1}$ and 50~km\,s$^{-1}$ clouds (black upward triangles); G0.256+0.016 (black square); Clouds B-F (black squares); The Sgr B2 complex (black diamond). }} \label{Figure:pv_models} \end{SCfigure} \vspace{-0.5cm}
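To make the decomposition concrete: the kind of multi-component Gaussian fitting that {\sc scouse} automates for many thousands of spectra can be sketched for a single synthetic spectrum with scipy. This is an illustrative stand-in with arbitrary line parameters, not the {\sc scouse} algorithm itself.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of Gaussians; p = (amp1, cen1, sig1, amp2, cen2, sig2, ...)."""
    model = np.zeros_like(v)
    for amp, cen, sig in zip(p[0::3], p[1::3], p[2::3]):
        model += amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)
    return model

# Synthetic two-component spectrum (velocities in km/s; values are arbitrary).
v = np.linspace(-150.0, 150.0, 600)
spec = multi_gauss(v, 1.0, -40.0, 12.0, 0.6, 35.0, 20.0)
spec += np.random.normal(0.0, 0.05, v.size)

guess = (0.8, -30.0, 10.0, 0.5, 30.0, 15.0)   # initial estimates per component
popt, pcov = curve_fit(multi_gauss, v, spec, p0=guess)
print(np.round(popt, 1))
\end{verbatim}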
16
9
1609.09749
1609
1609.02979.txt
Using Non-Redundant Mask interferometry (NRM), we searched for binary companions to objects previously classified as Transitional Disks (TD). These objects are thought to represent an evolutionary stage between an optically thick and an optically thin disk. We investigate the presence of a stellar companion as a possible mechanism of material depletion in the inner region of these disks, which would rule out an ongoing planetary formation process at distances comparable to the binary separation. For our detection limits, we implement a new method of completeness correction using a combination of randomly sampled binary orbits and Bayesian inference. The selected sample of 24 TDs belongs to the nearby and young star-forming regions Ophiuchus ($\sim$ 130 pc), Taurus-Auriga ($\sim$ 140 pc) and IC348 ($\sim$ 220 pc). These regions are suitable for resolving faint stellar companions with moderate to high confidence levels at distances as low as 2 au from the central star. With a total of 31 objects, including 11 known TDs and circumbinary disks from the literature, we have found that a fraction of 0.38 $\pm$ 0.09 of the SEDs of these objects is likely due to the tidal interaction between a close binary and its disk, while the remaining SEDs are likely the result of other internal processes such as photoevaporation, grain growth, and planet-disk interactions. In addition, we detected four companions orbiting outside the truncation radii, and we propose that the IR excesses of these systems are due to a disk orbiting a secondary companion.
\label{sec: intro} After the formation of a star, the lifetime of a disk is estimated to be $\la$ 10 Myrs. At an age of $\sim$5 Myrs, around 90$\%$ of these objects have already gone through an evolutionary process of dispersal of their optically thick primordial disks \citep{SiciliaAguilar2006}. The dispersal of the inner disk material creates unique morphologies in the disk that can be detected through their unusual spectral energy distributions (SEDs) \citep{Strom1989}. Assuming that all disks go through this dispersal phase, approximately 10$-$20$\%$ of the disks are in a ``transition'' phase, with time-scales of $<$ 0.5 Myr \citep{Furlan2011, Koepferl2013}. In comparison with the characteristic continuum level of the SED of a Classical T Tauri Star (CTTS), these objects are defined as stellar objects with small near-infrared (NIR) and/or mid-infrared (MIR) excesses and large MIR and/or far-infrared (FIR) excesses \citep[e.g.][]{Espaillat2014}. Given the ambiguity in the literature as to whether a disk in a ``transition phase'' refers exclusively to a disk with an inner hole surrounding a single star or also includes binary systems in a transition phase, we will describe disks around single stars exclusively as \textit{Transitional Disks} (TD) and disks around binary stars as \textit{Circumbinary Disks} (CD). Detailed modelling of TD SEDs has interpreted the reduction of excess in the NIR-MIR as a dearth of small dust grains and thin gas in the inner region of the disk \citep{Espaillat2012}. In addition, mm-interferometric observations have mapped this particular disk morphology of the TDs, showing a dust-depleted region in the inner disk and/or gaps \citep{Andrews2011, Canovas2016}. Although the physical origins of these particular disk shapes are still unclear, several theories have been proposed to explain the clearing mechanisms in the disk from the inside out, such as grain growth \citep{Dullemond2001}, magnetorotational instability \citep{ChiangMurray2007}, photoevaporation \citep{Clarke2001, Alexander2007}, dust filtration \citep{Rice2006a}, and disk-planet(s) interactions \citep{Kraus2012, Dodson2011}. However, it has been difficult to identify the main process of disk dispersal, especially since these mechanisms might dominate at different time-scales and radii. For instance, planet formation and photoevaporation may play a sequential dominating role in the disk dispersal phase, since photoevaporation disperses the disk more rapidly once a planet has formed and carved a gap in the disk \citep{Rosotti2015}. Unfortunately, these models are still not able to simultaneously explain the evolution of all TDs, especially those with high accretion rates and large inner cavities containing large amounts of gas near the central star. However, fully understanding the disk dispersal process is of vital importance, because it provides insights into the formation of planetary systems like our own \citep{Dodson2011}. In particular, knowledge of the timescales of gas survival sets constraints on the time available for the formation of a gas-rich planet via core accretion \citep{Pollack1996}. Alternatively, another clearing mechanism has been proposed for the truncation of the inner disk: the presence of a \textit{stellar companion}. \citet{Artymowicz1994} showed that in the binary$-$disk interaction, the stellar companion will truncate the CD at a distance that depends strongly on the eccentricity and mass ratio of the binary system.
These theoretical models predict that the ratio of the inner radius ($r_{d}$), measured about the center of mass, to the semi-major axis ($a$) of the binary system ranges from 1.7 to 3.3 for nearly circular orbits ($e =$ 0$-$0.25) and highly eccentric binaries ($e$ $\sim$ 0.75), respectively. Although previous surveys of stellar companions in the range of $\sim$ 3 $-$ 50 au have indicated that binary truncation might not be a primary mechanism for clearing the inner region of the disk \citep{Pott2010, Kraus2012}, several factors have hampered the detection of faint stellar companions in general, such as the inner working angle and a small separation of the binary at the observing epoch. In addition, CDs can be misinterpreted as TDs when objects are classified through their SEDs: an unresolved faint infrared companion can add NIR flux to the combined SED and, if this object is surrounded by a disk, it can emit MIR levels similar to the MIR excess seen in the SEDs of TDs \citep[e.g.][]{Duchene2003, Kraus2015_FWTau}. Although the SEDs of these CDs present several features that overlap with a ``normal'' TD SED, it would be misleading to treat them in the same way. For instance, ignoring the presence of another star in the star-disk system entails an incorrect measurement of the luminosity and temperature, which translates into inaccurate age and mass estimates. This is the case for Coku Tau/4 and CS Cha, which were originally described as TDs \citep{Forrest2004, Espaillat2007} but were eventually shown to be CDs \citep{Ireland2008, Guenther2007}. This misclassification would be reflected in the estimation of birthplaces and timescales for the formation of sub-stellar companions (brown dwarfs) and/or planetary systems, and in the demographic properties of these populations \citep[e.g.][]{Najita2015}. Therefore, determining a more accurate relative picture of the lifetime of TDs and CDs requires a comprehensive survey capable of resolving close binaries ($\la$ 30 au) and measuring their frequency in objects previously classified as TDs through their SEDs. Although the gap in the inner region of the disk might have different physical origins, in this paper we seek to identify whether the dispersion of the primordial material in the inner region of the disk is a result of the tidal interaction between a close binary system and the disk. At small separations, detecting faint companions orbiting bright stars that are, in addition, surrounded by dusty material can be challenging due to the high contrast between the companion and the primary star. However, observations of objects at early ages provide favorable IR contrast ratios for the detection of so-far unresolved faint companions because of their intrinsically higher luminosity ($\Delta$K $<$ 5 mag). We use the \textit{Non-Redundant Mask interferometry} (NRM) technique with the NIRC2 instrument at the Keck II telescope, which reaches the required angular resolution and contrast and is resistant to speckle noise in the image by measuring a self-calibrating quantity known as the \textit{closure-phase} \citep[e.g.][]{Martinache2011}. 
In order to achieve a higher accuracy in the detection limits of our data, the NRM completeness as a function of position and contrast is computed using a combination of a \textit{Monte Carlo integration} approach, which provides a random sample of artificial binary stars, and \textit{Bayesian inference}, which uses prior probability density functions of the binary orbital parameters. We have restricted the selection of objects to regions with an age of $\sim$ 1 $-$ 3 Myr and within a distance of about 220 pc. The Taurus-Auriga, IC348 (Perseus), and Ophiuchus star forming regions satisfy these criteria \citep{Loinard2008, Wilking2008}. This article is arranged as follows. In Section 2 we present the motivation for the sample selection, a description of the observations together with the data analysis, and a review of the target properties such as the distance to the star-forming regions and estimations of the inner radii. A simple \textit{Bayesian} modelling analysis of these data is conducted in Section 3, with an emphasis on prior probabilities and the description of binary and single models. The results of fitting the closure$-$phases in the $\chi^{2}$ minimization are synthesized with other information from the literature in Section 4. To perform a statistical Bayesian analysis of the fraction of systems in which binarity is the mechanism responsible for opening the gaps in the TDs, we present a \textit{Jeffreys prior} and its posterior probability in Section 5. Based on that analysis and the observational results, we attempt to reconcile the observations with theoretical predictions from tidal interaction models and possible scenarios of planetary formation in Section 6. Finally, we provide an overall review of the work and results in Section 7.
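The completeness calculation sketched above combines the measured contrast limits with priors on the binary orbital parameters. A minimal illustration of the Monte Carlo ingredient is given below; the log-uniform semi-major axis prior, the assumption of circular orbits, and the fixed detection window are simplifying placeholders rather than the priors and contrast curves actually used in this work.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative priors: log-uniform semi-major axis, circular orbits,
# isotropic inclinations, uniform orbital phase.
a    = 10**rng.uniform(np.log10(2.0), np.log10(50.0), n)   # au
cosi = rng.uniform(0.0, 1.0, n)
phi  = rng.uniform(0.0, 2.0*np.pi, n)

# Projected separation of a circular orbit viewed at inclination i.
rho = a*np.sqrt(np.cos(phi)**2 + (cosi*np.sin(phi))**2)    # au

# Stand-in detection window in projected separation, replacing the real
# contrast-versus-separation limits derived from the closure phases.
detected = (rho > 3.0) & (rho < 30.0)
print("completeness ~ %.2f" % detected.mean())
\end{verbatim}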
In our combined sample consisting of 31 objects, including 11 TDs and CDs with known multiplicity from the literature, and excluding three wide binaries, we find that a fraction of 0.38 $\pm$ 0.09 of the SEDs are produced by the flux emission of a binary star $+$ disk instead of a single star $+$ disk. This means that the remaining SEDs with low NIR and MIR excesses observed to date are the result of the dispersion of the primordial material by another internal mechanism. Our binary detections inside the fitted disk wall inner radii do not necessarily have projected separations between $\onethird$ and $\onehalf$ of the inner radii, which is the expected semi-major axis range for a binary to cause the truncation of the disk. However, all detections lie within $\onehalf$ of our calculated inner disk radii, consistent with projection effects. Given the criteria applied to select our sample and following the standards for disk classification, we emphasize that these objects should be treated as CDs that are possibly in a \textit{transitional phase}, and no longer as TDs around a single star. Originally, the SEDs of these objects were described assuming only one object in the interior of the disk and using detailed disk models to fit the excess continua \citep[e.g.][]{Espaillat2012}. As demonstrated by this work, a significant fraction of these SEDs were misclassified. However, as seen in Figure \ref{fig:imagessed}, the CD SEDs of LRL 31, V410 X-ray 6, WSB 12, WSB 40, 2MASS J16335560-2442049, 2MASS J04210934+2750368 and 2MASS J16315473-2503238 are indistinguishable from those of TDs. Although the resemblance between CD SEDs and TD SEDs is by now well established \citep[e.g.][]{Ireland2008}, we unfortunately could not set an observational constraint, such as the accretion rate or flux emission, for our sample. For example, the SEDs of V410 X-ray 6 and 2MASS J16335560-2442049 bear a resemblance to the large MIR emission and zero NIR excess detected in the binary Coku Tau/4 \citep{Dalessio2005, Ireland2008}, while the other objects show SEDs more similar to a typical TD SED. On the other hand, for those objects in Table \ref{table:bayes} with Bayes factors $\simeq$ 1, whose binarity we could not confirm or rule out due to resolution limitations, multi-epoch RV monitoring observations are needed \citep[e.g.][]{Kohn2016}, because there might be more binaries efficiently dispersing the inner region of the disk. We have also detected four new binary systems in which the secondary component is located outside the inner region of the disk. Interestingly, these systems produce SEDs characteristic of TDs and are low accretors (Table~\ref{table:nosample}). We have proposed that in those systems, composed of a low-mass binary with one component orbiting outside the inner radius of the disk, the more ``evolved'' disk might orbit the sub-stellar companion instead of the primary component. However, it is also plausible that the primary component has a circumstellar disk that is being dispersed by the close sub-stellar companion. Previously, \citet{Harris2012} performed a high angular resolution millimeter-wave dust continuum imaging survey of circumstellar material associated with the individual components of multiple star systems in the Taurus$-$Auriga young cluster. They found that the presence of a close stellar companion ($<$ 30 au) impacts disk properties, depleting the disk mass by a factor of $\sim$25. 
In the case of the LRL 72, LRL 182, LRL 213 and LRL 135 systems, a faster dispersion of the disk caused by the stellar companion located at $\leq$ 20 au could influence the initial conditions for the formation of planets and prevent the first steps of this evolutionary process (e.g. dust settling and grain growth). \subsection{Physical Sources of Typical TD SEDs} Planetary formation could potentially explain the estimated inner optically thick disk radii for these objects and, therefore, the peculiar shape or decreased flux observed in the NIR/MIR SEDs of these TDs. Depending on the inner hole size, the gap could be cleared by a single planet or by multiple planets orbiting this region \citep{Lubow1999, Rice2006, Dodson2011}. In the context of planet-disk interaction, and as a consequence of a massive planet clearing out the inner region of the disk, a local pressure bump is created at the inner edge of the outer disk. In the last decade, this local pressure bump was proposed to act as a filter at the outer edge of a disk gap, filtering out particles of size $\gtrsim$ 10 $\micron$ and impeding their inward drift \citep{Rice2006a}. As a result of this \textit{dust filtration}, the disk shows an abrupt discontinuity in its radial dust profile while still permitting the presence of small particles ($\lesssim$ 10 $\micron$) closer to the central star \citep[e.g.][]{Garufi2013}. Thus, this optically thin dust might be responsible for the weak NIR/MIR excess present in TD SEDs. In addition, inside this cavity a coupling between $\micron$-sized dust grains and gas is expected \citep{Garufi2013}, while the pile-up of sub-millimeter to millimeter dust at the pressure maximum leads to different locations of the gap edges for the gas and the ``bigger'' dust particles \citep{Pinilla2012}. In our approach to estimating inner cavities, we consider the location of particles of $\sim$ 0.1 $\micron$, which might coincide with the gaseous cavity, ingredients that are necessary to explain the detected accretion rates in our sample of TDs. Most of our TDs show accretion rates ranging from 10$^{-8}$ to 10$^{-10}$ $M_{\odot}$/yr and, although these accreting TDs are also ideal targets to test the role of some photoevaporation models \citep[e.g.][]{Alexander2006a, Alexander2006}, there are other missing pieces of the puzzle, such as disk mass measurements, needed to obtain a complete picture of this transitional phase. Therefore, the observed SEDs of TDs hosting a single star might be governed by a dominating internal mechanism and by the amount of mass in the disk. Thus, in order to distinguish the dominant dispersal mechanism producing the inner holes in the disks, a follow-up program of millimeter observations of the TDs is required to estimate the disk masses of these objects. Nevertheless, the inner region of these TDs could be depleted by a combination of two or more mechanisms that dominate at different distances from the central star and on timescales dictated by the initial physical conditions. \subsection{Single vs. Binary Stars: Hosting Planetary Formation} At first glance, it is tempting to suggest that single stars have a higher probability of hosting the formation of planetary systems than close binary systems. 
However, \citet{Pascucci2008} studied the first steps of planetary formation in single and binary systems with projected separations between $\sim$ 10 and 450 au and found no statistically significant difference in the degree of dust settling and grain growth of those systems, indicating that the expected differences in exoplanet properties arise in the later stages of their formation and/or migration \citep[e.g.][]{Kley2000, Kley2007}. Our close binary companions are detected at projected separations of 2$-$10 au; these small separations might affect the initial conditions for the formation of planets in the inner region of the circumbinary disks. This is mainly due to the modification of the binary eccentricity and the excitation of density waves generated by the resonant interactions of the binaries with the disk, which remove primordial material \citep{Lubow2000}. Based on these assumptions, the ``weak'' excess from the circumstellar material in the SEDs of the CDs, enhanced by the flux of the secondary, could indicate a lower probability for the formation of a planet at radii of $a \leq 10$ au in very close binary stars. On the other hand, single stars are more likely than close binary stars to host forming planets at inner radii of $<$ 10 au, where most of the planet formation might actually take place. Because the time available to form any planet(s) in a circumstellar disk might vary depending on the initial conditions and the evolution of the disk, future surveys need to characterize the distribution of disk masses in CDs around close binary and single stars, which, together with the accretion rates, will establish the physical parameters constraining where and when planets form in those systems. Additionally, accretion rates have been used to estimate the dissipation of the primordial disks once accretion stops; however, we did not find any trend in $\dot{M}_{*}$, or any difference between close binary and single stars in our sample, that helps us to constrain the timescales of these systems.
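The binary-disk fraction of 0.38 $\pm$ 0.09 derived in this work follows from the Bayesian treatment with a Jeffreys prior outlined in Section 5. For readers who want the flavour of that calculation, a minimal version for a binomial likelihood with a Jeffreys (Beta(1/2, 1/2)) prior is sketched below; the counts used here are purely illustrative and do not reproduce the exact bookkeeping of the sample.
\begin{verbatim}
import numpy as np
from scipy import stats

# Illustrative counts: k disks hosting a close binary out of n considered.
k, n = 11, 28

# Binomial likelihood + Jeffreys prior Beta(1/2, 1/2) -> Beta posterior.
posterior = stats.beta(k + 0.5, n - k + 0.5)

mean = posterior.mean()
lo, hi = posterior.interval(0.68)     # 68% equal-tailed credible interval
print("binary fraction = %.2f (+%.2f/-%.2f)" % (mean, hi - mean, mean - lo))
\end{verbatim}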
16
9
1609.02979
1609
1609.02989_arXiv.txt
We develop a magnetic ribbon model for molecular cloud filaments. These ribbons result from turbulent compression in a molecular cloud in which the background magnetic field sets a preferred direction. We argue that this is a natural model for filaments, based on the interplay between turbulence, strong magnetic fields, and gravitationally-driven ambipolar diffusion, rather than on pure gravity and thermal pressure. An analytic model for the formation of magnetic ribbons, which is based on numerical simulations, is used to derive the lateral width of a magnetic ribbon. This width differs from the thickness along the magnetic field direction, which is essentially the Jeans scale. We use our model to calculate a synthetic observed relation between the apparent width in projection and the observed column density. The relationship is relatively flat, similar to observations, and unlike the simple expectation based on a Jeans length argument.
The {\it Herschel Space Observatory} has revealed a wide-ranging network of elongated (filamentary) structures in molecular clouds \citep[e.g.,][]{and10,men10}. Even though filamentary structures in molecular clouds were already well established \citep[e.g.,][]{sch79}, the Herschel continuum maps of dust emission at 70-500 $ \rm{\mu} $m have achieved unprecedented sensitivity and revealed a deeper network of filaments, in both star-forming and non-star-forming molecular clouds. This implies that the filamentary network is an imprint of initial conditions, likely turbulence, rather than the result of pure gravitational instability. Furthermore, the prestellar cores and protostars, when present, are preferentially found along massive filaments. Much interpretation of the filaments has been based on the assumption that they are isothermal cylinders. This simplifies their analysis as their observed shape is then independent of most viewing angles and one can rely on established theoretical results about the equilibrium or collapse of infinite cylinders. \citet{and10} interpreted the observations in terms of the critical line mass of an isothermal cylinder $m_{\rm l,crit} = 2\,c_s^2/G$, where $c_s$ is the isothermal sound speed. For a mass per unit length $m > m_{\rm l,crit}$, a cylinder undergoes indefinite collapse as long as the gas is isothermal, and for $m < m_{\rm l,crit}$ it can settle into an equilibrium structure, although still unstable to clumping along its length into Jeans length sized fragments \citep{lar85}. \citet{and10} argue that star formation is initiated when $m > m_{\rm l,crit}$. A challenge to the view of filaments as cylinders is the magnetic field alignment inferred from polarized emission. \citet{pal13} find that large scale magnetic fields are aligned perpendicular to the long axis of the massive star-forming filaments \citep[see also][]{pla16}. This makes a circular symmetry of a cylinder about the long axis unlikely unless the magnetic field strength is dynamically insignificant. A more natural configuration is a magnetic ribbon, a triaxial object that is flattened along the direction of the large-scale magnetic field with its shortest dimension in that direction. In the lateral direction to the magnetic field, elongated structures can form due to turbulence and gravity. Indeed, simulations of turbulence accelerated star formation in a strongly magnetic medium \citep{li04,nak05,kud08,bas09,kud11} show the formation of ribbon-like structure in a layer that is flattened along the magnetic field direction. Magnetic ribbons have recently been investigated theoretically by \citet{tom14} and \citet{han15}. They study magnetohydrostatic equilibria of ribbons that arise from a parent filament of radius $R_0$, which is a free parameter in the problem. They find that a critical line-mass-to-flux ratio exists for collapse, in analogy to the critical mass-to-flux ratio for axisymmetric three-dimensional objects \citep{mou76}. A further challenge to filaments modeled as isothermal cylinders comes from the dust emission measurement of the FWHM of the mean column density profile relative to the axis of a filament \citep{arz11}. 
For example, figure 7 of \citet{arz11} shows that the FWHM values for 90 filamentary structures in low mass star forming regions cluster around a mean of $\sim 0.1$ pc with some scatter over two orders of magnitude range of mean column density\footnote{Molecular line emission studies of the Taurus region show wider mean thicknesses $\sim 0.4$ pc for filaments in velocity-integrated emission and $\sim 0.2$ pc for filaments in individual velocity channels \citep{pan14}}. However, \citet{ost64} showed that the central half-mass radius of an equilibrium isothermal cylinder is $a \propto c_s/\sqrt{G\rho_c}$, essentially the Jeans length, where $\rho_c$ is the central density. The projected column density of such a circularly symmetric configuration has a central flat region of size $a$ and column density $\Sigma_c = 2 \rho_c\,a$ \citep[see][]{dap09}, so that we can also write $a \propto c_s^2/(G\Sigma_c)$. Therefore, the approximate observed relation $a \simeq {\rm constant}$ is unlike the expected $a \propto \Sigma_c^{-1}$. However, the observed set of values of the FWHM radii also intersect the line of Jeans length at the median log column density, which implies that the Jeans length may not be wholly unrelated to them. In this paper, we explore the consequences of a magnetic ribbon model for molecular cloud filaments for the measured relation between apparent width and the observed column density. We argue that this is a more natural model for filaments and is based on the interplay between turbulence, strong magnetic fields, and gravitationally-driven ambipolar diffusion, rather than pure gravity and thermal pressure. We extend the analytic model of \citet{kud14} for the formation of magnetic ribbons that is based on numerical simulations. We derive a lateral width of a magnetic ribbon and use it to calculate a synthetic observed relation between apparent width in projection versus observed column density.
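To put numbers on the scalings discussed above, the short script below evaluates the critical line mass $m_{\rm l,crit} = 2\,c_s^2/G$ and an Ostriker-type width $a \sim c_s^2/(G\Sigma_c)$ for 10 K molecular gas; the assumed column density for $\Sigma_c$, the mean molecular weight, and the dropped order-unity prefactors make the results indicative only.
\begin{verbatim}
import numpy as np

G     = 6.674e-11        # m^3 kg^-1 s^-2
k_B   = 1.381e-23        # J/K
m_H   = 1.673e-27        # kg
M_sun = 1.989e30         # kg
pc    = 3.086e16         # m

T, mu = 10.0, 2.33                       # K, mean molecular weight
c_s = np.sqrt(k_B*T/(mu*m_H))            # isothermal sound speed (~0.19 km/s)

m_crit = 2.0*c_s**2/G                    # critical line mass
print("m_l,crit ~ %.1f Msun/pc" % (m_crit*pc/M_sun))   # ~16-17 Msun/pc

# Jeans-like width a ~ c_s^2/(G Sigma_c) for an assumed central column
# density N(H2) = 1e22 cm^-2; order-unity prefactors are dropped.
N       = 1e22*1e4                       # m^-2
Sigma_c = mu*m_H*N                       # kg m^-2
print("a ~ %.2f pc" % (c_s**2/(G*Sigma_c)/pc))
\end{verbatim}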
We have presented a minimum hypothesis model for the width of a filament in a molecular cloud in which magnetic fields and magnetohydrodynamic turbulence are initially dominant. A turbulent compression leads to a magnetic ribbon whose thickness is set by the standoff between ram pressure and magnetic pressure. Gravitationally-driven ambipolar diffusion then leads to runaway collapse of the densest regions in the ribbon, where the mass-to-flux ratio has become supercritical. This process has been demonstrated in published simulations of trans-Alfv\'enic turbulence in a cloud with an initial subcritical mass-to-flux ratio \citep[e.g.,][]{nak05,kud11}. We have extended the semi-analytic model of \citet{kud14} to estimate the lateral width of such ribbons (perpendicular to the magnetic field and the ribbon long axis). This quantity is independent of the density of the ribbon. This lateral width can also be used to estimate the parent filament radius $R_0$ in the theoretical magnetic ribbon model of \citet{tom14}. In our model, the thickness parallel to the magnetic field is essentially the Jeans scale and does depend on density. Hence, we calculate a distribution of apparent widths seen in projection assuming a random set of viewing angles. The resulting distribution of apparent widths versus apparent column density is relatively flat (unlike expectations based on the Jeans length) over the range $10^{21}$ cm$^{-2}$ -- $10^{23}$ cm$^{-2}$, in rough agreement with the observations of \citet{arz11}. Other models have been introduced to explain the apparent near-uniform width of observed filaments. \citet{fis12} introduce an external pressure to an isothermal cylinder and find that the FWHM versus column density is a peaked function and approximately flat in the regime where $m_{\rm l}$ is well below $m_{\rm l,crit}$ and the external pressure is comparable to the central pressure. However, filaments with $m_{\rm l} > m_{\rm l,crit}$ would be in a time-dependent state of dynamical collapse. \citet{hen13} develop a model of a cylindrical self-gravitating filament that is accreting at a prescribed rate. A near-uniform radius is derived based on the assumption of a steady-state balance between energy input from accretion and dissipation of energy by ion-neutral friction at the filament radius scale. \citet{hei13} also develops a model of accretion at the free-fall rate onto a filament with $m_{\rm l} < m_{\rm l,crit}$ and uses various prescribed forms of internal structure to find that the FWHM has a peaked dependence on column density. A series of simulation papers \citep{smi14,kir15,fed16} use hydrodynamic or MHD simulations (with supercritical mass-to-flux ratio) and analyze filament widths at particular snapshots in time. Although their filament widths cluster at $\sim 0.1$ pc with some scatter, there is a mild to strong density dependence of the widths, and the filaments are single time snapshots in a situation of continuing collapse. \citet{fed16} suggests that $\sim 0.1$ pc is special since the linewidth-size relation of \citet{lar81} would lead to subsonic turbulence below that scale, but it is not clear if his simulations satisfy this scaling internally. We believe that the magnetic ribbon model provides an alternative simplified interpretation that accounts for turbulence and strong magnetic fields. We have developed a method to estimate the width of a magnetic ribbon based on the characteristic scale and amplitude of MHD turbulence. 
Such ribbons can have a line mass that exceeds the hydrodynamic limit $2 c_s^2/G$ and still be in a dynamically oscillating quasi-equilibrium state. However, gravity still leads to star formation in the dense interior through rapid ambipolar diffusion.
16
9
1609.02989
1609
1609.03177_arXiv.txt
We present three-dimensional maps of the monochromatic extinction $A_{\rm 0}$ and the extinction parameter $R_0$ within a few degrees of the Galactic plane. These are inferred using photometry from the Pan-STARRS1 and Spitzer Glimpse surveys for nearly $20$ million stars located in the region $l = 0-250^{\circ}$ and from $b = -4.5^{\circ}$ to $b=4.5^{\circ}$. Given the available stellar number density, we use an angular resolution of $\unit{7}{'} \times \unit{7}{'}$ and steps of $\unit{1}{mag}$ in distance modulus. We simultaneously estimate the distance modulus and effective temperature $T_{\rm eff}$ alongside the other parameters for each star individually using the method of \citet{Hanson2014}, before combining these estimates into a complete map. The full maps are available via the \textit{MNRAS} website.
Recently, several new studies analysing the distribution of extinction and dust in the Galaxy have appeared, emphasising the importance of improving our understanding of this key component of the Milky Way Galaxy. Having moved on from the two-dimensional maps that can only characterise the total line of sight extinction \citep[e.g.][]{Schlegel1998}, we can now estimate extinction in three dimensions, utilising several large-scale photometric surveys to infer individual stellar parameters and distances to millions of stars. \citet{Marshall2006} use red giant stars to map extinction using near infrared data from 2MASS based on a Galactic model. \citet{Gonzalez2011, Gonzalez2012} similarly compare colours of red clump stars to reference measurements in Baade's window to obtain a high-resolution map of the central bulge. \citet{Berry2012} compare SDSS and 2MASS photometry to the spectral energy distribution from stellar templates, performing a $\chi^2$ fit to the data. Analogously, \citet{Chen2014} analyse XSTPS-GAC, 2MASS and WISE data on the Galactic anti-centre. In recent years, several new methodological approaches have been introduced, in particular Bayesian ones. \citet{Bailer-Jones2011} uses our understanding of the Hertzsprung-Russell diagram (HRD) to put a prior on the available stellar parameter space, and simultaneously infers extinction, effective temperature and distances to stars, based on broadband photometry and Hipparcos parallaxes. \citet{Hanson2014} expand this method to use SDSS and UKIDSS data when parallaxes are absent and also to infer the extinction parameter at high Galactic latitudes. \citet{Sale2014} use a hierarchical Bayesian system developed in \citet{Sale2012} applied to IPHAS data to map extinction in the northern Galactic plane. \citet{Green2014} and \citet{Schlafly2014b} combine Galactic priors to obtain probabilistic three dimensional extinction estimates for most of the Galaxy above declination $-30$ degrees with Pan-STARRS1 data. \citet{Vergely2010} and \citet{Lallement2014} apply an inversion method to data from multiple surveys to map the local interstellar medium in particular. In \citet{Hanson2014} we demonstrated the method used in the present work on SDSS and UKIDSS data of the Galactic poles, finding good agreement with other studies. In this work we use Pan-STARRS1 \citep{Kaiser2010} and Spitzer IRAC data from the GLIMPSE (\textit{Galactic Legacy Infrared Mid-Plane Survey Extraordinaire}) surveys \citep{Churchwell2009,Benjamin2003} to probe the inner few degrees of the Galactic plane, thereby covering more diverse regions of extinction and its variation. This allows us to not only map the line-of-sight extinction but also to quantify the variation of the extinction parameter which characterises the properties and size distribution of dust grains in the interstellar medium. The paper is organised as follows. In Section~\ref{sec:method} we summarise the method used here, focussing on how we construct the maps presented later on. In Section~\ref{sec:data} we describe the surveys and data products we use to construct the map. Results are presented in Section~\ref{sec:extinctionmap}, where we illustrate the performance and validity of our results. We close with a conclusion and discussion in Section~\ref{sec:conclusion}, suggesting future steps and goals. The map data are available via the \text{MNRAS} website.
\label{sec:conclusion} We have presented three-dimensional maps of the cumulative line-of-sight extinction $A_{\rm 0}$ and the extinction parameter $R_0$, which are constructed using a Bayesian method. This method is general and not bound to specific photometric systems. It is based on work by \citet{Bailer-Jones2011} and expanded in \citet{Hanson2014}. We take advantage of the physical understanding of stellar evolution that is encapsulated in the Hertzsprung-Russell Diagram. Using photometric measurements of $19\,885\,031$ stars with data from the cross-matched Pan-STARRS1 and Spitzer Glimpse surveys (six bands in total), we infer the extinction $A_{\rm 0}$, extinction parameter $R_0$, effective temperature $T_{\rm eff}$ and distance modulus $\mu$ for all stars individually. We achieve mean relative uncertainties of $0.17$, $0.09$, $0.04$ and $0.18$ for extinction, extinction parameter, effective temperature and distance modulus, respectively, whilst obtaining average uncertainties of $\unit{0.17}{mag}$, $0.36$, $\unit{185}{K}$ and $\unit{2.6}{mag}$ for the four parameters. We emphasise that while we believe the $R_0$ variations we measure, we are less confident in the absolute values. Using these inferred parameters we compute the estimated total extinction to arbitrary distances and estimates of the extinction parameter, as formulated in Equation~\ref{eq:extinctionmap-weightedmean}. The angular stellar density allows us to achieve a reliable resolution of $\unit{7}{'} \times \unit{7}{'}$ in latitude and longitude. We select steps of $\unit{1}{mag}$ in distance modulus. From the distribution of distance estimates within all three-dimensional cells, we estimate that the reported extinction map is reliable from $\mu = \unit{6-13}{mag}$. At closer distances we have too few stars for trustworthy estimates due to the bright magnitude limits of both surveys. Beyond that distance range, individual estimates become too uncertain. We do not expect many stars beyond that distance due to the faint magnitude limits, so we do not report values outside this range. We find that the extinction law varies from one line of sight to another and along the line of sight, supporting previous works which contend that using a single value to parametrize extinction is insufficient to properly model the three-dimensional dust distribution in the Galaxy. The data are available via the \textit{MNRAS} website. As previously discussed in \citet{Hanson2014}, the key limitation at this stage is the distance inference, which is limited by photometric errors and intrinsic model degeneracies. Furthermore, on account of our use of stellar models to estimate stellar effective temperatures, there are likely to be systematic uncertainties in our estimates of $A_0$ and $R_0$. These enter through the assumption of ``true'' model temperatures, the use of an HRD prior, and the lack of metallicity variations \citep[again, see][]{Hanson2014}. In addition, our extinction estimates for individual lines of sight do not account for correlations in the angular dimensions. That is, neighbouring lines-of-sight are solved for independently. This clearly does not mirror reality, where the extinction estimates for stars that are close in space (and whose photons are affected by the same dust structures) should be strongly correlated, whereas those of stars that have a large separation should be less so. Theoretically, due to the finite cross-sectional area of a line-of-sight, a more distant star could show less extinction. This shortcoming is now starting to be addressed. 
\citet{SaleMagorrian2014} introduce a method based on Gaussian random fields and a model of interstellar turbulence, which addresses the discontinuities we currently see in most extinction maps. \citet{Lallement2014} use an inversion method with spatial correlation kernels that attempts to reconstruct structures of the ISM in a more realistic manner. Combining current large area photometric surveys, such as those employed here, with parallax measurements from \textit{Gaia} will enable us to construct accurate 3D maps of stars in the Galaxy. Including stellar parameter estimates from future data releases by the Data Processing and Analysis Consortium (\textit{DPAC}), as summarised in \citet{Bailer-Jones2013}, will significantly increase our capabilities of reconstructing the full three dimensional distribution of dust.
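The precise weighting used to combine the individual stellar estimates into the map is given by the equation referenced above, which is not reproduced in this excerpt. As a generic stand-in, the sketch below shows an inverse-variance weighted mean of the stellar extinction estimates falling in one angular cell and distance bin; the weighting scheme and the toy numbers are illustrative assumptions, not necessarily those adopted in the paper.
\begin{verbatim}
import numpy as np

def voxel_extinction(A0, sigma_A0):
    """Inverse-variance weighted mean extinction of the stars in one voxel.

    A0, sigma_A0: individual stellar extinction estimates and their
    uncertainties (illustrative weighting, not the paper's exact equation).
    """
    A0, sigma_A0 = np.asarray(A0), np.asarray(sigma_A0)
    w = 1.0/sigma_A0**2
    return np.sum(w*A0)/np.sum(w), 1.0/np.sqrt(np.sum(w))

# Toy example: five stars in one (7'x7', 1 mag in distance modulus) cell.
print(voxel_extinction([1.2, 1.4, 1.1, 1.6, 1.3],
                       [0.2, 0.3, 0.2, 0.4, 0.25]))
\end{verbatim}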
16
9
1609.03177
1609
1609.06278_arXiv.txt
{A lot of effort has been put into the detection and determination of stellar magnetic fields using the spectral signal obtained from the combination of hundreds or thousands of individual lines, an approach known as ``multi-line techniques''. However, most of the multi-line techniques developed so far to retrieve stellar mean longitudinal magnetic fields resort to sometimes heavy simplifications concerning line shapes and Zeeman splittings.} {To determine stellar longitudinal magnetic fields by means of the Principal Components Analysis and Zeeman Doppler Imaging (PCA-ZDI) multi-line technique, based on accurate polarised spectral line synthesis.} {In this paper we present the methodology to perform inversions of profiles obtained with PCA-ZDI.} {Inversions with various magnetic geometries, field strengths and rotational velocities show that we can correctly determine the effective longitudinal magnetic field in stars using the PCA-ZDI method.} {}
It is well known that magnetic solar-type stars host weak longitudinal fields, typically of the order of a few tens of Gauss (e.g. \citealt{Marsdenetal2014}). For such field strengths, Stokes $V$ circular polarisation signatures of individual spectral lines are generally well below the noise level. The use of so-called multi-line techniques makes it possible to overcome this problem through the ``addition'' of multiple individual lines in Doppler space, resulting in a ``mean'' circular polarisation profile, the so-called Multi-Zeeman-Signature (MZS), which greatly increases the signal-to-noise ratio. Starting with the pioneering work of \citet{SemelLi1996}, different techniques have been developed over the following two decades for the establishment of Multi-Zeeman-Signatures. The most popular appears to be the Least Squares Deconvolution (LSD) technique first described in \citet{Donatietal1997}. Two assumptions underlie the LSD technique. The first one states that the local circular polarisation profiles to be added are all of similar shape; the second postulates that the Zeeman broadening is very small compared to the thermal Doppler broadening, such that the weak field approximation can safely be applied to the Stokes profiles. As a consequence, the coupled system of equations of polarised radiative transfer does not have to be formally solved. Instead, under a perturbative scheme the system of equations can be partially decoupled, permitting an analytical solution: to first order the circular polarisation profile is proportional to the first derivative of the intensity, and to second order, the linearly polarised profiles are proportional to the second derivative of the intensity (e.g. \citealt{LandiLandolfi2004}). In recent times, LSD and the weak field approximation (WFA) have been widely employed at different levels of sophistication with the ultimate goal of precisely measuring weak stellar magnetic fields on the basis of MZSs (\citealt{KochukhovMaPi2010}, \citealt{Kochukhov2015}, \citealt{MartinezAsensio2012}, \citealt{Martinezetal2012}, \citealt{CarrollStr2014}, \citealt{AsensioPet2015}). An alternative technique based on Principal Components Analysis (PCA) was proposed by \citet{Martinezetal2008}. In this work, the MZS is derived by means of the addition of many lines in Doppler space (as in LSD), but a denoising procedure, the filtering of uncorrelated noise, is applied to the individual lines to increase the signal-to-noise ratio in the final MZS. It is an important feature of this technique that similarity between the individual circularly polarised line profiles is not required; neither is it necessary to invoke the WFA. The validity of this robust technique for the analysis of magnetic fields has been proved with numerical simulations, but the method has not yet been applied to the quantitative {\em measurement} of field strengths. In fact, most current techniques devoted to the analysis of stellar magnetic fields by means of MZSs avoid spectral line synthesis on account of the computing resources required in view of the wide spectral ranges involved (thousands of Angstroms). To help remedy this situation, we present the basis of a novel technique for the determination of stellar longitudinal magnetic fields, a method that is based on spectral line synthesis incorporating detailed polarised radiative transfer.
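For reference, the first-order weak-field relation described above can be written in its standard textbook form (cf. \citealt{LandiLandolfi2004}), here in Gaussian units with $\lambda_0$ the line wavelength in \AA{}, $\bar{g}$ the effective Land\'e factor, $B$ the field modulus in gauss, and $\gamma$ the angle between the field and the line of sight:
\begin{equation}
V(\lambda) \simeq -\Delta\lambda_B \, \cos\gamma \, \frac{\partial I(\lambda)}{\partial \lambda},
\qquad
\Delta\lambda_B = \frac{e}{4\pi m_{\rm e} c^2}\,\bar{g}\,\lambda_0^2\,B \simeq 4.67\times10^{-13}\,\bar{g}\,\lambda_0^2\,B \;\; \mbox{\AA},
\end{equation}
so that, to this order, the circular polarisation signal scales linearly with the line-of-sight field component $B\cos\gamma$.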
Spectropolarimetry constitutes the optimum observational technique for the study of solar and stellar magnetism. These data are best analysed with the help of codes that implement spectral line synthesis in the presence of magnetic fields, solving the coupled equations of polarised radiative transfer. It is highly desirable that any multi-line approach to the determination of global magnetic quantities, such as the effective longitudinal field $H_{\rm eff}$ or the mean magnetic field modulus $H_{\rm s}$, should incorporate full opacity sampling and a correct treatment of polarised radiative transfer when calculating local Stokes $IQUV$ profiles. This is exactly what we have done, using a database of Stokes profiles established with the polarised line synthesis code {\sc Cossam} and MZSs obtained by means of the PCA-ZDI technique. Efforts to derive $H_{\rm eff}$ from MZSs obtained with PCA-ZDI date back to \citet{Semeletal2009}, who showed that the centre of gravity method can be applied to the MZSs for an estimate of magnetic field strengths. Additionally, using all four Stokes parameters, \citet{Ramirezetal2010} demonstrated that MZSs correctly encode the information on the magnetic field, meaning that both the strength and the orientation of the magnetic field can successfully be recovered for field strengths of up to 10\,kG. In these two papers, the results were obtained using ``solar'' MZSs, i.e. any effect of rotation was neglected and no inhomogeneous spatial distribution of magnetic strengths over the surface was assumed. The present study introduces a novel approach that can be applied to the analysis of spectropolarimetric data. The principal idea behind this method consists in the use of broadened ``solar'' MZSs to infer the effective longitudinal field $H_{\rm eff}$. We have shown that the BFs are an effective tool to properly reproduce the rotational and magnetic broadening effects on the Stokes profiles in wavelength space, much as they are for the MZSs in Doppler space. We have tested our approach for different moderate values of $v\,\sin i$, obtaining good results in all cases. Gratifyingly enough, it has turned out that the results do not depend on whether one uses fixed grids or adaptive grids. {\sc Cossam}, the polarised spectral line synthesis code employed in this work, is considered a reference for the calculation of stellar Stokes parameters (\citealt{Wadeetal2001}, \citealt{CarrollKopStr2008}). For the inversion -- in the general case -- of the stellar MZSs, we adopted fixed stellar atmospheric model parameters ($T_{\rm eff}$, $\log g$, $[M/H]$, $v_{\rm turb}$, $\xi$), but we freely varied the parameters of the magnetic model incorporated in {\sc Cossam}, viz. the magnetic moment $m$, the Eulerian angles $\alpha$, $\beta$, $\gamma$, the vector of the dipole offset [0, $x_2$, $x_3$] and the inclination $i$. We were able to demonstrate that broadened ``solar'' MZSs can reproduce the shape of stellar MZSs, providing us with the possibility to determine $H_{\rm eff}$. Currently, {\sc Cossam} assumes a tilted eccentric dipole, but our technique should also work for quadrupolar or higher-order configurations, even for magnetic fields concentrated in starspots. Computing times for the (point-source) ``solar'' MZSs are of course much shorter than those for (integrated) stellar MZSs, which increase proportionally to the number of spatial quadrature points. 
Still, for the approach presented here extensive calculations are required to arrive at a significant number of stellar MZSs needed to establish the BFs that serve to broaden the ``solar'' MZSs. About 3 weeks had to be spent on a 56 core workstation to obtain 7500 stellar MZSs covering the interval from 350 to 1000\,nm in steps of 1\,km\,s$^{-1}$, and adopting an 80 point spatial grid. It might at this point legitimately be asked why one should spend such an effort on the determination of $H_{\rm eff}$ instead of trying to directly model the stellar MZSs. Given the huge number of combinations in the parameter space of tilted eccentric dipole model, an obvious reason for wanting to know the effective longitudinal field at various phases is to reduce this number, possibly by a very large amount. The availability of (even modestly sized) supercomputers can greatly increase the attractiveness of our method. In a forthcoming paper we shall present the application of this technique to observational data.
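As an illustration of the centre-of-gravity approach mentioned in this section, a minimal numerical version of the commonly used first-moment estimator of the longitudinal field is sketched below. The $2.14\times10^{11}$ normalisation assumes a mean wavelength in nm and velocities in km\,s$^{-1}$, and the mean wavelength, effective Land\'e factor, and toy profiles are placeholders rather than quantities taken from this work.
\begin{verbatim}
import numpy as np

def longitudinal_field(v, I, V, lambda0_nm=550.0, g_eff=1.2):
    """First-moment (centre-of-gravity) estimate of the longitudinal field.

    v: velocity grid in km/s; I: intensity profile normalised to the
    continuum; V: Stokes V/Ic profile. Returns the field in gauss.
    lambda0_nm and g_eff are placeholder mean line parameters.
    """
    c   = 2.998e5                       # speed of light, km/s
    dv  = v[1] - v[0]                   # uniform velocity step
    num = np.sum(v*V)*dv                # first moment of Stokes V
    den = np.sum(1.0 - I)*dv            # equivalent width in km/s
    return -2.14e11*num/(lambda0_nm*g_eff*c*den)

# Toy Gaussian line with an antisymmetric Stokes V signature.
v = np.linspace(-50.0, 50.0, 501)
I = 1.0 - 0.4*np.exp(-(v/8.0)**2)
V = 1e-3*(v/8.0)*np.exp(-(v/8.0)**2)
print("B_eff ~ %.0f G" % longitudinal_field(v, I, V))   # roughly -11 G here
\end{verbatim}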
16
9
1609.06278
1609
1609.08323_arXiv.txt
{We aim at developing an efficient method to search for late-type subdwarfs (metal-depleted dwarfs with spectral types $\geq$ M5) to improve the current statistics. Our objectives are to improve our knowledge of metal-poor low-mass dwarfs, bridge the gap between the late-M and L types, determine their surface density, and understand the impact of metallicity on the stellar and substellar mass function.} {We carried out a search cross-matching the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) and the Two Micron All Sky Survey (2MASS), and different releases of SDSS and the United Kingdom InfraRed Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS), using STILTS, Aladin, and Topcat, tools developed as part of the Virtual Observatory. We considered different photometric and proper motion criteria for our selection. We identified 29 and 71 late-type subdwarf candidates in each cross-correlation over 8826 and 3679 square degrees, respectively (with 2312 square degrees of overlap). We obtained our own low-resolution optical spectra for 71 of our candidates: 26 were observed with the Gran Telescopio de Canarias (GTC; R\,$\sim$\,350, $\lambda\lambda$5000--10000\,\,\AA{}), six with the Nordic Optical Telescope (NOT; R\,$\sim$\,450, $\lambda\lambda$5000--10700\,\,\AA{}), and 39 with the Very Large Telescope (VLT; R\,$\sim$\,350, $\lambda\lambda$6000--11000\,\,\AA{}). We also retrieved spectra for 30 of our candidates from the SDSS spectroscopic database (R\,$\sim$\,2000 and $\lambda\lambda$\,3800--9400\,\,\AA{}), nine of which also have an independent spectrum from our follow-up. We classified 92 candidates based on 101 optical spectra using two methods: spectral indices and comparison with templates of known subdwarfs.} {We developed an efficient photometric and proper motion search methodology to identify metal-poor M dwarfs. We confirmed 86\% and 94\% of the candidates as late-type subdwarfs from the SDSS vs 2MASS and SDSS vs UKIDSS cross-matches, respectively. These subdwarfs have spectral types ranging between M5 and L0.5 and SDSS magnitudes in the $r$\,=\,19.4--23.3 mag range. Our new late-type M discoveries include 49 subdwarfs, 25 extreme subdwarfs, six ultrasubdwarfs, one subdwarf/extreme subdwarf, and two dwarfs/subdwarfs. In addition, we discovered three early-L subdwarfs to add to the current compendium of L-type subdwarfs known to date. We doubled the numbers of cool subdwarfs (11 new from SDSS vs 2MASS and 50 new from SDSS vs UKIDSS). We derived a surface density of late-type subdwarfs of 0.040$^{+0.012}_{-0.007}$ per square degree in the SDSS DR7 vs UKIDSS LAS DR10 cross-match ($J$\,=\,15.9--18.8 mag) after correcting for incompleteness. The density of M dwarfs decreases with decreasing metallicity. We also checked the Wide-field Infrared Survey Explorer (AllWISE) photometry of known and new subdwarfs and found that the mid-infrared colours of M subdwarfs do not appear to differ from those of their solar-metallicity counterparts of similar spectral types. However, the near-to-mid-infrared colours $J-W2$ and $J-W1$ are bluer for lower metallicity dwarfs, a result that may be used as a criterion to look for late-type subdwarfs in future searches.} {0}
\label{sdM_VO:intro} Subdwarfs have luminosity class VI in the Yerkes spectral classification system and lie below the main sequence in the Hertzsprung-Russell diagram \citep{morgan43}. Subdwarfs appear less luminous than solar-metallicity dwarfs with similar spectral types, due to the lack of metals in their atmospheres \citep{baraffe97}. They have typical effective temperatures ($T_{eff}$) between $\sim$\,2500 and 4000\,K, an interval that depends on metallicity \citep{woolf09}. Subdwarfs are Population II dwarfs located in the halo and the thick disk of the Milky Way. They are part of the first generations of stars and can be considered tracers of the Galactic chemical history. They are very old, with ages between 10 and 15 Gyr \citep{burgasser03b}. Subdwarfs have high proper motions and large heliocentric velocities \citep{gizis97a}. In the same way as ordinary main-sequence stars, stellar cool subdwarfs\footnote{We will use the terms subdwarfs and cool subdwarfs interchangeably when referring to our targets.} produce their energy from hydrogen fusion and show strong metal-hydride absorption bands and metal lines. Some L dwarfs with low-metallicity features have been found over the past decade, but no specific classification exists for L subdwarfs yet. \citet{gizis97a} presented the first spectral classification for M subdwarfs, dividing them into two groups: subdwarfs and extreme subdwarfs. The classification was based on the strength of the TiO and CaH absorption bands at optical wavelengths. \citet{lepine07c} updated the \citet{gizis97a} classification using a parameter which quantifies the weakening of the strength of the TiO band in the optical as a function of metallicity, introducing a new class of subdwarfs: the ultrasubdwarfs. The current classification of low-mass M stars includes dwarfs and three low-metallicity classes: subdwarfs, extreme subdwarfs, and ultrasubdwarfs, with approximate metallicities of $-$0.5, $-$1.0, and $-$2.0, respectively \citep{lepine07c}. \citet{jao08} also proposed a classification for cool subdwarfs based on temperature, gravity, and metallicity. The typical methods to identify subdwarfs focus on proper motion and/or photometric searches in photographic plates taken at different epochs \citep{luyten79,luyten80,scholz00,lepine03b,lodieu05b}. Nowadays, the existence of large-scale surveys mapping the sky at optical, near-infrared, and mid-infrared wavelengths offers an efficient way to look for these metal-poor dwarfs. After the first spectral classification for M subdwarfs proposed by \citet{gizis97a}, other authors contributed to the increase in the numbers of this type of object. New M subdwarfs with spectral types later than M7 were published in \citet{gizis97b}, \citet{schweitzer99}, \citet{lepine03a}, \citet{scholz04c}, \citet{scholz04b}, \citet{lepine08b}, \citet{cushing09}, \citet{kirkpatrick10}, \citet{lodieu12b}, and \citet{zhang13}. The largest samples come from \citet{lepine08b}, \citet{kirkpatrick10}, \citet{lodieu12b}, and \citet{zhang13} and include 23, 15, 20, and 30 new cool subdwarfs, respectively. \citet{burgasser03b} published the first ``substellar subdwarf'', with spectral type (e?)sdL7. It was followed by a sdL4 subdwarf \citep{burgasser04} and years later by seven other L subdwarfs: a sdL3.5--4 in \citet{sivarani09}, a sdL5 in \citet{cushing09}, a sdL5 in \citet{lodieu10a} (re-classified in this paper as sdL3.5--sdL4), a sdL1, sdL7, and sdL8 in \citet{kirkpatrick10}, and a sdL5 in \citet{schmidt10a} and also in \citet{bowler10a}. 
Our group published two new L subdwarfs \citep{lodieu12b}. In this work we add three more, with spectral types sdL0 and sdL0.5\@. The coolest L subdwarfs might have masses close to the star-brown dwarf boundary for subsolar metallicity according to models \citep{baraffe97,lodieu15a}. The main purpose of this work is to develop an efficient method to search for late-type subdwarfs in large-scale surveys to increase their numbers using tools developed as part of the Virtual Observatory (VO)\footnote{http://www.ivoa.net} like STILTS\footnote{www.star.bris.ac.uk/$\sim$mbt/stilts} \citep{taylor06}, Topcat\footnote{www.star.bris.ac.uk/$\sim$mbt/topcat} \citep{taylor05}, and Aladin\footnote{aladin.u-strasbg.fr} \citep{bonnarel00}. We want to improve our knowledge of late-type subdwarfs, bridge the gap between late-M and L spectral types, determine the surface densities for each metallicity class, and understand the role of metallicity on the mass function from the stellar to the sub-stellar objects. This is the second paper of a long-term project with several global objectives. The first paper was already published in \citet{lodieu12b}, where we cross-matched SDSS DR7 and UKIDSS LAS DR5, reporting 20 new late-type subdwarfs. In this second paper, we present the second part of our work, reporting new subdwarfs identified in SDSS DR9 \citep{york00}, UKIDSS LAS DR10 \citep{lawrence07}, and 2MASS \citep{cutri03,skrutskie06}.
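The cross-matching itself was performed with the Virtual Observatory tools named above; for readers who prefer to script it, an equivalent minimal sky cross-match with a colour and proper-motion cut might look like the sketch below. The 2 arcsec matching radius, the colour and proper-motion cuts, and the toy catalogue values are illustrative assumptions, not the exact selection criteria adopted in this work.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Toy optical and near-infrared catalogues (positions in deg, magnitudes,
# total proper motion in mas/yr); all numbers are purely illustrative.
opt = SkyCoord(ra=[150.0010, 150.2000]*u.deg, dec=[2.3000, 2.4500]*u.deg)
r   = np.array([20.1, 19.8])
mu  = np.array([250.0, 30.0])                    # mas/yr

nir = SkyCoord(ra=[150.0012, 150.9000]*u.deg, dec=[2.3001, 2.7000]*u.deg)
J   = np.array([17.0, 16.2])

idx, d2d, _ = opt.match_to_catalog_sky(nir)      # nearest NIR neighbour
matched = d2d < 2.0*u.arcsec                     # illustrative match radius

# Illustrative red-colour and high-proper-motion cuts for candidates.
candidates = matched & (r - J[idx] > 2.5) & (mu > 100.0)
print(np.where(candidates)[0])
\end{verbatim}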
\label{sdM_VO:conclusions} We have demonstrated that cross-correlations between optical and near-infrared large-scale surveys represent a very powerful tool to identify cool subdwarfs. Combining this study with \citet{lodieu12b}, we have increased the number of late-type subdwarfs by a factor of two and confirmed spectroscopically four new L subdwarfs. In this work, we report 68 new metal-poor M dwarfs, divided up into 36 subdwarfs, 26 extreme subdwarfs, and six ultrasubdwarfs, to which we should add two L subdwarfs and two dM/sdM. Our photometric and astrometric search shows success rates beyond the 80\% mark. The spectrophotometric distances of our new late-type subdwarfs range from 50 to $\sim$\,500 pc for subdwarfs and extreme subdwarfs. We inferred a surface density for M-type subdwarfs of 0.033--0.052 per square degree in the SDSS/UKIDSS cross-match in the $J$\,=\,15.9--18.8 mag range after correcting for incompleteness. We also note that the proper motions calculated from the catalogue positions and epochs can sometimes be erroneous. It is therefore necessary to check these proper motions and improve them to optimise further photometric and proper motion searches. We searched for wide companions of early spectral types to our subdwarfs in different catalogues, using our refined proper motions for those in SDSS vs UKIDSS and the PPMXL proper motions for those in 2MASS and SDSS\@. We found one potential bright companion in the Hipparcos-Tycho catalogue based on proper motion and spectroscopic distance. However, the lack of a metallicity estimate prevents a strong conclusion. We also cross-matched our sample of new late-type subdwarfs, as well as known subdwarfs, with the AllWISE database to investigate the role of metallicity in the mid-infrared. We conclude that subdwarfs with spectral types later than M7 appear bluer than their solar-abundance counterparts in the $J-W1$ and $J-W2$ colours, most likely due to the onset of the collision-induced H$_{2}$ opacity beyond 2 $\mu$m. We suggest this as a new colour criterion to look for ultracool subdwarfs in the future. The main objectives of this large project are to increase the number of metal-poor dwarfs, determine their space density, improve the current classification of M subdwarfs, and expand it to the L (and later T) subdwarf regime. We are now able to optimise our photometric and proper motion criteria and apply them to future searches in new data releases of optical, near-infrared, and mid-infrared large-scale surveys. This will allow us to increase the census of metal-poor dwarfs, especially at the coolest temperatures and lowest metallicities.
16
9
1609.08323
1609
1609.01728_arXiv.txt
In the next few years, intensity-mapping surveys that target lines such as CO, Ly$\alpha$, and CII stand to provide powerful probes of high-redshift astrophysics. However, these line emissions are highly non-Gaussian, and so the typical power-spectrum methods used to study these maps will leave out a significant amount of information. We propose a new statistic, the probability distribution of voxel intensities, which can access this extra information. Using a model of a CO intensity map at $z\sim3$ as an example, we demonstrate that this voxel intensity distribution (VID) provides substantial constraining power beyond what is obtainable from the power spectrum alone. We find that a future survey similar to the planned COMAP Full experiment could constrain the CO luminosity function to order $\sim10\%$. We also explore the effects of contamination from continuum emission, interloper lines, and gravitational lensing on our constraints and find that the VID statistic retains significant constraining power even in pessimistic scenarios.
Intensity mapping has arisen in recent years as a powerful new means to observe the high-redshift universe. Traditional galaxy surveys become less effective at great distances, as increasingly luminous galaxies start to fall below detection thresholds. Intensity-mapping surveys, first proposed by \citet{Suginohara1999}, make use of the emission from these fainter galaxies by measuring the intensity fluctuations of a chosen spectral line on large spatial scales. Such a survey thus makes use of the aggregate emission from all of the galaxies within a target volume, and can make statistical measurements of the entire galaxy population. By studying a single emission line, it is possible to observe these intensity fluctuations in three dimensions, as the observed frequency of a line maps one-to-one to the emission redshift. This allows detailed study of line emission in relatively unexplored periods of cosmic history. The fluctuations observed in an intensity mapping survey depend on the luminosity function and spatial distribution of the source galaxies. The luminosity function depends on the detailed astrophysical conditions within the emitters, such as star formation rates and metallicities, while the spatial distribution traces the underlying dark matter field, the properties of which in turn depend on cosmological parameters. Intensity mapping can thus provide information about a wide variety of cosmological and astrophysical topics. Typically, all of this information is extracted from a map using its power spectrum, a powerful statistic which has proven valuable for studying both galaxy distributions \citep{Tegmark1998} and the cosmic microwave background \citep{Planck2015c}. However, for highly non-Gaussian fields, the power spectrum, a two-point statistic, leaves out a significant amount of information. In the CMB, non-Gaussianities are typically probed using higher-point statistics such as the bispectrum and trispectrum \citep{Bartolo2010}. Unfortunately, these statistics are rather difficult to work with, from both a theoretical and an observational perspective. We propose instead to study the one-point statistics of intensity maps using a quantity we will refer to as the voxel intensity distribution, or VID. This VID statistic is the probability distribution function of observed voxel intensities. It can be predicted in a straightforward manner from a model luminosity function, and it can be estimated from a map simply by making a histogram of the observed intensity values. Our VID method is an extension of a technique known as probability of deflection, or $P(D)$ analysis, which is a general method for predicting observed intensities from confusion-limited populations. $P(D)$ analysis was originally developed for radio astronomy \citep{Scheuer1957}, but has since been applied to observations ranging from gamma rays \citep{Lee2009,Lee2015} to X-rays \citep{Barcons1994} to the infrared \citep{Glenn2010}. Since intensity maps provide deliberately confused observations of galaxy populations, they are good candidates for $P(D)$ analysis. \citet{Breysse2016a} first discussed this technique in the context of intensity mapping as a method to measure high-redshift star formation rates. Here, we study this method in far more detail with the goal of creating a procedure that can be readily applied to many different intensity-mapping surveys targeting different lines. 
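As a concrete illustration of the estimator just described, the sketch below builds a VID by histogramming the voxels of a mock intensity cube; the lognormal stand-in signal, the Gaussian noise level, and the binning are placeholder assumptions rather than the CO model used later in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Mock intensity cube (arbitrary brightness-temperature units): a lognormal
# stand-in for the highly non-Gaussian line signal plus Gaussian noise.
signal = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))
noise  = rng.normal(0.0, 5.0, size=signal.shape)
T = signal + noise

# Voxel intensity distribution: normalised histogram of the voxel values.
bins = np.linspace(-20.0, 60.0, 81)
counts, edges = np.histogram(T, bins=bins)
vid = counts/(counts.sum()*np.diff(edges))    # probability density per bin
print(vid[:5])
\end{verbatim}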
The most well-known line used for intensity mapping is the 21 cm spin-flip line from neutral hydrogen (see for example \citet{Morales2010} and references therein). Experiments such as the Precision Array for Probing the Epoch of Reionization (PAPER, \citealt{Ali2015}), the Murchison Widefield Array (MWA, \citealt{Tingay2013}), the LOw-Frequency ARray (LOFAR, \citealt{Haarlem2013}), the Hydrogen Epoch of Reionization Array (HERA, \citealt{DeBoer2016}) and the Square Kilometer Array (SKA, \citealt{Santos2015}) seek to study the epoch of reionization, while experiments like the Canadian Hydrogen Intensity Mapping Experiment (CHIME, \citealt{Bandura2014}) and the Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX, \citealt{Newburgh2016}) seek to study galaxies around $z\sim1$. However, different lines probe different astrophysical processes, and have to deal with vastly different foregrounds and systematic effects, so there has been a recent effort to study lines besides 21 cm. The Lyman $\alpha$ line \citep{Pullen2014,Comaschi2016}, targeted by the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx, \citealt{Dore2014}), also traces hydrogen gas, but in hotter environments than 21 cm. Ionized regions can be studied using lines such as the 158 $\mu$m CII fine structure line \citep{Yue2015,Silva2015}, with experiments such as the planned Tomographic Intensity Mapping Experiment (TIME, \citealt{Crites2014}). Rotational transitions of CO molecules, sought by the CO Power Spectrum Survey (COPSS, \citealt{Keating2016}) and the CO Mapping Array Pathfinder (COMAP, \citealt{Li2016}), probe cool, dense molecular gas \citep{Righi2008,Pullen2013,Breysse2014}. Many other lines, including Balmer series and molecular hydrogen lines as well as lines from helium, oxygen, and nitrogen, have also been discussed in the literature \citep{Visbal2010,Gong2013,Visbal2015,Fonseca2016}. Below, we provide a detailed formalism for the VID statistic that should be readily applicable to a wide variety of intensity mapping models. In order to demonstrate the efficacy of this method, we apply this formalism to a four-parameter model of a CO intensity mapping survey. We apply a Fisher matrix analysis to this model and demonstrate that the VID can constrain the parameters of this model to order $\sim10\%$. We then go on to consider several forms of foreground contamination that are expected to affect intensity maps and find that the VID retains its usefulness even under rather pessimistic assumptions. For this work we assume a $\Lambda$CDM cosmology with $(\Omega_m,\Omega_\Lambda,h,\sigma_8,n_s)=[0.27,0.73,0.7,0.8,0.96]$, which is consistent with the WMAP results. Section 2 contains a discussion of the power spectrum and its limitations, along with the presentation of our VID formalism. Section 3 describes our CO emission model, which we use in Section 4 to demonstrate the constraining power of the VID. Section 5 investigates how contamination from continuum emission, interloper lines, and gravitational lensing affects our constraints. We discuss our results in detail in Section 6 and conclude in Section 7.
We have presented a powerful new method for measuring line luminosity functions from intensity maps using the probability distribution of voxel intensities. This voxel intensity distribution can be calculated using $P(D)$ analysis techniques and measured from a map by making a histogram of voxel intensities. Because intensity maps are extremely non-Gaussian, this one-point statistic contains a substantial amount of information that cannot be obtained from usual power spectrum analyses. We tested our formalism on a four-parameter model of CO emission observed by an experiment similar to the planned COMAP survey. We found that the VID statistic was able to constrain these four parameters with an average error of order $\sim10\%$, despite not including any prior information. Incorporating various forms of foreground contamination such as continuum emission, interloper lines, and gravitational lensing weakens these constraints by varying degrees. However, the VID statistic still provides useful information despite these contaminants, even in very pessimistic cases where the power spectrum would be completely swamped by foregrounds. Our results here serve as an excellent proof of the VID concept. Though more work is necessary to fine tune the various subtleties of this method, this work suggests that the VID will make a powerful addition to the intensity mapping toolbox as more and more experiments come online in the coming years. The authors would like to thank Tony Li, Garrett Keating, and the participants of the Opportunities and Challenges in Intensity Mapping Workshop for useful discussions. This work was supported at JHU by NSF Grant No. 0244990, NASA NNX15AB18G, the John Templeton Foundation, and the Simons Foundation. LD is supported at the Institute for Advanced Study by NASA through Einstein Postdoctoral Fellowship grant number PF5-160135 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. PSB was supported by program number HST-HF2-51353.001-A, provided by NASA through a Hubble Fellowship grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
16
9
1609.01728
1609
1609.05201_arXiv.txt
We study thermodynamic and transport properties for the isotropic color-spin-locking (iso-CSL) phase of two-flavor superconducting quark matter under compact star constraints within a NJL-type chiral quark model. Chiral symmetry breaking and the phase transition to superconducting quark matter lead to a density-dependent change of the quark masses, chemical potentials and diquark gap. A self-consistent treatment of these physical quantities influences the microscopic calculations of the transport properties. We present results for the iso-CSL direct Urca emissivities and bulk viscosities, which fulfill the constraints on quark matter derived from the cooling and rotational evolution of compact stars. We compare our results with the phenomenologically successful, but yet heuristic, 2SC+X phase. We show that the microscopically founded iso-CSL phase can replace the purely phenomenological 2SC+X phase in modern simulations of the cooling evolution of compact stars with a color-superconducting quark matter interior.
Present astrophysical observational programmes monitoring compact stars (CSs) have provided new, high-quality data for their static properties, thermal and spin evolution. These modern measurements constrain the equation of state (EoS) and the transport properties of dense matter in CS interiors \cite{Klahn:2006ir}, for a recent review see~\cite{Blaschke:2016}. In particular, the evidence for high masses \cite{Demorest:2010bx,Antoniadis:2013,Fonseca:2016tux} and large radii \cite{Bogdanov:2012md} of CSs suggests that the EoS at high densities must be sufficiently stiff. This prompts the question of whether deconfined quark matter exists in CS interiors \cite{Ozel:2006km}. In this debate, it has been demonstrated that microscopic models of the quark matter EoS allow for extended quark cores in CSs, while satisfying current mass and radius constraints \cite{Alford:2006vz,Klahn:2006iw,Blaschke:2007ri,Grunfeld:2007jt}. This still leaves a broad spectrum of possible realizations of hybrid stars in nature, as classified recently in Ref.~\cite{Alford:2015dpa}. The two extreme scenarios are the {\em masquerade} case \cite{Alford:2004pf}, where the corresponding quark-hadron hybrid stars appear to have almost identical static properties to pure neutron stars, and the {\em high-mass twin} case \cite{Blaschke:2013ana,Benic:2014jia} associated with a strong first-order phase transition. The latter can be identified by observing CSs with similar high masses (such as PSR J1614-2230 \cite{Demorest:2010bx,Fonseca:2016tux} and PSR J0348+0432 \cite{Antoniadis:2013}) but significantly different radii. This requires an accuracy of radius measurements of about 500 m, as shall be provided by the NICER mission of NASA \cite{nicer}, planned for launch in the near future. In the case of a smooth cross-over transition, i.e. the masquerade case, precise observations of CS masses and/or radii will not provide evidence for the existence of quark matter in their interiors. In such a situation, the transport properties of dense matter may provide the decisive diagnostic tool via the cooling history of CSs. Besides cold CSs, the transport properties also play a crucial role in protoneutron stars (PNSs). PNSs are born hot and lepton rich in the violent event of a core-collapse supernova. They deleptonize and cool on a timescale on the order of 10--30~s via the emission of neutrinos of all flavors \cite{Pons:1998mm,Fischer:2010,Huedepohl:2010, Roberts:2012, MartinezPinedo:2012rb, Fischer:2016}. The appearance and role of quark matter in PNSs and core-collapse supernovae has long been studied by means of numerical simulations \cite{Pons:2001,Nakazato:2008su,Sagert:2008ka,Fischer:2011}, in particular also as the trigger of the actual supernova explosion via a strong first-order phase transition at high density. This launches a strong hydrodynamic shock wave, in addition to the standard supernova standing bounce shock, and releases an outburst of neutrinos of all flavors \cite{Dasgupta:2009yj}. Those neutrinos are released during the shock passage across the neutrinospheres of last scattering, which are always located at low densities where matter is composed of hadronic degrees of freedom. The future observation of such a neutrino signal may reveal yet unknown details associated with the quark-hadron phase transition. The caveat in all these studies was the treatment of neutrino interactions in quark matter, which were treated at the level of nucleons only. 
This approximation is valid when temperatures are on the order of 10~MeV or above. However, during the long-term evolution of a deleptonizing protoneutron star, as the core temperature decreases below about 1~MeV, weak interactions at the quark level become important. Unlike in studies of cooling CSs, where neutrino-quark interactions are treated at different levels of sophistication \cite{Iwamoto:1980eb,Iwamoto:1982,Haensel:1986}, for supernova studies the general framework has to be derived along the lines of Refs.~\cite{Burrows:1980,Iwamoto:1983}. Since the cooling and spin evolution of CSs depend sensitively on the thermal and transport properties of dense matter, the latter can be determined from the observation of cooling CSs, with particular emphasis on young objects like Cassiopeia~A \cite{Ho:2009mm}. For a recent discussion of the role of the stiffness of the EoS and the superfluidity gaps in this context, cf. Refs.~\cite{Ho:2014pta,Grigorian:2016leu}. If quark matter is present in the CS interior we expect it to be in a color-superconducting state, which entails a strong dependence on the pairing pattern and the sizes of the pairing gaps. In the present study, we will focus on the discussion of direct Urca neutrino emissivities and bulk viscosities of color-superconducting quark matter. The numerical analysis is based on a Nambu-Jona-Lasinio (NJL) type model, allowing a consistent description of the density- and temperature-dependent quark masses, pairing gaps and chemical potentials under neutron star constraints. The resulting phase diagram suggests that three-flavor phases of the color-flavor-locking (CFL) type occur only at rather high densities \cite{Ruester:2005jc,Blaschke:2005uj} and render hybrid star configurations gravitationally unstable \cite{Buballa:2003qv,Klahn:2006iw}. Moreover, due to the large pairing gaps in CFL quark matter, the r-mode instabilities cannot be damped \cite{Madsen:1999ci} and cooling is inhibited \cite{Blaschke:1999qx}. By this reasoning, we shall focus on two-flavor quark matter as the relevant case for the discussion of quark deconfinement in CSs as well as in the protoneutron star evolution during supernova collapse. Due to the pairing instability the scalar antitriplet diquark correlations form a condensate in the color superconducting 2SC phase with a critical temperature $T_{\rm 2SC}$ that is on the order of $20 - 50$ MeV \cite{Ruester:2005jc,Blaschke:2005uj}. Within the Polyakov-loop extension of the NJL model, this temperature may even reach up to the pseudocritical temperature $T_c=154$ MeV found in recent lattice QCD simulations \cite{Borsanyi:2013bia,Bazavov:2014pvz} for the chiral and Polyakov-loop transition at vanishing baryon number densities, see \cite{Blaschke:2010ka,Ayriyan:2016lbx}. The standard 2SC phase, however, pairs only two of the three colors of quarks (e.g., red and green), leaving one color unpaired (blue quarks in this example), on which the rapid direct Urca cooling process may then proceed, too rapidly to be compatible with compact star phenomenology. This problem has prompted the introduction of a purely phenomenological gap (X-gap) for the quarks of the unpaired color \cite{Grigorian:2004jq}. For a recent investigation of such a fully gapped 2SC phase see \cite{Sedrakian:2015qxa,Sedrakian:2013xgk}, which may be contrasted with the transport \cite{Alford:2014doa} and cooling properties \cite{Hess:2011qw} in the original 2SC phase. 
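The following schematic sketch illustrates why a fully gapped phase suppresses direct Urca cooling: well below the critical temperature, paired quarks contribute a factor of roughly exp(-Delta/T) to the emissivity, whereas any ungapped color carries no such suppression. The BCS-like gap profile and the simple exponential factor are textbook-level approximations used here only for illustration, not the self-consistent NJL/iso-CSL calculation presented in the paper.
\begin{verbatim}
import numpy as np

def gap_bcs(T, Tc, Delta0):
    """Schematic BCS-like gap: Delta(T) = Delta0*sqrt(1-(T/Tc)^2) below Tc."""
    T = np.asarray(T, dtype=float)
    x = np.clip(1.0 - (T / Tc) ** 2, 0.0, None)
    return np.where(T < Tc, Delta0 * np.sqrt(x), 0.0)

def durca_suppression(T, Tc, Delta0):
    """Approximate factor exp(-Delta/T) multiplying the direct Urca
    emissivity of paired quarks; unpaired colors contribute a factor 1."""
    return np.exp(-gap_bcs(T, Tc, Delta0) / np.asarray(T, dtype=float))

# With Tc ~ 30 MeV and Delta0 ~ 50 MeV, paired quarks are exponentially
# inert at typical core temperatures (T << 1 MeV), so any ungapped color
# dominates the neutrino emission -- hence the need for an X-gap, or a
# fully gapped pattern such as iso-CSL, to slow the cooling down.
print(durca_suppression(np.array([0.01, 0.1, 1.0, 10.0]), Tc=30.0, Delta0=50.0))
\end{verbatim}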
In this context, anisotropic crystalline color superconductivity phases have also been discussed; they are reviewed in \cite{Alford:2007xm,Anglani:2013gfu}. It is an unsatisfactory situation to have no candidate for the microscopic pairing pattern that could justify the phenomenological X-phase in the 2SC+X model of the fully gapped 2SC phase. One alternative is provided by the isotropic color-spin-locking (iso-CSL) phase suggested in \cite{Aguilera:2005tg,Aguilera:2006cj}, which modifies earlier work on spin-1 color superconducting phases \cite{Schmitt:2004et}. The iso-CSL phase is a single-flavor pairing scheme and is therefore rather inert against isospin asymmetry and strong magnetic fields, thus qualifying as a robust pairing pattern for compact star applications. Technically, the description of the transport and cooling properties of this phase follows that of the family of spin-1 color superconductors which have been studied in detail in \cite{Schmitt:2005wg}. In the present work, we will focus on two-flavor color-superconducting phases in CSs: the 2SC+X phase of Ref.~\cite{Grigorian:2004jq}, for which a detailed investigation of the cooling phenomenology for hybrid stars has already been worked out \cite{Popov:2005xa,Blaschke:2006gd}, and the iso-CSL phase \cite{Aguilera:2005tg,Aguilera:2006cj}, for which a consistent microscopic calculation of the direct Urca emissivity and the bulk viscosity will be presented here for the first time \cite{Blaschke:2007bv}. This will form the basis of further phenomenological studies in astrophysics, with applications to supernovae and CSs.
Transport properties in dense quark matter depend sensitively on the color superconductivity pairing patterns and thus provide a tool for unmasking CS interiors through their cooling and rotational evolution characteristics. Using the example of the neutrino emissivities and bulk viscosities for the 2SC+X and iso-CSL phases, we have demonstrated that both two-flavor color-superconducting phases fulfill constraints from CS phenomenology. For the 2SC+X phase, with as yet heuristic assumptions for the X-gap, the hybrid star configurations and their cooling evolution have been evaluated numerically and found to be in accordance with observational data. The temperature and density behavior of the neutrino emissivity in the microscopically well-founded iso-CSL phase appears rather similar, so that we expect good agreement with CS cooling data as well. The bulk viscosities for both phases have been presented here for the first time and provide sufficient damping of r-mode instabilities to comply with the phenomenology of rapidly spinning CSs. We conclude that the subtle interplay between the suppression of the direct Urca cooling process on the one hand and a sufficiently large bulk viscosity on the other puts severe constraints on microscopic approaches to quark matter in compact stars.
16
9
1609.05201
1609
1609.09505_arXiv.txt
{High-contrast scattered light observations have revealed the surface morphology of several dozens of protoplanetary disks at optical and near-infrared wavelengths. Inclined disks offer the opportunity to measure part of the phase function of the dust grains that reside in the disk surface which is essential for our understanding of protoplanetary dust properties and the early stages of planet formation.} {We aim to construct a method which takes into account how the flaring shape of the scattering surface of an (optically thick) protoplanetary disk projects onto the image plane of the observer. This allows us to map physical quantities (scattering radius and scattering angle) onto scattered light images and retrieve stellar irradiation corrected ($r^2$-scaled) images and dust phase functions.} {The scattered light mapping method projects a power law shaped disk surface onto the detector plane after which the observed scattered light image is interpolated backward onto the disk surface. We apply the method on archival polarized intensity images of the protoplanetary disk around HD~100546 that were obtained with VLT/SPHERE in $R'$-band and VLT/NACO in $H$- and $K_{\rm s}$-band.} {The brightest side of the $r^2$-scaled $R'$-band polarized intensity image of HD~100546 changes from the far to the near side of the disk when a flaring instead of a geometrically flat disk surface is used for the $r^2$-scaling. The decrease in polarized surface brightness in the scattering angle range of $\sim$40\degr\,--\,$70\degr$ is likely a result of the dust phase function and degree of polarization which peak in different scattering angle regimes. The derived phase functions show part of a forward scattering peak which indicates that large, aggregate dust grains dominate the scattering opacity in the disk surface.} {Projection effects of a protoplanetary disk surface need to be taken into account to correctly interpret scattered light images. Applying the correct scaling for the correction of stellar irradiation is crucial for the interpretation of the images and the derivation of the dust properties in the disk surface layer.}
\label{sec:introduction} High-contrast scattered light observations of protoplanetary disks have revealed intriguing morphologies such as spiral arms, gaps, asymmetries, and shadows \citep[e.g.,][]{wisniewski2008,mayama2012,quanz2013,grady2013,wagner2015,rapson2015,stolker2016}, which may be signposts for disk evolution and planet-disk interactions \citep[e.g.,][]{pinilla2015,zhu2015,dong2015,rosotti2016}. Inclined disks offer the opportunity to measure the dust scattering efficiency at different angles with respect to the star (i.e., the phase function), which is essential for our understanding of the properties and evolution of the dust in the disk surface. Projection effects are important for inclined disk surfaces; however, it is common practice to use a geometrically flat disk for the calculation of stellar irradiation corrected ($r^2$-scaled) images and phase functions \citep[e.g.,][]{quanz2011,garufi2014,thalmann2015}. Planet formation is a complex process which requires sub-micron-sized dust grains from the interstellar medium to grow 14 orders of magnitude in size towards planets \citep[e.g.,][]{armitage2013}. Dust grains will coagulate, settle, drift, and fragment depending on many aspects of the disk structure and dust properties \citep{testi2014}. Protoplanetary disks are optically thick at optical and near-infrared wavelengths; consequently, scattered light observations probe the dust in the surface layer. In the disk surface, dust grains are expected to be (sub-)micron sized, because large compact grains will settle and grow efficiently towards the midplane, leading to a vertical stratification of dust grain sizes \citep{dubrulle1995,dullemond2004}. However, there are indications that larger dust grains can also be present in the disk surface \citep{mulders2013,stolker2016}, which presumably have an aggregate structure that provides them with aerodynamic support against settling. Polarimetric differential imaging (PDI) is a powerful technique to image protoplanetary and debris disks in scattered light at high angular resolution. The unpolarized speckle halo is removed with a differential linear polarization measurement, which allows for high-contrast observations of disks that are multiple orders of magnitude fainter in scattered light compared to the stellar light. The scattered light surface brightness distribution of a disk depends on disk properties such as the pressure scale height and surface density, as well as the single scattering albedo, phase function, and single scattering polarization of the dust grains. Interpretation of scattered light images of inclined protoplanetary disks can be non-trivial for several reasons. Firstly, the surface layer of a protoplanetary disk usually has a flaring shape, which results in complex projection effects. Secondly, the surface brightness is partially scattering angle dependent because of the phase function and single scattering polarization of the dust grains. Thirdly, the stellar irradiation of the disk surface scales with the reciprocal of the squared distance. All these effects have to be taken into account for a correct interpretation of scattered light images and phase functions. In this work, we will investigate the effect of the scattering surface geometry of a protoplanetary disk on the calculation of \mbox{$r^2$-scaled} images and phase functions. 
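As an illustration of the projection problem, the sketch below maps image-plane coordinates onto a flaring, power-law-shaped scattering surface $H(r) = h_0 (r/r_0)^{\gamma}$ by fixed-point iteration, and returns the scattering radius, the star-to-surface distance used for the $r^2$-scaling, and the scattering angle. It assumes a single visible upper surface at moderate inclination with the major axis along the image x-axis; the function name, parameters, and sign conventions are illustrative choices, not the exact implementation used in the paper.
\begin{verbatim}
import numpy as np

def map_scattering_geometry(x_im, y_im, incl, h0, r0, gamma, n_iter=20):
    """Map image-plane coordinates (same units as r0) onto the flaring
    surface H(r) = h0*(r/r0)**gamma of a disk at inclination incl
    (radians, 0 = face-on).  Returns the scattering radius, the
    star-to-surface distance, and the scattering angle in degrees
    (0 deg = forward scattering)."""
    cos_i, sin_i = np.cos(incl), np.sin(incl)
    y_im = np.asarray(y_im, dtype=float)
    x_d = np.asarray(x_im, dtype=float)
    y_d = y_im / cos_i                          # flat-disk first guess
    for _ in range(n_iter):                     # fixed-point iteration
        r = np.hypot(x_d, y_d)
        H = h0 * (r / r0) ** gamma
        y_d = (y_im + H * sin_i) / cos_i        # re-deproject with height
    r = np.hypot(x_d, y_d)
    H = h0 * (r / r0) ** gamma
    d_star = np.sqrt(r ** 2 + H ** 2)           # distance star -> surface
    d_hat = np.stack([x_d, y_d, H]) / d_star    # star -> surface direction
    o_hat = np.array([0.0, sin_i, cos_i])       # direction to the observer
    cos_theta = np.tensordot(o_hat, d_hat, axes=(0, 0))
    return r, d_star, np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# The irradiation-corrected image is then I_obs * d_star**2, and binning
# it in scattering angle yields an estimate of the (polarized) phase function.
\end{verbatim}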
In Sect.~\ref{sec:mapping}, we construct a numerical method which corrects scattered light images of inclined and flaring disks for the dilution of the stellar radiation field and retrieves the phase function of the dust. In Sect.~\ref{sec:HD100546}, we apply the method on polarized intensity images of the HD~100546 protoplanetary disk. In Sect.~\ref{sec:discussion}, we discuss the importance of the method for the interpretation of polarized scattered light observations of HD~100546 and the dust grain properties in the disk surface in particular. In Sect.~\ref{sec:conclusions}, we summarize the main conclusions.
\label{sec:conclusions} We present a numerical method, scattered light mapping, for the interpretation of scattered light images of protoplanetary disks. The method considers how a flaring disk surface projects onto the image plane of the observer. In this way, the scattering radii and scattering angles can be mapped to the pixels of a scattered light image such that stellar irradiation corrected images and phase functions can be retrieved. The main conclusions are: \begin{itemize} \item Taking into account the projection effect of an inclined and flaring disk surface on the image plane can have a significant effect on the calculation of stellar irradiation corrected images and dust phase functions. An estimate of the height of the $\tau=1$ surface is required, which may be obtained from a radiative transfer model that reproduces, at minimum, the infrared excess of the SED, or from fitting ellipses to concentric structures, such as rings and gaps, in a scattered light image of an inclined disk \citep{ginski2016}. \item Application of the mapping method on polarized scattered light images of the protoplanetary disk around HD~100546 revealed that the near side is brighter than the far side both in polarized intensity and total intensity, as opposed to earlier results inferred with a geometrically flat disk geometry. \item The derived dust phase functions from the $R'$-, $H$-, and $K_{\rm s}$-band $Q_\phi$~images of HD~100546 indicate that large ($2\pi a\gtrsim\lambda$) dust grains dominate the scattering opacity in the disk surface. Large dust grains in the disk surface are expected to have an aggregate structure which prevents them from settling efficiently towards the midplane. \end{itemize}
16
9
1609.09505
1609
1609.09396_arXiv.txt
{ In this paper, we embed the model of flipped GUT sneutrino inflation --in a flipped SU(5) or SO(10) set up-- developed by Ellis et al. in a supergravity framework. The GUT symmetry is broken by a waterfall which could happen at an early or late stage of the inflationary period. The full field dynamics is thus studied in detail and the two main inflationary configurations are presented, whose cosmological predictions are both in agreement with recent astrophysical measurements. The model has an interesting feature where the inflaton has natural decay channels to the MSSM particles allowed by the GUT gauge symmetry. Hence it can account for the reheating after the inflationary epoch. }
In the last years, the results published by the Planck collaboration about the Cosmic Microwave Background (CMB)~\cite{Ade:2015lrj} have led to speculation about a connection between the ideas of cosmological inflation and supersymmetric grand unification. Using the measured value for the amplitude of scalar perturbations in the CMB, $A_s = (2.19 \pm 0.11) \times 10^{-9}$, it is possible to estimate the energy density during the inflationary epoch as \begin{equation} V = (2 \times 10^{16} \gev)^4 \left( \frac{r}{0.15} \right), \label{inflationatunification} \end{equation} where $r$ is the ratio of tensor to scalar perturbations. Therefore, for a value of $r \sim 0.1$, perfectly compatible with Planck's observations, equation \eqref{inflationatunification} shows the remarkable coincidence between the energy scale during inflation, $V^{1/4}$, and the unification scale predicted by supersymmetric GUTs, $M_{GUT} \sim 2\times 10^{16}$ GeV. Though only an apparent relation, since there is a priori no connection between the involved parameters, it serves as a motivational factor to build models of GUT inflation. The issue of inflationary Grand Unification has been studied extensively in the past \cite{Lyth:1998xn,Kyae:2005nv,Antusch:2010va,Rehman:2009yj,Arai:2011nq, Ellis:2014xda,Heurtier:2015ima,Kawai:2015ryj}. These models typically require an epoch of hybrid inflation \cite{Linde:1993cn,Antusch:2008pn,Antusch:2009ef, Clesse:2010iz,Buchmuller:2014epa}, during which the inflaton field(s), as it rolls down the slope of its potential, destabilizes the GUT-preserving minimum and thus breaks the symmetry. After dropping down to its new vacuum, the GUT Higgs field backreacts on the inflationary potential, causing the end of the inflationary phase. Many of the SUSY GUTs in the literature, built to accommodate low-energy phenomena \cite{Ellis:1981tv,Ellis:1990wk,Antoniadis:1987dx}, cannot be consistently constructed within a hybrid inflationary framework. This is due to the known GUT monopole problem~\cite{'tHooft:1974qc, Polyakov:1974ek}, which refers to the overproduction of magnetic monopoles during the GUT epoch, in tension with experimental searches, and which has the unfortunate side effect of overclosing the universe~\cite{Guth:1980zm}. This issue is then avoided if the GUT symmetry is broken before the inflationary era so that these monopoles, along with other topological defects such as domain walls or cosmic strings, are diluted away by the rapid expansion of the universe. However, hybrid inflation expects the unification symmetry to be broken towards the end of inflation, with potentially not enough e-folds left over to wash away the topological defects. Fortunately, as pointed out by 't Hooft \cite{'tHooft:1974qc}, monopoles do not arise in systems with non-semisimple symmetry groups, i.e. Lie groups of the form $\mathcal{G}\times U(1)_X$, where the charge $Q_X$ of the abelian factor enters the linear combination defining the electromagnetic charge \be Q_{em} = \alpha_1 Q_1 + \dots + \alpha_n Q_n + \beta Q_X, \ee where the charges $Q_i$ and the parameters $\alpha_i$ and $\beta$ ($i=1,\dots,n$) depend on the structure of $\mathcal{G}$ and the pattern of symmetry breaking. 
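The numerical coincidence quoted above can be checked with the standard single-field slow-roll relations $A_s = V/(24\pi^2\epsilon M_{\rm pl}^4)$ and $r = 16\epsilon$, which combine to $V = \tfrac{3}{2}\pi^2 A_s\, r\, M_{\rm pl}^4$. A minimal sketch, with the values assumed in the text:
\begin{verbatim}
import math

A_s  = 2.19e-9      # scalar amplitude measured by Planck
r    = 0.15         # tensor-to-scalar ratio assumed in the text
M_pl = 2.435e18     # reduced Planck mass [GeV]

# A_s = V / (24 pi^2 eps M_pl^4) and r = 16 eps  =>  V = (3/2) pi^2 A_s r M_pl^4
V = 1.5 * math.pi ** 2 * A_s * r * M_pl ** 4
print(f"V^(1/4) = {V ** 0.25:.2e} GeV")   # ~2.0e16 GeV, i.e. close to M_GUT
\end{verbatim}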
Therefore, we will take non-semisimple groups as the unification symmetry, in particular $SU(5)\times U(1)$ and $SO(10) \times U(1)$, similar to previous works \cite{Kyae:2005nv,Rehman:2009yj,Ellis:2014xda}, where the role of the inflaton is played by the right-handed sneutrino \cite{Ellis:2014xda,Bjorkeroth:2016qsk,Peloso:2016xqq,Deen:2016zfr,Kallosh:2016sej}. We will also embed the model into a supergravity framework, with the addition of a shift symmetry\footnote{Another alternative to solve the $\eta$-problem is the addition of a Heisenberg symmetry, as discussed in \cite{Antusch:2008pn}.} which avoids the intrinsic $\eta$-problem \cite{Yamaguchi:2000vm,Brax:2005jv,Antusch:2009ef,Dudas:2014pva,Heurtier:2015ima}. Although it is not easy to build a consistent hybrid inflation model within this framework, due mostly to the chaotic nature of the inflationary potential, there are a couple of scenarios in which it can successfully be constructed, and where the model predictions for the cosmological observables lie within $2\sigma$ of the experimental measurements. This paper is structured as follows. In section \ref{Section:TheModel} we describe in detail the model chosen, giving the details of the relevant field content in two flipped GUT scenarios, and the part of the superpotential relevant for inflation. In section \ref{sec:inflation} we perform an analysis of the inflationary trajectories and the conditions required by the model to satisfy successful symmetry breaking, with a few comments on the predictions for the cosmological observables. In section \ref{Section:Reheating} we will discuss reheating after the period of inflation and in the last section \ref{Section:Conclusions} we will conclude with some discussion about potential issues and summarize the results.
\label{Section:Conclusions} In this paper we have proposed a way to embed in supergravity a full set-up of chaotic inflation within the framework of a {\em flipped} GUT group of the form $\mathcal G \times U(1)_X$. The $\eta$-problem is evaded using the conjugate shift symmetry developed in \cite{Heurtier:2015ima}, while the inflationary sector is constructed in a similar fashion to that of \cite{Ellis:2014xda}. Inflation is driven by the sneutrino, whereas the spontaneous breaking of the GUT symmetry is triggered by a $U(1)_X$-charged heavy Higgs. The dynamics of all the fields is treated in full detail, including that of spectator fields, as well as the backreaction of the waterfall field on the inflationary potential. Two scenarios of inflation are detailed: either (i) the critical point at which the waterfall takes place is located at very low values of the inflaton, and inflation mostly proceeds in the region where the waterfall field is stuck at the origin, or (ii) the waterfall happens at an early stage of inflation, and the last 60 e-folds of inflation take place after the waterfall has occurred. In the latter case, the inflaton direction goes along the minimum of the waterfall field. Cosmological observables are computed in both scenarios and can in both cases be accommodated to lie within the 2-$\sigma$ region of the latest Planck measurements. However, we note that some fine-tuning of the parameters in Eq. \eqref{finetuning} is necessary in order to have consistency with the observables, particularly in scenario (i) above. In the second case (ii), the breaking of the GUT symmetry takes place much earlier than inflation itself. Although this seems to suggest that the flipped structure of the unification is not needed anymore, we have seen that it actually remains crucial for obtaining a satisfactory reheating period. In both scenarios we find that inflation and the observables follow from the value of $\mu_\phi$, as in standard chaotic inflation, which a priori has no relation to the symmetry breaking scale $M_{GUT}$. Though some artificial relation is imposed, through equations \eqref{eq:GUTbreaking}, \eqref{lhlf} and \eqref{finetuning}, we acknowledge that this is an artifice of our model and fine-tuning, and hence this apparent relation should not be taken as a consequence of our analysis. One should note that the question of supersymmetry breaking in the vacuum has not been addressed in this paper. However, the presence of large F-term contributions in the vacuum could backreact on the inflationary trajectory in the form of soft terms in the scalar potential, as detailed in \cite{Buchmuller:2014pla, Buchmuller:2015oma}. Such an implementation would strongly depend on the SUSY breaking sector added to the model. The simplest models would require a high SUSY breaking scale, since the inflation sector is built from a quadratic term in the superpotential. However, models of sgoldstinoless inflation \cite{Ferrara:2014kva, DallAgata:2014qsj} could be accommodated in our model, rendering low-scale inflation possible. We have shown an example of the backreaction of SUSY breaking on the inflationary dynamics in a toy model, where we found a solution similar to the case with a stabilizer on the SUSY scale. More specific discussions are rather model dependent, so we defer them to future work. Finally, the reheating temperature for these scenarios is computed as well and it turns out to be rather constraining. 
Indeed, Yukawa couplings of order unity would lead to an unacceptably high reheating temperature and overclose the universe. The required value for the Yukawa coupling, $Y \sim 10^{-5}$, corresponds to that of the up quark, leading to the conclusion that the inflaton belongs to the first generation and, assuming the decoupling of the other two generations, could satisfy the constraints of reheating. This further motivates the choice of a flipped $SU(5)\times U(1)$ unification group, where the right-handed sneutrino has direct couplings to the rest of the MSSM particles.
16
9
1609.09396
1609
1609.05940_arXiv.txt
The flux density scale of \citet{PB13} is extended downwards to $\sim$ 50 MHz by utilizing recent observations with the Karl G. Jansky Very Large Array (VLA)\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} of twenty sources between 220 MHz and 48.1 GHz, and legacy VLA observations at 73.8 MHz. The derived spectral flux densities are placed on an absolute scale by utilizing the \citet{Baa77} values for Cygnus A (3C405) for frequencies below 2 GHz, and the Mars-based polynomials for 3C286, 3C295, and 3C196 from \citet{PB13} above 2 GHz. Polynomial expressions are presented for all 20 sources, with accuracy limited by the primary standards to 3 -- 5\% over the entire frequency range. Corrections to the scales proposed by \citet{PB13}, and by \citet{SH12} are given.
The radio astronomy flux density scale has long been based on the polynomial expressions given in \citet{Baa77} for four `absolute' and 13 `secondary' sources. The members of the first group -- Cygnus A, Cassiopeia A, Taurus A, and Virgo A -- all have angular extents of arcminutes and high spectral flux densities, typically in excess of 1000 Jy at $\lambda = $1m. They are sufficiently strong that their flux densities can be accurately measured with low-resolution telescopes with known gains, but their large-scale structure makes them unsuitable for calibration by high-resolution interferometric arrays. The members of the second group are much smaller -- typically 10s of arcseconds or less, and weaker, with typical flux densities less than 50 Jy at $\lambda = 1$m. The flux densities at low frequencies of these weaker sources could not be accurately measured by single antennas of known gain, primarily because of source confusion, and were determined by taking ratios against the members of the first group, using higher resolution arrays. A subset of these `secondary' sources, comprising the smallest amongst them, has been extensively utilized for flux density calibration of interferometers in the meter -- centimeter wavelength range. The \citet{Baa77} (hereafter Baars77) scale for the compact (`secondary') sources is nominally valid between 0.4 and 15 GHz with accuracy of $\sim$5\%. Prior to $\sim$1990, the great majority of observing with the VLA was at its two lowest frequency bands -- 1.5 and 5 GHz, so there was little incentive to extend the Baars77 flux density scale above 15 GHz. After that time, the 15 and 23 GHz receivers were replaced with much more sensitive ones, and a new band, centered at 45 GHz, was added. These additions, and the subsequent holographic surface adjustments and improvements in observing methodologies needed to support high frequency VLA observing, resulted in a greatly increased use of high frequencies on the VLA, necessitating extension of the Baars77 scale to higher frequencies. Accurate measurements of a set of small-diameter radio sources by the VLA resulted in the \citet{PB13} (hereafter PB13) scale, which was placed on an absolute scale via VLA and WMAP observations of the planet Mars, utilizing a thermophysical model of that planet's emission. The PB13 scale is valid between 1 and 50 GHz, with a claimed accuracy of $\sim$1\% at the central frequencies, degrading to $\sim$3\% at the highest and lowest frequencies. However, a comparison of VLA and ATCA interferometric observations with Planck observations of 65 compact sources at 22 and 43 GHz by \citet{Par16} suggests the PB13 scale is low by $\sim$2.5\% at 28 GHz, and by $\sim$5.5\% at 43 GHz. Over the past decade, there has been an upsurge in interest in, and development of, low-frequency radio astronomy. However, the Baars77 scale for the compact objects useful for calibration purposes is not defined below 400 MHz. Indeed the low-frequency flux density scale has long been quite uncertain, as summarized in the Baars77 paper. Table 7 of that paper shows the ratio of their scale for the `absolute' sources to the scales of \citet{CKL63}, \citet{K64}, \citet{BMW65}, \citet{BH72}, and \citet{W73}. Variations at the $\sim$10\% level are seen -- most likely due to the effects of background source confusion for the low-resolution instruments utilized at the time. A useful low-frequency scale has been proposed by \citet{SH12} (hereafter SH12), valid between 30 and 300 MHz. 
This work is a rationalization of 13 different, but interlinked, flux density scales. The SH12 scale adopts the Baars77 scale above 325 MHz, and the \citet{RBC73} scale below 325 MHz. The correction factors needed to adjust the 13 scales to those adopted for the SH12 scale are given in their Table 2 -- some are as large as 20\%. Given the renewed interest in low-frequency astronomy, placing the low frequency flux density scale on a firm footing, using modern telescopes, is a worthy endeavor. Accurately placing the low-frequency flux density scale onto an absolute standard requires a highly linear, high-resolution array, preferably comprised of high-gain elements, capable of cleanly separating the proposed calibration targets from surrounding emission. As demonstrated by PB13, the VLA is easily capable of determining ratios between proposed calibration sources to $\sim$1\% accuracy at its low frequency bands. Placing the results on an absolute scale requires a highly linear correlator, as the only suitable `absolute' reference source over the 0.02 -- 2 GHz range is Cygnus A, whose flux density exceeds by more than two orders of magnitude those of the proposed secondary calibrators. The original Very Large Array was ill-suited to this task, as its digital correlator was not sufficiently linear over the large range of correlation coefficients to directly bootstrap the standard calibrators to Cygnus A\footnote{For Cygnus A, the correlation coefficient from 300 MHz through 1500 MHz is over 0.7 -- which is well beyond the linear range for the VLA's original 3-level correlator}. This situation changed in 2012, with the commissioning of the Jansky Very Large Array and its new correlator, which uses 4-bit (16 level) correlation. As a result, the response to the absolute standard source Cygnus A can now be safely linked to weaker -- and smaller -- radio sources suitable for interferometric calibration. Most of the previous work on the flux density scales has been limited to northern sources, with few southern sources reliably linked to the absolute standards. In view of the development of numerous southern hemisphere low-frequency arrays, it is important to rectify this situation by inclusion of southern sources. In this paper, we propose a single flux density scale, valid from 50 MHz through 50 GHz, based on new observations with the VLA of 19 proposed calibrator sources, over the frequency range 230 MHz to 48 GHz, located in both hemispheres, along with `legacy' observations of 13 sources with the VLA at 73.8 MHz.
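For reference, flux density scales of this kind are usually expressed as a polynomial in the logarithm of the frequency; a minimal sketch of how such an expression is evaluated is given below. The coefficients shown are purely illustrative placeholders, not the published values for any of the proposed calibrators.
\begin{verbatim}
import numpy as np

def flux_density(freq_ghz, coeffs):
    """log10(S/Jy) = a0 + a1*x + a2*x^2 + ...  with x = log10(freq/GHz)."""
    x = np.log10(np.asarray(freq_ghz, dtype=float))
    log_s = sum(a * x ** k for k, a in enumerate(coeffs))
    return 10.0 ** log_s

# Hypothetical coefficients for a steep-spectrum source (illustration only):
coeffs_example = [1.25, -0.77, -0.17, 0.03]
print(flux_density([0.074, 0.33, 1.4, 5.0, 15.0], coeffs_example))  # Jy
\end{verbatim}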
We have defined a comprehensive new flux density calibration scale for radio astronomy, valid between 50 MHz and 50 GHz. Polynomial coefficients for 20 proposed calibrators, distributed over both hemispheres, and usable both for single dishes and for interferometers of up to $\sim$5000K$\lambda$ baseline length, are given. The majority of the sources are stable over long periods of time. Some are slowly time variable, and will need regular monitoring to be useful for accurate flux density calibration. This scale replaces that proposed by us in 2013, as it extends that scale downwards from 1 GHz to 50 MHz. The new scale is identical (to the quoted errors) to the old scale above 2 GHz. Correction factors for the older Baars et al. scale and the Scaife and Heald scale are given. The chief weakness of our new scale is at frequencies below 240 MHz. The polynomial expressions are entirely dependent on a single measurement made with the VLA's `legacy' 74 MHz system for 13 of our 20 sources. For the remaining seven sources, there is no VLA measurement, so our expressions cannot be used below 200 MHz. While we have no reason to doubt the accuracy of the old measurements, confidence would be increased when data from the VLA's new low frequency system are available. Probably most useful for confirming the accuracy of our proposed scale would be measurements made by the new generation of low-frequency facilities, notably the MWA and LOFAR. These would fill the gaps below 250 MHz for our measurements, confirm the ratios that we have determined, and fill in the large gap between 74 and 240 MHz. A final point worthy of mention is the question of `what calibrates the calibrator'? Our scale at low frequencies is entirely based on observations of Cygnus A with absolutely calibrated antennas and interferometers, nearly all of these done more than 40 years ago. While we do not doubt the accuracy of these efforts, the availability of modern technologies suggests that more accurate and robust measurements of this fundamental standard should be possible today. Given that the errors of our scale are entirely due to the error in the primary calibrators, better accuracy can only be obtained with better fundamental standards.
16
9
1609.05940
1609
1609.05492_arXiv.txt
Distance estimates from Gaia parallax and expected luminosities are compared for KIC 8462852. Gaia DR1 yields a parallax of $2.55\pm0.31$mas, that is a distance of $391.4\substack{+53.6 \\ -42.0}$pc, or $391.4\substack{+122.1 \\ -75.2}$pc including systematic uncertainty. The distance estimate based on the absolute magnitude of an F3V star and measured reddening is $\sim454\pm35$pc. Both estimates agree within $<1\sigma$, which only excludes some of the most extreme theorized scenarios for KIC 8462852. Future Gaia data releases will determine the distance to within 1\% and thus allow for the determination of a precise absolute luminosity.
The space mission Gaia is currently surveying the entire sky and repeatedly observing the brightest one billion objects, down to 20th magnitude, on a 5-year mission \citep{2016arXiv160904153G}. The telescope collects data providing absolute astrometry, broad-band photometry, and low-resolution spectro-photometry \citep{2012Ap&SS.341...31D}. The first public data was released on 14 September 2016 (DR1, \citet{2016arXiv160904303L}), and contains the five-parameter astrometric solution: positions, parallaxes and proper motions for stars in common between the Tycho-2 Catalogue \citep{2000A&A...355L..27H} and Gaia \citep{2015A&A...574A.115M}. As an immediate application, we analyze Gaia's distance estimate for KIC 8462852 (TYC 3162-665-1). This object is an F3 main-sequence star, which was observed by the NASA Kepler mission \citep{2010Sci...327..977B} from April 2009 to May 2013. An analysis by \citet{2016MNRAS.457.3988B} shows an inexplicable series of day-long brightness dips of up to 20\%. The behavior has been theorized to originate from a family of large comets \citep{2016ApJ...819L..34B}, or to be a sign of a Dyson sphere \citep{2016ApJ...816...17W}. Subsequent analysis found no narrow-band radio signals \citep{2016ApJ...825..155H} and no periodic pulsed optical signals \citep{2016ApJ...825L...5S, 2016ApJ...818L..33A}. The infrared flux is equally unremarkable \citep{2015ApJ...815L..27L, 2015ApJ...814L..15M, 2016MNRAS.458L..39T}. Recently, the star has been claimed to dim by $0.16$mag ($\sim14\%$) between 1890 and 1990 \citep{2016ApJ...822L..34S}, and to have lost $\sim3\%$ of its brightness during the 4.25 yr of the Kepler mission \citep{2016arXiv160801316M}. The century-long dimming has been challenged by \citet{2016ApJ...825...73H} and \citet{2016arXiv160502760L}. To resolve the controversy over whether this star has dimmed by $\sim20\%$ over 130 years (and perhaps more so earlier), a precise distance and therefore a precise absolute luminosity would be very helpful. \begin{table} \center \caption{Gaia data DR1\label{tab:gaia}} \begin{tabular}{lcccc} \tableline Parameter & Value \\ \tableline Identifier & TYC 3162-665-1 \\ Source ID & 2081900940499099136\\ G mag & 11.685 ($n=140$)\\ Parallax & $2.555\pm0.311$mas ($n=109$)\\ Distance & $391.4\substack{+53.6 \\ -42.0}$pc random uncertainty only\\ & $391.4\substack{+122.1 \\ -75.2}$pc incl. systematic uncertainty\\ \tableline \end{tabular} \end{table} \begin{figure*} \includegraphics[width=\linewidth]{figure1} \caption{\label{fig:parallaxes}Left: Distance-luminosity relation from Hipparcos (blue) and Gaia (red). The dashed line is based on the expected absolute luminosity plus 0.11 mag extinction (Section~\ref{sec:abs}). KIC 8462852 is shown with a black symbol in the upper right corner. Its symbol size represents an uncertainty in brightness of 20\%, corresponding to the deepest dip recorded during the Kepler mission. The distance uncertainty is from Gaia's parallax, with (gray) and without (black) the systematic uncertainty. The non-Gaussian spread of stars is mainly caused by higher reddening values.} \end{figure*}
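A quick check of the quoted numbers, assuming simple inversion of the parallax (adequate here only as a rough estimate given the $\sim$12\% fractional parallax error):
\begin{verbatim}
plx, err = 2.555, 0.311                 # Gaia DR1 parallax and error [mas]

d    = 1.0e3 / plx                      # distance [pc]
d_hi = 1.0e3 / (plx - err) - d          # upper error bar
d_lo = d - 1.0e3 / (plx + err)          # lower error bar
print(f"d = {d:.1f} +{d_hi:.1f} / -{d_lo:.1f} pc")
# -> ~391.4 +54.2 / -42.5 pc, consistent with the published
#    391.4 +53.6/-42.0 pc to within rounding of the parallax.
\end{verbatim}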
The distance estimates from the absolute magnitude and the parallax measurement agree to within $<1\sigma$, but with large uncertainties. As of now, we cannot firmly determine the absolute luminosity of this star. This will be possible with future Gaia data, which will constrain the distance to KIC 8462852 to within 1\%.
16
9
1609.05492
1609
1609.02567_arXiv.txt
{ Far-infrared molecular emission is an important tool used to understand the excitation mechanisms of the gas in the inter-stellar medium of star-forming galaxies. In the present work, we model the emission from rotational transitions with critical densities $n \gtrsim 10^4$~\cmt. We include $ 4-3 < J \le 15-14$ transitions of CO and \thco, in addition to $J \le 7-6$ transitions of HCN, HNC, and \hcop on galactic scales. We do this by re-sampling high density gas in a hydrodynamic model of a gas-rich disk galaxy, assuming that the density field of the interstellar medium of the model galaxy follows the probability density function (PDF) inferred from the resolved low density scales. We find that in a narrow gas density PDF, with a mean density of $\sim 10$~\cmt~ and a dispersion $\sigma = 2.1$ in the log of the density, most of the emission of molecular lines, even of gas with critical densities $> 10^4$~\cmt, emanates from the 10-1000~\cmt~part of the PDF. We construct synthetic emission maps for the central 2 kpc of the galaxy and fit the line ratios of CO and \thco~ up to $J = 15-14$, as well as HCN, HNC, and \hcop up to $J = 7-6$, using one photo-dissociation region (PDR) model. We attribute the goodness of the one component fits for our model galaxy to the fact that the distribution of the luminosity, as a function of density, is peaked at gas densities between 10 and 1000~\cmt, with negligible contribution from denser gas. Specifically, the Mach number, $\mathcal{M}$, of the model galaxy is $\sim 10$. We explore the impact of different log-normal density PDFs on the distribution of the line-luminosity as a function of density, and we show that it is necessary to have a broad dispersion, corresponding to Mach numbers $\gtrsim 30$ in order to obtain significant ($ > 10$\%) emission from $n > 10^4$~\cmt~ gas. Such Mach numbers are expected in star-forming galaxies, luminous infrared galaxies (LIRGS), and ultra-luminous infrared galaxies (ULIRGS). This method provides a way to constrain the global PDF of the ISM of galaxies from observations of molecular line emission. As an example, by fitting line ratios of HCN(1-0), HNC(1-0), and \hcop(1-0) for a sample of LIRGS and ULIRGS using mechanically heated PDRs, we constrain the Mach number of these galaxies to $29 <\mathcal{M} < 77$.}
\label{sec:paper4_intro} The study of the distribution of molecular gas in star-forming galaxies provides us with an understanding of star formation processes and their relation to galactic evolution. In these studies carbon monoxide (CO) is used as a tracer of star-forming regions and dust, since in these cold regions ($ T < 100$~K) H$_2$ is virtually invisible. The various rotational transitions of CO emit in the far-infrared (FIR) spectrum, and are able to penetrate deep into clouds with high column densities, which are otherwise opaque to visible light. CO lines are usually optically thick and their emission emanates from the C$^+$/C/CO transition zone \citep{Wolfire89-1}, with a small contribution to the intensity from the deeper part of the cloud \citep{meijerink2007-1}. On the other hand, other molecules, whose emission lines are optically thin because of their lower column densities, probe greater depths of the cloud compared to CO. These less abundant molecules (e.g., \thco) have a weaker signal than CO, and a longer integration time is required in order to observe them. Since ALMA became available, it has become possible to obtain well-resolved molecular emission maps of star-forming galaxies in the local universe, thanks to its high sensitivity and high spatial and spectral resolution. In particular, many species have been observed with ALMA, including the ones we consider in this paper, namely CO, \thco, HCN, HNC, and \hcop~ \citep[e.g.,][]{Imanishi13-1, Saito13-1, Combes13-1, Scoville13-1, Combes14-1}. Massive stars play an important role in the dynamics of the gas around the region in which they form. Although the number of massive stars ($ M > 10$~\Msun) is about 0.1\% of the total stellar population, they emit more than 99\% of the total ultraviolet (FUV) radiation. This FUV radiation is one of the main heating mechanisms in the ISM of star-forming regions. Such regions are referred to as photon-dominated regions (PDRs) and they have been studied since the 1980s \citep{tielenshb1985, Hollenbach1999}. Since then, our knowledge of the chemical and thermal properties of these regions has been improving. Since molecular clouds are almost invariably accompanied by young luminous stars, most of the molecular ISM forms in the FUV-shielded region of a PDR, and thus this is the environment where the formation of CO and other molecular species can be studied. In addition, the life span of massive stars is short, on the order of 10 Myr; thus they are the only ones that detonate as supernovae, liberating a significant amount of energy into their surroundings and perturbing them. A small fraction of the energy is re-absorbed into the ISM, which heats up the gas \citep{Usero07-1, falgaron2007p, loenen2008}. In addition, starbursts (SB) occur in the centers of galaxies, where the molecular ISM can be affected by X-rays from an accreting black hole (AGN) and enhanced cosmic ray rates or shocks \citep[][among many others]{maloney96, komossa03, martin06-1, oka2005ApJ_1, vandertak2006_1, pan2009-1, papadopoulos10, meijerink11, meijerink13-1, Rosenberg14} that ionize and heat the gas. By constructing numerical models of such regions, the various heating mechanisms can be identified. However, there is no consensus about which combination of lines defines a strong diagnostic of the different heating mechanisms. This is mainly due to the lack of extensive data which would probe the various components of star-forming regions in extra-galactic sources. 
Direct and self-consistent modeling of the hydrodynamics, radiative transfer and chemistry at the galaxy scale is computationally challenging, thus some simplifying approximations are usually employed. In the simplest case it is commonly assumed that the gas has uniform properties, or is composed of a small number of uniform components. In reality, on the scale of a galaxy or on the kpc scale of starbursting regions, the gas density follows a continuous distribution. Although the exact functional form of this distribution is currently under debate \citep[e.g.][]{Nordlund99-1}, it is believed that in SB regions, where the gas is thought to be supersonically turbulent \citep{norman96-1, Goldman12-1}, the density distribution of the gas is a log-normal function \citep{Vazquez94-1, Nordlund99-1, wada01-a, wada07-1, kritsuk11-1, ballesterosParedes11-1, Burkhart13-1, hopkins13-1}. This is a universal result, independent of scale and spatial location, although the mean and the dispersion can vary spatially. Self-gravitating clouds can add a power-law tail to the density PDF \citep{kainulainen09-1,froebrich10-1,russeil13-1,AlvesdeOliveira14-1,Schneider14-1}. However, \cite{kainulainen2013-1} claim that such gravitational effects are negligible on the scale of giant molecular clouds, from which the molecular emission we are interested in emanates. In \cite{mvk15-a}, we studied the effect of mechanical heating (\gm~hereafter) on molecular emission grids and identified some diagnostic line ratios to constrain cloud parameters including mechanical heating. For example, we showed that low-$J$ to high-$J$ intensity ratios of high density tracers will yield a good estimate of the mechanical heating rate. In \cite{mvk15-b}, KP15b hereafter, we applied these models to realistic models of the ISM taken from simulations of quiescent dwarf and disk galaxies by \cite{inti2009-1}. We showed that it is possible to constrain mechanical heating using just $J < 4-3$ CO and \thco~ line intensity ratios from ground-based observations. This is consistent with the suggestion by \cite{israel03} and \cite{israel2009-1} that shock heating is necessary to interpret the high excitation of CO and \thco~in star-forming galaxies. This was later verified by \cite{loenen2008}, where it was shown that mechanically heated PDR models are necessary to fit the line ratios of molecular emission of high density tracers in such systems. Following up on the work done by KP15b, we include high density gas ($n > 10^4$~\cmt) to produce more realistic synthetic emission maps of a simulated disk-like galaxy, thus accounting for the contribution of this dense gas to the molecular line emission. This is not trivial, as global, galaxy-wide models of the star-forming ISM are constrained by the finite resolution of the simulations in the density they can probe. This paper is divided into two main parts. In the first part, we present a new method to incorporate high density gas to account for its contribution to the emission of the high density tracers, employing the plausible assumption, on theoretical grounds \citep[e.g.][]{Nordlund99-1}, that the density field follows a log-normal distribution. In the methods section, we describe the procedure with which the sampling of the high density gas is accomplished. Once we have derived a re-sampled density field, we can employ the same procedure as in KP15b to model the line emission of molecular species. 
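A minimal sketch of the re-sampling idea, assuming the sub-grid densities are drawn from a mean-preserving log-normal PDF in $s = \ln(n/\langle n \rangle)$; the parametrization (mean of $s$ set to $-\sigma^2/2$) and the numbers below are illustrative assumptions, not the exact procedure described in the methods section.
\begin{verbatim}
import numpy as np

def resample_densities(n_mean, sigma_s, n_samples, seed=0):
    """Draw sub-grid densities from a log-normal PDF in s = ln(n/n_mean).
    The mean of s is set to -sigma_s**2/2 so that <n> equals n_mean."""
    rng = np.random.default_rng(seed)
    s = rng.normal(loc=-0.5 * sigma_s ** 2, scale=sigma_s, size=n_samples)
    return n_mean * np.exp(s)

# A narrow PDF (<n> ~ 10 cm^-3, sigma_s ~ 2.1) contains almost no gas
# above 1e4 cm^-3, which is why the dense-gas contribution to the line
# luminosity of the model galaxy is small.
n = resample_densities(10.0, 2.1, 1_000_000)
print((n > 1.0e4).mean())
\end{verbatim}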
In Section-\ref{subsec:paper4_emissionmaps}, we highlight the main trends in the emission of the $J = 5-4$ to $J = 15-14$ transitions of CO and \thco~ tracing the densest gas, along with the line emission of the high density tracers HCN, HNC, and \hcop for transitions up to $J = 7-6$. In Section-\ref{subsec:paper4_constrainig}, we fit emission line ratios using a mechanically heated PDR (mPDR hereafter) and constrain the gas parameters of the model disk galaxy. In the second part of the paper, Section-\ref{section:constrainingpdf}, we follow the reverse path and examine what constraints can be placed on the PDF from molecular line emission, following the same modeling approach as in the first part. We discuss the effect of the shape and width of the different density profiles on the emission of high density tracers. In particular, we discuss the possibility of constraining the dispersion and the mean of an assumed log-normal density distribution using line ratios of high density tracers. We conclude with a summary and general remarks.
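The link between the PDF width and the Mach numbers quoted in the second part can be sketched with the standard relation for supersonic isothermal turbulence, $\sigma_s^2 = \ln(1 + b^2 \mathcal{M}^2)$, where $b$ depends on the turbulence forcing; the choice $b = 1$ below is an assumption made for illustration only.
\begin{verbatim}
import numpy as np

def mach_from_sigma(sigma_s, b=1.0):
    """Invert sigma_s^2 = ln(1 + b^2 M^2) for the Mach number M."""
    return np.sqrt(np.exp(sigma_s ** 2) - 1.0) / b

print(mach_from_sigma(2.1))                        # ~9: narrow, quiescent-like PDF
print(mach_from_sigma(2.7), mach_from_sigma(3.0))  # broader PDFs: M ~ 40-90
\end{verbatim}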
We have constructed luminosity maps of some molecular emission lines of a disk-like galaxy model. These emission maps of CO, \thco, HCN, HNC, and \hcop~have been computed using subgrid PDR modeling in post-processing mode. Because of resolution limitations, the density of the simulation was restricted to $n < 10^4$~\cmt. We demonstrated that the density PDF is log-normal for $n > 10^{-2}$~\cmt. Most of the emission of the high density tracers emanates from the gas with densities $\sim 10^2$~\cmt~ for quiescent galaxies, which is at least 1000 times lower than the critical density of a typical high density tracer. We attribute this to the fact that the dispersion of the PDF is narrow, and thus the probability of finding dense gas is low. The main findings of this paper are: \begin{itemize} \item It is necessary to have a large dispersion in the density PDF ($\sigma > 2.7$) in order to have significant emission of high density tracers from $n > 10^4$~\cmt~ gas. \item It is possible to constrain the shape of the PDF using line ratios of high density tracers. \item Line ratios of HCN(1-0), HNC(1-0), and \hcop(1-0) for star-forming galaxies and starbursts support the theory of supersonic turbulence. \end{itemize} A major caveat for this approach is the assumption concerning the thermal and the chemical equilibrium. Care must be taken in interpreting and applying such equilibrium models to violently turbulent environments such as starbursting galaxies and galaxy nuclei. Despite the appealing fact that the line ratios obtained from the example we have shown in Figure \ref{fig:paper4_line_ratio_vs_density_PDF} favor high Mach numbers ($ 29 < \mathcal{M} < 77$) consistent with previous prediction of supersonic turbulence in starbursts, a time-dependent treatment might be essential.
16
9
1609.02567
1609
1609.03082_arXiv.txt
{The temperate Earth-mass planet Proxima b is the closest exoplanet to Earth and represents what may be our best ever opportunity to search for life outside the Solar System.} {We aim at directly detecting Proxima b and characterizing its atmosphere by spatially resolving the planet and obtaining high-resolution reflected-light spectra.} {We propose to develop a coupling interface between the SPHERE high-contrast imager and the new ESPRESSO spectrograph, both installed at the ESO VLT. The angular separation of 37 mas between Proxima b and its host star requires the use of visible wavelengths to spatially resolve the planet on an 8.2-m telescope. At an estimated planet-to-star contrast of $\sim10^{-7}$ in reflected light, Proxima b is extremely challenging to detect with SPHERE alone. However, the combination of a $\sim$10$^3$-10$^4$ contrast enhancement from SPHERE with the high spectral resolution of ESPRESSO can reveal the planetary spectral features and disentangle them from the stellar ones.} {We find that significant but realistic upgrades to SPHERE and ESPRESSO would enable a 5-$\sigma$ detection of the planet and yield a measurement of its true mass and albedo in 20-40 nights of telescope time, assuming an Earth-like atmospheric composition. Moreover, it will be possible to probe the O$_2$ bands at 627, 686 and 760 nm, the water vapour band at 717 nm, and the methane band at 715 nm. In particular, a 3.6-$\sigma$ detection of O$_2$ could be made in about 60 nights of telescope time. These observations would need to be spread over 3 years, considering the optimal observability conditions for the planet.} {The very existence of Proxima b and the SPHERE-ESPRESSO synergy represent a unique opportunity to detect biosignatures on an exoplanet in the near future. It is also a crucial pathfinder experiment for the development of Extremely Large Telescopes and their instruments, in particular the E-ELT and its high-resolution visible/near-IR spectrograph.}
\subsection{Observing exoplanet atmospheres} The field of exoplanets has seen tremendous progress over the past two decades, evolving from a niche research field with a marginal reputation to mainstream astrophysics. The number of exoplanet properties that have become accessible to observations has been continuously growing. The radial velocity and transit techniques have been the two pillars on which exoplanet studies have developed, providing the two most fundamental physical properties of an exoplanet: mass and radius. Simultaneously, spatially-resolved imaging has been used to study the properties of young and massive exoplanets on wide orbits. More recently, the field has been moving towards a more detailed characterization of planets and planetary systems, from their orbital architecture to their internal structure to the composition of their atmospheres. The study of exoplanet atmospheres, in particular, is widely seen as the new frontier in the field, a necessary step to elucidate the nature of the mysterious and ubiquitous super-Earths and mini-Neptunes. It is also the only means of directly addressing the fundamental question: has life evolved on other worlds? Atmospheric characterization heavily relies on the availability of favourable targets, given the extremely low-amplitude signals to be detected and the present instrumental limitations. That is why a major effort is being made to systematically search for the nearest exoplanets with the largest possible atmospheric signatures. One of the most successful techniques so far has been transit spectroscopy, where an exoplanet atmosphere is illuminated from behind by the host star, and light is transmitted according to the wavelength-dependent opacities of chemical species within the atmosphere. In this approach, the most favourable targets are planets transiting nearby, small stars. Not only gaseous giant planets, but also mini-Neptunes and super-Earths have been probed via transmission spectroscopy, paving the way towards the characterization of truly Earth-like planets. Recently, \citet{Gillon2016} have announced the discovery of a system that is potentially promising in this respect: three transiting Earth-size objects around the 0.08-$M_\odot$ star TRAPPIST-1. More generally, it is expected that the ongoing/upcoming transit searches (TESS, CHEOPS, PLATO, NGTS, MEarth, TRAPPIST, SPECULOOS, ExTrA) will find most of the nearest transiting systems in the next few years, opening a new era of atmospheric characterization with e.g. the James Webb Space Telescope. An alternative to transit spectroscopy is the detection of exoplanets using high angular resolution, i.e. the capability to spatially resolve the light emitted by an exoplanet from that of its host star. So far, this technique has been mostly applied to young, self-luminous massive planets on wide orbits, which offer the highest planet-to-star contrasts at the angular separations currently accessible to 10-m class telescopes equipped with state-of-the-art adaptive optics (AO) systems. Given the very challenging flux ratios (10$^{-4}$-10$^{-10}$ at sub-arcsec angular separations), many different technical and observational strategies are being explored to make progress in this field. A promising avenue for atmospheric characterization is the high-contrast/high-resolution technique (hereafter HCHR). It was first envisaged by \citet{Sparks2002} and \citet{Riaud2007}, and simulated in detail by \citet{Snellen2015}.
It recently found a first real-life application on the young exoplanet Beta Pic b \citep{Snellen2014}, leading to the detection of CO in the planet's thermal spectrum and to the measurement of the planet's spin rate. The technique combines a high-contrast AO system with a high-resolution spectrograph to overcome the tiny planet-to-star flux ratio in two steps. In the first stage, the AO system spatially resolves the exoplanet from its host star and enhances the planet-to-star contrast at the planet location. However, the remaining stellar signal (from e.g. the wings of the stellar PSF or non-perfect AO correction) may still be orders of magnitude larger than the planet signal. In the second stage, the light beam at the planet location is sent to a high-resolution spectrograph. Simultaneously, a reference star-only spectrum is recorded from a spatial location away from the planet. The planet spectrum can then be recovered by differencing the two spectra, provided sufficient signal-to-noise ratio (SNR) is achieved. Spectral features that are present both in the planet and star spectrum can be separated thanks to the Doppler shifts induced by the planet orbital velocity. At the same time, measuring the planet orbital velocity enables the measurement of the orbital inclination angle and true mass of the planet, when combined with high-precision RV measurements of the star. For the HCHR technique to be applicable, it is necessary to spatially resolve the star-planet system. The distance to the star therefore plays a critical role. Finding exoplanets orbiting the very nearest stars (within $\sim$5-10 pc) is thus a necessary step for the future of this technique, and atmospheric characterization in general. \subsection{Proxima b} \label{SectProxima} \citet{Anglada2016} have recently announced the detection of a low-mass exoplanet candidate around our closest stellar neighbour, Proxima Centauri ($d$ = 1.30 pc). Proxima is a 0.12-$M_\odot$ red dwarf (spectral type M5.5V) with a bolometric luminosity of 0.00155 $L_\odot$ \citep{Boyajian2012}, which translates into a visual magnitude of only $V$ = 11.13 despite the proximity of the star. The planet, Proxima b, has an orbital period of 11.2 days and a semi-major axis of 0.048 au. Its minimum mass is 1.27 $\pm$ 0.18 $M_\oplus$. The planet discovery is based on long-term, high-cadence radial velocity (RV) time series obtained with the HARPS and UVES spectrographs \citep{Mayor2003,Dekker2000} at the European Southern Observatory. Proxima b is exceptional in several respects: it is the closest exoplanet to Earth there will ever be; with a minimum mass of 1.3 $M_\oplus$ it is likely to be rocky in composition; and with a stellar irradiation that is about 67\% of Earth's irradiation it is plausible that habitable conditions prevail at least in some regions of the planet. This is especially true if the planet has a fairly thick atmosphere, as could be expected from its mass, which is larger than Earth's. The question of a habitable climate on Proxima b is a complex one that is being addressed by numerous studies \citep[e.g.][]{Ribas2016,Turbet2016,Meadows2016}. The spin of the planet is likely to be either tidally locked or in a 3:2 spin-orbit resonance, critically impacting global atmospheric circulation and climate \citep[although the atmosphere itself may force the spin rate into a different regime as in Venus, e.g.][]{Leconte2015}.
If the planet was formed {\it in situ}, it likely experienced a runaway greenhouse phase causing a partial loss of water, with some uncertainty about the quantity of water that may remain \citep{Ribas2016,Barnes2016}. If water is still present today, the amount of greenhouse gases in the atmosphere can drive the system into very different states: from a temperate, habitable climate to a cold trap on the night side causing atmospheric collapse. Adding to the complexity, the high X-ray and UV flux of the star may have caused atmospheric evaporation that could have a major effect on the planet's volatile content, in particular water. Note however that the initial water content itself is not constrained; if the planet was formed beyond the ice line and migrated inwards, it could have been born as a water world and have remained so until today. Despite all these uncertainties, Proxima b combines a number of properties that make it a landmark discovery. The immediate question that comes to mind is how to study this outstanding object in more detail. With a geometric transit probability of only 1.5\%, the planet is unlikely to transit its host star, strongly limiting the observational means of characterizing it. The James Webb Space Telescope (launch 2018) may be able to obtain a thermal phase curve of the planet, provided its day-to-night temperature contrast is sufficiently high \citep{Kreidberg2016}. In this paper we examine an alternative method that can directly probe Proxima b: the HCHR technique, applied to reflected light from the planet. Quite remarkably, \citet{Snellen2015} anticipated the existence of a planet orbiting Proxima and simulated HCHR observations with the future European Extremely Large Telescope (E-ELT). In the present paper we attempt to go one step further towards a practical implementation, basing our simulations on the real planet Proxima b and existing VLT instrumentation. Proxima b was discovered with the radial velocity technique. The knowledge of the RV orbit yields two critical pieces of information for the planning and optimization of HCHR observations: the angular separation at maximum elongation from the star (quadrature), and the epochs of maximum elongation. For Proxima b, combining the RV-derived semi-major axis and the known distance to the system, one obtains a maximum angular separation of 37 milli-arcseconds (mas). The RV-derived ephemeris then also provides the timing of the two quadratures within each 11.2-d orbit (we assume here that a regular RV monitoring of the star maintains sufficient accuracy on the ephemerides). What the RV data do not provide are the orbital inclination and the position angle of the orbit on the sky. Thus, one specific challenge of the HCHR approach is that it first has to find the position of the planet around the star, before being able to study it in detail.
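The RV-derived geometry quoted above can be reproduced with a few lines of arithmetic. The script below is only a consistency check: the circular-orbit assumption and the 750 nm / 8.2 m values are our choices, made to match the quantities discussed in the remainder of the paper.

\begin{verbatim}
# Sanity check of the quadrature geometry of Proxima b from the RV orbit:
# angular separation, diffraction scale, and planetary Doppler amplitude.
import math

AU, PC = 1.496e11, 3.086e16           # metres
a, P = 0.048 * AU, 11.2 * 86400.0     # semi-major axis, orbital period (SI)
d = 1.30 * PC                         # distance to Proxima

theta_mas = (a / d) * 206265.0 * 1e3  # small-angle separation at quadrature
lam, D = 750e-9, 8.2                  # wavelength (m), telescope diameter (m)
lod_mas = (lam / D) * 206265.0 * 1e3  # one lambda/D in mas

v_orb = 2.0 * math.pi * a / P         # circular orbital velocity (m/s)
dlam_nm = 750.0 * v_orb / 2.998e8     # Doppler shift of planetary lines (nm)

print(f"separation ~ {theta_mas:.0f} mas = {theta_mas/lod_mas:.1f} lambda/D")
print(f"planet RV amplitude ~ {v_orb/1e3:.0f}*sin(i) km/s "
      f"(~{dlam_nm:.2f} nm at 750 nm)")
\end{verbatim}

The $\approx$37 mas separation, the $\approx$2 $\lambda$/D scale at 750 nm on an 8.2-m telescope, and the several-tens-of-km/s planetary Doppler shift recovered here are the numbers that the HCHR strategy for Proxima b relies on.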
We summarize here our main findings: \begin{itemize} \item We have studied the possibility to apply the HCHR technique to the temperate exoplanet Proxima b by coupling the SPHERE and ESPRESSO instruments at ESO VLT. \item At maximum elongation, Proxima b is located 37 mas away from the star, corresponding to 2.0 $\lambda$/D at 750 nm. The expected planet-to-star contrast varies between about 4 $\times$ 10$^{-8}$ and 2 $\times$ 10$^{-7}$ at quadrature \citep{Turbet2016}. Earth-like atmospheres are predicted in the range 1.0-1.4 $\times$ 10$^{-7}$. \item We have shown at a conceptual level that a SPHERE-ESPRESSO coupling is indeed possible. In particular, an IFU with a multiplexing of 6 is able to cover the entire search area in the SPHERE focal plane and can be fed into ESPRESSO without modifications to the spectrograph itself (only relatively minor changes at the fiber injection stage are needed). \item The fiber coupling efficiency and $K$ factor are the critical variables to be maximized for the SPHERE+ESPRESSO combination to reach its full potential. Those are mainly controlled by the level of residual aberrations in the AO-corrected beam. Improvement of the AO correction and optimized coronagraphic solutions are needed to achieve a level of performance that enables the detection of Proxima b. This should be seen as a mid-term perspective (several years), as significant development is likely necessary. We designate this upgrade as a 2nd-generation SPHERE, or SPHERE+. \item The $R$ = 220,000 mode of ESPRESSO can be used, coupled to slow CCD readout and 1x2 pixel binning. This offers several advantages: higher contrast in the planetary spectral features, better separation of the telluric and planetary features, less readout noise, and less dark current noise. \item Readout noise becomes a significant contributor to the noise budget at $K$ $\sim$ 3000-5000, given an exposure time of one hour. The feasibility of taking longer exposures should be investigated to further decrease the noise level. \item Based on a sophisticated sky model, we show that sky background will not impact the detectability of Proxima b, except in bright time where the scattered moonlight contribution may need to be taken into account. \item We find that the reflected spectrum from Proxima b can be detected at the 5-$\sigma$ level in 20-40 nights of telescope time for a contrast enhancement factor $K$ = 3000 (SPHERE+) and a planet-to-star flux ratio of 1.0-1.4 $\times$ 10$^{-7}$ (Earth-like atmospheres). This includes a measurement of the planet true mass (as opposed to minimum mass) and orbital inclination, and the measurement of its broadband albedo. \item We find that O$_2$ can be detected at the 3.6-$\sigma$ level in about 60 nights of observing time at $K$ = 5000, for a planet-to-star contrast of 1.4 $\times$ 10$^{-7}$. Those nights would need to be spread over 3 years to guarantee optimal observability conditions of the planet and sufficient separation between telluric and planetary O$_2$ lines. \item We also show that H$_2$O can be probed in a similar amount of telescope time provided the H$_2$O column density is similar to wet regions of Earth. \item Finally, it is likely that CH$_4$ is detectable as well if its column density is similar to or larger than the one seen in Jupiter and Saturn, although we could not address this point quantitatively. 
\end{itemize} In conclusion, while we do not underestimate the technical challenges of our proposed approach, we do believe that SPHERE+ESPRESSO is competitive for becoming the first instrument to characterize a habitable planet. At this point, this path appears to be a faster-track one compared to waiting for the availability of an equivalent instrument on e.g. the E-ELT. Indeed, SPHERE is an XAO system that works very well already today, including in $R$- and $I$-band. Further improvements and innovative ideas can certainly be pursued in the next years based on accumulated experience with the actual system. While ELTs will see their first light in the mid-2020s, reaching adequate AO performances in e.g. $J$-band to enable efficient HCHR observations will be a significant challenge, as well as building an appropriate high-resolution spectrograph equipped with an AO-fed IFU. Overall, we believe that the SPHERE+ESPRESSO opportunity can be seen as a useful pathfinder for similar instruments on ELTs, in particular for the E-ELT HIRES spectrograph. Finally we note that, besides Proxima b, a number of exoplanets of various kinds could be probed with SPHERE+ESPRESSO. In particular, the gas giant GJ 876 b has an expected contrast $\sim$10 times higher than Proxima b (see Fig.~\ref{FigReflectedLight}) and would represent a relatively easy target for an early demonstration of the HCHR approach. Additional targets will be discovered around the nearest stars in the near future by ongoing and upcoming radial velocity surveys, in particular by ESPRESSO itself. SPHERE+ESPRESSO is thus by no means a single-target facility, but will enable reflected-light observations of several cool to warm exoplanets covering a broad range of masses. This is highly complementary to other planet characterization techniques such as transit spectroscopy.
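As a rough cross-check of the contrast figures quoted in the conclusions, a Lambertian back-of-the-envelope estimate is given below. The planetary radius, the geometric albedos, and the GJ 876 b parameters are assumptions made here for illustration only; they are not taken from the paper's simulations.

\begin{verbatim}
# Back-of-the-envelope reflected-light contrast at quadrature (alpha = 90 deg)
# for a Lambertian sphere: C = A_g * Phi(alpha) * (R_p / a)^2.
import math

AU, R_EARTH, R_JUP = 1.496e11, 6.371e6, 7.149e7   # metres

def contrast(radius_m, a_m, albedo, alpha_deg=90.0):
    alpha = math.radians(alpha_deg)
    phi = (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi
    return albedo * phi * (radius_m / a_m) ** 2

# Proxima b: radius and albedo are assumed (1.1 R_Earth, A_g = 0.3)
print(f"Proxima b : {contrast(1.1 * R_EARTH, 0.048 * AU, 0.3):.1e}")
# GJ 876 b  : ~1 R_Jup at ~0.21 au with a brighter cloud deck (A_g = 0.5);
# both values are assumptions of this sketch
print(f"GJ 876 b  : {contrast(1.0 * R_JUP, 0.21 * AU, 0.5):.1e}")
\end{verbatim}

With these assumptions one recovers $\approx 0.9 \times 10^{-7}$ for Proxima b, inside the 0.4-2 $\times$ 10$^{-7}$ range adopted above, and roughly ten times more for GJ 876 b, in line with the statement that the gas giant is an easier early target.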
16
9
1609.03082
1609
1609.04031_arXiv.txt
{High radio frequency polarization imaging of the non-thermal emission from active galactic nuclei (AGN) is a direct way to probe the magnetic field strength and structure in the immediate vicinity of supermassive black holes (SMBHs) and is crucial in testing the jet-launching scenario. To explore the magnetic field configuration at the base of jets in blazars, we took advantage of the full polarization capabilities of the Global Millimeter VLBI Array (GMVA). With an angular resolution of $\sim$50 micro-arcseconds ($\mu$as) at 86 GHz, one could resolve scales down to $\sim$450~gravitational radii (for a 10$^9$ solar mass black hole at a redshift of 0.1). We present here the preliminary results of our study on the blazar BL~Lac. Our results suggest that on sub-mas scales the core and the central jet of BL Lac are significantly polarized, with two distinct regions of polarized intensity. We also noted a great morphological similarity between the 7~mm and 3~mm VLBI images at very similar angular resolution. } \keyword{active galaxies; BL Lacertae object: BL~Lac; jets; GMVA: high-resolution VLBI; magnetic field; polarization} \begin{document}
Understanding the physical processes happening close to the central engines, and their connection to the jet activity and to the broadband flaring activity in blazars, is one of the key challenges in active galactic nuclei physics. Ultra-high angular resolution mm-VLBI observations are the most feasible way to answer these questions. We demonstrated the high-frequency and high-resolution polarization imaging capabilities of the GMVA. In the very near future, ALMA will participate in the observations at 3~mm and 1.3~mm. This will significantly enhance the imaging and polarization capabilities of global mm-VLBI observations. \vspace{6pt}
16
9
1609.04031
1609
1609.01422_arXiv.txt
{ We present the discovery of a molecular cloud at $\zabs \approx 2.5255$ along the line of sight to the quasar SDSS J\,000015.17$+$004833.3. We use a high-resolution spectrum obtained with the Ultraviolet and Visual Echelle Spectrograph together with a deep multi-wavelength medium-resolution spectrum obtained with X-Shooter (both on the Very Large Telescope) to perform a detailed analysis of the absorption lines from ionic, neutral atomic and molecular species in different excitation levels, as well as the broad-band dust extinction. \\ We find that the absorber classifies as a Damped Lyman-$\alpha$ system (DLA) with $\log N(\HI) (\cmsq)=20.8\pm 0.1$. The DLA has super-Solar metallicity ($Z\sim 2.5 Z_{\odot}$, albeit with a factor 2-3 uncertainty) with a depletion pattern typical of cold gas and an overall molecular fraction $f=2N(\mathrm{H_2})/[2N(\mathrm{H_2})+N(\HI)] \sim 50$\%. This is the highest $f$-value observed to date in a high-$z$ intervening system. Most of the molecular hydrogen arises from a clearly identified narrow ($b\sim$0.7~\kms), cold component in which carbon monoxide molecules are also found, with $\log N($CO$) \approx 15$. With the help of the spectral synthesis code Cloudy, we study the chemical and physical conditions in the cold gas. We find that the line of sight probes the gas well beyond the H\,{\sc i}-to-H$_2$ transition in a $\sim$4-5~pc-size cloud with a volume density $n_{\rm H} \sim$~80~cm$^{-3}$ and a temperature of only 50~K. Our model suggests that the presence of small dust grains (down to about 0.001\,${\rm \mu m}$) and a high cosmic ray ionisation rate ($\zeta_{\rm H} \sim $ a few times $10^{-15}$\,s$^{-1}$) are needed to explain the observed atomic and molecular abundances. The presence of small grains is also in agreement with the observed steep extinction curve that also features a 2175~{\AA} bump.\\ Interestingly, the chemical and physical properties of this cloud are very similar to what is seen in diffuse molecular regions of the nearby Perseus complex, despite the former being observed when the Universe was only 2.5~Gyr old. The high excitation temperature of the CO rotational levels towards \jz\ reveals, however, the higher temperature of the cosmic microwave background at that epoch. Using the derived physical conditions, we correct for a small contribution (0.3~K) of collisional excitation and obtain $T_{\rm CMB}(z=2.53) \approx 9.6$~K, in perfect agreement with the predicted adiabatic cooling of the Universe. }
The formation and evolution of galaxies is strongly dependent on the physical properties of the gas in and around galaxies. Indeed, the gas is the reservoir of baryons from which stars form and at the same time, it integrates the chemical and physical outputs from star-formation activity. The gas that is accreted onto galaxies has to cool down and go through different transitional processes that will determine its properties during its evolution before the final collapse that give birth to stars. Different phases are indeed identified in the interstellar medium, depending on the temperature and density and whether the matter is ionised or neutral (atomic or molecular). In their two-phase model, \citet{Field69} showed that thermal equilibrium leads neutral gas to segregate into a dense phase, the cold neutral medium (CNM), embedded into a diffuse intercloud phase, the warm neutral medium (WNM). Detailed theoretical and numerical works \citep[e.g.][]{Krumholz09a,Sternberg14} show that a transition from \HI\ to H$_2$ then occurs in the former phase, depending on the balance between H$_2$ formation on the surface of dust grains \citep[e.g.][]{Jura74b}, and its dissociation by UV photons \citep[e.g.][]{Dalgarno70}, itself dependent on both self- and dust-shielding. Observationally, UV absorption spectroscopy of Galactic clouds towards nearby stars revealed that the molecular fraction, $f=2$H$_2/(2$H$_2+\HI)$, sharply increases above a \HI\ column density threshold of 5$\times 10^{20}~\cmsq$. A similar threshold has been found by \citet{Reach94} from far-infrared emission studies of interstellar clouds, using dust as a tracer for H$_2$. Higher column-density thresholds were observed in the Magellanic Clouds \citep{Tumlinson07}, which could be the consequence of a higher UV radiation field together with a lower metallicity in these environments. However, it is also possible that a significant fraction of the observed \HI\ column density is actually unrelated to the atomic envelopes of the H$_2$-absorbing clouds \citep{Welty12}, since $N(\HI)$ was derived through unresolved 21-cm emission, while $N($H$_2)$ was measured in absorption. This highlights the main difficulty in observing the transition regions: because molecular clouds have sizes of only a few tens to a few hundred parsec \citep[e.g.][]{Fukui10} it is very difficult to compare H$_2$ with its associated \HI\ in the cloud envelope without also integrating nearby atomic gas. High spatial resolution (sub-pc) studies exist for nearby molecular clouds such as the Perseus cloud. \citet{Lee12} observed relatively uniform \HI\ surface density of $\Sigma_{\rm HI}\sim 6-8$~M$_{\odot}$\,pc$^{-2}$ around H$_2$ clouds, in agreement with the theoretical expectations based on H$_2$ microphysics at Solar metallicity, assuming CNM a priori \citep{Krumholz09a} or not \citep[][]{Bialy15}. Ideally, we would also like to study the atomic to molecular transition and the subsequent star formation over parsec scales in other galaxies. Observations of nearby galaxies have been possible at slightly sub-kpc resolution, revealing a saturation value around $\Sigma_{\rm HI}\approx 9$~M$_{\odot}$\,pc$^{-2}$ \citep{Bigiel08}. However, the observational techniques applied in the local Universe are not applicable yet in the distant Universe without a further strong loss of spatial resolution. 
Prescriptions for star formation on galactic scales, such as the empirical relation between the molecular to atomic ratio and the hydrostatic pressure \citep[e.g.][]{Blitz06}, are nevertheless available and can be used in evolution models of galaxies \citep[e.g.][]{Lagos11}, although this corresponds to an extrapolation at high redshift of a phenomenon observed in the local Universe. The increase in sensitivity of sub-mm astronomy has also permitted tremendous progress in recent years with detailed studies of the relation between molecular content and star formation at intermediate redshifts \citep[e.g.][]{Tacconi13}, although still limited to relatively bright and massive galaxies. In addition, observations of atomic gas through \HI\ 21-cm emission (currently limited to $z<0.4$, e.g. \citealt{Catinella08, Freudling11, Fernandez16}) will have to await future radio facilities such as the Square Kilometre Array. At high redshift, information about gas in the Universe can be accurately obtained through absorption studies towards bright background sources. In particular, damped Lyman-$\alpha$ systems (DLAs, see \citealt{Wolfe05} for a review), with $N(\HI)\ge 2\times 10^{20}$~\cmsq, trace the neutral gas in a cross-section weighted manner, independently of the luminosity of the associated object. DLAs have been conjectured to originate from gas associated with galaxies, in particular since DLAs contain the bulk of the neutral gas at high redshift \citep[e.g.][]{Prochaska05,Prochaska09, Noterdaeme09, Noterdaeme12c} and their metallicity increases with decreasing redshift \citep[e.g.][]{Rao06,Rafelski12}. While the dust production in the bulk of DLAs seems to be very low \citep{Murphy16}, the excitation of atomic and molecular species indicates some ongoing star-formation activity \citep[e.g.][]{Wolfe04,Srianand05,Neeleman15}. This is also suggested by numerical simulations \citep[e.g.][]{Cen12, Bird14} or semi-analytical models \citep[e.g.][]{Berry16}, but associations with galaxies remain difficult to establish directly, with only a few associations between intervening DLAs and galaxies revealed so far at $z>2$ \citep[][]{Moller93, Moller04, Fynbo10, Krogager12, Noterdaeme12, Bouche13, Kashikawa14, Hartoog15, Srianand16}. Indeed, statistical studies show a low level of in-situ star formation \citep{Rahmani10, Fumagalli15}, although Ly-$\alpha$ emission has been detected through stacking in sub-samples with the highest \HI\ column densities \citep{Noterdaeme14}, suggesting that the latter more likely arises from gas associated with galaxies at small impact parameters. \citet{Noterdaeme15b} suggested that H$_2$ is more frequently found in high column density DLAs, but that the measured overall molecular fraction remains much lower than what would be expected from single clouds, even at the typically low metallicities of DLAs. This indicates that most of the observed H\,{\sc i} column density along the line of sight is actually unrelated to the H$_2$ core and does not participate in its shielding \citep[see also][]{Noterdaeme15}. This again marks the difficulty of distinguishing the H\,{\sc i} envelope of molecular clouds from unrelated atomic gas along the same line of sight. Several methods have been developed to statistically derive the CNM fraction in DLAs. The low detection rate of 21-cm absorption in DLAs indicates high average spin temperatures and hence that most DLAs are dominated by WNM \citep[e.g.][]{Kanekar14}.
\citet{Neeleman15} recently suggested that the bulk of neutral gas could be in the CNM for at least 5\% of DLAs, based on the fine-structure excitation of singly ionised carbon and silicon. This further indicates that such clouds can be as small as a few parsecs. A small size of CNM clouds is also inferred from the lack of correspondence between 21-cm and H$_2$ absorption seen in DLAs \citep[][]{Srianand12} and by the partial coverage of the background quasar's broad line region by H$_2$-bearing clouds \citep[e.g.][]{Balashev11}. Because H$_2$-bearing systems are rare among the overall DLA population \citep[e.g.][]{Ledoux03,Noterdaeme08,Jorgenson14}, directly targeting H$_2$ (instead of blindly targeting H\,{\sc i} gas) could provide a more efficient way to study the phase transition. Unfortunately, H$_2$ lines are located in the \lya\ forest and are difficult to detect at low spectral resolution \citep[except when the absorption is in the damped regime,][]{Balashev14}. In contrast, neutral carbon provides an excellent tracer of H$_2$ molecules \citep[e.g.][]{Snow06}, since the ionisation energy of \CI\ is close to that of H$_2$ photodissociation. Furthermore, several transitions are located outside the \lya\ forest, making it possible to search for strong C\,{\sc i} absorption even at low spectral resolution \citep[see][]{Ledoux15}. Such a selection has led to the first detections of CO molecules in absorption at $z>1.6$, which also opened the exciting possibility of directly measuring the cosmic microwave background (CMB) temperature through the excitation of CO \citep{Noterdaeme11}. In the two high redshift cases where H$_2$ lines were also covered, we measured overall molecular fractions of about 25\% \citep[][]{Srianand08,Noterdaeme10b}, i.e. significantly higher than in other H$_2$-bearing DLAs, which generally have $f\sim 1\%$ or less \citep{Ledoux03}. In our quest for molecular-rich systems in the Sloan Digital Sky Survey-III \citep[see][for the corresponding search in the SDSS-II]{Ledoux15}, we found a new case, at $\zabs\sim 2.5$ towards the quasar SDSS\,J000015.17$+$004833.3 (hereafter \jz), with strong \CI\ absorption and a prominent 2175~{\AA} bump, which we followed up with the Very Large Telescope. The characteristics of this system in terms of molecular fraction, CO column density, and metallicity exceed all values measured in DLAs so far. A cold, molecule-bearing component is clearly identified, allowing us to perform an unprecedentedly detailed analysis of the chemical and physical conditions in the molecular cloud and to study the transition from the atomic to the molecular phase. We present our observations in Sect.~\ref{s:obs} and the absorption-line analysis of ionic, atomic and molecular species in Sect.~\ref{s:abs}. We discuss the metallicity and dust abundance in Sect.~\ref{s:metallicity}, the extinction in Sect.~\ref{s:ext} and the physical conditions in the cloud in Sect.~\ref{s:phys}. We use CO to measure the cosmic microwave background temperature at $z=2.53$ in Sect.~\ref{s:cmb}. We search for star-formation activity in Sect.~\ref{s:sfr} and conclude in Sect.~\ref{s:concl}.
In this work, we present the detection and detailed analysis of an exceptional molecular absorber at $\zabs=2.53$ towards the quasar \jz, one of the very few absorption systems featuring CO absorption lines known to date (see \citealt{Noterdaeme11} and references therein, \citealt{Ma15}). Using both high resolution and multiwavelength spectroscopic observations, we derive the chemical composition, dust depletion and extinction as well as the molecular content of this cloud. Our main findings are the following: (1) The absorption system is characterised by a super-Solar metallicity (about 2.5~$Z_{\rm \odot}$). Although the uncertainty on this value is large (a factor of about 2-3), it is much higher than the metallicities seen in the overall population of \HI-selected DLAs at this redshift \citep[see e.g.][]{Rafelski12}. The observed depletion of refractory elements is typical of cold gas in the Galactic disc and peaks at the location where molecular gas is found. Higher spectral resolution data would be desirable to better constrain the amount of metals in this very narrow component ($b\sim$0.7~\kms). (2) The DLA has a molecular fraction reaching almost 50\% overall, the largest value measured to date in a high-$z$ absorption system, for a total neutral hydrogen column density of $\log N({\rm H}) = 20.8$. This corresponds to a neutral gas surface density of $\Sigma_{\rm HI} \approx 5$~M$_{\rm \odot}$\,pc$^{-2}$ and a molecular hydrogen surface density $\Sigma_{\rm H_2} \approx 4.4$~M$_{\rm \odot}$\,pc$^{-2}$. These surface densities are very similar to what is seen across the Perseus molecular cloud from high spatial resolution emission observations \citep{Lee12}, despite the \HI\ and H$_2$ surface densities being measured directly along a pencil-beam line of sight in our study while being derived from dust maps in the latter. This also shows that the \HI-to-H$_2$ transition in our high-$z$ system likely follows the same physical processes as in our Galaxy. (3) We further use the different molecular and atomic species to constrain the actual physical conditions in the cold gas with the help of the spectral synthesis code Cloudy. We find that the column densities can be well reproduced by a cloud with a density around $n_{\rm H} \sim 80$~cm$^{-3}$ immersed in a moderate UV field, similar to the local interstellar radiation field. We show that a high cosmic ray ionisation rate together with the presence of small dust grains -- consistent with the depletion pattern, the steep extinction curve, and the presence of a 2175~{\AA} bump -- can explain the high CO fractional abundance. The model also predicts a kinetic temperature around 50~K, in perfect agreement with that derived from the excitation of H$_2$. The CO abundance rises immediately after the \HI-to-H$_2$ transition, thanks to the efficient chemistry paths involving H$_2$, together with pre-shielding of CO. About half of the hydrogen remains in atomic form in the cloud interior, well inside the \HI-to-H$_2$ transition layer, due to H$_2$ destruction by cosmic rays. This can also explain the high HD/2H$_2$ ratio through chemical enhancement of HD compared to H$_2$. We must however keep in mind that, in addition to cosmic rays, several other processes heating the gas can also enhance the production of CO \citep[][]{Goldsmith13} and that our model suffers from uncertainties in several input parameters, such as the metallicity.
The detection of more molecular species in this system, together with a comparison of different codes \citep[e.g. the Meudon PDR code, ][]{LePetit06}, should break degeneracies between parameters such as the dust abundance and the cosmic-ray ionisation rate and lead to a better understanding of the physical processes at play. Our study suggests that the presence of strong C\,{\sc i} lines, detectable at low spectral resolution \citep[as in][]{Ledoux15}, is a good indicator of a high molecular hydrogen column density (in the self-shielded regime), but this (or directly detecting damped H$_2$ lines as in \citealt{Balashev14}) is not a sufficient condition to get CO in detectable amounts. Since small grains seem to play a crucial role in the production of CO, selecting systems that, in addition to \CI, also present a steep extinction curve and/or a 2175~{\AA} bump should significantly increase the probability of detecting CO. (4) While our study shows that the line of sight towards \jz\ has chemical and physical characteristics similar to those found in diffuse molecular regions of the Perseus cloud, the former absorbing cloud is immersed in a warmer cosmic microwave background radiation field. We derive the temperature of the CMB radiation at the absorber's redshift from the excitation of the CO lines, correcting for a small temperature excess due to collisional excitation. The temperature we obtain is in perfect agreement with the adiabatic cooling expected in the standard hot Big-Bang theory, $T_{\rm CMB}(z)=T_0 \times (1+z)\approx 9.6$~K at $z=2.53$. Final remarks: The discovery presented here was facilitated by the steadily increasing number of available spectra of faint quasars, together with sensitive instruments on large telescopes. The amounts of carbon monoxide involved remain however far too low to be detectable in emission (even if the cloud were at $z=0$), making the cloud an example of CO-dark molecular gas \citep{Wolfire10}. Interestingly, such an elusive phase may actually contain a large fraction of the total molecular gas in galaxies \citep[e.g.][]{Smith14}. Efforts should therefore be pursued to detect more absorption systems like the one presented here at high redshift. These will provide excellent targets for detailed high-resolution studies (including of the CMB temperature) using the next generation of extremely large telescopes.
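The two headline numbers above, the gas surface densities and the CMB temperature, can be checked with a few lines of arithmetic. In the sketch below, the molecular column is derived from the quoted $f \approx 50\%$ rather than taken from the paper's tables, so the H$_2$ surface density comes out marginally above the quoted 4.4~M$_{\odot}$\,pc$^{-2}$.

\begin{verbatim}
# Consistency check of the quoted surface densities and T_CMB(z).
M_H = 1.6726e-24                           # hydrogen mass (g)
M_SUN, PC_CM = 1.989e33, 3.086e18
SIGMA_UNIT = M_SUN / PC_CM**2              # 1 Msun/pc^2 in g/cm^2

N_HI = 10**20.8                            # quoted HI column (cm^-2)
N_H2 = N_HI / 2.0                          # implied by f ~ 50% (assumption)

sigma_HI = N_HI * M_H / SIGMA_UNIT         # ~5 Msun/pc^2
sigma_H2 = 2.0 * N_H2 * M_H / SIGMA_UNIT   # ~5 Msun/pc^2 (paper: 4.4,
                                           #  i.e. f just below 50%)
print(f"Sigma_HI ~ {sigma_HI:.1f}, Sigma_H2 ~ {sigma_H2:.1f} Msun/pc^2")

T0, z = 2.725, 2.5255                      # present-day CMB temperature, z_abs
print(f"T_CMB(z) = {T0 * (1 + z):.2f} K")  # ~9.6 K, as measured from CO
\end{verbatim}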
16
9
1609.01422
1609
1609.03611_arXiv.txt
The substructures of light bosonic (axion-like) dark matter may condense into compact Bose stars. We study collapses of critical-mass stars caused by the attractive self-interaction of the axion-like particles and find that these processes proceed in an unexpected, universal way. First, nonlinear self-similar evolution (called ``wave collapse'' in condensed matter physics) forces the particles to fall into the star center. Second, interactions in the dense center create an outgoing stream of mildly relativistic particles which carries away a considerable part of the star mass. The collapse stops when the star remnant is no longer able to support the self-similar infall feeding the collisions. We briefly discuss possible astrophysical and cosmological implications of these phenomena.
16
9
1609.03611
1609
1609.06728_arXiv.txt
We present a study on galaxy detection and shape classification using topometric clustering algorithms. We first use the DBSCAN algorithm to extract, from CCD frames, groups of adjacent pixels with significant fluxes, and we then apply the DENCLUE algorithm to separate the contributions of overlapping sources. The DENCLUE separation is based on the localization of patterns of local maxima, through an iterative algorithm which associates each pixel with the closest local maximum. Our main classification goal is to separate elliptical from spiral galaxies. We introduce new sets of features derived from the computation of geometrical invariant moments of the pixel group shape and from the statistics of the spatial distribution of the DENCLUE local maxima patterns. Ellipticals are characterized by a single group of local maxima, related to the galaxy core, while spiral galaxies have additional ones related to segments of spiral arms. We use two different supervised ensemble classification algorithms, Random Forest and Gradient Boosting. Using a sample of $\simeq 24000$ galaxies taken from the Galaxy Zoo 2 main sample with spectroscopic redshifts, we test our classification against the Galaxy Zoo 2 catalog. We find that the features extracted by our pipeline give on average an accuracy of $\simeq 93\%$, when testing on a test set with a size of $20\%$ of our full data set, with features derived from the angular distribution of density attractors ranking at the top in discrimination power.
Morphology is one of the main characteristics of galaxies, as the physical processes occurring during the lifetime of galaxies strongly determine their shape. Therefore, any theory of galaxy formation and evolution needs to explain the observed distribution of morphological classes \citep{Dressler1980, Bamford2009, Roberts1994}. Accurate information on galaxy types also gives insight well beyond galaxy research, for example in testing cosmological models by studying large scale structure with ETG clustering \citep{Naab:2007}, or in probing dark matter through strong gravitational lensing \citep{Koopmans2004, Treu:2002}. The key challenge of all this research is the accurate and efficient classification of a large number of galaxies. The traditional method of morphological classification classifies galaxies according to Hubble's scheme \citep{Sandage:1961}. This system divides galaxy morphologies into elliptical, lenticular, spiral, and irregular galaxies. However, thanks to the impressive amount of photometric data produced by large galaxy surveys, the size and quality of modern data sets have led to refinements in the classification \citep{Kormendy1996,Cappellari2011,vanderwel2007,Kartaltepe2015}. One possible classification method is given by citizen science projects. An excellent example of collaborative work on visual galaxy morphology classification is the Galaxy Zoo project, which involved more than 100,000 volunteers to determine a galaxy class for about 900,000 (Galaxy Zoo 1) and 304,122 (Galaxy Zoo 2) galaxies \citep{Lintott2011,Willett:2013ea}. On the other hand, the much larger number of galaxies available in the next generation of all-sky survey missions (Euclid, LSST, KIDS, etc.) makes such a human-eye analysis unmanageable; moreover, visual inspection, which requires significant spatial resolution, is limited in galaxy sample size and burdened with the possibility of missing rare and interesting objects due to the lack of specialist knowledge of the volunteers. Thus, to classify large numbers of galaxies into early and late types it is compelling to use automated morphological classification methods instead. Several methods have been used to tackle this challenge, i.e. neural networks \citep{Naim1995, Lahav:1996,Goderya:2002, Banerji:2010iq,Dieleman2015}, decision trees \citep{Owens1996} and ensembles of classifiers \citep{Bazeil:2001}. In this paper we present an automated approach, based on a novel combination of two topometric clustering algorithms: the DBSCAN \citep{Ester96DBScan} and the DENCLUE \citep{Hinneburg:1998wo,Hinneburg:2007tq}. The DBSCAN algorithm has already been successfully applied to the detection of sources in $\gamma-$ray photon lists \citep{Tramacere:2013it,Carlson2013} and to the identification of structure in external galaxies \citep{Rudick:2009}, while the DENCLUE, to our knowledge, has never been used so far in the treatment of astronomical images. In particular, we have noted that the DENCLUE algorithm is effective both in the deblending of confused sources and in the tracking of spiral arms. We have used these algorithms to develop a python package, \texttt{ASTErIsM} (python AStronomical Tools for clustering-based dEtectIon and Morphometry). This software is used both to detect the sources in CCD images and to extract features relevant to the morphological classification.
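Several of the shape features used by the pipeline reduce to flux-weighted image moments. As a concrete illustration (not the \texttt{ASTErIsM} code), the sketch below computes the flux-weighted centroid and the containment-ellipse parameters $\sigma_x$, $\sigma_y$ and $\alpha_{\rm PCA}$ that are formally defined in the source-detection section below, using plain NumPy.

\begin{verbatim}
# Illustrative (not ASTErIsM) computation of flux-weighted shape features:
# centroid, containment-ellipse axes and position angle from the
# eigen-decomposition of the flux-weighted covariance of pixel positions.
import numpy as np

def shape_features(img, mask):
    y, x = np.nonzero(mask)                  # pixel coordinates of the source
    w = img[y, x].astype(float)              # pixel fluxes used as weights
    xc, yc = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.cov(np.vstack([x, y]), aweights=w)  # flux-weighted covariance
    eigval, eigvec = np.linalg.eigh(cov)         # ascending eigenvalues
    sig_minor, sig_major = np.sqrt(eigval)       # minor / major axis sigmas
    alpha = np.degrees(np.arctan2(eigvec[1, 1], eigvec[0, 1]))  # major-axis PA
    return xc, yc, sig_major, sig_minor, alpha % 180.0
\end{verbatim}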
\begin{figure} \centering \begin{tabular}{l} \includegraphics[width=8cm]{fig1.pdf}\\ \end{tabular} \caption{Flow chart diagram for the detection process.} \label{fig:detection_flwchart} \end{figure} The paper is organized as follows: In Sec. \ref{sec:detection}, we describe the detection process, and in particular how we modified the DBSCAN and DENCLUE algorithms to work on CCD images. In Sec. \ref{sec:denclue_spir_arms}, we describe how the DENCLUE method can be used to track spiral arms. In Sec. \ref{sec:pipeline_general} we present a general view of the \texttt{ASTErIsM} pipeline for automatic detection and morphological classification, the sample used in our paper, the feature extraction process, and their statistical characterization. In Sec. \ref{sec:trainign_sets} we describe the setup of the training sets. In Sec. \ref{sec:superv_class} we describe the algorithms used for our supervised classification (Random Forest \citep{Breiman:2003} and Gradient Boosting \citep{Friedman:2001ic}), and the metrics used for the classification. In Sec. \ref{sec:class_strategy} we describe the strategy of our machine learning classification pipeline, and in Sec. \ref{sec:class_performance} we present the results of the classification together with a comparison with other similar works. In Sec. \ref{sec:conclusions}, we present our final conclusions and future developments. The code will be available at \url{https://github.com/andreatramacere/asterism}. \begin{figure*} \centering \begin{tabular}{lll} \includegraphics[width=5cm]{fig2a.pdf}& \includegraphics[width=5cm]{fig2b.pdf}& \includegraphics[width=5cm]{fig2c.pdf}\\ \end{tabular} \caption{Application of the DBSCAN algorithm to source detection (see Sec. \ref{sec:source detection}). The input image is a {\it gri} summed bands image cutout centered on the object with DR8OBJID=1237667549806657543 from the Galaxy Zoo 2 Main sample with spectroscopic redshifts. {\it Left panel:} the original image. {\it Central panel:} the pixels selected (white dots), with flux values above the background threshold. {\it Right panel:} the two sources detected by the DBSCAN algorithm, with $N_{bkg}=3.9$, $K=1.5$ and $\varepsilon=1.0$. The white lines show the source contours, the black crosses show the source centroids, and the yellow ellipses represent the containment ellipsoids.} \label{fig:fig_dbs_detection} \end{figure*} \section[]{Application of density-based clustering methods to detect sources in CCD images} \label{sec:detection} The "density based spatial clustering of applications with noise" algorithm (DBSCAN) \citep{Ester96DBScan,Zaki:2014} is a topometric density-based clustering method that uses the local density of points to find clusters in data sets that are affected by background noise. Let $N_{\varepsilon}(p_i)$ be the set of points contained within the N-dimensional sphere of radius $\varepsilon$ centered on $p_i$, $|N_{\varepsilon}(p_i)|$ the number of contained points, i.e. the estimator of the local density, and $K$ a threshold value. Clusters are built according to the local density around each point $p_i$. A point is classified according to the local density as follows: \begin{itemize}[leftmargin=0.5cm] \item \cbf{core point}: if $|N_{\varepsilon}(p_i)|\geq K$. \item \cbf{border point}: if $|N_{\varepsilon}(p_i)|< K$, but at least one core point belongs to $N_{\varepsilon}(p_i)$. \item \cbf{noise point}: if neither of the conditions above is satisfied.
\end{itemize} Points are classified according to their inter-connection as: \begin{itemize}[leftmargin=0.5cm] \item \cbf{directly density reachable}: a point $p_j$ is defined \textit{directly density reachable} from a point $p_k$, if $p_j \in N_{\varepsilon}(p_k)$ and $p_k$ is a \textit{core} point. \item \cbf{density reachable}: a point $p_j$ is defined \textit{density reachable} from a point $p_k$, if there exists a chain of \textit{directly density reachable} points connecting $p_j$ to $p_k$. \item \cbf{density connected}: two points $p_j$, $p_k$ are defined \textit{density connected} if there exists a \textit{core} point $p_l$ such that both $p_j$ and $p_k$ are \textit{density reachable} from $p_l$. \end{itemize} The DBSCAN algorithm builds clusters by progressively connecting \textit{density connected} points to each set of \textit{core} points found in the data set. Thanks to its embedded capability to distinguish background noise (even when the background is not uniform), it has been successfully used to detect sources in $\gamma-$ray photon lists \citep{Tramacere:2013it} and to identify structures in N-body simulations of galaxy clusters \citep{Rudick:2009}. For a detailed description of the application of the DBSCAN to $\gamma-$ray photon lists, we refer the reader to \cite{Tramacere:2013it}. In general, a photon detection event is characterized by its position in detector/sky coordinates and by further possible features (energy, arrival time, etc.). In the case of photon lists (as in $\gamma-$ray astronomy), the detector/sky coordinates of each event can be recorded and the DBSCAN algorithm can be applied to look for density-based clusters, where a cluster is an astronomical source. Events not belonging to any source (cluster) are assigned to the background (noise). In the case of CCD images, a DBSCAN-suitable representation of the data is less intuitive. Indeed, detected photons are not stored as single events, being integrated and positionally binned in the CCD matrix itself. Since the pixel coordinates have a uniform spatial distribution, we cannot use the original estimator of the local density, $|N_{\varepsilon}(p_i)|$, to find density-based clusters. To overcome this limitation, we have modified the DBSCAN algorithm, using the photon counts/flux recorded in each pixel of the CCD as a new estimator of the local density. \subsection{Image segmentation: DBSCAN} \label{sec:source detection} In our modified version of the DBSCAN algorithm we have changed the procedure for the estimation of the local density as follows: \begin{itemize}[leftmargin=0.5cm] \item We iterate through each pixel $p_{k,l}$, where $k$ refers to the $k_{th}$ row and $l$ to the $l_{th}$ column of the CCD matrix. \item Let $B_{\varepsilon}(k,l)$ be the set of pixels contained in the box centered on the pixel $p_{k,l}$ and enclosing the pixels with rows $k\pm\varepsilon$ and columns $l\pm\varepsilon$. \item We evaluate the local flux $M_\varepsilon(k,l)$ as the total flux collected in $B_{\varepsilon}(k,l)$: \begin{equation} M_\varepsilon (k,l)= \sum_{ (i,j) \in B_{\varepsilon}(k,l) } I(p_{i,j}) \end{equation} \end{itemize} The quantity $M_\varepsilon(k,l)$ is our estimator for the local density. With this choice, the classification into {\it core}, {\it border} and {\it noise} points reads as follows: \begin{itemize}[leftmargin=0.5cm] \item \cbf{core point}: if $M_\varepsilon(k,l)\geq K$. \item \cbf{border point}: if $M_\varepsilon(k,l)< K$, but at least one core point belongs to $B_{\varepsilon}(k,l)$.
\item \cbf{noise point}: if neither of the conditions above is met. \end{itemize} The choice of a box rather than a circle as the DBSCAN scanning brush speeds up the computation: we do not need to evaluate the distances of the CCD pixels from $p_{k,l}$, but just to slice the sub-matrix corresponding to $B_{\varepsilon}(k,l)$. We use values of $\varepsilon$ of a few pixels, typically 1. The remaining part of the algorithm, concerning the recursive build-up of the clusters, follows the original implementation. \begin{figure*} \includegraphics[width=15cm] {fig3.pdf} \caption{Distribution of the density attractors (white crosses) for the cluster with ID=0 in the central panel of Fig. \ref{fig:fig_deblending} for different values of $\varepsilon_d$ (see Sec. \ref{sec:deblending}). Lower values of $\varepsilon_d$ result in a tighter clustering of the attractors toward the local maxima of the image. The input image is a {\it gri} summed bands image cutout centered on the object with DR8OBJID=1237663548511748377 from the Galaxy Zoo 2 Main sample with spectroscopic redshifts.} \label{fig:fig_attractors_evolution} \end{figure*} \begin{figure*} \includegraphics[width=15cm]{fig4.pdf} \caption{Application of the DENCLUE algorithm to deblend two confused sources (see Sec. \ref{sec:deblending}): original image (left), DBSCAN detection result (center), and result of the detection after the DENCLUE-based deblending (right). Source and image provenance are the same as in Fig. 3.} \label{fig:fig_deblending} \end{figure*} In order to further speed up the computation, we have implemented in our version of the algorithm the possibility of removing from the iteration all the CCD pixels with a flux below a given background threshold. The background threshold is evaluated using the following method: \begin{itemize}[leftmargin=0.5cm] \item We split the CCD matrix into N sub-matrices (typically N=10). \item We select the sub-matrix with the lowest integrated flux, and we estimate the mode of its flux distribution, $m_{bkg}$, and its standard deviation, $\sigma_{bkg}$. \item We compute a range of skewness values for the distributions obtained by excluding flux values outside $n \sigma_{bkg}$ from the mode, where $n$ ranges from 0.5 to 2.0 in steps of 0.1. We retain the $n$ value that leads to the lowest skewness and call it $n^*$. \item $m_{bkg}$ and $\sigma_{bkg}$ are updated for the new flux distribution, sigma-clipped at $n^*$. \item We set the pre-filtering background value at $bkg_{pre-th}=N_{bkg}\sigma_{bkg}+m_{bkg}$, i.e. $N_{bkg}$ standard deviations above the mode. The value of $N_{bkg}$ is chosen in the range $\simeq$ [2.5,3.5], in order to remove the bulk of the background points while leaving enough statistics for the noise determination embedded in the DBSCAN. \item We apply a boolean mask to all the CCD pixels with a flux below $bkg_{pre-th}$. \end{itemize} A schematic view of the image segmentation process is shown in the top box of Fig. \ref{fig:detection_flwchart}, and an example of the pre-filtering procedure is shown in the left and central panels of Fig. \ref{fig:fig_dbs_detection}. The left panel of Fig. \ref{fig:fig_dbs_detection} shows the original image, and the white dots in the central panel of the same figure show the pixels selected after the background-based pre-filtering. The last step before applying the DBSCAN algorithm is the setup of the parameters $K$, i.e. the DBSCAN threshold, and $\varepsilon$, i.e. the half width of the DBSCAN scanning box.
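A minimal sketch of the box-summed local density $M_\varepsilon$, the background pre-filtering, and the per-pixel threshold described above is given below. It relies on NumPy/SciPy, replaces the mode/skewness background estimate with a cruder median/$\sigma$ stand-in, and approximates the recursive cluster build-up with connected components, so it is an illustration rather than the \texttt{ASTErIsM} implementation.

\begin{verbatim}
# Illustrative re-implementation (not ASTErIsM) of the modified DBSCAN step:
# per-pixel local density M_eps, background pre-filtering, core/border pixels.
import numpy as np
from scipy.ndimage import uniform_filter, label

def segment_ccd(img, eps=1, K_th=1.5, N_bkg=3.0):
    # crude background estimate from the faintest of 3x3 contiguous blocks
    # (stand-in for the mode + skewness-based sigma clipping of the paper)
    blocks = [b for row in np.array_split(img, 3, axis=0)
              for b in np.array_split(row, 3, axis=1)]
    faintest = min(blocks, key=np.sum)
    bkg_pre_th = np.median(faintest) + N_bkg * np.std(faintest)

    # M_eps as the per-pixel average flux in the (2*eps+1)^2 scanning box,
    # so that K = K_th * bkg_pre_th acts as a per-pixel threshold
    M_eps = uniform_filter(img.astype(float), size=2 * eps + 1)
    K = K_th * bkg_pre_th

    core = M_eps >= K
    near_core = uniform_filter(core.astype(float), size=2 * eps + 1) > 0
    border = near_core & (img > bkg_pre_th) & ~core

    # connected core+border pixels approximate the DBSCAN cluster build-up
    labels, n_sources = label(core | border)
    return labels, n_sources
\end{verbatim}

Using the box average rather than the box sum is what makes $K$ a per-pixel threshold, as prescribed in the parameter setup discussed next.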
The parameter $K$ is the one tuning the internal DBSCAN noise determination, and we use the background pre-filtering to set its value with the following method: \begin{itemize}[leftmargin=0.5cm] \item We set $K=K_{th}\,bkg_{pre-th}$, where the parameter $K_{th}$ is typically in the range [1.0,2.0]. \item To make the value of $K$ independent of $\varepsilon$, $M_\varepsilon(k,l)$ is averaged over the number of pixels in the DBSCAN scanning box. In this way, the value of $K$ represents a {\it per-pixel} threshold. \end{itemize} The second parameter, $\varepsilon$, tunes the size of the DBSCAN scanning box; hence, low values of $\varepsilon$ allow the contours of small objects to be followed accurately. For this reason, throughout the present work, we have used $\varepsilon=1.0$ pixels, meaning that the scanning box has a size of 9 pixels. For each source cluster we evaluate the following relevant parameters: \begin{itemize}[leftmargin=0.5cm] \item[-] $(x_c,y_c)$, the centroid coordinates. \item[-] The cluster containment ellipsoid, defined by the major and minor semi-axes $\sigma_{x},\sigma_{y}$ and by the inclination angle $\alpha_{\rm PCA}$, measured counterclockwise w.r.t. the $x$ axis. All these parameters are evaluated by applying the principal component analysis (PCA) method \citep{Jolliffe1986} to the covariance matrix of the cluster pixel positions $\mathbf{x}$, $\mathbf{y}$, weighted by the cluster pixel fluxes. This method uses the eigenvalue decomposition of the covariance matrix of the two position arrays $\mathbf{x}$ and $\mathbf{y}$; by definition, the square root of the first eigenvalue corresponds to $\sigma_{x}$ and that of the second to $\sigma_{y}$. \item[-] $cnt$, the set of coordinates of the cluster edge pixels. \item[-] $r_{pca}=\sqrt{\sigma_{x}^2 + \sigma_{y}^2}$. \item[-] $r_{max}$, i.e. the distance from the cluster centroid of the most distant cluster pixel. \end{itemize} The right panel in Fig. \ref{fig:fig_dbs_detection} shows the final result for an image from our data set, with $N_{bkg}=3.0$, $K_{th}=1.5$ and $\varepsilon=1.0$. The white lines represent the edge pixels of the clusters (sources), and the yellow ellipses represent the cluster containment ellipsoids, defined by $\sigma_{x},\sigma_{y}$, and $\alpha_{\rm PCA}$. The black crosses represent the cluster centroids. \subsection{Source deblending: DENCLUE} \label{sec:deblending} When sources are very close, and/or when we need to use a low value of $K_{th}$ in order to recover faint structures, as in the case of the detection of spiral arms, it may happen that the DBSCAN algorithm is not able to separate them (see Fig. \ref{fig:fig_deblending}, central panel). To deblend two (or more) `confused' sources we have implemented a deblending method based on the DENCLUE algorithm \citep{Hinneburg:1998wo,Hinneburg:2007tq,Zaki:2014}. The original implementation of the DENCLUE algorithm relies on kernel density estimation to find the local maxima of dense regions of points. In the case of digital images it is not possible to apply straightforwardly the equations reported in the original implementation, since the pixel coordinates have a uniform spatial distribution. To overcome this limitation, we have modified the DENCLUE algorithm by substituting the kernel density estimation with a convolution of the image with a given kernel.
Let $p_j$ be the $j_{th}$ pixel, with coordinates $\mathbf{q_j}$. The kernel function $G$ is a non-negative, symmetric function centered on $\mathbf{q_j}$ that represents the influence of the pixel $p_i$ on $p_j$. The convolved image at $p_j$ is estimated by the function $f$ as: \begin{equation} f(p_j) \propto \sum_{i=1}^{n} G\big(\frac{\mathbf{q}_j-\mathbf{q}_i}{h} \big) I(p_i) \end{equation} where $n$ is the number of pixels in the domain of the function $G$, $h$ is the bandwidth of the kernel, and $I(p_i)$ is the image flux at the pixel $p_i$ with coordinates $\mathbf{q}_i$. For example, in the case of a two-dimensional Gaussian kernel, the function $G$ reads: \begin{equation} G(\mathbf{q}) \propto \exp(\mathbf{-z z^T}), \label{eq:gauss_kernel} \end{equation} where: \begin{align*} \mathbf{z}=\frac{\mathbf{q-q}_i}{h} \end{align*} and the bandwidth of the kernel, $h$, acts as the standard deviation of the distribution. The DENCLUE algorithm is designed to find, for each point $p_j$, the corresponding \cbf{density attractor} point, i.e. a local maximum of $f$. To find the attractors, rather than using a computationally expensive gradient ascent approach, we use the fast hill climbing technique presented in \cite{Hinneburg:2007tq} and \cite{Zaki:2014}, which is an iterative update rule with the formula: \begin{equation} \mathbf{q}_{t+1}=\frac{ \sum_{i=1}^{n} G\big(\frac{\mathbf{q}_t-\mathbf{q}_i}{h}\big)\mathbf{q}_i I(p_i)} { \sum_{i=1}^{n} G\big(\frac{\mathbf{q}_t-\mathbf{q}_i}{h}\big)I(p_i)} \end{equation} where $t$ is the current iteration, $t+1$ the updated value, and $\mathbf{q}_{t=0}\equiv\mathbf{q}_j$. The fast hill climbing starts at each point with coordinate vector $\mathbf{q}_j$, and iterates until $\|\mathbf{q}_t -\mathbf{q}_{t+1} \| \leq \varepsilon_d$. The coordinate vector $\mathbf{q}_{t+1}$ identifies the position of the \textit{density attractor} $p_j^*$ for the point $p_j$. \begin{figure*} \centering \begin{tabular}{lll} \includegraphics[height=8.2cm]{fig5a.pdf}& & \includegraphics[height=8.2cm]{fig5b.pdf}\\ \end{tabular} \caption{Application of the DENCLUE algorithm to spiral arm tracking (see Sec. \ref{sec:denclue_spir_arms}). White dots represent density attractors. \textit{Left panels}: density attractors for an elliptical galaxy, for different values of $h$ and $\varepsilon_d$, reported in the figures. \textit{Right panels:} same as in the left panels, for a spiral galaxy. The images correspond to a {\it gri} summed bands cutout centered on the object with ID DR8OBJID=1237657233308188800 from the Galaxy Zoo 2 SDSS Stripe 82 sample (left panels), and DR8OBJID=1237659756599509101 (right panels).} \label{fig:fig_denclue_spiral_arms} \end{figure*} Once all the attractors have been evaluated, they are clustered using the DBSCAN algorithm to eventually deblend the confused sources, in a way that can be summarized by the following steps: \begin{itemize}[leftmargin=0.5cm] \item each source cluster $S$, detected by the DBSCAN, is defined by a set of points $\{p_j \in S\}$, corresponding to the pixels in the source, with coordinates $\mathbf{q}_j$ and flux values $I(p_j)$; \item for each point $p_j \in S$ we compute the \textit{density attractor}. This means that for each pixel, $\forall p_m \in S, \exists p_m^*$, i.e. the two sets $\{p_m \}$ and $\{p_m^*\}$ map $S$ bijectively onto $S^*$; \item all the attractor points in $S^*$ are clustered using the DBSCAN, producing a list of clusters of \textit{density attractors} $\{CA_1,...,CA_n\}$.
\item The set of pixels $p_m$ whose \textit{density attractors} belong to the same cluster of attractors $CA_n$, i.e. $\{p_m:p^*_m \in CA_n \}$, defines a new sub-source $s_n$ (sub-cluster) \item Each sub-source $s_n$ can eventually be validated or discarded according to some criteria: \begin{itemize} \item minimum number of pixels \item maximum number of pixels \item ratio of the sub-cluster flux to the parent source flux \end{itemize} \end{itemize} A schematic view of the DENCLUE-based deblending process is shown in the bottom box of Fig. \ref{fig:detection_flwchart}.
In the following we will use a Gaussian kernel function for the DENCLUE-based source deblending. We note that both $h$ and $\varepsilon_d$ have a significant impact on the final result of the deblending. The kernel width $h$ sets the `scale' of the deblended sub-clusters: large values of $h$ smooth high-frequency signals, whereas small values preserve small-scale features. We have found that a kernel width of the order of $h \simeq 0.1\, r_{max}$ provides good results both in deblending sources, avoiding the fragmentation of objects with complex morphology (such as spiral galaxies), and in separating close sources, even with a large difference in the integrated flux. The parameter $\varepsilon_d$ is responsible for the convergence of the fast hill climbing algorithm, hence for the determination of the position of the final attractor. A large value of $\varepsilon_d$ allows the algorithm to find local maxima related to noisy pixels or to morphological features of the source; a small value, on the contrary, leads it to track the more significant maxima related to the core of the galaxy. We can see this clearly in Fig. \ref{fig:fig_attractors_evolution}, where we show the \textit{density attractors} (white crosses) for different values of $\varepsilon_d$. We note that, as $\varepsilon_d$ decreases, the attractors get more and more tightly clustered around the two local maxima, corresponding to the two source cores. Finally, in Fig. \ref{fig:fig_deblending} we show how the deblending algorithm works. The left panel shows an image with three sources, two of which are separated by a few pixels. The central panel shows the source detection with the DBSCAN threshold set to $K_{th}=1.5$, which finds only two sources. The right panel shows the image after the application of the DENCLUE-based deblending method with a Gaussian kernel function, with $h=0.05\, r_{max}$.
\section[]{Application of DENCLUE to track spiral arms} \label{sec:denclue_spir_arms} Since the DENCLUE algorithm is able to track flux maxima in 2D images, we decided to test whether this capability can be used to track spiral arms too. As a first step we need to distinguish between clusters of \textit{density attractors} related to the core of the galaxy, i.e. \cbf{`core' density attractor} clusters, and \cbf{`non-core' density attractor} clusters, which could be related to spiral arm patterns. We define the \textit{`core' density attractor} cluster as the cluster of attractors with the smallest distance from the galaxy centroid $(x_c,y_c)$, and fully contained within the galaxy effective radius $r_{eff}$. The second step is to find the optimal configuration of the DENCLUE parameters for tracking spiral arms, i.e. we face a situation that is opposite to that of source deblending. Indeed, in this case we are interested in finding attractors related not only to the core of the source, but also to fainter morphological features.
As anticipated in the previous section, the bandwidth $h$ and the fast hill climbing threshold $\varepsilon_d$ play a relevant role in the extraction of the \textit{density attractors}. Of course, $h$ has to be comparable with the scale of the features that we want to extract, while $\varepsilon_d$ tunes the impact of the noise level on the detection of the attractors. We have found that a Gaussian kernel with a bandwidth $h$ in the range [1.0,2.0] is able to track spiral arm features well for the largest fraction of source sizes in our dataset, and that the optimal choice of $\varepsilon_d$ is in the range [0.1,0.2]. Even though we have identified an optimal range for both DENCLUE parameters, it can still happen that a given combination of $\varepsilon_d$ and $h$ is not able to find a cluster of attractors that meets the requirements to be a `core' \textit{density attractor} cluster, as in the case of very noisy images. In order to mitigate such a possible bias we have implemented an automated iterative procedure to set the values of the two parameters of interest, $\varepsilon_d$ and $h$. The iteration is performed by decreasing $\varepsilon_d$ by 5\% and increasing $h$ by 5\%, until the `core' cluster of \textit{density attractors} has been found, or a maximum of $maxtrials = 10$ iterations is reached. An example of this application is given in Fig. \ref{fig:fig_denclue_spiral_arms}, where we show the typical case for an elliptical (left panels) and a spiral galaxy (right panels), for $h=1.2, 1.5$ and $\varepsilon_d=0.10, 0.15$. It is clear that in the case of elliptical galaxies `non-core' \textit{density attractors} are absent or rare; on the contrary, in the case of spiral objects we note a significant number of `non-core' \textit{density attractors}, following quite well the local maxima of the image related to the spiral arms.
\begin{figure} \centering \includegraphics[width=8.5cm]{fig6.pdf} \caption{Flow chart diagram showing the structure of the data processing pipeline.} \label{fig:fig_pipeline} \end{figure}
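To make the attractor extraction more concrete, the following sketch implements the fast hill-climbing update of the previous section with a Gaussian kernel and then groups the resulting attractors with a generic DBSCAN (here the \texttt{scikit-learn} implementation is used as a stand-in for the flux-weighted DBSCAN of our pipeline). Function and parameter names are purely illustrative, and the snippet is not the actual \texttt{ASTErIsM} code.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def climb(q0, coords, flux, h, eps_d, max_iter=200):
    """Fast hill climbing: move the starting point q0 towards its density attractor.

    coords : (n, 2) array of pixel coordinates, flux : (n,) array of pixel fluxes,
    h : Gaussian kernel bandwidth, eps_d : convergence threshold.
    """
    q = q0.astype(float)
    for _ in range(max_iter):
        z = (q - coords) / h
        w = np.exp(-np.sum(z * z, axis=1)) * flux        # kernel weight x image flux
        q_new = (w[:, None] * coords).sum(axis=0) / w.sum()
        if np.linalg.norm(q_new - q) <= eps_d:
            return q_new
        q = q_new
    return q

def density_attractors(coords, flux, h=1.5, eps_d=0.15):
    """Compute one attractor per pixel, then cluster the attractor positions."""
    attractors = np.array([climb(q, coords, flux, h, eps_d) for q in coords])
    labels = DBSCAN(eps=1.0, min_samples=4).fit_predict(attractors)
    return attractors, labels
\end{verbatim}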
\label{sec:conclusions} The results presented in this work show the successful application of the \texttt{ASTErIsM} software, based on topometric clustering algorithms (DBSCAN and DENCLUE), to automatic galaxy detection and shape classification. For the detection process we have found that: \begin{itemize}[leftmargin=0.5cm] \item DBSCAN clusters usually preserve the actual shape of the source, allowing the contours of any arbitrary morphology to be followed quite well. \item When sources are `confused', the application of the DENCLUE algorithm allows us to deblend them. \end{itemize} We have verified that, in addition to deblending, the \textit{density attractors} evaluated by the DENCLUE algorithm track spiral arm features quite well, and we have found that: \begin{itemize}[leftmargin=0.5cm] \item In general, elliptical galaxies have a single cluster of \textit{density attractors} related to the core of the galaxy, while spiral galaxies have additional ones related to the presence of spiral arms. \item The radial and angular distributions of the \textit{density attractors} are very different in the case of spiral and elliptical objects. \end{itemize} Based on these results we have defined a new set of features for the galaxy classification, which maximize the information given by the DBSCAN clusters (see Secs. \ref{sec:geom_features} and \ref{sec:Hu_moments}) and the DENCLUE \textit{density attractors} (see Sec. \ref{sec:attractors_features}). In addition to these clustering-related features we have also evaluated classical morphological features (see Sec. \ref{sec:Morph_features}). We have tested the classification performance of the features evaluated by our pipeline on a training set of about 24k objects, selected from the GZ2 SDSS main sample with spectroscopic redshift, using a Random Forest and a Gradient Tree Boosting classifier. We have tested the classification performance against the GZ2 classification into elliptical vs. spiral classes, and against the answers to the tasks \texttt{t01} and \texttt{t04} of the GZ2 decision tree. In general the accuracy of our classification, for the test set, is $\simeq 93\%$, with a performance comparable to other approaches based on ANNs \citep{Dieleman2015,Banerji:2010iq} or on SVM and LDA classifiers \citep{HuertasCompany2008,Ferrari2015}. As future developments, we would like to investigate how to deal with the classification of a larger number of morphologies, in particular investigating the capability to detect bars and bulges. Moreover, we aim at using the density attractors as a baseline to fit spiral arms, and at investigating how this compares to human-identified arms. We also plan to improve the DENCLUE-based deblending using a ML approach, adding a feedback between the \textit{density attractors} extracted in the deblending process and the \textit{density attractors} extracted in the morphological feature extraction process.
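As a purely schematic illustration of the classification step (the feature and label files below are hypothetical placeholders, and the training configuration is not the one used for the results quoted above), a Random Forest classifier can be trained and evaluated on the extracted feature table as follows.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_objects, n_features) array of clustering + morphological features
# y: labels from the GZ2 catalogue (0 = elliptical, 1 = spiral)
X, y = np.load("features.npy"), np.load("labels.npy")   # hypothetical files

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
\end{verbatim}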
We present MUSE integral field spectroscopic observations of the host galaxy (PGC~043234) of one of the closest ($z=0.0206$, $D\simeq 90$~Mpc) and best-studied tidal disruption events (TDEs), ASASSN-14li. The MUSE integral field data reveal asymmetric and filamentary structures that extend up to $\gtrsim 10$~kpc from the post-starburst host galaxy of ASASSN-14li. The structures are traced only through the strong nebular \oxy, \nitro, and \halpha\ emission lines. The total off-nuclear \oxy\ luminosity is $4.7\times 10^{39}$~erg~s$^{-1}$ and the ionized H mass is $\rm \sim 10^4(500/n_e)\,M_{\odot}$. Based on the BPT diagram, the nebular emission can be driven by either AGN photoionization or shock excitation, with AGN photoionization favored given the narrow intrinsic line widths. The emission line ratios and spatial distribution strongly resemble ionization nebulae around fading AGNs such as IC~2497 (Hanny's Voorwerp) and ionization ``cones'' around Seyfert~2 nuclei. The morphology of the emission line filaments strongly suggests that PGC~043234 is a recent merger, which likely triggered a strong starburst and AGN activity leading to the post-starburst spectral signatures and the extended nebular emission line features we see today. We briefly discuss the implications of these observations in the context of the strongly enhanced TDE rates observed in post-starburst galaxies and their connection to enhanced theoretical TDE rates produced by supermassive black-hole binaries.
When a star passes within the tidal radius of a supermassive black-hole (SMBH), the strong gravitational tidal forces can tear it apart, potentially producing a short-lived luminous flare known as a tidal disruption event \citep[TDE;][]{rees88,evanskochanek89,strubbe09}. TDEs can be used to find SMBHs in the centers of galaxies, to study their mass function and its evolution in cosmic time, to study the properties of the disrupted star and the stellar debris, and to determine the SMBH mass and spin \citep[see review by][and references therein]{komossa15}. The theoretical TDE rate in a galaxy with a single SMBH has been estimated to be $10^{-5} - 10^{-4}$ per year \citep[e.g.,][]{wangmerritt04,stonemetzger16}. However, if a galaxy or galaxy merger has created a SMBH binary, the TDE rate could increase by a few orders of magnitude \citep[e.g.,][]{chen09,chen11}. The TDE rate can also be higher in steep central stellar cusps \citep[e.g.,][]{magorriantremaine99,stonevanvelzen16}. In the last few years, $\gtrsim 50$ TDE candidates\footnote{See {\tt https://tde.space}.} have been identified at different wavelengths from $\gamma$-rays to optical \citep[e.g.,][]{esquej07,gezari08,arcavi14,holoien14ae,holoien14li,holoien15oi}. One of the most unexpected observational results from these discoveries was the finding by \citet{arcavi14} that a large fraction of TDEs have post-starburst hosts, galaxies with strong Balmer absorption lines (A-type stellar populations with ages $\sim 100-1000$~Myr) on top of a spectrum characteristic of an old (elliptical) stellar population with weak or no evidence for very recent star-formation (E+A galaxies; e.g., \citealt{zabludoff96}). In a follow-up study using SDSS galaxies and a larger TDE host sample, \citet{french16} confirm this result and estimate that the observed TDE rate might be more than two orders of magnitude higher in galaxies with strong Balmer absorption features compared to normal galaxies. Given this puzzling observation and the theoretical expectation of enhanced TDE rates in galaxies with SMBH binaries, detailed studies of TDE host galaxies could provide important insights into the physical mechanism that produces this rate enhancement. The All-Sky Automated Survey for SuperNovae \citep[ASAS-SN;][]{shappee14}, a transient survey of the whole sky at optical wavelengths, has discovered three of the closest ($90-220$~Mpc) and best studied TDEs (ASASSN-14ae, \citealt{holoien14ae,brown16a}; ASASSN-14li, \citealt{holoien14li}; ASASSN-15oi, \citealt{holoien15oi}), providing an excellent sample for detailed host galaxy studies. In particular, ASASSN-14li at $z=0.0206$ ($D\simeq 90$~Mpc), discovered in November 2014, has been the best studied TDE to date at all wavelengths, including the optical/UV \citep{holoien14li,cenko16,brown16b}, radio \citep{alexander16,vanvelzen16,romero16}, X-rays \citep{holoien14li,miller15}, and mid-infrared \citep{jiang16}, as well as with theoretical modeling \citep{krolik16}. The archival, nuclear SDSS spectrum of the host galaxy of ASASSN-14li, PGC~043234 (VIII Zw 211; $\rm M_{\star}\simeq 3\times 10^9$~M$_{\odot}$), shows strong Balmer lines in absorption and no strong evidence for current star-formation \citep{holoien14li}. Indeed, the galaxy has the highest Lick H$\delta_{\rm A}$ index in the TDE host sample studied by \citet{french16}, indicating a strong post-starburst stellar population in its nuclear region with an age of $\sim 100$~Myr. 
In this letter, we present integral field spectroscopic observations of the host galaxy of ASASSN-14li and its surroundings obtained in early 2016. In Section~\S\ref{sec2} we discuss the observations and data reduction. In Section~\S\ref{sec3} we present the results and the analysis of the data. We discuss our results in Section~\S\ref{sec4}. Throughout the paper, we assume a distance to ASASSN-14li of $D=90.3$~Mpc corresponding to a linear scale of $0\farcs 44$/kpc \citep{holoien14li}.
\label{sec4} We have presented MUSE integral field spectroscopic observations of the nearby TDE ASASSN-14li host galaxy, PGC~043234, and its environment ($\rm 26\,kpc \times 26\,kpc$). These data reveal the presence of asymmetric and filamentary emission line structures that extend many kpc from the post-starburst host galaxy of ASASSN-14li (Figure~\ref{fig2}). The extended ($\gtrsim 5$~kpc) filamentary structures are traced only by the detection of strong nebular emission lines of \oxy, \nitro, and \halpha, and are undetected in the continuum, as illustrated in Figure~\ref{fig2}. The total off-nuclear line luminosities are $4.7\times 10^{39}$~erg~s$^{-1}$, $1.8\times 10^{39}$~erg~s$^{-1}$, and $1.3\times 10^{39}$~erg~s$^{-1}$ for [\ion{O}{3}], [\ion{N}{2}], and H$\alpha$, respectively, which implies an off-nuclear ionized H mass of $\rm M_{ion} \sim 10^4(500/n_e)\,M_{\odot}$ \citep{osterbrock06}. The locations of the main nebular emission line ratios for both the extended emission line regions and the nucleus of PGC~043234 in the BPT diagnostic diagram (Figure~\ref{fig4}) are consistent with photoionization by an AGN or shock excitation, but not photoionization by current star-formation. We do not favor shock excitation models as an explanation for the line ratios because the nebular emission lines have a low average intrinsic velocity dispersion of $\sim 40$~km~s$^{-1}$ and no broad wings. Fast shocks ($v \gtrsim 200$~km~s$^{-1}$) that can produce the observed line ratios seem incompatible with the line widths \citep{allen08}, while slow shocks ($v \lesssim 200$~km~s$^{-1}$), which might be more nearly compatible with the line widths (though still broader), do not reproduce the line ratios \citep{rich11}. Also, the \hetwo\ to \hbeta\ line ratio of 0.6 in the NW1 region is higher than in most of the fast shock models \citep{allen08}. We therefore conclude that the emission lines are most likely photoionized by an AGN. There are two lines of evidence that PGC~043234 was a weak AGN prior to the TDE. The first, discussed in Section~\S\ref{sec3}, is that the stellar population-corrected SDSS archival spectrum from 2007 of the nuclear region has line ratios suggesting AGN activity. PGC~043234 is also associated with an unresolved FIRST \citep{becker95} radio source with a luminosity of $\rm L_{1.4\,GHz} \simeq 2.6\times 10^{21}$~W~Hz$^{-1}$ \citep{holoien14li} or $\rm \nu \times L_{\nu} \sim 4\times 10^{37}$~erg~s$^{-1}$ that is typical of low-luminosity AGN \citep[e.g.,][]{ho99}. Although the radio luminosity could be produced by star formation, there is no evidence for current star formation in the host galaxy from either the SDSS/MUSE spectra or the overall spectral energy distribution \citep{holoien14li}. However, the upper limit on the soft X-ray luminosity of PGC~043234 from the ROSAT All-Sky Survey \citep{voges99} of $\rm L_X < 6 \times 10^{40}$~erg~s$^{-1}$ \citep{holoien14li,miller15}, implies an ionizing luminosity which is too small to explain the extended emission line features. Using the \halpha\ luminosity of the brightest off-nuclear emission region (NW1), assuming case B recombination and following \citet{keel12}, we estimate a minimum required ionizing luminosity from a central source of $\rm L_{ion}\gtrsim 2\times 10^{41}$~erg~s$^{-1}$. The production of strong emission lines can then be explained in two ways. First, the pre-TDE nucleus could be a Seyfert~2, with other lines of sight being exposed to much higher ionizing fluxes.
Such ionization ``cones'' are observed around local Seyfert~2 AGN on similar physical scales \citep[e.g.,][]{wilson94,keel12}. Second, the observed emission line structures also resemble Hanny's Voorwerp, a large ionization nebula located $15-25$~kpc from the galaxy IC~2497 \citep{lintott09}, and other ionization nebulae where the line emission is thought to be an echo of AGN activity in the recent past rather than a reflection of present day activity \citep[e.g.,][]{keel12,schweizer13}. The [\ion{S}{2}] doublet line ratio measured in the spectrum of the strongest \oxy\ emitting region (NW1) implies a recombination time that is short compared to the light travel time to the furthest emission line regions ($\sim 10^2$~years versus $\sim 10^4$~years), which suggests that the pre-TDE source would more likely be a Seyfert~2 rather than a Voorwerp. The short recombination timescale is also at odds with a ``fossil nebula'' interpretation \citep{binette87}. However, the possible systematic errors associated with the density estimate from the [\ion{S}{2}] doublet line ratio, due to the dominant telluric absorption correction at that wavelength, make this conclusion fairly uncertain. Also, the distribution of the emission line regions around PGC~043234 does not clearly favor the geometry seen in Seyfert~2 ionization cones \citep[e.g.,][]{wilson94}. In either case, the overall morphology of the emission line features strongly indicates that PGC~043234 recently underwent a merger, leaving relatively dense gas on large scales with no associated stars. This is consistent with both recent AGN activity and the post-starburst spectrum of the galaxy, strongly supporting the galaxy-galaxy merger scenario proposed for E+A galaxies \citep{zabludoff96,goto05}. The stellar continuum emission itself is quite smooth, suggesting that we are observing the merger at a relatively late time \citep[e.g.,][]{hopkins08}. In these late phases, we might expect a relatively compact SMBH binary in the nucleus of the host galaxy, which would then naturally produce a greatly enhanced TDE rate \citep[e.g.,][]{chen09,chen11}. This would be an exciting possibility for explaining the high TDE rates that appear to be associated with post-starburst galaxies \citep{arcavi14,french16}, including the host of ASASSN-14li. Indeed, the detection of a radio source at $\sim 2$~pc from the nucleus of PGC~043234 in high resolution EVN observations could be explained by a companion AGN \citep{romero16}.
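For reference, the order-of-magnitude numbers quoted above, the ionized hydrogen mass implied by the off-nuclear H$\alpha$ luminosity and the recombination time, can be reproduced with the standard case~B relations. The sketch below uses textbook recombination coefficients at $T\sim10^4$~K and is meant only to illustrate the scalings, not to reproduce the detailed calculation of this work.
\begin{verbatim}
import numpy as np

M_SUN = 1.989e33          # g
M_P   = 1.673e-24         # g, proton mass
L_HA  = 1.3e39            # erg/s, off-nuclear H-alpha luminosity
N_E   = 500.0             # cm^-3, assumed electron density
ALPHA_HA_EFF = 8.6e-14    # cm^3/s, case B effective H-alpha coefficient (10^4 K)
ALPHA_B      = 2.6e-13    # cm^3/s, case B total recombination coefficient (10^4 K)
E_HA  = 3.03e-12          # erg, energy of an H-alpha photon

# Ionized hydrogen mass: M_ion = m_p * L_Ha / (n_e * alpha_eff * E_Ha)
M_ion = M_P * L_HA / (N_E * ALPHA_HA_EFF * E_HA) / M_SUN
print("M_ion ~ %.1e Msun" % M_ion)      # ~1e4 Msun for n_e = 500 cm^-3

# Recombination time: t_rec ~ 1 / (n_e * alpha_B)
t_rec = 1.0 / (N_E * ALPHA_B) / 3.15e7  # in years
print("t_rec ~ %.0f yr" % t_rec)        # ~ a few x 10^2 yr
\end{verbatim}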
We perform the study of the stability of cosmological scalar field models by using the Jacobi stability analysis, or the Kosambi-Cartan-Chern (KCC) theory. In the KCC approach we describe the time evolution of the scalar field cosmologies in geometric terms, by performing a "second geometrization" and considering them as paths of a semispray. By introducing a non-linear connection and a Berwald type connection associated to the Friedmann and Klein-Gordon equations, five geometrical invariants can be constructed, with the second invariant giving the Jacobi stability of the cosmological model. We obtain all the relevant geometric quantities, and we formulate the condition of the Jacobi stability for scalar field cosmologies in the second order formalism. As an application of the developed methods we consider the Jacobi stability properties of the scalar fields with exponential and Higgs type potentials. We find that a Universe dominated by a scalar field with an exponential potential is in a Jacobi unstable state, while the cosmological evolution in the presence of Higgs fields has alternating stable and unstable phases. By using the standard first order formulation of the cosmological models as dynamical systems we have investigated the stability of the phantom quintessence and tachyonic scalar fields, by lifting the first order system to the tangent bundle. It turns out that in the presence of a power law potential both these models are Jacobi unstable during the entire cosmological evolution.
A large number of cosmological observations, obtained initially from distant Type Ia Supernovae, have convincingly proven that the Universe has undergone a late time accelerated expansion \cite{1n,2n,3n,4n}. In order to explain these observations a deep change in our paradigmatic understanding of the cosmological dynamics is necessary, and many ideas have been put forward to address them. The "standard" explanation of the late time acceleration is based on the assumption of the existence of a mysterious component, called dark energy, which is responsible for the observed characteristics of the late time evolution of the Universe. On the other hand, the combination of the results of the observations of high redshift supernovae, of the WMAP data and of the recently released Planck data indicates that the location of the first acoustic peak in the power spectrum of the Cosmic Microwave Background Radiation is consistent with the prediction of the inflationary model for the density parameter $\Omega$, according to which $\Omega =1$. The cosmological observations also provide strong evidence that the equation-of-state parameter $w=p/\rho$ of the cosmological fluid, where $p$ is the pressure and $\rho$ is the density, lies in the range $-1\leq w <-1/3$ \cite{acc}. In order to explain the observed cosmological dynamics, it is usually assumed that the Universe is dominated by two main components: cold (pressureless) dark matter (CDM), and dark energy (DE) with negative pressure, respectively. CDM contributes $\Omega _{m}\sim 0.3$ \cite{P2}, and its introduction is mainly motivated by the necessity of theoretically explaining the galactic rotation curves and the large scale structure formation. On the other hand, DE represents the major component of the Universe, providing $\Omega _{DE}\sim 0.7$. Dark energy is the major factor determining the recent acceleration of the Universe, as observed from the study of the distant type Ia supernovae \cite{acc}. Explaining the nature and properties of dark energy has become one of the most active fields of research in cosmology and theoretical physics, with a huge number of proposed DE models (for reviews see, for instance, \cite{PeRa03, Pa03,Od,LiM, Mort}). One interesting possibility for explaining DE is provided by cosmological models containing a mixture of cold dark matter and quintessence, the latter representing a slowly-varying, spatially inhomogeneous component \cite{8n}. From a theoretical as well as a particle physics point of view the idea of quintessence can be implemented by assuming that it is the energy associated with a scalar field $Q$, having a self-interaction potential $V(Q)$. When the potential energy density $V(Q)$ of the quintessence field is greater than the kinetic one, the pressure $p=\dot{Q}^{2}/2-V(Q)$ associated with the quintessence $Q$-field is negative. The properties of the quintessential cosmological models have been actively considered in the physical literature (for a recent review see \cite{Tsu}). As opposed to the cosmological constant of standard general relativity, the equation of state of the quintessence field changes dynamically with time \cite{11n}. Alternative models, in which the late-time acceleration can be driven by the kinetic energy of the scalar field, called $k$-essence models, have also been proposed \cite{kessence}. Scalar fields $\phi$ that are minimally coupled to gravity via a negative kinetic energy can also explain the recent acceleration of the Universe.
Interestingly enough, they allow values of the equation-of-state parameter with $w<-1$. These types of scalar fields, known as phantom fields, have been proposed in \cite{phan1}. For phantom scalar fields the energy density and pressure are given by $\rho _{\phi}=-\dot{\phi}^2/2+V\left(\phi \right)$ and $p _{\phi}=-\dot{\phi}^2/2-V\left(\phi \right)$, respectively. The interesting properties of phantom cosmological models for dark energy have been investigated in detail in \cite{phan2,phan3, phan4}. Recent cosmological observations show that at some moment during the cosmological evolution the value of the parameter $w$ may have crossed the standard value $w= -1$, corresponding to the general relativistic cosmological constant $\Lambda$. This cosmological situation is called \textit{the phantom divide line crossing} \cite{phan4}. In the case of scalar field models with cusped potentials, the crossing of the phantom divide line was investigated in \cite{phan3}. Another alternative way of explaining the phantom divide line crossing is to model dark energy by a scalar field that is non-minimally coupled to gravity \cite{phan3}. Scalar fields are also assumed to play a fundamental role in the evolution of the very early Universe, being a key ingredient of the inflationary scenario \cite{1nn, 2nn}. Originally, the idea of inflation was proposed to provide solutions to the singularity, flat space, horizon, and homogeneity problems, to the absence of magnetic monopoles, as well as to the problem of large numbers of particles \cite{Li90, Li98}. However, presently it is believed that the most important feature of inflation is the generation of both the initial density perturbations and the background cosmological gravitational waves. These important cosmological parameters can be determined in many different ways, like, for example, through the study of the anisotropies of the microwave background radiation, the analysis of the local (peculiar) velocity galactic flows, of the clustering of galaxies, and the determination of the abundance of gravitationally bound structures of different types, respectively \cite{Li98}. In many inflationary models the dynamical evolution of the early Universe is driven by a single scalar field, called the inflaton, with the inflaton rolling in some underlying self-interaction potential \cite{1nn,2nn,Li90, Li98}. One common approximation in the study of the inflationary evolution is the slow-roll approximation, which can be successfully used in two separate contexts. The first situation is in the study of the classical inflationary dynamics of expansion in the lowest order approximation. In this case the contribution of the kinetic energy of the inflaton field to the expansion rate is ignored. The second situation is represented by the calculation of the perturbation spectra. The standard expressions deduced for these spectra are valid to lowest order in the slow-roll approximation \cite{Co94}. Finding exact inflationary solutions of the gravitational field equations for different types of scalar field potentials is also of great importance for the understanding of the dynamics of the early Universe. Such exact solutions have been found for a large number of inflationary potentials. Moreover, the potentials allowing a graceful exit from inflation have been classified \cite{Mi}. Hence, the theoretical investigation of scalar field models is an essential task in cosmology.
Among the various methods used to study the properties of scalar fields, those based on the mathematical formalism of the qualitative study of dynamical systems are of considerable importance. The usefulness of the dynamical systems formulation of physical models is mainly determined by its predictive power. This predictive power is essentially determined by the stability of the solutions. In a realistic physical system, due to the limited precision of the measurements, some uncertainties in the initial conditions always exist. Therefore a physically meaningful mathematical model must also offer detailed and useful information on the evolution of the deviations of the possible trajectories of the dynamical system from a given reference trajectory. Hence an important requirement in mathematical modelling is the understanding of the local stability of the physical and cosmological processes. This information on the system behavior is as important as the understanding of the late-time deviations. The global stability of the solutions of dynamical systems described by systems of non-linear ordinary differential equations is analyzed in the framework of the well-known mathematical theory of Lyapunov stability. In this mathematical approach the fundamental quantities that measure exponential deviations from a given trajectory are the so-called Lyapunov exponents \cite{1,2}. It is usually very difficult to determine the Lyapunov exponents analytically for a given dynamical system, and thus one must resort to numerical methods. On the other hand, the important problem of the local stability of the solutions of dynamical systems, described by ordinary differential equations, is less understood. Cosmological models have been intensively investigated by using methods from dynamical systems and Lyapunov stability theory \cite{W,C,Liap,cosm3,cosm4}. In particular, phase space analysis proved to be a very useful method for the understanding of the cosmological evolution. When studying the evolution of cosmological models, the dynamical equations can be represented by an autonomous dynamical system, described by a set of coupled - usually strongly non-linear - differential equations for the physical parameters. This representation allows the study of the Lyapunov stability of the model, without explicitly solving the field equations for the basic variables. Furthermore, the importance of the Lyapunov analysis is related to the fact that stationary points of the dynamical system correspond to exact or approximate analytic solutions of the field equations. Thus the dynamical system formulation provides a useful tool for obtaining exact or approximate solutions of the field equations in cosmologically interesting situations. Even though the mathematical methods of Lyapunov stability analysis are well established, the study of the stability of dynamical systems from different points of view is extremely important. The comparison of the results of an alternative approach with the corresponding Lyapunov exponents analysis can provide a deeper understanding of the stability properties of the system. An alternative and very powerful method for the study of systems of ordinary differential equations is represented by the so-called Kosambi-Cartan-Chern (KCC) theory, which was initiated in the pioneering works of Kosambi \cite{Ko33}, Cartan \cite{Ca33} and Chern \cite{Ch39}, respectively.
The KCC theory was inspired and influenced by the geometry of the Finsler spaces (for a recent review of the KCC theory see \cite{rev}). From a mathematical point of view the KCC theory is a differential geometric theory of the variational equations for the deviations of the whole trajectory with respect to the nearby ones \cite{An00}. In the KCC geometrical description of systems of ordinary differential equations one associates a non-linear connection and a Berwald type connection to the system of equations. With the use of these geometric quantities five geometrical invariants are obtained. The most important invariant is the second invariant, also called the curvature deviation tensor, which gives the Jacobi stability of the system \cite{rev, An00, Sa05,Sa05a}. The KCC theory has been applied to the study of different physical, biochemical or technical systems (see \cite{Sa05, Sa05a, An93, KCC, KCC1}). An alternative geometrization method for dynamical systems, with applications in classical mechanics and general relativity, was proposed in \cite{Pet10} and \cite{Kau}, and further investigated in \cite{Pet0,Pet1}. The Henon-Heiles system and Bianchi type IX cosmological models were also investigated within this framework. In particular, in \cite{Pet0} a theoretical approach based on the geometrical description of dynamical systems and of their chaotic properties was developed. For the base manifold a Finsler space was introduced, whose properties allow the description of a wide class of physical systems, including those with potentials depending on time and velocities, for which the Riemannian approach is unsuitable. It is the purpose of the present paper to consider a systematic investigation of the Jacobi stability properties of the flat homogeneous and isotropic general relativistic cosmological models. By starting from the standard Friedmann equations we perform, as a first step in our analysis, a "second geometrization" of these equations, by associating to them a non-linear connection and a Berwald connection, respectively. This procedure allows us to obtain the so-called KCC invariants of the Friedmann equations. The second invariant, called the curvature deviation tensor, gives the Jacobi stability properties of the cosmological model. The KCC theory can be naturally applied to systems of second order ordinary differential equations. The Friedmann equations can be formulated as second order differential equations, similarly to the Klein-Gordon equation describing the scalar field. Therefore the KCC theory can be applied to matter and scalar field dominated cosmological models. We obtain the general condition for the Jacobi stability of scalar fields, which is described by two inequalities involving the second and first derivatives of the scalar field potential, the energy density of the field, and the time derivative of the field itself. The geodesic deviation equations describing the time variation of the deviation vector are also obtained. As an application of the developed formalism we investigate the stability properties of the scalar field cosmological models with exponential and Higgs type potentials, respectively. It turns out that the exponential potential scalar field is Jacobi unstable during its entire evolution, while the time evolution of the scalar field cosmological models with a Higgs potential shows complicated dynamics with alternating stable and unstable Jacobi phases.
The Jacobi stability properties of the Higgs type models are determined by the numerical value of the ratio of the self-coupling constant to the square of the mass of the Higgs particle. The Lyapunov stability properties of the scalar field cosmological models are usually investigated by reformulating the evolution equation as a set of three first order ordinary differential equations. In order to apply the KCC theory to such systems they must be lifted to the tangent bundle. From a mathematical point of view this requires taking the time derivative of the first order equations, so that their "second geometrization" can be easily performed. We consider in detail the Jacobi stability properties of the phantom quintessence and tachyon scalar field cosmological models. We study the Jacobi stability condition of these models, and we find that they are Jacobi unstable during the entire expansionary cosmological evolution. The present paper is organized as follows. We review the basic ideas and the mathematical formalism of the KCC theory in Section~\ref{kcc}. The Jacobi stability analysis of the homogeneous, isotropic, flat cosmological models by using the second order formulation of the dynamics is performed in Section~\ref{sect3}. We consider both the cases of the matter dominated and scalar field dominated cosmological models. As an application of the developed formalism we investigate in detail the Jacobi stability of the scalar fields with exponential and Higgs potentials, respectively. The Jacobi stability of the first order dynamical system formulation of scalar field cosmological models is considered in Section~\ref{sect4}, in which the KCC geometrization of the phantom quintessence and tachyonic scalar field models is analyzed in detail. We discuss and conclude our results in Section~\ref{sect5}. The KCC geometric quantities giving the geometric description of the phantom quintessence and tachyon scalar field cosmologies are presented in Appendix \ref{appA} and Appendix~\ref{appB}, respectively.
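As a purely illustrative aside, the second order system analyzed in Section~\ref{sect3}, a flat FRW Universe dominated by a scalar field, can be integrated numerically as sketched below for an exponential potential (in units with $8\pi G=c=1$; the parameter values are arbitrary and the snippet is not the code used to produce the results of this paper).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

V0, lam = 1.0, 2.0                       # exponential potential V = V0*exp(-lam*phi)

def V(phi):  return V0 * np.exp(-lam * phi)
def dV(phi): return -lam * V0 * np.exp(-lam * phi)

def rhs(t, y):
    """y = [a, phi, phidot]; flat FRW + Klein-Gordon equations, 8*pi*G = 1."""
    a, phi, phidot = y
    H = np.sqrt((0.5 * phidot**2 + V(phi)) / 3.0)   # Friedmann constraint
    return [a * H, phidot, -3.0 * H * phidot - dV(phi)]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.1], dense_output=True, rtol=1e-8)
a, phi, phidot = sol.y
print("final scale factor:", a[-1])
\end{verbatim}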
\label{sect5} In the present paper we have investigated the Jacobi stability properties of the scalar field cosmological models by using the KCC theory, which represents a powerful mathematical method for the analysis of dynamical systems. Scalar field cosmological models represent a non-trivial testing object for studying non-linear effects in the framework of general relativity. From a mathematical point of view the Jacobi (in)stability represents a natural generalization of the (in)stability of the geodesic flow on a differentiable manifold, endowed with a Riemannian or Finslerian type metric to a \textit{non-metric setting}. The KCC theory can be applied to scalar field cosmological models that can be formulated mathematically as sets of second order ordinary non-linear differential equations. Then the geometric invariants associated to this system (nonlinear and Berwald connections), and the deviation curvature tensor, as well as its eigenvalues, can be explicitly obtained. The time evolution of the components of the deviation vector can also be obtained by explicitly solving the geodesic deviation equations. The Jacobi stability, and its theoretical foundation, the KCC theory, offers an alternative approach to the "classical" Lyapunov approach, by investigating \textit{the deviations of the entire trajectory} of the cosmological evolution equations with respect to the nearby ones under the effects of a small perturbation. In the framework of general relativity we may call the applications of the KCC theory to the study of the gravitational fields as a "second geometrization", in which already geometric quantities are supplemented by additional geometric structures. Hence general relativistic cosmological models can be described in geometric terms originating from their dynamical system structure, with these new geometric structure fully determined by the underlying Riemannian geometry, and the physical properties of the scalar fields (their self-interaction potential). The stability properties of the perturbations of a given trajectory describing the cosmological evolution are determined by the properties of the curvature deviation tensor, a geometric quantity constructed from the connections (non-linear and Berwald) associated to the dynamical system describing the cosmological evolution. It is important to note that the KCC theory can be directly applied to systems of second order differential equations, which can be interpreted geometrically as the paths (or geodesics) associated to a semispray. In investigating the Jacobi stability of cosmological models we have followed two approaches. Since the cosmological evolution equations (the Friedmann equations) are second order differential equations, the KCC theory can be naturally and directly applied to study the stability of the cosmic evolution. As a first step one obtains the two non-zero components of the non-linear connection, with the $N_2^1$ component depending on the product of the scale factor and of the time derivative of the field, while the $N_2^2$ component depends on the energy density of the scalar field, as well as of the scalar field potential. After obtaining the components of the deviation curvature tensor we have formulated the general condition of the stability of the scalar field cosmological models, which is determined by two inequalities involving the second and the first derivative of the scalar field potential, the energy density of the field, and the time variation of the scalar field itself. 
As an application of the developed formalism we have considered two scalar field models, both relevant for the study of the early and late stages of the cosmological evolution. The first case we considered is the scalar field with an exponential potential. We have studied in detail the KCC geometric properties of this model. It turns out that the Jacobi stability condition, which can be expressed in terms of the components of the deviation curvature tensor, is not satisfied during the cosmological evolution, and that the Universe described by the exponential potential scalar field is in a Jacobi unstable state. This result is independent of the numerical values of the parameter $\lambda $, describing the properties of the potential, and can also be inferred from the behavior of the components of the deviation vector $\xi ^i$, with $\xi ^1$ diverging exponentially in time. As a second case we have considered the Higgs type potential. For this potential the KCC geometric quantities show a complex behavior. After a period in which the scalar field and the potential are almost constant, the field starts to oscillate, with the amplitude of the oscillations decreasing in time. This behavior of the Higgs field is also reflected in the behavior of the components of the deviation curvature tensor, which are also oscillating functions. The Jacobi stability of this cosmological model strongly depends on the numerical value of the parameter $\eta =\lambda /4M^2$. For small values of $\eta $, the Universe evolves between successive Jacobi stable and unstable states. With the increase of the numerical value of $\eta $ the time intervals in which the Universe is Jacobi stable decrease quickly, and for large values of $\eta $ the Universe is in a Jacobi unstable state during its entire cosmological evolution. As a second approach for the study of the Jacobi stability of scalar field cosmologies we have considered the first order dynamical system formulation of the scalar field evolution equations. In this approach, by introducing a new set of variables, expressed in terms of the square root of the potential, the time derivative of the scalar field, and the Hubble function, respectively, the Friedmann equations in the presence of scalar fields can be reformulated as a first order dynamical system, consisting of three highly non-linear ordinary differential equations. In order to apply the KCC theory this dynamical system must be lifted to the tangent bundle, and formulated as a second order differential system. We have analyzed, by using this approach, two specific scalar field models, the phantom quintessence and the tachyon scalar field with power law potentials. It turns out that for this choice of the potential both scalar field models are Jacobi unstable. The power law potential gives a very simple form for the function $\Gamma (z)$, which takes constant values during the cosmological evolution. This situation is similar to the case of the exponential potential, and leads to a significant simplification of the mathematical formalism. We have started our study of the applications of the KCC theory to cosmological problems with the investigation of the standard matter dominated cosmological models in their dynamical system formulation. We have studied the Jacobi stability of the \textit{critical points} of different models, and we have shown that they are Jacobi unstable.
A full comparison between the Jacobi and Lyapunov properties of the critical points for second order systems was given in \cite{Sa05} and \cite{Sa05a}, and hence we will not discuss this relation in detail. However, this study of the critical points of matter dominated cosmological models also shows the fundamental differences between the Lyapunov stability and KCC theories: while Lyapunov stability is mostly restricted to the study of critical points, the KCC theory has the potential of investigating the deviation of the full trajectory during the entire period of the cosmological evolution. Therefore we can consider a Lyapunov stability analysis of steady states (called the linear analysis), and a "Lyapunov type" stability analysis of the whole trajectory (the KCC or Jacobi stability analysis), and these two methods are \textit{complementary} but \textit{distinct} from each other. The KCC theory also introduces the first set of KCC invariants $\epsilon ^i$, $i=1,...,n$, giving the contravariant KCC derivative of the vector field $y^i$. The first KCC invariant can be interpreted as an external force. We did not study in detail the time evolution of the first KCC invariant, since its properties are not directly related to the stability issues that were our main points of interest. In the present paper we have performed a stability analysis of the scalar field cosmological models, in which we have considered a description of the deviations of the whole trajectories of the differential system describing the cosmological dynamics, and we have provided some basic theoretical and computational tools for this study. Further investigations of the Jacobi stability properties of cosmological models may provide some methods for discriminating between different evolutionary scenarios, as well as for the better understanding of some other fundamental processes, like, for example, structure formation, which played an essential role in the evolution of our Universe. \paragraph*{{\bf Conflict of interest}} The authors declare that there is no conflict of interest regarding the publication of this paper.
We target the thermal emission spectrum of the non-transiting gas giant HD 88133 b with high-resolution near-infrared spectroscopy, by treating the planet and its host star as a spectroscopic binary. For sufficiently deep summed flux observations of the star and planet across multiple epochs, it is possible to resolve the signal of the hot gas giant's atmosphere compared to the brighter stellar spectrum, at a level consistent with the aggregate shot noise of the full data set. To do this, we first perform a principal component analysis to remove the contribution of the Earth's atmosphere to the observed spectra. Then, we use a cross-correlation analysis to tease out the spectra of the host star and HD 88133 b to determine its orbit and identify key sources of atmospheric opacity. In total, six epochs of Keck NIRSPEC \textit{L} band observations and three epochs of Keck NIRSPEC \textit{K} band observations of the HD 88133 system were obtained. Based on an analysis of the maximum likelihood curves calculated from the multi-epoch cross correlation of the full data set with two atmospheric models, we report the direct detection of the emission spectrum of the non-transiting exoplanet HD 88133 b and measure a radial projection of the Keplerian orbital velocity of 40 $\pm$ 15 km/s, a true mass of 1.02$^{+0.61}_{-0.28}M_J$, a nearly face-on orbital inclination of 15${^{+6}_{-5}}^{\circ}$, and an atmosphere opacity structure at high dispersion dominated by water vapor. This, combined with eleven years of radial velocity measurements of the system, provides the most up-to-date ephemeris for HD 88133.
Since the discovery of 51 Peg b \citep{Mayor95}, the radial velocity (RV) technique has proven indispensable for exoplanet discovery. Hundreds of exoplanets have been revealed by measuring the Doppler wobble of the exoplanet host star \citep{Wright2012}, principally at visible wavelengths. To first order, the RV method yields the period and the minimum mass ($M\sin(i)$) of the orbiting planet. In order to complete the characterization of a given exoplanet, one would want to measure its radius and constrain its atmospheric constituents. Traditionally, this information is accessible only if the planet transits its host star with respect to our line of sight via transmission or secondary eclipse photometry. Successes with these techniques have resulted in the detections of water, carbon monoxide, and methane on the hottest transiting gas giants \citep{Madhu2014}. These gas giants orbit their host stars in days, are known as hot Jupiters, and have an occurrence rate of only 1$\%$ in the exoplanet population \citep{Wright2012}. Broadband spectroscopic measurements of transiting hot Jupiter atmospheres are rarely able to resolve molecular bands, let alone individual lines, creating degeneracies in the solutions for atmospheric molecular abundances. High-resolution infrared spectroscopy has recently provided another route to the study of exoplanet atmospheres, one applicable to transiting and non-transiting planets alike. Such studies capitalize on the Doppler shift between the stellar and planet lines, allowing them to determine the atmosphere compositions, true masses, and inclinations of various systems. With the CRyogenic InfraRed Echelle Spectrograph (CRIRES) at the VLT, \cite{Snellen2010} provided a proof-of-concept of this technique and detected carbon monoxide in the atmosphere of the transiting exoplanet HD 209458 b, consistent with previous detections using Hubble Space Telescope data \citep{Swain2009}. By detecting the radial velocity variation of a planet's atmospheric lines in about six hours of observations on single nights, further work has detected the dayside and nightside thermal spectra of various transiting and non-transiting hot Jupiters, reporting detections of water and carbon monoxide, as well as the presence of winds and measurements of the length of day \citep{Snellen2010, Rodler2012, Snellen2014,Brogi2012, Brogi2013, Brogi2014, Brogi2016, Birkby2013, deKok2013, Schwarz2015}. With HARPS, \cite{Martins2015} recently observed the reflected light spectrum of 51 Peg b in a similar manner, combining 12.5 hours of data taken over seven nights when the full dayside of the planet was observable. \cite{Lockwood2014} studied the hot Jupiter tau Boo b using Keck NIRSPEC (Near InfraRed SPECtrometer), confirmed the CRIRES measurement of the planet's Keplerian orbital velocity, and detected water vapor in the atmosphere of a non-transiting exoplanet for the first time. NIRSPEC was used to observe an exoplanet's emission spectrum over $\sim$2-3 hours each night across multiple epochs, in order to capture snapshots of the planet's line-of-sight motion at distinct orbital phases. In combination with the many orders of data provided by NIRSPEC's cross dispersed echelle format and the multitude of hot Jupiter emission lines in the infrared, \cite{Lockwood2014} achieved sufficient signal-to-noise to reveal the orbital properties of tau Boo b via the Doppler shifting of water vapor lines in its atmosphere.
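The core operation in these studies is the cross correlation of each observed spectrum with a model template over a grid of trial Doppler shifts. A deliberately simplified, one-dimensional version of such a velocity cross correlation is sketched below (array names are illustrative); the analysis in this paper (Sec.~\ref{2dcc}) combines such correlations over many orders and epochs.
\begin{verbatim}
import numpy as np

C_KMS = 2.998e5   # speed of light in km/s

def velocity_ccf(wave, flux, model_wave, model_flux, velocities):
    """Cross-correlate an observed spectrum with a Doppler-shifted template.

    wave, flux            : observed wavelength grid and continuum-normalized flux
    model_wave, model_flux: template spectrum
    velocities            : array of trial radial velocities in km/s
    """
    ccf = np.zeros_like(velocities, dtype=float)
    f = flux - flux.mean()
    for i, v in enumerate(velocities):
        shifted = model_wave * (1.0 + v / C_KMS)          # non-relativistic shift
        m = np.interp(wave, shifted, model_flux)
        m -= m.mean()
        ccf[i] = np.sum(f * m) / np.sqrt(np.sum(f**2) * np.sum(m**2))
    return ccf

velocities = np.arange(-150.0, 150.0, 1.0)
# ccf = velocity_ccf(wave, flux, model_wave, model_flux, velocities)
\end{verbatim}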
Here, we continue our Keck NIRSPEC direct detection program with a study of the emission spectrum of the hot gas giant HD 88133 b, a system that allows us to test the brightness limits of this method and develop a more robust orbital dynamics model that can be applied to eccentric systems. In Section~\ref{RVs} we present new (stellar) radial velocity (RV) observations of HD 88133 and an updated ephemeris. In Section~\ref{methods} we outline our NIRSPEC observations, reduction, and telluric correction method. In Section~\ref{2dcc} we describe the cross correlation and maximum likelihood analyses, and present the detection of the thermal spectrum of HD 88133 b. In Section~\ref{discussion} we discuss the implications of this result for the planet's atmosphere and for future observations.
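As a schematic illustration of the PCA-based telluric correction mentioned above (a generic sketch of the idea with illustrative array shapes, not the exact NIRSPEC reduction described in Section~\ref{methods}), the leading principal components of a stack of spectra, which are dominated by telluric and instrumental variations, can be removed as follows.
\begin{verbatim}
import numpy as np

def remove_principal_components(spectra, n_remove=2):
    """Subtract the leading principal components from a stack of spectra.

    spectra : (n_frames, n_pixels) array of extracted spectra
    n_remove: number of leading components to remove (telluric/instrumental)
    """
    mean = spectra.mean(axis=0)
    resid = spectra - mean
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    s_clean = s.copy()
    s_clean[:n_remove] = 0.0                  # zero out the dominant components
    cleaned = U @ np.diag(s_clean) @ Vt + mean
    return cleaned
\end{verbatim}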
We report the detection of the emission spectrum of the non-transiting exoplanet HD 88133 b using high resolution near-infrared spectroscopy. This detection is based on the combined effect of thousands of narrow absorption lines, predominantly water vapor, in the planet's spectrum. We find that HD 88133 b has a Keplerian orbital velocity of 40 $\pm$ 15 km/s, a true mass of 1.02$^{+0.61}_{-0.28}M_J$, and a nearly face-on orbital inclination of 15${^{+6}_{-5}}^{\circ}$. Direct detection of hot Jupiter atmospheres via this approach is limited in that it cannot measure the {\it absolute} strengths of molecular lines, relative to the photometric contrast. Thus, this method will yield degeneracies between the vertical atmospheric temperature gradients and absolute molecular abundance ratios, but the relative abundances of species should be better constrained. For transiting planets having Spitzer data, it should be possible to better measure absolute abundances by comparing Spitzer eclipse depths and the output of our cross-correlation analyses using various planetary atmosphere models. With the further refinement of this technique and with the improved future implementation of next-generation spectrometers and coronagraphs, especially on the largest optical/infrared telescopes, we are optimistic that this method may be extended to the characterization of terrestrial atmospheres at Earth-like semi-major axes. This paper shows progress in that direction by presenting an algorithm capable of removing telluric lines whilst preserving planet lines even if the planet does not move significantly during the observations.
LkCa~15 is an extensively studied star in the Taurus region known for its pre-transitional disk with a large inner cavity in dust continuum and normal gas accretion rate. The most popular hypothesis to explain the LkCa~15 data invokes one or more planets to carve out the inner cavity, while gas continues to flow across the gap from the outer disk onto the central star. We present spatially unresolved HCO$^+$~$J = 4\rightarrow3$ observations of the LkCa\,15 disk from the JCMT and model the data with the \textsc{ProDiMo} code. We find that: {\it (1)} HCO$^+$ line-wings are clearly detected, confirming the presence of gas in the cavity within $\lesssim$\,50\,AU of the star. {\it (2)} Reproducing the observed line-wing flux requires both a significant suppression of cavity dust (by a factor $\gtrsim10^4$ compared to the ISM) and a substantial increase in the gas scale-height within the cavity ($H_0/R_0 \sim 0.6$). An ISM dust-to-gas ratio (d:g$=$10$^{-2}$) yields too little line-wing flux regardless of the scale-height or cavity gas geometry, while a smaller scale-height also under-predicts the flux even with a reduced d:g. {\it (3)} The cavity gas mass is consistent with the surface density profile of the outer disk extended inwards to the sublimation radius (corresponding to mass $M_d \sim 0.03$\,\msun), and masses lower by a factor $\gtrsim$10 appear to be ruled out.
\label{intro} The early stages of planet formation are thought to begin in the disks rotating around T Tauri and Herbig stars. However, the processes dust and gas undergo to form planetary systems are not well understood. The most natural candidates for sites of planetary formation are transitional disks. A transitional disk is defined as a primordial or `protoplanetary' disk with little to no near-infrared (near-IR) and mid-infrared (mid-IR) emission in the disk SED and strong dust continuum emission at wavelengths $\geq 10$~$\micron$, where the disk has an inner hole of (presumed) dust depletion. Similarly, `pre-transitional' disks have an optically thick inner disk that is separated from the optically thick outer disk by an optically thin gap or cavity in dust continuum emission. Fits to pre-transitional disk SEDs support near-IR dust emission from the optically thick inner disk in combination with a reduction in the mid-IR \citep{2005ApJ...630L.185C, 2010ApJ...717..441E}. While observational techniques (e.g. interferometry) can be used to resolve transitional disks, molecular line spectroscopy can investigate other disk properties. Line profiles of Keplerian disks have double-peaked emission due to rotation and higher velocity line-wings that trace the inner disk radii. Fitting disk models to molecular line profiles can place constraints on the disk mass, extent (radii), inclination and other properties of the disk (e.g. \citealt{2010A&A...518L.124M, 2010A&A...510A..18K, 2009A&A...501L...5W, 2010A&A...518L.125T, 2010A&A...518L.127M, 2011A&A...530L...2T, 2011A&A...534A..44W, 2012A&A...538A..20T}). \citet{2004MNRAS.351L..99G} use this technique with HCO$^+$~$J=4\rightarrow3$ on six T Tauri stars with circumstellar disks, including LkCa~15. HCO$^+$ is a useful tracer of the dense gas in disks with a high critical density $n_{crit}\sim2\times10^{6}$~cm$^{-3}$ \citep{2011piim.book.....D} and varies gradually with disk radius \citep{2002A&A...386..622A}. Results from \citet{2004MNRAS.351L..99G} indicate a lack of HCO$^+$ line-wing emission in the LkCa~15 disk. \citet{2004MNRAS.351L..99G} placed the outer radius of the disk gap at $\sim200$~AU, which was consistent with marginally resolved HCO$^+$~$J=1\rightarrow0$ interferometric images of LkCa 15 from \citet{2003ApJ...597..986Q}. The dust-hole size is estimated as 58~AU from IR emission \citep{2010ApJ...717..441E} and 50~AU from millimetre emission (\citealt{2011ApJ...732...42A}, hereafter A11). However, there is evidence that gas is present in the dust continuum cavity from observations of $^{12}$CO and $^{13}$CO (e.g. \citealt{2007A&A...467..163P, 2015A&A...579A.106V}), where gas is found at radii of 13$\pm$5~AU and 23$\pm$8~AU for $^{12}$CO and $^{13}$CO, respectively. To better understand the mechanism behind accretion onto LkCa~15, tighter constraints need to be placed on the gas mass in the inner disk cavity using high density tracers like HCO$^+$. Even though CO is an abundant molecule that can trace low-mass material in a disk, it often becomes optically thick at low density (i.e. $\sim1$~M$_\mathrm{Jupiter}$ of gas; \citealt{2001ApJ...561.1074T}) and is not necessarily suited to tracing the region of the disk forming Jupiter-mass exoplanets. In this paper, HCO$^+$~$J=4\rightarrow3$ line observations are used to trace the dense gas in the LkCa~15 disk.
We use this spatially unresolved spectrum and a chemical disk model to study the properties (mass, dust-to-gas ratio or `d:g', and scale height) of the central cavity and outer disk. This work improves on the past study of \citet{2004MNRAS.351L..99G} by observing HCO$^+$ in the LkCa~15 disk a factor of $\sim10$ deeper. \S\ref{observations} details the observations of HCO$^+$~$J=4\rightarrow3$ emission from the LkCa~15 disk. \S\ref{modelling} describes the modelling parameters we use to fit the data, including the disk surface density, scale height and grain settling. We present the results from the model fits in \S\ref{results}, detailing how the models are developed and improved to fit the HCO$^+$ line. Lastly, \S\ref{discussion} discusses the results and the implications for accretion in LkCa~15.
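As a rough illustration of why the high-velocity line-wings probe the cavity, the sketch below evaluates the projected Keplerian velocity as a function of radius; the stellar mass and inclination used here are nominal values assumed purely for illustration and are not derived from the data discussed in this paper.
\begin{verbatim}
import numpy as np

# Minimal sketch: projected Keplerian velocity versus radius.
# Illustrative assumptions (not fitted values from this work):
V_1AU  = 29.78                 # Keplerian speed at 1 AU around 1 Msun [km/s]
M_star = 1.0                   # assumed stellar mass [Msun]
incl   = np.radians(50.0)      # assumed disk inclination

def v_los(R_AU):
    """Line-of-sight Keplerian velocity [km/s] at radius R_AU [AU]."""
    return V_1AU * np.sqrt(M_star / R_AU) * np.sin(incl)

for R in (200.0, 50.0, 20.0):
    print(f"R = {R:5.0f} AU  ->  |v_los| ~ {v_los(R):.1f} km/s")
# R =   200 AU  ->  |v_los| ~ 1.6 km/s
# R =    50 AU  ->  |v_los| ~ 3.2 km/s
# R =    20 AU  ->  |v_los| ~ 5.1 km/s
\end{verbatim}
Under these assumptions, emission offset from the systemic velocity by $\gtrsim$3~km\,s$^{-1}$ can only arise from gas inside the $\sim$50~AU dust cavity, which is why detected line-wings at such velocities point to gas within the hole.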
\label{discussion} A number of models have been tested for fitting the HCO$^+$ line emission in the LkCa~15 disk, focussing on the disk cavity at a radius $<50$~AU. We detect significant line-wing flux, indicating the presence of gas in the disk cavity up to 50~AU from the star. We have been able to model the observed line-wing flux by suppressing the cavity dust (d:g$=10^{-6}$) and increasing the gas scale height substantially in this region ($H_0/R_0\sim0.6$ instead of the standard outer disk $H_0/R_0\sim0.1$). Both an ISM-like d:g$=10^{-2}$ and/or a small scale height ($H_0=10$~AU) under-predict the HCO$^+$ line flux. Lastly, the gas mass in the cavity is roughly what is expected in the absence of a cavity (0.03~M$_\odot$), where masses lower by a factor $\sim10$ under-predict line-wing flux. Our study suggests that possible planets sculpting the LkCa~15 dust cavity appear to do so without greatly diminishing the amount of gas within it. However, spatially resolved observations are needed to test this result. The detected HCO$^+$~$J=4\rightarrow3$ line-wings in LkCa~15 are consistent with Greaves (2004) HCO$^+$ line-wings detected in GG~Tau, GM~Aur and DM~Tau, which are all known to have cavities in the dust continuum emission. Indeed, at velocities $\mathrm{v_{lc}} \pm \gtrsim 3.0$~km~s$^{-1}$, each of these sources have similar high velocity line-wing flux, implying the cavities of all transitional disks potentially have similarly low dust-to-gas ratios and puffed-up inner rims in gas, as in LkCa~15. As discussed in \S\ref{disk_structure}, our models have focussed on fitting the disk cavity from 0.1--50~AU using a simple one-component model. However, this model can be improved upon by using two-components, where an inner dusty disk is set from radii $\sim0.1$--10~AU (e.g. \citealt{2008ApJ...682L.125E, 2010ApJ...717..441E, 2011ApJ...732...42A, 2015A&A...579A.106V}) and dust gap between $\sim 10$--50~AU. Past work from \citet{2013A&A...559A..46B} has suggested the inner dust disk (assumed to have a dust depletion factor $\delta_\mathrm{dust}=10^{-5}$ w.r.t. the dust density of the outer disk) can significantly influence chemistry in the gap, particularly for CO emission. The inner disk can shield the cavity from direct stellar irradiation, decreasing gas temperatures in the gap and allowing CO and H$_2$ to survive at lower gas masses. This could also potentially allow HCO$^+$ to survive at lower gas masses (where H$_2$ and CO are necessary for HCO$^+$ formation). A11 estimated the inner dusty disk to have a surface density (and total gas$+$dust mass) depleted by $\delta_\mathrm{inner}=10^{-6}$ w.r.t. the outer disk. Similarly, \citet{2013A&A...559A..46B} used a dust depletion $\delta_\mathrm{dust}=10^{-5}$ to test the effects of an inner disk scenario (with and without gas depletion). Our study has also tested dust and gas depletion within the complete cavity region at radii 0.1--50~AU by varying dust-to-gas ratios and the cavity mass. We vary dust-to-gas ratios in \S\ref{dust_to_gas} from $10^{-2}$ down to $10^{-10}$ (without gas depletion), which corresponds to dust depletion factors $\delta_\mathrm{dust, cav} = 1$ down to $10^{-8}$ w.r.t. the dust surface density in the outer disk. This range of dust-to-gas ratios (and thus dust depletion factors) investigates the dust content within the inner disk (and gap) region, where this range includes the inner disk depletion factors also used in A11 and \citet{2013A&A...559A..46B}. 
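For reference, and assuming (as in the models above) that the cavity gas column itself is not depleted, the quoted cavity dust-to-gas ratios map onto dust depletion factors through
\[
\delta_{\rm dust,\,cav} \;=\; \frac{({\rm d{:}g})_{\rm cav}}{({\rm d{:}g})_{\rm ISM}},\qquad {\rm e.g.}\;\; ({\rm d{:}g})_{\rm cav}=10^{-6}\;\Rightarrow\;\delta_{\rm dust,\,cav}=\frac{10^{-6}}{10^{-2}}=10^{-4},
\]
which is the correspondence used in the comparison with A11 and \citet{2013A&A...559A..46B} below.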
Our best-fit cavity model (d:g$=10^{-6}$ or $\delta_\mathrm{dust, cav}=10^{-4}$) has a relative dust content that is a factor 100$\times$ higher than expected from the inner dusty disk in A11. However, as shown in Figure~\ref{fig:full_model_sed} and discussed in \S\ref{final_model}, the SED for the best-fit model is inflated at near- to mid-IR wavelengths. For the cavity model with a dust-to-gas ratio matching the inner disk dust depletion from A11 (i.e. $\delta_\mathrm{dust, cav}=10^{-6}$ or d:g$=10^{-8}$), the corresponding SED better matches the data but the HCO$^+$ line-wing emission is lower than the observations (likely because more UV emission is able to penetrate the disk with the lower dust content and dissociate CO and H$_2$ needed to form HCO$^+$). These findings indicate that shielding from dust within the cavity region (either from an inner disk or a small reservoir of dust within the full cavity) is important for modelling HCO$^+$ within the disk. A more surprising result is the lack of HCO$^+$ emission from the disk cavity when both the dust and gas are depleted in this region (see \S\ref{lower_ten}). In \S\ref{lower_ten}, the best-fit cavity models are depleted by a factor $10$ in both dust and gas so that the gas depletion is $\delta_\mathrm{gas, cav} = 10^{-1}$ and dust depletion is $\delta_\mathrm{dust, cav} = 10^{-5}$ w.r.t. the gas and dust in the outer disk. The relative dust content is still a factor 10$\times$ higher than what is expected from dust depletion in the inner disk in A11 and is equivalent to the dust depletion tested for an inner disk in \citet{2013A&A...559A..46B}. Even though there is still a sizeable reservoir of dust available within the cavity to shield the remaining gas from UV emission, the factor 10 in gas depletion has caused the HCO$^+$ line-wing emission to drop significantly lower than the observed HCO$^+$ line. Therefore, any depletion in gas density will not be widespread across the observed disk gap. The modelled dust depletion is consistent with past work, including \citet{2015A&A...579A.106V} which suggests dust depletion in the LkCa~15 cavity is on scales $\sim10^{-4}$ w.r.t. the ISM dust-to-gas ratio and \citet{2011ApJ...729...47Z} which suggests similar dust-to-gas mass ratios in the inner portions of the GM~Aur disk (ranging from $10^{-2}$ to $10^{-5}$ with respect to the ISM dust-to-gas ratio). Our dust-depleted fits to the LkCa~15 disk cavity support both observational and theoretical work that forming planets sculpt the cavity and affect dust grain evolution in the disk \citep{2012A&A...545A..81P, 2012A&A...538A.114P, 2013A&A...560A.105G, 2016A&A...585A..58V}. From these past studies, there is evidence that the planet carves out a smaller cavity in small dust grains ($\leq10$~$\mu m$) and gas. The pressure bump generated from the planet can filter larger grains at larger radii, creating the observed dust cavities or gaps in transitional and pre-transitional disks. This gap in gas does not appear to be steep, gradually decreasing over several AU, allowing accretion to continue onto the star. Recent results (e.g. \citealt{2015Natur.527..342S, 2012ApJ...745....5K}) suggest there are 2-3 accreting protoplanets at radii $\sim$15-19~AU in the disk cavity, though these planets are not necessarily sufficient to open the full 50~AU dust continuum hole. 
However, \citet{2012A&A...538A.114P} suggest a single $\sim15$~M$_J$ planet at a radius of 20~AU can generate a pressure gradient at 54~AU which is in better agreement with current observations of the LkCa~15 disk. As suggested above, spatially resolved observations, particularly from molecules like CO isotopologues (e.g. $^{13}$CO and C$^{18}$O) are necessary to further study the structure of the LkCa~15 gap, particularly the size of the gas cavity in the disk. Using the standard gas surface density derived in Section~\ref{modelling} based on A11, we calculate the LkCa~15 inner hole mass to be $\sim$0.03~M$_\odot$ or $\sim$30~M$_J$. In Section~\ref{lower_ten}, we determine that depleting the hole of gas by an order of magnitude ($\sim$3~M$_J$) results in substantially lower HCO$^+$ line-wing flux, indicating the gas mass is too low to account for the high-velocity HCO$^+$ emission. This result differs from \citet{2015A&A...579A.106V} which found a drop in the cavity gas surface density by a factor of 10 (in addition to the larger drop in dust density). However, there are differences between our method and the method implemented by \citet{2015A&A...579A.106V} to fit the disk cavity mass that make it difficult for a direct comparison. As described in Section~\ref{modelling}, our model relies on a surface density normalisation derived in A11, where fits were made to the SED and an 880~$\micron$ image. To fit the HCO$^+$ profile, we had to not only vary the characteristic scaling and tapering radius $R_c$ to fit the line peak, but we also had to alter the cavity scale height and dust-to-gas ratio to fit the line-wings. In contrast, \citet{2015A&A...579A.106V} used the SED and a 440~$\micron$ continuum image to fit the surface density normalisation and dust properties and then used $^{12}$CO~$6\rightarrow5$ to fit gas properties within the disk cavity. In addition to the differences in fitting the disk, \citet{2015A&A...579A.106V} uses optically thick $^{12}$CO~$6\rightarrow5$ emission from LkCa15, which makes the absolute gas density and mass uncertain. Furthermore, the dusty inner disk is poorly constrained in LkCa~15, which can shield the cavity. This can lower the gas temperature, allowing CO to survive down to lower gas masses \citep{2013A&A...559A..46B}. As explained above, our models show the HCO$^+$ line-wings can be fit using a standard gas surface density with increased scale height and decreased dust-to-gas ratio within the disk cavity. The models in \citet{2015A&A...579A.106V} depicted a relatively large, flat disk, where the full disk size is consistent with our own radius $R_{out}=400$~AU, but the surface density normalisation is a factor $\sim3.4$ larger than A11 and the scale height and flaring angle are smaller than our best-fit models (particularly for the disk cavity at $H_0/R_0=$0.06 and $\psi=0.04$) in addition to the decreased cavity gas density. Despite the structural differences in the disk models, our derived gas cavity mass ($\sim0.03$~M$_\odot$ constrained within an order of magnitude) is consistent with \citet{2015A&A...579A.106V} ($\sim0.007$~M$_\odot$) due to the discrepancies in the surface density normalisation. An important uncertainty in modelling disks is understanding the complex chemistry taking place, particularly with an ion like HCO$^+$. Due to the differences between models of the LkCa~15 disk from past work (e.g. 
A11; \citealt{2015A&A...579A.106V}) in addition to our work, a more detailed analysis is required to test which models can fit the large number of molecular line observations of LkCa~15 (e.g. \citealt{2001A&A...377..566V, 2006A&A...460L..43P}). This will not only lead to a better understanding of the detailed chemistry ongoing in the disk, but also place more stringent constraints on the disk structure and cavity mass. Further studies can incorporate new methods in \textsc{ProDiMo} for better understanding the UV opacity and heating within the disk from PAH re-emission (see Appendix~\ref{disk_opacity} for full details). Past work has suggested accretion flows or gas streamers could contain the standard ISM dust-to-gas ratio while keeping the dust emission optically thin in the disc hole \citep{2011ApJ...738..131D}. Even though our study models the disk with a typical morphology and standard gas surface density, the fits to the unresolved observations (which integrate over the entire disk) are unaffected by geometry. Our analysis strongly indicates the gas must be hotter and at high velocities corresponding to the smaller radii of the disk cavity. This can only be achieved if the dust is depleted, with a large gas scale height and a sufficient amount of gas present in the inner hole. This dense gas in the disk cavity can then maintain the observed accretion rate onto LkCa~15. Past work found the accreting protoplanets LkCa~15b and c \citep{2012ApJ...745....5K, 2015Natur.527..342S} to have masses $ < 5$-10~M$_J$ between radii of $\sim15$--19~AU and accretion rates comparable to that of the star. Our calculated disk cavity mass would allow a $\sim1$~M$_J$ protoplanet at a radius of 20~AU to accrete at least $\sim0.5$~M$_J$ (assuming a $\sim1$~AU Hill radius). At a similar orbital radius to Uranus, the final protoplanet would be $>32$ times the mass of Uranus. The morphology and chemistry of the cavity, including any forming planets, will only be accessible with future ALMA observations (reaching $5\times10^{-4}$~M$_J$ with high spatial resolution; \citealt{2014ApJ...788..129I}).
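As a back-of-the-envelope check of the accretion reservoir quoted above (for illustration only, we assume a $\sim$1~M$_\odot$ central star and a $\Sigma\propto R^{-1}$ profile normalised to the $\sim$0.03~M$_\odot$ enclosed within 50~AU; neither value is taken directly from the fits in this work),
\[
R_{\rm H} \simeq a\left(\frac{M_p}{3M_*}\right)^{1/3} \simeq 20\,{\rm AU}\left(\frac{10^{-3}}{3}\right)^{1/3} \approx 1.4\,{\rm AU},\qquad
\Delta M \approx M(<50\,{\rm AU})\,\frac{2R_{\rm H}}{50\,{\rm AU}} \approx 1.8\,M_J,
\]
so accreting even a modest fraction of the gas in the planet's feeding zone is sufficient to supply the $\sim$0.5~M$_J$ quoted above.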
16
9
1609.05894
1609
1609.02961_arXiv.txt
We present new medium-resolution spectroscopic observations of the black hole X-ray binary \mbox{Nova Muscae 1991} taken with the X-Shooter spectrograph installed at the 8.2m VLT. These observations allow us to measure the time of inferior conjunction of the secondary star with the black hole in this system, which, together with previous measurements, yields an orbital period decay of $\dot P=-20.7\pm12.7$~ms~yr$^{-1}$ ($-24.5\pm15.1$~$\mu $s per orbital cycle). This is significantly faster than the decays previously measured in the other black hole X-ray binaries A0620-00 and XTE J1118+480. No standard black hole X-ray binary evolutionary model is able to explain this extremely fast orbital decay. At this rate, the secondary star would reach the event horizon (as given by the Schwarzschild radius of about 32 km) in roughly 2.7~Myr. This result has dramatic implications for the evolution and lifetime of black hole X-ray binaries.
According to the standard theory, the evolution of black hole X-ray binaries (BHXBs) is dictated by angular momentum losses (AMLs), driven by magnetic braking~\citep[MB;][]{ver81}, gravitational radiation~\citep[GR;][]{lan62,tay82} and mass loss~\citep[ML;][]{rap82}. MB is assumed to be the main mechanism responsible for AMLs in BHXBs with short orbital periods of several hours~\citep{rap82}, but an adequate expression for MB still remains a matter of debate~\citep{pod02a,iva06b,yun08b}. The measurement of period variations can provide valuable information on the strength of these processes. The BHXB \mbox{Nova Muscae 1991} (GS Mus/GRS 1124--683) is a very interesting system because it has an orbital period slightly longer, 10.38~hr~\citep{oro96,cas97}, than the two other BHXBs \mbox{XTE J1118+480} and \mbox{A0620--00}, for which an orbital decay measurement has been obtained~\citep{gon14}. The black hole mass of \mbox{Nova Muscae 1991} has been recently refined to $M_{\rm BH}=11.0^{+2.1}_{-1.4}$~\Msun~\citep{wu16} after previous determinations of $M _{\rm BH}=7.2\pm0.7$~\Msun~\citep{gel04} and $M _{\rm BH}=5.8^{+4.7}_{-2.0}$~\Msun~\citep{sha97}. Therefore, the BH mass is significantly larger (see Table~\ref{ttpar}) than in \mbox{XTE J1118+480}, $M_{\rm BH}\sim$~7.5~\Msun~\citep{kha13}, and \mbox{A0620--00}, $M_{\rm BH}\sim$~6.6~\Msun~\citep{can10}. \begin{table*} \centering \begin{minipage}{140mm} \caption{Kinematical and dynamical binary parameters of \mbox{XTE J1118+480} and \mbox{A0620--00}} \begin{tabular}{lcccccc} \hline {Parameter} & {Nova Muscae 1991}& {Ref.}$^\star$ & {XTEJ1118$+$480} & {Ref.}$^\star$ & {A0620$-$00} & {Ref.}$^\star$ \\ \hline $v \sin i$~[\kmso] & $85.0\pm2.6$ & [1] & $96^{+3}_{-11}$ & [5] & $82\pm2$ & [9] \\ $i$~[\dego] & $43.2\pm2.7$ & [2] & $73.5\pm5.5$ & [6] & $51.0\pm0.9$ & [10] \\ $k_2$~[\kmso] & $406.8\pm2.7$ & [1] & $708.8\pm1.4$ & [7] & $435.4\pm0.5$ & [9] \\ $q=M_2/M_{\rm BH}$ & $0.079\pm0.007$ & [1] & $0.024\pm0.009$ & [8] & $0.060\pm0.004$ & [9] \\ $f(M)$~[\Msuno] & $3.02\pm0.06$ & [1]& $6.27\pm0.04$ & [7] & $2.762\pm0.009$ & [8] \\ $M_{\rm BH}$~[\Msun] & $11.0^{+2.1}_{-1.4}$ & [2] & $7.46^{+0.34}_{-0.69}$ & [8] & $6.61^{+0.23}_{-0.17}$ & [8] \\ $M_2$~[\Msuno] & $0.89\pm0.18$ & [2] & $0.18\pm0.06$ & [8] & $0.40\pm0.01$ & [8] \\ $a_c$~[\Rsuno] & $5.49\pm0.32$ & [2] & $2.54\pm0.06$ & [8] & $3.79\pm0.04$ & [8,11] \\ $R_2$~[\Rsuno] & $1.06\pm0.07$ & [2] & $0.34\pm0.05$ & [8] & $0.67\pm0.02$ & [8,11] \\ $P_{\rm orb,1}$~[d] & $0.432606(3)$ & [3] & $0.1699337(2)$ & [8] & $0.323014(4)$ & [12] \\ $P_{\rm orb,0}$~[d] & $0.432605(1)$ & [4] & $0.16993404(5)$ & [8] & $0.32301415(7)$ & [8] \\ $T_0$~[d] & $2448715.5869(27)$ & [4] & $2451868.8921(2)$ & [8] & $2446082.6671(5)$ & [8] \\ $\dot P_{\rm orb}$~[$s\;s^{-1}$] & $-6.56\pm4.03 \times 10^{-10}$ & [4] & $-6.01\pm1.81 \times 10^{-11}$ & [8] & $-1.90\pm0.26 \times 10^{-11}$ & [8] \\ $\dot P_{\rm orb}$~[ms~yr$^{-1}$] & $-20.7\pm12.7$ & [4] & $-1.90\pm0.57$ & [8] & $-0.60\pm0.08$ & [8] \\ $\dot P_{\rm orb}$~[$\mu$s~cycle$^{-1}$] & $-24.5\pm15.1$ & [4] & $-0.88\pm0.27$ & [8] & $-0.53\pm0.07$ & [8] \\ $\dot P_{\rm orb, MC,o}$~[ms~yr$^{-1}$] & $-21.1\pm12.7$ & [4] & $-1.98\pm0.56$ & [8] & $-0.63\pm0.08$ & [8] \\ $\dot P_{\rm orb, MC,c}$~[ms~yr$^{-1}$] & $-21.0\pm14.2$ & [4] & $-1.98\pm0.59$ & [8] & $-0.62\pm0.12$ & [8] \\ \hline \end{tabular} {\\ $^\star$~References: [1]~\citet{wu15}; [2]~\citet{wu16}; [3]~\citet{oro96}; [4]~This work; [5]~\citet{cal09}; [6]~\citet{kha13}; [7]~\citet{gon08b}; [8]~\citet{gon14}; 
[9]~\citet{nei08}; [10]~\citet{can10}; [11]~\citet{gon11}; [12]~\citet{mcc86} } \end{minipage} \label{ttpar} \end{table*} The standard theory of the evolution of LMXBs~\citep[e.g.][]{ver93,pod02a,tay82} predicts an orbital period first derivative from AMLs due to MB and ML in short-period (SP-) BHXBs as small as $\dot P_{\rm MB,ML} \sim -0.02$~ms~yr$^{-1}$, whereas GR accounts only for $\dot P_{\rm GR} \le -0.01$~ms~yr$^{-1}$, according to the dynamical parameters of SP-BHXBs. Recently, \citet{gon12a,gon14} reported the first detection of orbital period variations in two SP-BHXBs and found them to be significantly larger than expected from conventional AML theory. The period derivatives measured are $\dot P = -1.9 \pm 0.6$~ms~yr$^{-1}$ for \mbox{XTE J1118+480} and $\dot P = -0.6 \pm 0.1$~ms~yr$^{-1}$ for \mbox{A0620-00}. Extremely high magnetic fields of about 10--30~kG in the secondary star might explain the fast spiral-in of the companion star. Alternatively, unknown processes or non-standard theories of gravity have been suggested~\citep[e.g.][]{yag12}. On the other hand, the detection of mid-infrared excesses from $Spitzer$~\citep{mun06} and WISE~\citep{wan14} has been interpreted as evidence for the presence of candidate circumbinary discs, or jets~\citep{gal07}, in these two BHXB systems. However, \citet{che15} have presented AML models including circumbinary discs but found that both the required mass transfer rate and circumbinary disc mass are far greater than the values inferred from observations. This makes it unlikely that circumbinary discs are the main cause of the rapid orbital decay observed in these two BHXBs. \citet{gon14} suggested that the observed fast orbital decays in \mbox{XTE J1118+480} and \mbox{A0620-00} could reflect an evolutionary sequence in which the orbital period decay begins to speed up as the orbital period decreases. In this work we present the detection of an extremely fast orbital period decay in the SP-BHXB \mbox{Nova Muscae 1991}, which in fact has a longer orbital period.
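As a quick consistency check of the units quoted in Table~\ref{ttpar} and in the abstract, the sketch below converts the measured period derivative of \mbox{Nova Muscae 1991} between the different conventions (only tabulated values are used).
\begin{verbatim}
# Minimal sketch: unit conversions for the orbital period decay of Nova Muscae 1991,
# using only values quoted in Table 1.
P_orb_d  = 0.432606          # orbital period [days]
Pdot_ss  = -6.56e-10         # period derivative [s/s]

sec_per_yr = 3.156e7
P_orb_s    = P_orb_d * 86400.0

Pdot_ms_per_yr    = Pdot_ss * sec_per_yr * 1e3   # ms per year
Pdot_us_per_cycle = Pdot_ss * P_orb_s * 1e6      # microseconds per orbital cycle

print(round(Pdot_ms_per_yr, 1))      # -20.7
print(round(Pdot_us_per_cycle, 1))   # -24.5
\end{verbatim}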
Short-period BHXBs exhibit negative orbital period derivatives, but at different rates, with \mbox{Nova Muscae 1991} ($P_{\rm orb}\sim10.4$~hr) showing the fastest orbital decay. Conventional models of AMLs due to GR, MB and ML are far from being able to reproduce this behaviour. \citet{gon14} suggested an evolutionary sequence in which the orbital decays observed in SP-BHXBs would be faster as the companion star approaches the black hole. However, the case of \mbox{Nova Muscae 1991} rules out this hypothesis. \citet{gon12a,gon14} estimated the orbital period derivative, $\dot P_{\rm MB,ML}$, when considering only AMLs due to MB and ML. They demonstrated that $\dot P_{\rm MB,ML}$ cannot account for the fast orbital decay observed in \mbox{XTE J1118+480}. Standard prescriptions of MB, assuming $\gamma =2.5$~\citep{ver93}, arbitrarily adopting $\beta=-\dot M_{\rm BH}/\dot M_2=0.5$ \citep{pod02a}, and the specific angular momentum, $j_w=1$, carried away by the mass lost from the system, provide values of $\dot P_{\rm MB,ML} \sim -0.028$~ms~yr$^{-1}$ for \mbox{Nova Muscae 1991}. In the extreme and unrealistic case, $\gamma =0$, i.e.\ the strongest possible MB effect, and $\beta=0$, i.e.\ all the mass transferred by the secondary star is lost from the system, we only get $\dot P_{\rm MB,ML} \sim -0.11$~ms~yr$^{-1}$, which is 300 times smaller than the observed orbital period decay. At orbital periods shorter than 3~hr, standard theory predicts that GR begins to dominate, but its contribution is totally negligible for \mbox{Nova Muscae 1991}, $\dot P_{\rm GR} \sim -0.011$~ms~yr$^{-1}$. \citet{gon12a,gon14} suggested that extremely high magnetic fields at the surface of the secondary star, in the range $B_S \sim 0.4$-$30$~kG, could help to explain the fast orbital decay in \mbox{XTE J1118+480} and \mbox{A0620-00}. The mass transfer rates~\citep{kin96b} of these two systems and \mbox{Nova Muscae 1991} are estimated to be $\dot M_2\sim$~0.10, 0.46, 1.67~$\times 10^{-9}$~\Msun yr$^{-1}$, respectively. Assuming conservative mass transfer (i.e. no mass lost from the system, $\dot M_{\rm BH} = -\dot M_{\rm 2}$) and neglecting AML due to GR, we can make a rough estimate of the magnetic field, $B_S$, at the surface of the secondary required to accommodate the period decays observed in these three BHXBs~\citep[see][for further details]{jus06}. We find $B_S\sim$ [0.7, 2.3, 7.4], [16, 50, 160], [12, 38, 120] kG, respectively, for three values of the wind-driving energy efficiency factor, $f_\epsilon=$ [$10^{-1}$, $10^{-2}$, $10^{-3}$]~\citep{tav93}. The extremely high $B_S$ values estimated at low $f_\epsilon\sim10^{-3}$ could be compensated for by higher mass transfer rates. However, for $f_\epsilon=0.1$, the magnitudes of the surface magnetic fields are still consistent with those in peculiar Ap stars~\citep{jus06}. As discussed in \citet{gon14}, the secondary might have been able to retain the high magnetic field during the binary evolution. Although speculative, these high magnetic fields might be connected with the (rotation induced) chromospheric activity on the companion star, as proposed for \mbox{A0620--00} \citep{gon10} to explain the observed H$\alpha$ emission feature of the secondary star. H$\alpha$ emission from the companion has also been detected in \mbox{XTE J1118+480}~\citep{zur16} and \mbox{Nova Muscae 1991} (\citealt{cas97}; Gonz\'alez Hern\'andez et al., in preparation).
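The gravitational-radiation contribution quoted above can be reproduced directly from the quadrupole formula for a circular orbit (a sketch using the masses and period in Table~\ref{ttpar}):
\begin{verbatim}
import numpy as np

# Minimal sketch: orbital decay from gravitational radiation (circular orbit),
#   dP/dt = -(192*pi/5) * (2*pi*G/P)**(5/3) * m1*m2 / ((m1+m2)**(1/3) * c**5)
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1, m2 = 11.0 * Msun, 0.89 * Msun      # black hole and companion masses (Table 1)
P = 0.432606 * 86400.0                 # orbital period [s]

Pdot_GR = -(192*np.pi/5) * (2*np.pi*G/P)**(5/3) * m1*m2 / ((m1+m2)**(1/3) * c**5)
print(Pdot_GR * 3.156e7 * 1e3)         # ~ -0.01 ms/yr, negligible vs. the observed -20.7 ms/yr
\end{verbatim}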
An alternative scenario is that we may be measuring orbital period modulations similar to what has been observed before in other types of binaries such as Algol objects, e.g. V471 Tau~\citep{ski88}, or cataclysmic variables~\citep[CVs;][]{pri75,war88}. In these systems, orbital period modulations of the order of $\Delta P/P\sim10^{-5}-10^{-6}$ with different signs have been observed. Theoretical models invoking a variable gravitational quadrupole moment, due to internal deformations, produced by magnetic activity in the outer convection zone have been suggested as the mechanism responsible for those period changes~\citep{app87,app92}. This mechanism would also imply extremely high magnetic fields~\citep{gon14}, but it appears quite unlikely that the large orbital period decays seen in these BHXBs (all with negative sign) are the result of orbital period modulations. On the other hand, \citet{sch16} have recently speculated that nova eruptions may produce frictional AML in CVs, as a possible solution to the white-dwarf (WD) mass problem, the orbital period distribution and the space density of CVs. This consequential AML could also occur in BHXBs during outbursts with significant mass ejections, as has been very recently seen in V404 Cygni~\citep{mun16}. Recently, \citet{che15} have proposed AML models including circumbinary discs as a possible way to explain the fast orbital period decays in these BHXBs. This work shows the evolution of orbital period and period derivative for \mbox{XTE J1118+480} during $\sim0.8-1.2$~Gyr (for initial $M_2\sim 3-1.5$~\Msuno) of its binary evolution. The model requires a current mass transfer rate of about $10^{-8}$~\Msun yr$^{-1}$, which is significantly larger than the currently estimated mass transfer rate of $10^{-10}$~\Msun yr$^{-1}$. It also requires a circumbinary disc with a mass ($M_{\rm CD}\sim2-3\times10^{-4}$~\Msuno) which is significantly larger than that inferred from mid-IR observations, i.e. $\sim10^{-9}$~\Msun~\citep{mun06}. In addition, \citet{gal07} have suggested that non-thermal synchrotron emission from a jet could account for a significant fraction (or even all) of the measured excess mid-IR emission. The orbital decays observed in these BHXBs, if kept constant, predict that the secondary star will reach the event horizon (given by Schwarzschild radii of $\sim$22, 19 and 32 km) in about 12~Myr for \mbox{XTE J1118+480}, 70~Myr for \mbox{A0620--00}, and only 2.7~Myr for \mbox{Nova Muscae 1991}. This is an extremely short time-scale, although this simple extrapolation of the companion's fate is probably not realistic. In a standard evolutionary scenario with AMLs driven by MB and GR, a minimum orbital period is expected~\citep[and observed in CVs, see][]{gan09}. The minimum period is caused by the response of the companion star to ML when it reaches the substellar limit~\citep{pac81}. It is unclear whether this scenario would be valid here but, in any case, the large orbital period decays measured clearly have very important implications for the evolution and lifetime of SP-BHXBs, which, according to the standard models, is typically $\sim5\times10^9$~yr.
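The 32~km and 2.7~Myr figures for \mbox{Nova Muscae 1991} follow from the measured decay rate alone, assuming it stays constant and that the orbital separation scales as $a\propto P^{2/3}$ (a sketch):
\begin{verbatim}
# Minimal sketch: Schwarzschild radius and naive spiral-in time for Nova Muscae 1991,
# assuming the measured orbital decay rate remains constant.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M_BH = 11.0 * Msun
P    = 0.432606 * 86400.0       # orbital period [s]
Pdot = -6.56e-10                # measured period derivative [s/s]

R_s = 2.0 * G * M_BH / c**2     # Schwarzschild radius [m]
# a ~ P^(2/3)  =>  the separation shrinks on a timescale (3/2) * P / |Pdot|
t_spiral = 1.5 * P / abs(Pdot) / 3.156e7    # [yr]

print(R_s / 1e3)                # ~ 32 km
print(t_spiral / 1e6)           # ~ 2.7 Myr
\end{verbatim}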
In addition, the companion stars with spectral types earlier than K5V show high Li abundances~\citep{mar96,gon04a,gon06}, similar to those of low mass stars of the young (120~Myr) Pleiades cluster~\citep{gar94}, which are not easily explained unless a Li preservation mechanism is invoked, on the basis of high rotation velocities of the companion stars kept during the evolution of tidally locked BHXBs~\citep{mac05,cas07b}. A shorter lifetime, of $\sim10^7-10^8$~yr, would alleviate this high-Li problem in BHXBs as well as the long-standing birthrate problem of millisecond pulsars and low-mass BHXBs~\citep{nay93,pod02a}. Future observations of these BHXBs, in particular, of \mbox{Nova Muscae 1991} will probably help to understand these extremely fast orbital period decays.
16
9
1609.02961
1609
1609.02150_arXiv.txt
We revisit calculations of nebular hydrogen Ly$\alpha$ and ${\rm HeII}\,\lambda1640$ line strengths for population III galaxies undergoing continuous star formation and bursts of star formation. We focus on initial mass functions (IMFs) motivated by recent theoretical studies, which generally span a lower range of stellar masses than earlier works. We also account for case-B departures and the stochastic sampling of the IMF. In agreement with previous works, we find that departures from case-B can enhance the Ly$\alpha$ flux by a factor of a few, but we argue that this enhancement is driven mainly by collisional excitation and ionization, and not by photoionization from the $n=2$ state of atomic hydrogen. The increased sensitivity of the Ly$\alpha$ flux to the high-energy end of the galaxy spectrum makes it more subject to stochastic sampling of the IMF. The latter introduces a dispersion in the predicted nebular line fluxes around the deterministic value by as much as a factor of $\sim 4$. In contrast, the stochastic sampling of the IMF has less impact on the emerging Lyman Werner (LW) photon flux. When case-B departures and stochasticity effects are combined, nebular line emission from population III galaxies can be up to one order of magnitude brighter than predicted by `standard' calculations that do not include these effects. This enhances the prospects for detection with future facilities such as JWST and large, ground-based telescopes.
The first generation of stars, so called Population III (Pop III; hereafter) stars, played an important role in setting up the `initial conditions' for galaxy formation in our Universe: Pop III stars initiated Cosmic Reionization and deposited the first heavy elements into the interstellar (ISM) and intergalactic media (IGM). In addition, Pop III stars likely provided seeds for the formation of the massive black holes we observe today \citep[see, e.g., the reviews by][]{Bromm2013,Karlsson2013,Greif2015}. All these processes, in detail, depend on the physical properties of Pop III stars, such as their mass, temperature, spin, and stellar evolution. Most predictions of observational signatures of Pop III stars have focused on their spectral energy distribution (SED) \citep[see][for a detailed review]{Schaerer2014}. One of the most important parameters affecting the SED is the stellar mass, since it strongly affects the stars' total and ionizing luminosities. Unfortunately, the mass of Pop III stars, as well as the initial mass function (IMF), is still uncertain. Early works focused on the study of very massive objects \citep[several hundreds of solar masses; e.g.,][] {Bromm2001,Bromm2002,Abel2002,Schneider2002,Schneider2003}. Although it is still possible to form stars with masses of a few hundreds of solar masses in current calculations \citep[e.g.,][see also \cite{Krumholz2009}]{Omukai2002,Mckee2008, Kuiper2011, Hirano2014}, it is also likely to obtain low mass stars, sometimes of only a few tens of solar masses or less \citep[e.g.,][]{Turk2009, Stacy2010, Greif2011,Clark2011,Stacy2012,Stacy2014,Susa2014,Hirano2015,Hosokawa2015}. The preference for masses less than $\sim150$ M$_{\odot}$ is consistent with abundance patterns found in extremely metal-poor stars \citep[e.g.,][see also \cite{Keller2014}]{Umeda2005,Chen2016}, metal-poor Damped Lyman Alpha systems\footnote{Damped Lyman Alpha systems (DLAs) are absorption systems with hydrogen column densities above $2\times10^{20}\,{\rm cm^{-2}}$, where the gas in the core is in the atomic form due to self-shielding from the ionizing background radiation \citep[see][for a detailed review]{Wolfe2005}.} \citep{Pettini2002,Erni2006} and with the inferred number of pair instability supernovae \citep[PISN;][]{Heger2002} events \citep[e.g.,][]{Tumlinson2004, Tumlinson2006,Karlsson2008,Frebel2009,Aoki2014, Bennassuti2014,Frebel2015}. Direct searches for Pop III stars/galaxies have mostly focused on detecting strong hydrogen Ly$\alpha$ and ${\rm HeII}\, \lambda1640$ line emission \citep[e.g.,][]{Schaerer2008,Nagao2008}, the latter being associated with the spectrum of {\it massive} Pop III stars \citep[e.g.,][]{Tumlinson2000,Tumlinson2001,Bromm2001,Malhotra2002,Jimenez2006, Johnson2009,Raiter2010}. Interestingly, \citet{Sobral2015} have recently reported on the discovery of a high-redshift galaxy with unusually large Ly$\alpha$ and ${\rm HeII}\, \lambda1640$ line emission, which had led to speculation that we might have discovered a massive Pop III galaxy, although this is still a matter of intense debate \citep{Pallottini2015,Dijkstra2015,Smidt2016,Smith2016,Visbal2016,Xu2016}. In this work, we revisit calculations of the spectral signature of Population III galaxies. Our work differs from previous analyses in the following ways: (\textit{i}) We allow for stellar populations with lower stellar mass limits than previous studies. This is motivated by the more recent theoretical and observational preferences for lower mass Pop III stars. 
(\textit{ii}) We examine, for the first time, the effects of the stochastic sampling of the IMF on the photon flux and luminosities from Pop III stellar populations. As we will show, stochasticity effects can be significant, and they can be further amplified when departures from the case-B recombination assumption are taken into account. Our paper is structured as follows: In Section \ref{sec:models} we describe the calculations used to model the stellar and nebular SEDs, and we also detail the method for the stochastic sampling of the IMF. We present and analyse our results in Section \ref{sec:results}, and we discuss them in Section \ref{sec:discussion} before concluding in Section \ref{sec:summary}.
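To make the stochastic sampling procedure concrete, a minimal sketch is given below; the sampling scheme, mass limits and slope shown here are illustrative choices and are not necessarily those adopted in this work.
\begin{verbatim}
import numpy as np

def sample_imf(M_total, m_lo=1.0, m_hi=500.0, alpha=2.35, seed=None):
    """Draw stellar masses from dN/dm ~ m**(-alpha) (alpha != 1) until their
    sum reaches M_total.  All defaults are illustrative (masses in Msun)."""
    rng = np.random.default_rng(seed)
    masses, total = [], 0.0
    while total < M_total:
        u = rng.uniform()
        # inverse CDF of the truncated power law
        m = (m_lo**(1 - alpha) + u * (m_hi**(1 - alpha) - m_lo**(1 - alpha)))**(1.0 / (1 - alpha))
        masses.append(m)
        total += m
    return np.array(masses)

# The most massive star drawn in a 1000 Msun population scatters strongly between
# realisations, which is what drives the dispersion in Q(HeII) and the line fluxes.
maxima = [sample_imf(1e3).max() for _ in range(1000)]
print(np.percentile(maxima, [16, 50, 84]))
\end{verbatim}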
\label{sec:summary} We have revisited the spectra of population III galaxies, specifically focusing on the nebular hydrogen Ly$\alpha$ and ${\rm HeII}\,\lambda1640$ lines. We have computed a series of stellar helium and hydrogen model atmospheres covering a mass range from ${\rm 9\,to\,1000\,M_{\odot}}$. We have used the stars to construct several populations, allowing for top-heavy and Salpeter slope IMFs, and different upper stellar mass limits. We have included upper mass limits that are lower than those in earlier works, motivated by recent studies favoring the formation of low mass objects, and, for the first time in studies of Pop III populations, we have considered the stochastic sampling of the IMF. We have obtained the nebular spectra using our own SEDs and the photoionization code \textit{Cloudy}. Finally, we have explored the departures from the case-B recombination assumption for our populations, and we have revisited their origin. Our results can be summarized as follows. \begin{itemize}[leftmargin=0pt,itemindent=20pt] \item The Ly$\alpha$ line flux is enhanced by a factor of $2 - 3$ compared to case-B, depending on the IMF and parameters of the nebula, in agreement with \cite{Raiter2010}. Our analysis shows that the Ly$\alpha$ departures from case-B are due mainly to energetic free electrons which collisionally excite and ionize hydrogen atoms. Photoionization from the first excited state, argued to be the major mechanism in previous works, is negligible. \item Stochastic sampling of the IMF produces large fluctuations in the ionizing photon flux, especially for singly-ionized helium (Q(HeII); more than a factor of 100 for a total stellar mass of 1\,000 M$_\odot$), due to the strong dependence on the mass of the stars within the populations. For the case of hydrogen and the LW band, the distributions around the deterministic value are much narrower. The stochastic effects are more important for IMFs with high upper mass limits and also for those which disfavor the presence of massive stars, i.e., those with Salpeter slopes instead of flat IMFs. Stochasticity can enhance the Ly$\alpha$ flux by up to a factor of $\sim 3$ in populations with massive stars (IMF upper mass limit of 500 M$_\odot$). This, added to the case-B departure, implies a boost by a factor of $\sim 8$ compared to standard calculations. For populations with less massive stars the total boost reaches values of $\sim$5, depending on the IMFs and nebular parameters. For the case of the ${\rm HeII}\,\lambda1640$ line, stochasticity boosts the flux by up to a factor of $\sim 3$. For a lower total stellar mass of 100 M$_\odot$, the stochastic effects increase significantly, showing the strong anti-correlation between stochasticity and total stellar mass. In this case, the fluctuations found for the hydrogen (LW) ionizing photon fluxes can have implications for the ionizing (dissociating) power inferred for Pop III stars. On the other hand, for a total stellar mass of 10\,000 M$_\odot$, the stochastic effects are strongly reduced. \item{When considering the effect of time evolution for a single starburst, we see that the distributions slightly broaden with time, especially for the case of flat IMFs. The stochastic effects are again more important for high stellar mass upper limit IMFs and for low values of the total stellar mass. Considering periodic bursts of star formation, we observe similar stochastic behaviours as before, and all the distributions flatten after $\lsim 10$ Myr.
The distributions reach maximum intrinsic luminosity values of $L_{\rm \alpha}\sim 5\times10^{40}\,{\rm erg\, s^{-1}}$ and $L_{\rm 1640}\sim 10^{39}\,{\rm erg\, s^{-1}}$ for a total stellar mass of 1\,000 M$_\odot$. For the case of Ly$\alpha$, these values are a factor $\sim20$ below the sensitivity limits of the JWST at redshifts $z=7-10$, but this can improve when the possible effects of gravitational lensing and/or a higher total stellar mass are considered. \cite{Visbal2016} have recently presented an analysis which allows for the formation of late ($z\sim7$), massive ($>10^6$ M$_{\odot}$) Pop III starbursts in $\sim 10^9$ M$_{\odot}$ dark matter halos, due to photoionization feedback effects by nearby galaxies. In addition, these stellar populations would likely reside in large ionized bubbles, thus illustrating that the neutral IGM may occasionally have a minor impact on the Ly$\alpha$ flux from Pop III galaxies \cite[see][]{Stark2016}. Stellar populations with such characteristics would be above the detection limits, even for the case of helium fluxes.} \end{itemize} We conclude by stressing that the joint effect of the stochastic sampling of the IMF and case-B departures analysed in this work can give rise to values of the Ly$\alpha$ and HeII$\, \lambda1640$ line fluxes which deviate significantly from the standard analytical calculations which ignore these effects. Our study, as well as previous works, shows that gravitational lensing is required to detect Pop III galaxies with JWST. However, the enhancement obtained in our results significantly reduces the required magnification.
16
9
1609.02150
1609
1609.07296_arXiv.txt
{Gravitational collapse of a molecular cloud or cloud core/clump may lead to the formation of a geometrically flattened, rotating accretion flow surrounding the newborn star or star cluster. Gravitational instability may occur in such an accretion flow when the gas-to-stellar mass ratio is high (e.g. over $\sim$10\%).} {This paper takes the OB cluster-forming region G10.6-0.4 as an example. We introduce the enclosed gas mass around its central ultra compact (UC) H\textsc{ii} region, address the gravitational stability of the accreting gas, and outline the observed potential signatures of gravitational instability.} {The dense gas accretion flow around the central UC H\textsc{ii} region in G10.6-0.4 is geometrically flattened, and is approximately in an edge-on projection. The position-velocity (PV) diagrams of various molecular gas tracers in G10.6-0.4 consistently show asymmetry in both the spatial and the velocity domains. We deduce the morphology of the dense gas accretion flow by modeling the velocity distribution of the azimuthally asymmetric gas structures, and by directly de-projecting the PV diagrams.} {We found that within the 0.3 pc radius, an infall velocity of 1-2 km\,s$^{-1}$ may be required to explain the observed PV diagrams. In addition, the velocity distribution traced in the PV diagrams can be interpreted in terms of spiral arm-like structures, which may be connected with exterior infalling gas filaments. The morphology of dense gas structures we propose appears very similar to the spatially resolved gas structures around the OB cluster-forming region G33.92+0.11, which has a similar gas mass and size but is likely to be seen approximately face-on.} {The dense gas accretion flow around G10.6-0.4 appears to be Toomre unstable, which is consistent with the existence of large-scale spiral arm-like structures, and the formation of localized gas condensations. The proposed approaches for data analysis may be applied to observations of Class 0/I low-mass protostars to diagnose disk gravitational instability.}
Dense gas in star- or cluster-forming molecular cores/clumps may first condense onto a flattened rotating accretion flow or disk, before accreting further onto the (proto)stars. When the mass infall rate onto the flattened rotating accretion flow or disk is higher than the (proto)stellar accretion rate, the accumulated dense gas may trigger gravitational instability, leading to the formation of spiral arm-like gas structures and self-gravitating gas condensations (e.g. Vorobyov 2013; Vorobyov \& Basu 2015; Dong et al. 2016). Such gravitational instability may occur in the Class 0/I stage of low-mass protostars, when the protostellar masses and the stellar-to-disk mass ratios are small. The accretion disk gravitational instability may resolve the well-known {\it luminosity problem} of the low-mass protostars, and may explain the episodic protostellar accretion and the formation of wide-orbit massive planets (Machida et al. 2011; Dunham \& Vorobyov 2012; Liu et al. 2016). A similar phenomenon may be expected in the dense gas accretion flow surrounding young OB stars (e.g. Sakurai et al. 2016). However, OB stars may disperse the accreting gas via radiative feedback, before the majority of the dense gas can be accreted onto the (proto)stars or Keplerian rotating disks. \begin{figure*} \hspace{0.25cm} \includegraphics[width=18cm]{overview.eps} \caption{\footnotesize{ Gas structures in OB cluster-forming region G10.6-0.4. (A) IRAM-30m telescope observations of $^{13}$CO 2-1, integrated from $-$3 to 0.6 km\,s$^{-1}$ (Liu et al. in prep.). We omit integrating the blueshifted component in this panel to avoid confusing the gas structures due to blending. (B) Color composite image made from the VLA observations of the CS 1-0 line (orange) and the 3.6 cm continuum emission (blue), which were presented in Liu et al. (2011). The deficit of CS 1-0 emission at the center is due to absorption against the bright UC H\textsc{ii} region. (C) Color composite image made from the SMA observations of three CH$_{3}$OH transitions, which trace 35 K (red), 60 K (green), and 97 K (blue) upper level energy, respectively. The inset overlays the VLA optical depth map of the NH$_{3}$ (3,3) satellite hyperfine line, reproduced from Sollins \& Ho (2005). (D) VLA (A-array configuration) image of the NH$_{3}$ (3,3) satellite hyperfine line optical depth map (Sollins \& Ho 2005). } } \label{fig:overview} \end{figure*} Molecular cloud G10.6-0.4 (R.A.: 18$^{\mbox{h}}$10$^{\mbox{m}}$28$^{\mbox{s}}$.683, Decl.: $-$19$^{\circ}$55$'$49$''$.07) is a luminous ($\sim$10$^{6}$ L$_{\odot}$) OB cluster forming region at a distance of $\sim$6 kpc\footnote{The uncertainty of the distance may be $\sim$20\%.}. The previous high angular resolution mapping survey of Lin et al. (2016) for $L\ge$10$^{6}$ $L_{\odot}$ star-forming molecular clouds suggested that G10.6-0.4 has an exceptionally centrally concentrated gas distribution, which may be conducive to a focused global cloud collapse. On the 5-10 pc scale, this cloud presents several approximately radially aligned dense gas filaments, which connect to the central $\sim$1 pc scale massive molecular clump (Figure \ref{fig:overview}A; see Liu et al. 2012a for more details). The higher angular resolution Very Large Array (VLA) and Submillimeter Array (SMA) observations of dense molecular gas tracers found that some large-scale filaments may converge onto a rotating, $\sim$0.6 pc scale massive molecular envelope (Figure \ref{fig:overview}B, \ref{fig:overview}C; Omodaka et al.
1992; Ho et al. 1994; Liu et al. 2010a, 2010b, 2011). The most massive OB stars, and their associated ultra compact (UC) H\textsc{ii} region (Figure \ref{fig:overview}B; Ho et al. 1986; Ho \& Haschick 1986; Sollins et al. 2005), are deeply embedded in this rotationally flattened (Keto, Ho \& Haschick 1987, 1988; Guilloteau et al. 1988) massive molecular envelope, where the low density gas around the rotational axis has been photo-ionized or swept away by expanding ionized gas (Figure \ref{fig:overview}C; Liu et al. 2010b, 2011). The observations of high excitation molecular lines exclusively traced a $\sim$0.1 pc scale, $\gtrsim$100 K hot molecular toroid at the center of the massive molecular envelope (Liu et al. 2010b; Beltran et al. 2011), which harbors the luminous central OB cluster (Figure \ref{fig:overview}B, \ref{fig:overview}C). The 0$\farcs$1 resolution VLA observations of the NH$_{3}$ (3,3) satellite hyperfine line absorption show that this hot toroid is extremely clumpy (Figure \ref{fig:overview}C, \ref{fig:overview}D; Sollins \& Ho 2005). The overall geometric configuration of the central $\sim$1 pc scale massive molecular clump resembles a scaled-up, low-mass star-forming core+disk system viewed edge-on. How the dense gas condensations in the hot toroid came into existence remains uncertain. An intriguing possibility is that the dense condensations formed through gravitational instability in the mid-plane of the flattened rotating massive molecular envelope. If this is indeed the case, we expect the distribution of dense gas in the flattened accretion flow to present a significant azimuthal asymmetry. Within 0.1-0.3 pc radii, previous observations, limited by angular resolution or sensitivity, have not yet spatially resolved the azimuthal asymmetry of the dense gas structures. Nevertheless, it is possible to diagnose spatial asymmetry based on the velocity profiles of dense molecular gas tracers, which is analogous to the common application of optical and infrared spectroscopic observations towards low-mass protostars (e.g. Takami et al. 2016). In particular, the earlier centimeter band observations of molecular line absorption against the central UC H\textsc{ii} region have suggested a {\it red excess} in the velocity field, which was interpreted as a signature of redshifted infall (Ho \& Haschick 1986; Keto et al. 1987, 1988; Keto 1990; Sollins \& Ho 2005). The follow-up interferometric observations of multiple molecular line tracers in the millimeter band confirmed the red excess in the central $\sim$0.06 pc ($\sim$2$''$) region, but did not detect the blueshifted counterpart of the infalling gas (Liu et al. 2010a, 2011). The dense molecular gas immediately around the central UC H\textsc{ii} region appears to have an azimuthally asymmetric distribution. In addition, the position-velocity (PV) diagrams of those molecular lines suggest that the non-axisymmetric dense gas structures may extend to $\pm$5$''$-10$''$ (0.15-0.3 pc) radii. This paper elaborates two techniques for diagnosing the asymmetry of dense gas structures based on the spatially resolved line spectra. In addition, we propose a link between the spatial asymmetry and the gravitational instability. Although we take the case of the OB cluster-forming region G10.6-0.4 as an example, we argue that the proposed strategy of data analysis/interpretation is general for both low-mass and high-mass star- or cluster-forming regions.
Our discussion begins with a toy model, which describes the major features in the observed molecular line PV diagrams, in Section \ref{sec:toy}. Section \ref{sec:deproject} discusses the probable dense gas morphology based on direct de-projections. Section \ref{sec:toomre} analyzes the enclosed mass profile around the central UC H\textsc{ii} region, and derives the Toomre Q parameter as a function of radius. In Section \ref{sec:discussion}, we compare G10.6-0.4 with a spatially resolved example, G33.92+0.11, which is likely to be viewed in a close to face-on projection. \begin{figure} \hspace{-0.3cm} \includegraphics[width=9.2cm]{temp_best_pv.eps} \vspace{-1.3cm} \caption{\footnotesize{ Position-velocity diagram of the CH$_{3}$OH 5$_{0,5}$--4$_{0,4}$ E transition in OB cluster forming region G10.6-0.4 (contour; see Figure 3 of Liu et al. 2011 for the comparisons with other CH$_{3}$OH excitation levels and the NH$_{3}$ (3,3) hyperfine lines), and the model PV curve (red curve). The pv cut is centered at the coordinates of R.A. = 18$^{\mbox{h}}$10$^{\mbox{m}}$28.64$^{\mbox{s}}$ and Decl = $-$19$^{\circ}$55$'$49.22$''$ with position angle pa = 140$^{\circ}$. The positive angular offset is defined towards the southeast. Contour levels are 0.36 Jy beam$^{-1}$$\times$[1, 2, 3, 4, 5, 6, 7]. The resolution of this CH$_{3}$OH observation is 1$''$.5$\times$1$''$.3, and the rms noise level is 0.06 Jy beam$^{-1}$. The dashed line marks the cloud systemic velocity of $-$3 km\,s$^{-1}$. The face-on view of the model filamentary spiral structure is also plotted on the same physical scale (blue curve; assuming observers are viewing this structure from the left, and assuming the same spatial scales in the horizontal and vertical directions). }} \label{fig:spiral} \end{figure}
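For concreteness, the kind of toy model used to produce the model PV curve in the figure above can be sketched as follows; all numbers (enclosed mass, infall speed, spiral shape) are illustrative assumptions rather than the fitted values.
\begin{verbatim}
import numpy as np

# Minimal sketch: position-velocity track of gas on a spiral, viewed edge-on,
# with rotation about an assumed enclosed mass plus a constant infall speed.
G = 4.30e-3                                  # G in pc (km/s)^2 / Msun

def pv_curve(M_enc=1000.0, v_in=1.5, R0=0.03, Rmax=0.3, pitch=0.25, n=400):
    """Return (offset [pc], v_los [km/s]); line of sight along +y."""
    theta = np.linspace(0.0, 4 * np.pi, n)
    R = R0 * np.exp(pitch * theta)           # logarithmic spiral
    keep = R <= Rmax
    R, theta = R[keep], theta[keep]
    v_rot = np.sqrt(G * M_enc / R)           # rotation about the enclosed mass
    # radial unit vector (cos t, sin t); tangential unit vector (-sin t, cos t)
    v_los = v_rot * np.cos(theta) - v_in * np.sin(theta)
    offset = R * np.cos(theta)
    return offset, v_los

offset, v_los = pv_curve()
# With v_in = 0 the track passes through the systemic velocity at zero offset;
# a finite infall speed shifts the central velocities, the signature discussed above.
\end{verbatim}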
\label{sec:discussion} \subsection{An overall picture for the inner 0.3 pc radius of G10.6-0.4}\label{sub:g10} The overall morphology of the accretion flow within the 0.3 pc radius from the central OB cluster of G10.6-0.4 may resemble a gravitationally unstable, flattened rotating disk, where the highest mass stars are located at the center of the system (see Figure \ref{fig:schematic} for a schematic picture). In this system, molecular gas mass dominates the overall enclosed mass, and the accretion flow becomes gravitationally unstable. Large (0.1-0.3 pc) scale spiral arm-like structures form, which may subsequently fragment into localized dense gas cores or condensations. Individual gas cores or condensations may form intermediate- or low-mass stars, which can launch molecular outflows or jets (Liu et al. 2011). The gravitational instability in this accretion flow is consistent with the detection of multiple UC H\textsc{ii} regions within its mid-plane (e.g. the satellite OB clusters in Figure \ref{fig:overview}B; see also Ho \& Haschick 1986; Liu et al. 2011). In addition, the very high angular resolution (0$\farcs$1) observations of the 1.3 cm free-free continuum emission have resolved several ionized arcs east of the central OB cluster (Sollins \& Ho 2005). The fact that the ionized arcs were only observed to the east of the central OB cluster implies that (1) there is a molecular gas gap in between the central OB cluster and the ionized arcs, such that the ionized arcs can be illuminated by the central OB cluster, and (2) the western side of the central OB cluster is either very well shielded by dense gas, or has a very different gas morphology from the eastern gas structures. These are consistent with the highly azimuthally asymmetric molecular gas structures around the central OB cluster. The asymmetric gas structures, or the spiral arm-like structures, may help redistribute the angular momentum, which may consistently explain the observed infall velocity (Keto et al. 1987; Keto 1990; present work), and the rapidly decreasing specific angular momentum from outer to inner radii (Liu et al. 2010a). We note that in the de-projection, not all assumptions about the velocity field and enclosed stellar mass that can explain the observed PV diagrams are physically plausible. For example, assuming a 175 $M_{\odot}$ enclosed stellar mass and less dominant rotational motion (i.e. $\psi$$>$25$^{\circ}$) will imply that the blue- and redshifted gas components within the $\pm$0.1 pc offset positions are more separated from the central OB cluster than presented in Figure \ref{fig_deproject1}. The de-projected structures within the $\pm$0.1 pc offset with such assumptions will become more (artificially) elongated, or even separated in the line-of-sight, which cannot confine the central UC H\textsc{ii} region. Increasing the enclosed stellar mass by several times will significantly enlarge the permitted line-of-sight velocity range for explaining the observed PV diagrams, and will increase the Toomre Q parameter such that the system appears (more) stable. However, the increased stellar mass will also imply larger de-projected line-of-sight distances of the gas components from the central OB cluster, which is not plausible.
The enclosed stellar mass at the center of G10.6-0.4 therefore cannot be very different from the range of 69-175 $M_\odot$, in order to allow a sufficient permitted velocity range for explaining the observed PV diagram while avoiding the artificial elongation in the line-of-sight. The dense gas morphology we derived around G10.6-0.4 therefore appears plausible. We note that the previous SMA, VLA, and Atacama Large Millimeter Array (ALMA) studies on the similar system G33.92+0.11 have spatially resolved dense, spiral arm-like gas structures and many localized dense cores, which are orbiting the centralized UC H\textsc{ii} region (Liu et al. 2012b; Liu et al. 2015). In addition, in G33.92+0.11, the resolved localized dense cores appear to be self-gravitating and are forming stars, as diagnosed from the high velocity SiO 5-4 jets (Minh et al. 2016). A dust (and gas) spiral was also resolved within the $\sim$0.2 pc radius around the OB cluster-forming region NGC 7538 IRS 1, although it is not yet certain whether this spiral arm-like structure was formed due to gravitational instability, or was due to the interaction with molecular outflows (Wright et al. 2014). The spatial non-uniformity of the gravitationally unstable accretion flow may be directly linked to the time variability of the accretion rate, and the variations of the radio fluxes and sizes of the UC H\textsc{ii} region (e.g. Galv\'an-Madrid et al. 2009). It may be similar to a scaled-up model of episodic low-mass (proto)stellar accretion via a gravitationally unstable disk (e.g. Vorobyov \& Basu 2015; Sakurai et al. 2016). \subsection{Physics and applications}\label{sub:physics} Important applications of the proposed analysis techniques include the observations of the inner few thousand AU accretion flows around young OB stars or star clusters, and the observations of the inner few tens of AU of the Class 0/I, or FU Orionis disks (e.g. Dong et al. 2016; Liu et al. 2016). For low-mass Class 0/I objects, these observations are key to addressing the role of gravitational instability in assisting the (proto)stellar accretion (Vorobyov et al. 2015; Klassen et al. 2016). For the young and massive OB clusters, the observations of how localized dense gas cores/condensations form in the gravitationally unstable converging gas flow, and the observations of the detailed gas kinematics, may provide clues about how the high-mass end of the stellar initial mass function (IMF) is realized. In particular, the observations of Larson (1982) have suggested a correlation between the maximum stellar mass in a cluster ($m_{\mbox{\scriptsize{max}}}$) and the mass of the natal molecular cloud ($M_{\mbox{\scriptsize{cloud}}}$), such that $m_{\mbox{\scriptsize{max}}}$=0.33\,$M_{\mbox{\scriptsize{cloud}}}^{0.43}$. Based on statistical analyses of the observed $m_{\mbox{\scriptsize{max}}}$ from a sample of young (age$<$4 Myr) stellar clusters, Kroupa \& Weidner (2005), Weidner \& Kroupa (2006), and Weidner et al. (2010) demonstrated a tight correlation between $m_{\mbox{\scriptsize{max}}}$ and the cluster mass $M_{\mbox{\scriptsize{ecl}}}$, which indicated that $m_{\mbox{\scriptsize{max}}}$ of $>$100 $M_{\odot}$ stellar clusters cannot be represented by random samplings of the canonical stellar IMF. Especially for the $>$100 $M_{\odot}$ stellar clusters, the masses of their highest mass stars may be determined by rather deterministic and trackable physical processes.
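For orientation (taking both masses in solar units, as implied by the original fit), the Larson (1982) relation quoted above gives, for example,
\[
m_{\rm max}=0.33\,M_{\rm cloud}^{0.43}\;\Rightarrow\; m_{\rm max}\approx 6\,M_\odot\;\;(M_{\rm cloud}=10^{3}\,M_\odot),\qquad m_{\rm max}\approx 47\,M_\odot\;\;(M_{\rm cloud}=10^{5}\,M_\odot),
\]
illustrating that, under this scaling, O-star-scale values of $m_{\rm max}$ require clouds of $\gtrsim$10$^{5}$ $M_\odot$.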
Weidner \& Kroupa (2006) proposed that star clusters form in an ordered fashion, starting with the lowest-mass stars until feedback can outweigh the gravitationally induced formation process. An alternative interpretation may be that the one or few highest mass stars form in relatively special environments of the natal molecular cloud. For the cases of OB cluster-forming regions like G10.6-0.4 or G33.92+0.11 (Liu et al. 2015), this environment is most likely the $<$1 pc scale flattened accretion flows deeply embedded at their centers. The masses of the highest mass star(s) are thereby determined by the accretion processes and fragmentation within such an environment. Taking Figure \ref{fig:schematic} for instance, the (first) highest mass star may only form in the unique {\it central massive core}, which, with the aid of the global contraction, can quench the UC H\textsc{ii} region for a longer time. The {\it satellite cores} are also massive, and can form satellite OB clusters and localized UC H\textsc{ii} regions (e.g. Figure \ref{fig:overview}B). However, the satellite cores are more easily dispersed by stellar feedback, and so will form less massive stars than the central molecular core. To begin to understand the highest mass end of the stellar IMF in massive clusters, and the correlation of $m_{\mbox{\scriptsize{max}}}$ with $M_{\mbox{\scriptsize{ecl}}}$ and $M_{\mbox{\scriptsize{cloud}}}$, it is therefore very important to increase the number of case studies in which the details of the density structures and kinematics are resolved. Known gravitationally unstable disk-like objects are rare, and it is therefore hard to obtain a sample of face-on sources. In addition, they are likely to be embedded within exterior dense gas structures, such that the observations of face-on sources are not necessarily easy to interpret, due to blending. Some inclination of the source helps differentiate the gravitationally accelerated gas structures in the inner region from the exterior gas structures in the velocity domain. However, most of these sources are relatively distant, such that even the Atacama Large Millimeter Array (ALMA) observations with a few hours of exposure time would only marginally resolve the gas structures. For some sources, it may eventually be necessary to diagnose azimuthal asymmetry from the velocity domain. \begin{figure} \hspace{0.3cm} \includegraphics[width=8.5cm]{g33_idraw.eps} \caption{\footnotesize{ Schematic picture for the distribution of dense molecular and ionized gas in a gravitationally unstable, OB cluster-forming molecular clump. Unshielded molecular gas condensations can be externally illuminated and show a variety of ionized features. On the other hand, bright point-like emission sources are expected to be embedded within dense molecular lumps. In low angular resolution observations, this variety of structures will be smeared into an integrated UC H\textsc{ii} region. }} \label{fig:schematic} \end{figure} \subsection{Uncertainty, caveats, and ways out}\label{sub:caveats} We have provided examples of interpreting the asymmetric PV diagrams based on modeling and direct de-projection. Simple models can provide a good intuition about the morphology of the target source. For target sources that consist of complicated structures, however, modeling may lose objectivity. Direct de-projection based on a velocity model may systematically reconstruct structures of the target sources.
However, the results of de-projection still require careful examination and interpretation. In particular, for a perfectly edge-on source with pure rotational gas motions, de-projection is seriously subject to the near-/far-side degeneracy. Moreover, in the de-projected image, gas structures with broad linewidths may become artificially elongated in the spatial direction parallel to the line-of-sight. These problems are fundamental, similar to the ambiguity faced when deriving the spiral arm structures of the Milky Way from CO line surveys. For (marginally) resolved, geometrically thin gas structures (e.g. some protoplanetary disks), de-projecting the PPV image cubes instead of the PV diagrams into position-position-position (PPP) space may greatly suppress the degeneracy. Observations of multiple gas temperature tracers may further help locate gas structures relative to the dominant heating sources. To avoid artificially stretching gas structures along the line-of-sight, we may decompose the spatially compact and extended structures in the PV diagram before de-projecting (e.g. by applying high-/low-pass filters or performing a wavelet transform), and then require the de-projected compact gas structures to have similar or identical size scales in the line-of-sight and the perpendicular dimensions. How this can be done in practice also depends on the spatial and spectral resolutions of the observations, and may require tests utilizing synthetic images from numerical simulations, which are beyond the scope of the present paper. Finally, both modeling and de-projection require assumptions about the velocity field. When the gravitational force is dominated by a centralized compact source (e.g. a (proto)star or a high-mass molecular core), it may be fair to assume that the velocity field is axisymmetric. Without a dominant centralized gravitational source, the velocity field may still be approximately axisymmetric if there is sufficient relaxation of the gas motions (e.g. the case of spiral galaxies). Observations of very complicated systems (e.g. binary, triple, or multiple systems) should be interpreted with more care.
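To make the de-projection procedure discussed above concrete, the sketch below inverts individual (offset, velocity) points of a PV diagram under the simplest possible assumptions: a perfectly edge-on, axisymmetric flow with a purely Keplerian rotation velocity around a point-like central mass. The central mass and the sample (offset, velocity) pairs are placeholders rather than measured values, and the undetermined sign of the recovered line-of-sight coordinate makes the near-/far-side degeneracy explicit.
\begin{verbatim}
# Minimal sketch of de-projecting PV-diagram points under an assumed edge-on,
# axisymmetric, purely Keplerian velocity field.  The central mass and the
# sample (offset, velocity) pairs below are placeholders for illustration.

import numpy as np

G = 6.674e-8          # gravitational constant (cgs)
MSUN = 1.989e33       # g
PC = 3.086e18         # cm
KMS = 1.0e5           # cm/s

def deproject_edge_on_keplerian(x_pc, v_kms, m_central_msun):
    """Return (R, |z|) in pc for projected offset x_pc [pc] and line-of-sight
    velocity v_kms [km/s]; the sign of z (near/far side) is undetermined.
    Returns None for (x, v) pairs outside the permitted velocity range."""
    x = x_pc * PC
    v = abs(v_kms) * KMS
    gm = G * m_central_msun * MSUN
    # Edge-on rotation: v_los = v_rot(R) * x / R with v_rot = sqrt(GM/R),
    # hence v_los = sqrt(GM) * |x| / R**1.5  ->  R = (sqrt(GM)*|x|/v_los)**(2/3)
    R = (np.sqrt(gm) * abs(x) / v) ** (2.0 / 3.0)
    if R < abs(x):
        return None                      # velocity too high for this offset
    z = np.sqrt(R * R - x * x)           # near/far ambiguity: +z or -z
    return R / PC, z / PC

if __name__ == "__main__":
    m_central = 120.0                    # placeholder central mass in Msun
    for x_pc, v_kms in [(0.05, 2.0), (0.10, 1.5), (0.20, 1.0)]:
        print(x_pc, v_kms, "->", deproject_edge_on_keplerian(x_pc, v_kms, m_central))
\end{verbatim}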
16
9
1609.07296
1609
1609.09242_arXiv.txt
{Atomic interferometry can be used to probe dark energy models coupled to matter. We consider the constraints coming from recent experimental results on models generalising the inverse power law chameleons such as $f(R)$ gravity in the large curvature regime, the environmentally dependent dilaton and symmetrons. Using the tomographic description of these models, we find that only symmetrons with masses smaller than the dark energy scale can be efficiently tested. In this regime, the resulting constraints complement the bounds from the E\"otwash experiment and exclude small values of the symmetron self-coupling. } \begin{document}
Dark energy \cite{Copeland:2006wr} has proved to be elusive since its discovery some fifteen years ago \cite{Riess:1998cb, Perlmutter:1998np}. The only tangible proof of its existence follows from a host of cosmological observables ranging from the original SNIa supernovae data to the more recent results obtained by the Planck mission \cite{Ade:2015xua} and the observation of baryonic acoustic oscillations \cite{Aubourg:2014yra}. A better understanding of its nature requires complementing this indirect evidence with experimental data in the laboratory \cite{Adelberger:2003zx, Lamoreaux:1996wh,Nesvizhevsky:2003ww,Jain:2013wgs} in a way akin to what has been attempted over the last decades for dark matter. Effects of dark energy on small scale experiments require the presence of a coupling to matter, and as a result the dark energy models with possible experimental tests in the laboratory fall within the class of dark energy/modified gravity theories \cite{Clifton:2011jh}. They have been recently classified according to the type of screening mechanism that shields the dark energy interaction with matter in our local, i.e. solar, environment \cite{Joyce:2014kja}. There are three broad families of such models: the ones subject to the Vainshtein mechanism \cite{Vainshtein:1972sx}, the K-mouflage mechanism \cite{Babichev:2009ee}, or the generalised chameleon property \cite{Khoury:2003aq,KhouryWeltman,Damour:1994zq,Pietroni:2005pv,Olive:2007aj,Hinterbichler:2010es,Brax:2010gi,Brax:2012gr}. The latter is the one which will concern us in this paper. In a nutshell, the chameleon screening occurs in regions of space where the Newtonian potential is large enough. This can occur in two typical ways. The first, and this is the original chameleon mechanism, is where the dark energy field becomes massive enough in the presence of matter \cite{Khoury:2003aq}. The second is the Damour-Polyakov property \cite{Damour:1994zq} whereby the interaction coupling between dark energy and matter becomes very small in dense matter. Both types of models can be mathematically described using a tomographic method \cite{Brax:2011aw} whereby the coupling function and potential can be reconstructed from the sole knowledge of the density dependence of both the mass and the interaction coupling. Laboratory tests of dark energy have been considered in the last ten years with wide-ranging techniques, see for instance \cite{Rider:2016xaq,Burrage:2016lpu} and a summary of the bounds on chameleons \cite{Burrage:2016bwy} and modified gravity models \cite{Lombriser:2014dua}. Stringent constraints \cite{Mota:2006ed,Mota:2006fz,Brax:2008hh} follow from torsion pendulum experiments such as E\"otwash \cite{Hoyle:2004cw,Kapner:2006si} where the presence of new forces can be tested \cite{Upadhye:2012qu}. Another promising technique uses the potential deviation from the Casimir interaction \cite{Lamoreaux:1996wh,Decca:2007yb} between two plates \cite{Brax:2007vm,Almasi:2015zpa}. Forthcoming experiments such as CANNEX may potentially exclude all models of the inverse chameleon type \cite{Brax:2010xx}. Finally, neutrons can be efficiently used \cite{Nesvizhevsky:2003ww}. First of all, the energy levels of neutrons in the terrestrial gravitational field have been measured \cite{Nesvizhevsky:2003ww,Jenke:2014yel} and deviations from this pattern would signal the existence of new interactions of the inverse chameleon type \cite{Brax:2011hb,Brax:2013cfa,Schmiedmayer:2015cqa}. 
Neutron interferometry can also be employed, as new interactions of the chameleon type \cite{Brax:2013cfa} would induce a phase shift and therefore a change in the interferometric patterns \cite{Lemmel:2015kwa}. More recently, atomic interferometry \cite{Burrage:2014oza} has been suggested as a new technique for probing dark energy. Experimental results have already been obtained \cite{Hamilton:2015zga} and constraints on inverse power law chameleons deduced \cite{Elder:2016yxm,Schlogel:2015idt}. In this paper, we will generalise this analysis to all models described by the tomographic method and therefore subject to either the chameleon or the Damour-Polyakov screening mechanisms \cite{Brax:2014zta}. This captures interesting models such as $f(R)$ gravity in the large curvature regime \cite{Hu:2007nk}, the environmentally dependent dilaton \cite{Brax:2010gi} and the symmetron \cite{Hinterbichler:2010es}. We find that the only models which can be efficiently tested by atomic interferometry are the symmetrons with masses falling below the dark energy scale. Symmetrons with mass larger than the present Hubble rate are known to have relevant implications cosmologically, but cannot be tested by this method \cite{Brax:2011aw}. On the other hand, symmetrons with masses of order of the dark energy scale are within reach of the E\"otwash types of experiments \cite{Upadhye:2012rc}. Here we find that atomic interferometry can probe symmetrons with masses a few orders of magnitude below the dark energy scale, typically with a range in vacuum smaller than a few centimeters. The paper is arranged as follows. In section 2, we recall details of the tomographic method and its link to models such as inverse power law chameleons, $f(R)$ gravity in the large curvature regime, the environmentally dependent dilaton and the symmetron. In section 3, we provide analytical details about scalar fields in a cylinder as suited to analyse current experimental data. In section 4, we apply the tomographic method to atomic interferometry experiments. Finally, in section 5, we give details about constraints on models and we focus on symmetrons. Conclusions are in section 6. There are three appendices where we compute the field profile in the cavity, the force profile and scalar charge, and the E\"otwash bounds using the tomographic method for the symmetron.
We have studied how dark energy models coupled to matter and subject to the chameleon and Damour-Polyakov screening mechanisms can be tested by atomic interferometry experiments. We have used the tomographic description of these models. Apart from inverse power law chameleons, whose coupling to matter must be less than $10^5$, we find that symmetrons with masses in the sub-meV region, corresponding to ranges shorter than a few centimeters, can be adequately constrained in a portion of their parameter space left untouched by torsion pendulum experiments such as E\"otwash. In particular, we find that the symmetron self-coupling must be bounded from below and therefore cannot be arbitrarily small. Future experiments with better sensitivities will certainly lead to improvements on the bounds presented here. This will also help in constraining and maybe even excluding certain chameleon or symmetron models. In the future, dark energy coupled to matter may eventually even be detected by such experiments. Having such tests of dark energy in the laboratory, independently of any cosmological signature, is certainly necessary in order to better understand the nature of the dark interactions of the Universe.
16
9
1609.09242
1609
1609.01153_arXiv.txt
We analyze the effect of gravitational back-reaction on cosmic string loops with kinks, which is an important determinant of the shape, and thus the potential observability, of string loops which may exist in the universe today. Kinks are not rounded off, but may be straightened out. This means that back-reaction will only cause loops with kinks to develop cusps after some potentially large fraction of their lifetimes. In some loops, symmetries prevent even this process, so that the loop evaporates in a self-similar fashion and the kinks are unchanged. As an example, we discuss back-reaction on the rectangular Garfinkle-Vachaspati loop.
16
9
1609.01153
1609
1609.08558_arXiv.txt
The Gamma-Ray Imager/Polarimeter for Solar flares (GRIPS) instrument is a balloon-borne telescope designed to study solar-flare particle acceleration and transport. We describe GRIPS's first Antarctic long-duration flight in January 2016 and report preliminary calibration and science results. Electron and ion dynamics, particle abundances and the ambient plasma conditions in solar flares can be understood by examining hard X-ray (HXR) and gamma-ray emission (20 keV to 10 MeV). Enhanced imaging, spectroscopy and polarimetry of flare emissions in this energy range are needed to study particle acceleration and transport questions. The GRIPS instrument is specifically designed to answer questions including: What causes the spatial separation between energetic electrons producing hard X-rays and energetic ions producing gamma-ray lines? How anisotropic are the relativistic electrons, and why can they dominate in the corona? How do the compositions of accelerated and ambient material vary with space and time, and why? GRIPS's key technological improvements over the current solar state of the art at HXR/gamma-ray energies, the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), include 3D position-sensitive germanium detectors (3D-GeDs) and a single-grid modulation collimator, the multi-pitch rotating modulator (MPRM). The 3D-GeDs have spectral FWHM resolution of a few keV and spatial resolution $<$1 mm$^3$. For photons that Compton scatter, usually $\gtrsim$150 keV, the energy deposition sites can be tracked, providing polarization measurements as well as enhanced background reduction through Compton imaging. Each of GRIPS's detectors has 298 electrode strips read out with ASIC/FPGA electronics. In GRIPS's energy range, indirect imaging methods provide higher resolution than focusing optics or Compton imaging techniques. The MPRM grid-imaging system has a single-grid design which provides twice the throughput of a bi-grid imaging system like RHESSI's. The grid is composed of 2.5 cm deep tungsten-copper slats, and quasi-continuous FWHM angular coverage from 12.5--162 arcsec is achieved by varying the slit pitch between 1--13 mm. This angular resolution is capable of imaging the separate magnetic loop footpoint emissions in a variety of flare sizes. In comparison, RHESSI's 35-arcsec resolution at similar energies makes the footpoints resolvable in only the largest flares.
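As a rough, back-of-the-envelope sketch of how the quoted slit pitches map to angular resolution, the snippet below assumes the usual modulation-collimator scaling FWHM $\approx$ pitch / (2 $\times$ grid--detector separation) and an assumed separation of roughly 8 m; both the geometric factor and the separation value are assumptions made here for illustration, not numbers taken from the text above.
\begin{verbatim}
# Rough sketch of the pitch-to-angular-resolution mapping for a modulation
# collimator, assuming FWHM ~ pitch / (2 * grid-detector separation).
# The ~8 m separation and the factor of 2 are assumptions for illustration.

import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def fwhm_arcsec(pitch_mm, separation_m=8.0):
    """Approximate angular FWHM (arcsec) for a given slit pitch (mm)."""
    return (pitch_mm * 1e-3) / (2.0 * separation_m) * RAD_TO_ARCSEC

if __name__ == "__main__":
    for pitch in (1.0, 4.0, 13.0):     # mm; spans the quoted 1--13 mm range
        print(f"pitch = {pitch:5.1f} mm  ->  FWHM ~ {fwhm_arcsec(pitch):6.1f} arcsec")
\end{verbatim}
Under these assumptions the 1--13 mm pitch range maps to roughly 13--170 arcsec, broadly consistent with the 12.5--162 arcsec coverage quoted above.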
\label{sec:intro} The Gamma-Ray Imager/Polarimeter for Solar Flares (GRIPS) instrument is a balloon-borne solar observatory which flew on its first mission during the 2015/16 Antarctic summer season. GRIPS provides imaging (12.5--162 arcseconds), spectroscopy ($\sim$30 keV to $\gtrsim$10 MeV) and polarimetry of high-energy emission during solar flares. Observations from the GRIPS instrument address questions concerning the acceleration and transport of particles during solar flares. GRIPS is a NASA Heliophysics Low Cost Access to Space mission led by Pascal Saint-Hilaire at UC Berkeley's Space Science Laboratory (SSL). Most of the telescope and its support systems were designed, built and tested at SSL with contributions from Lawrence Berkeley National Laboratory (LBNL), NASA Goddard Space Flight Center, and UC Santa Cruz. The GRIPS payload (Figure~\ref{fig:payload}) flew on a long-duration balloon over Antarctica on January 19--30, 2016. During GRIPS's $\sim$12-day flight, 21 C-class flares occurred. Because of where the payload landed and the lateness of the season, recovery resources were limited. All flight data for the primary instrument and its three piggyback instruments were recovered, but the instrument itself will remain in the Antarctic deep field during the 2016 winter and will be recovered in the following (2016/17) summer season. All systems performed in flight as designed. This paper reports on the details of GRIPS's first flight and provides updates on critical system changes and milestones since the previous GRIPS papers \cite{Shih2012, Duncan2013}. \begin{figure} [!h] \centering \includegraphics[scale=0.04]{GRIPS_payload.jpg} \caption{\label{fig:payload}The fully integrated GRIPS instrument payload. The gondola is suspended from "The Boss" launch vehicle while the GRIPS and Columbia Scientific Ballooning Facility (CSBF) teams complete pre-flight checklists during a launch attempt.} \end{figure} \subsection{Science Goals} Solar flares can release $10^{32}$--$10^{33}$ ergs within tens to thousands of seconds, imparting up to tens of percent of this energy to particles \cite{Emslie2012}. During these events, electrons can be accelerated to hundreds of MeV and ions to tens of GeV. Large-scale field reconfigurations resulting from magnetic reconnection in the corona are thought to power flares, but the precise mechanisms that convert the stored magnetic energy into particle kinetic energy are poorly understood. Accelerated particles either escape into interplanetary space or rain down into the solar atmosphere where they lose energy as they interact with the increasing particle densities. The particles emit photons in a variety of processes \cite{Murphy2005, Murphy2007} which give clues to the underlying acceleration and transport mechanisms. High resolution imaging and spectroscopy of the hard X-ray/gamma-ray energy range during flares is required to build a complete picture of these processes. The Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) \cite{Lin2002}, launched in 2002, first opened the door to imaging and detailed spectroscopy in this energy range. RHESSI observations linked the high-energy photon emission to observed spatial structures like magnetic fields, verifying the overall geometry of the standard flare model. 
In the RHESSI and SMM datasets, ion and relativistic electron fluences were seen to be correlated, showing that flares accelerate ions and relativistic electrons proportionally \cite{Shih2009} and suggesting that these populations are accelerated together, and possibly by similar mechanisms. However, imaging of two of the largest flares of the last solar cycle shows that the centroids of ion and relativistic electron emission were significantly displaced from one another \cite{Hurford2003, Hurford2006}. This result is surprising; ions and electrons that are accelerated in the same region are thought to be transported out of the acceleration area along the same field lines, implying that they would enter the chromosphere together and have similar emission source locations. Figure~\ref{fig:discrepancy} shows the correlated emission as well as the observed spatial displacement. Unfortunately, RHESSI's reduced imaging capabilities at gamma-ray energies limit these observations to the few largest and best-observed flares of the past solar cycle. RHESSI and other solar observatories have revealed a new set of flare particle questions, which GRIPS was designed to address: \begin{itemize} \item What caused the observed spatial separation between the sites of relativistic electron and ion energy deposition in flare footpoints? Does this separation occur in all flares? \item Do all flares accelerate ions? \item Are relativistic electrons isotropic or beamed in the corona? \item What is the composition/variance of accelerated material in space and time? \end{itemize} \begin{figure} [!h] \centering \includegraphics[scale=0.6]{discrepancy.pdf} \caption{\label{fig:discrepancy}\emph{Left:} The correlation between the relativistic electron bremsstrahlung and the 2.2~MeV line fluence indicates that the same process may be accelerating protons and electrons. The correlation extends over 3 orders of magnitude and two different missions.\\ \emph{Right:} RHESSI gamma-ray image in two different energy bands of the GOES class X17 flare on October 28, 2003, overlaid on a TRACE 195 {\AA} image. The electron-associated bremsstrahlung emission is shown in red contours, and the imaged 2.2 MeV line is shown in blue contours. The electron- and ion-associated centroids are separated from one another by $\sim$17" in each footpoint. Inset is a comparison of the GRIPS and RHESSI minimum beam widths. At gamma-ray energies, RHESSI can only separate footpoints in the largest flares. The GRIPS resolution is fine enough to separate footpoints in a variety of flare sizes.}
16
9
1609.08558
1609
1609.06129_arXiv.txt
Adiabatic oscillation frequencies of stellar models, computed with the standard mixing-length formulation for convection, increasingly deviate with radial order from observations in solar-like stars. Standard solar models overestimate adiabatic frequencies by as much as $\sim$20$\mu$Hz. In this letter, we address the physical processes of turbulent convection that are predominantly responsible for the frequency differences between standard models and observations, also called `surface effects'. We compare measured solar frequencies from the MDI instrument on the SOHO spacecraft with frequency calculations that include three-dimensional (3D) hydrodynamical simulation results in the equilibrium model, nonadiabatic effects, and a consistent treatment of the turbulent pressure in both the equilibrium and stability computations. With the consistent inclusion of the above physics in our model computation we are able to reproduce the observed solar frequencies to $\lesssim$ 3$\mu$Hz without the need of any additional ad-hoc functional corrections.
{High-quality measurements of} stellar oscillation frequencies of {thousands} of solar-like stars are now available from the NASA space mission $Kepler$, and from the French satellite mission CoRoT (Convection Rotation and planetary Transits). In order to exploit these data for probing stellar interiors, accurate modelling of stellar oscillations is required. However, adiabatically computed frequencies are increasingly overestimated with increasing radial order (see dot-dashed curve in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}). These effects have become known as `surface effects' \citep[e.g.,][]{Brown84, Gough84, Balmforth92b, RosenthalEtal99, Houdek10, GrigahceneEtal12}. Semi-empirical corrections to adiabatically computed frequencies proposed by e.g., \citet{KjeldsenEtal08}, \citet{BallGizon14}, \citet{SonoiEtal15}, \citet{BallEtal16} are purely descriptive and provide little physical insight. Here, we report on a self-consistent model computation, which reproduces the observed solar frequencies to within $\sim3\mu$Hz, and, for the first time, without the need of any ad-hoc functional corrections. It represents a purely physical explanation for the `surface effects' by considering (a) a state-of-the-art 3D--1D patched mean model, (b) nonadiabatic effects, (c) a consistent treatment of turbulent pressure in the mean and pulsation models, and (d) a depth-dependent modelling of the turbulent anisotropy in both the mean and oscillation calculations. Convection modifies pulsation properties of stars principally through three effects: \begin{itemize} \item[(i)]effects through the turbulent pressure term in the {hydrostatic equation} ({structural} effect), and its pulsational perturbation in the momentum equation (modal effect); \item[(ii)]opacity variations brought about by large convective temperature fluctuations, affecting the mean stratification; this {structural} effect is also known as `convective back-warming' \citep[e.g.,][]{TrampedachEtal13}; \item[(iii)]nonadiabatic effects, additional to the pulsationally perturbed radiative heat flux, through the perturbed convective heat flux (modal effects) in the thermal energy equation. \end{itemize} We follow \citeauthor{RosenthalEtal99}'s~(\citeyear{RosenthalEtal99}) idea of replacing the outer layers of a 1D solar envelope model by an averaged 3D simulation, and adopt the most advanced and accurate 3D--1D matching procedure available today \citep{TrampedachEtal14a, TrampedachEtal14b} for estimating {structural} effects on adiabatic solar frequencies. Furthermore, we use a 1D nonlocal, time-dependent convection model for estimating the modal effects of nonadiabaticity and convection dynamics. Additional modal effects can be associated with the advection of the oscillations by spatially varying turbulent flows in the limit of temporal stationarity \citep{Brown84, ZhugzhdaStix94, BhattacharyaEtal15}. This `advection picture' is related to the `dynamical picture' of including temporally varying turbulent convection fluctuations in the limit of spatially horizontal homogeneity. Because these two pictures describe basically the same effect but in two different limits, i.e. they are complementary, only one of them should be included. We do so by adopting the latter. \begin{figure} \includegraphics[width=\columnwidth]{./plot_frequ5_iii_19Jul16} \caption{Inertia-scaled frequency differences between MDI measurements (Sun) of acoustic modes with degree $l=20$--$23$ and model computations as functions of oscillation frequency. 
The scaling factor $Q_{nl}$ for a mode with radial order $n$ is obtained by taking the ratio between the inertia of modes with $l=23$ and that of radial modes, interpolated to the $l=23$ frequencies \citep[e.g.,][]{AertsEtal10}. The dot-dashed curve shows the differences for the baseline model, `Sun\,-\,{A}' (cf. Section~\ref{sec:1Dbmodel}), reflecting the results for a standard solar model computation. The dashed curve plots the residuals for the patched model, which includes turbulent pressure and convective back-warming in the mean model, i.e. `Sun\,-\,{B}' (cf. Section~\ref{sec:31Dmodel}). The solid curve illustrates the differences from the modal effects of nonadiabaticity and perturbation to the turbulent pressure, i.e. `{D\,-\,C}'\, (cf. Section~\ref{sec:1Dmodel}).} \label{fig:MDI-1D.a-1DPM.a-NL} \end{figure} \vspace{-5pt}
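A minimal sketch of the inertia scaling described in the caption above is given below: radial-mode inertias are interpolated to the $l=23$ mode frequencies, $Q_{nl}$ is formed as the inertia ratio, and the observed-minus-model frequency differences are multiplied by $Q_{nl}$. All arrays are synthetic placeholders, not MDI data or model output.
\begin{verbatim}
# Minimal sketch of the inertia scaling Q_nl described above: interpolate the
# radial-mode (l=0) inertia onto the l=23 mode frequencies, form the ratio
# Q_nl = E_{l=23} / E_{l=0,interp}, and scale the observed-minus-model
# frequency differences by Q_nl.  All arrays here are synthetic placeholders.

import numpy as np

# Placeholder mode sets (frequencies in microHz, arbitrary inertia units).
nu_l0  = np.array([2000., 2300., 2600., 2900., 3200., 3500.])
E_l0   = np.array([8e-9, 5e-9, 3e-9, 2e-9, 1.5e-9, 1.2e-9])
nu_l23 = np.array([2100., 2450., 2750., 3050., 3350.])
E_l23  = np.array([7e-9, 4e-9, 2.6e-9, 1.8e-9, 1.4e-9])

# Placeholder observed-minus-model frequency differences for the l=23 modes.
dnu_obs_minus_model = np.array([1.0, 3.0, 6.0, 10.0, 15.0])

# Interpolate the radial-mode inertia to the l=23 frequencies (log-space,
# since mode inertia falls steeply with frequency).
E_l0_at_l23 = np.exp(np.interp(nu_l23, nu_l0, np.log(E_l0)))

Q_nl = E_l23 / E_l0_at_l23
dnu_scaled = Q_nl * dnu_obs_minus_model

for nu, q, d in zip(nu_l23, Q_nl, dnu_scaled):
    print(f"nu = {nu:6.0f} uHz   Q_nl = {q:5.2f}   Q_nl * dnu = {d:6.2f} uHz")
\end{verbatim}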
The adiabatic frequency corrections (Section~\ref{sec:adcalturb}) arising from modifications to the stratification of the mean model are obtained from an appropriately averaged 3D simulation for the outer convection layers (Section~\ref{sec:31Dmodel}). The frequency corrections associated with modal effects arising from nonadiabaticity, including the perturbations to both the radiative and convective heat fluxes, and from convection-dynamical effects of the perturbation to the turbulent pressure, are estimated from a 1D nonlocal, time-dependent convection model including turbulent pressure (Section~\ref{sec:1Dmodel}). \subsection{Adiabatic frequency corrections from modifications to the mean structure} \label{sec:mean_structure} Frequency differences between MDI data (Sun) and our baseline model, `Sun - {A}', are depicted in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL} by the dot-dashed curve, illustrating the well-known `surface effects' for a standard solar model, with frequency residuals of up to $\sim20\,\mu$Hz. The effect on the adiabatic frequencies of adopting an averaged 3D simulation for the outer convection layers is illustrated by the dashed curve in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}. It shows the frequency difference between MDI data and the patched model, `Sun - {B}'. The patched model underestimates the frequencies by as much as $\sim10\,\mu$Hz. The change from overestimating the frequencies with the baseline model, `{A}' (dot-dashed curve), to underestimating the frequencies with the patched model, `{B}' (dashed curve), is mainly due to the effects of turbulent pressure $\pt$ in the equation of hydrostatic support~(\ref{eq:hydstat}) and to opacity changes (convective back-warming) caused by the relatively large convective temperature fluctuations in the superadiabatic boundary layers. \subsection{Modal effects from nonadiabaticity and convection dynamics} \label{sec:model_effects} In addition to the {structural} changes, we also consider the modal effects of nonadiabaticity and the pulsational perturbation to turbulent pressure $\Ldel\pt$. We do this by using the 1D solar envelope model of Section~\ref{sec:1Dmodel}, which includes turbulent pressure, and which is calibrated so as to have the same max($\pt$) as the 3D solar simulation (see Fig.~\ref{fig:max-pt}). To assess the modal effects, we compute nonadiabatic and adiabatic frequencies for this nonlocal envelope model. The frequency difference between these two model computations, i.e. `{D\,-\,C}', is plotted in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL} with a solid curve, and illustrates the modal effects of nonadiabaticity and turbulent pressure perturbations $\Ldel\pt$. These modal effects (solid curve in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}) produce frequency residuals that are similar in magnitude to the frequency residuals between the Sun and the patched model, `Sun\,-\,{B}' (dashed curve). This suggests that the underestimation of the adiabatic frequencies due to changes in the mean model, `{B}', is nearly compensated by the modal effects. The remaining overall frequency difference between the Sun and models that include both {structural} and modal effects, i.e. the difference between the dashed and solid curves in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}, is illustrated in Fig.~\ref{fig:MDI-1D[all].na} by the solid curve, showing a maximum frequency difference of $\sim3\,\mu$Hz. 
Also depicted, for comparison, is the dot-dashed curve from Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}, which shows the frequency difference for the baseline model {`A'}, representing the result for a standard solar model calculation. We conclude that, if both {structural} and modal effects due to convection and nonadiabaticity are considered together, it is possible to reproduce the measured solar frequencies satisfactorily (solid curve in Fig.~\ref{fig:MDI-1D[all].na}) without the need of any ad-hoc correction functions. Moreover, the calibrated set of convection parameters in the 1D nonlocal model calculations reproduces the {turbulent-pressure profile of the 3D simulation in the relevant wave-propagating layers} (Fig.~\ref{fig:max-pt}), the correct depth of the convection zone, and solar linewidths {over the whole measured frequency range} (Fig.~\ref{fig:BiSON.HWHM-1D.eta}). Although we have not used the same equilibrium model for estimating the {structural} (`{B}') and the modal effects (`{D}'), we believe that the impact of this remaining inconsistency on the estimated modal effects is minute, because of the satisfactory reproduction of the $\pt$ profile in the nonlocal equilibrium model `{D}' (see Fig.~\ref{fig:max-pt}). However, we do plan to address this in a future paper.
16
9
1609.06129
1609
1609.05016_arXiv.txt
{When exploring the stability of multiplanet systems in binaries, two parameters are normally exploited: the critical semimajor axis $a_c$ computed by Holman and Wiegert (1999), within which planets are stable against the binary perturbations, and the Hill stability limit $\Delta$ determining the minimum separation beyond which two planets will avoid mutual close encounters. Both these parameters are derived and used in different contexts, i.e. $\Delta$ is usually adopted for computing the stability limit of two planets around a single star while $a_c$ is computed for a single planet in a binary system.} {Our aim is to test whether these two parameters can be safely applied in multiplanet systems in binaries or if their predictions fail for particular binary orbital configurations. } {We have used the frequency map analysis (FMA) to measure the diffusion of orbits in the phase space as an indicator of chaotic behaviour. } {First we revisited the reliability of the empirical formula computing $a_c$ in the case of single planets in binaries and we find that, in some cases, it underestimates the real outer limit of stability by 10--20\% and does not account for planets trapped in resonance with the companion star well beyond $a_c$. For two--planet systems, the value of $\Delta$ is close to that computed for planets around single stars, but the level of chaoticity close to it substantially increases for smaller semimajor axes and higher eccentricities of the binary orbit. In these configurations $a_c$ also begins to be unreliable and non--linear secular resonances with the stellar companion lead to chaotic behaviour well within $a_c$, even for single planet systems. For two planet systems, the superposition of mean motion resonances, either mutual or with the binary companion, and non--linear secular resonances may lead to chaotic behaviour in all cases. We have developed a parametric semi--empirical formula determining the minimum value of the binary semimajor axis, for a given eccentricity of the binary orbit, below which stable two planet systems cannot exist. } {The superposition of different resonances between two or more planets and the binary companion may prevent the existence of stable dynamical configurations in binaries. As a consequence, care must be taken when applying the Holman and Wiegert (1999) criterion and the Hill stability criterion against mutual close encounters to multiplanet systems in binaries. }
\label{intro} According to \cite{horch2014}, an estimated fraction of about 40 to 50\% of exoplanets are in binary star systems. The dynamical properties of planets in binaries, in terms of eccentricity distribution, do not appear significantly different from those of planets around single stars. It is thus expected that planet--planet scattering is effective in leading to physical collisions between planets, ejection, and eccentricity pumping in binaries as in single stars. \cite{marzari2005} have shown that there are some statistical differences in the outcome of the chaotic phase which strongly depend on the eccentricity of the binary system for any given value of the binary semimajor axis. For example, in a system of three planets on initially unstable orbits, the number of surviving systems with two planets in a stable configuration at the end of the scattering phase declines with the binary eccentricity. In this paper we focus on the conditions for the stability/instability of multiplanet systems in binaries. In this study of the dynamics of two planets orbiting the primary star of a binary system, we test whether the criterion for Hill stability against mutual close approaches is affected by the presence of the companion star. The minimum initial separation $\Delta$ between two planets around a single star granting their long-term dynamical stability is given by $\Delta \geq 2 \sqrt 3 R_{Hill}$, where $\Delta = a_2 - a_1$ is the difference between the semimajor axes of the two planets and $R_{Hill}$ is the mutual Hill sphere defined as \begin{equation} R_{Hill} = \left(\frac{m_1 + m_2} {3 M_{\odot}}\right )^{1/3} \left( \frac{\left(a_1 + a_2\right)} {2}\right) \label{eq:cha2} \end{equation} This criterion was derived by \cite{glad93} from the work of \cite{marchal82} for circular and coplanar orbits, and it was numerically confirmed by \cite{CH96}. More complex equations are available when the planets are on initially eccentric and inclined orbits \citep{donnison06,veras2013}, but we will concentrate in this paper on bodies initially on almost circular orbits lying approximately in the same plane. In this paper we investigate the validity of the above mentioned stability condition in binary systems using the same numerical approach as \cite{mar20142pia}, based on the application of the frequency map analysis (hereafter FMA). The FMA method \citep{lask93, sine96, marz03} allows us to outline the stability regions in the phase space with short-term numerical integrations by computing the diffusion of the main frequencies of the system. This allows a massive exploration of the region close to the Hill stability separation to test the influence of the binary orbital parameters on its value. Before testing the stability of two planets in binaries we apply the method to the case of a single planet as a test bench. In this way we can compare the outcome of the FMA exploration with the empirical formula of \cite{holman97} which defines a critical semimajor axis $a_c$ within which the orbit of a planet is assumed to be stable against the binary perturbations. We then concentrate on systems with two planets and test their stability as a function of the binary parameters such as semimajor axis and eccentricity of the stellar pair and their mass ratio. In Section 2 we briefly summarize the numerical model used to explore the stability of planetary systems in binaries. 
In Section 3 we test the method on single planet systems in binaries and compare the results with the semi--empirical formula of \cite{holman97} and \cite{ines}. In Section 4 we study two--planet systems in binaries and examine the validity of the Hill criterion for different parameters of the binary. In Section 5 we exploit a statistical approach to derive a semi--empirical formula predicting the minimum semimajor axis of the binary allowing a significant stability region for two planets as a function of the binary eccentricity, mass ratio and inner planet semimajor axis. We summarize and discuss our findings in Section 6.
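As a concrete reference for the Hill criterion recalled above, the sketch below evaluates the mutual Hill radius of Eq.~(\ref{eq:cha2}) and compares the separation $\Delta = a_2 - a_1$ with the $2\sqrt{3}\,R_{Hill}$ threshold of Gladman (1993). The planet masses and semimajor axes are placeholders (two Jupiter-mass planets around a solar-mass star), not values from the runs described in this paper.
\begin{verbatim}
# Minimal sketch of the Hill-stability check: compute the mutual Hill radius
# R_Hill and compare Delta = a2 - a1 with 2*sqrt(3)*R_Hill (Gladman 1993).
# The planet masses and semimajor axes below are placeholders.

import math

M_STAR_MSUN = 1.0            # solar-mass primary, as in the equation above

def mutual_hill_radius(m1_msun, m2_msun, a1_au, a2_au):
    """Mutual Hill radius (AU) for two planets around a 1 Msun star."""
    return ((m1_msun + m2_msun) / (3.0 * M_STAR_MSUN)) ** (1.0 / 3.0) \
        * 0.5 * (a1_au + a2_au)

def hill_stable(m1_msun, m2_msun, a1_au, a2_au):
    """True if the separation exceeds 2*sqrt(3) mutual Hill radii."""
    delta = a2_au - a1_au
    return delta >= 2.0 * math.sqrt(3.0) * mutual_hill_radius(m1_msun, m2_msun, a1_au, a2_au)

if __name__ == "__main__":
    m_jup = 9.54e-4                       # Jupiter mass in solar masses
    a1 = 3.0                              # AU, inner planet
    for a2 in (3.5, 4.0, 4.5, 5.0):
        print(f"a2 = {a2} AU  ->  Hill stable: {hill_stable(m_jup, m_jup, a1, a2)}")
\end{verbatim}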
Using the FMA, a fast tool to identify chaotic orbits, we have investigated the stability of planetary orbits in binary systems. As a test bench, we have first revisited the \cite{holman97} empirical formula which calculates a critical semimajor axis $a_c$ beyond which instability occurs for single planets in binaries. We show that this formula underestimates the real stability limit by as much as 20\%, depending on the binary orbit. This is due to the low sampling rate used by \cite{holman97}, who had to explore a wide range of parameters of the binary ($a_B$, $e_B$, and $\mu$) to perform a reliable fit based on these parameters. As a consequence, for each individual set of $a_B, e_B, \mu$ they did not integrate a number of different systems large enough to fully explore the phase space. We have focused on a limited number of binary parameters, but thanks to the FMA we have performed a very dense sampling, finding that the stability limit in semimajor axis is larger than that given by the \cite{holman97} formula. The fraction of stable orbits decreases quickly beyond $a_c$, but such orbits still represent a significant sample of stable configurations. In addition to these orbits, which extend just beyond $a_c$, there are dynamical systems which are trapped in resonance with the companion star, are stable over 5 Gyr, and have semimajor axes well beyond $a_c$. When applied to the same systems, the Hill stability criterion, evaluated for the binary companion and the planet, on the other hand accurately predicts the stability limit if the eccentricity of the binary is low, but it becomes inaccurate for higher values of $e_B$, significantly overestimating the stability boundary. The use of the FMA has also allowed us to discover unstable systems within $a_c$ when the orbit of the companion star is eccentric. These systems densely populate the region with $a_p < a_c$ and their chaoticity is probably due to the presence of non--linear secular resonances \citep{mich2004,libert2005} induced by the companion star. When applied to two--planet systems, the FMA method shows that the separation between two planets leading to instability remains close to the Hill stability limit defined by \cite{glad93}. However, when $a_B$ decreases and $e_B$ grows, the perturbations of the companion star progressively make two--planet systems more chaotic. The inner border is populated by an increasing number of unstable systems while the outer border is destabilized by overlap of mean motion and secular (linear and non--linear) resonances. The limiting case is when most planetary systems are chaotic independently of the separation between the two planets, as in the case with $a_B = 50$ AU and $e_B=0.4$ (the inner planet has semimajor axis $a_1=3$ AU in all our models). In this configuration mean motion resonances and secular resonances (linear and non--linear) easily superimpose, leading to chaos and instability. This implies that the value of $a_c$ cannot be carelessly used in multiplanet systems since the combined mutual planetary and binary perturbations may lead to chaotic behaviour well within $a_c$, in particular for eccentric binaries. To derive a semi--empirical formula that allows us to compute, at least at a rough approximation level, the value of the binary semimajor axis $a_B^l$ below which two planets cannot coexist around the central star, we have run a series of models with different values of $e_B, \mu, a_1$ with the goal of finding scaling relationships in these variables. 
Of course, owing to the huge CPU load required to explore such a large parameter space, our formula is limited. However, it covers the range of parameters most frequently encountered in binary systems, according to \cite{duma}. It can be used when planning observational strategies while looking for planets in binary systems.
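As an illustration of the frequency-diffusion idea underlying the FMA used throughout this paper, the sketch below estimates the dominant frequency of a time series in its first and second halves and takes the relative change as a chaos indicator. Laskar's refined (NAFF-type) frequency determination is replaced here by a plain FFT peak, and the test signals are synthetic; this is not the analysis code used for the simulations above.
\begin{verbatim}
# Minimal sketch of a frequency-diffusion (FMA-style) chaos indicator:
# estimate the dominant frequency of a time series in its first and second
# halves and take the relative change as a measure of diffusion.  A plain FFT
# peak replaces Laskar's refined frequency analysis, and the test signals are
# synthetic -- this is an illustration, not the authors' code.

import numpy as np

def dominant_frequency(signal, dt):
    """Frequency (cycles per time unit) of the strongest non-zero FFT peak."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
    freqs = np.fft.rfftfreq(signal.size, dt)
    return freqs[1:][np.argmax(spec[1:])]

def frequency_diffusion(signal, dt):
    """Relative change of the dominant frequency between the two halves."""
    half = signal.size // 2
    f1 = dominant_frequency(signal[:half], dt)
    f2 = dominant_frequency(signal[half:], dt)
    return abs(f2 - f1) / f1

if __name__ == "__main__":
    dt, n = 0.01, 20000
    t = np.arange(n) * dt
    regular = np.cos(2 * np.pi * 1.7 * t)                        # fixed frequency
    drifting = np.cos(2 * np.pi * (1.7 + 0.02 * t / t[-1]) * t)  # slowly drifting
    print("regular :", frequency_diffusion(regular, dt))
    print("drifting:", frequency_diffusion(drifting, dt))
\end{verbatim}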
16
9
1609.05016
1609
1609.02400_arXiv.txt
Using the ALMA archival data of both $^{12}$CO\,(6--5) line and 689\,GHz continuum emission towards the archetypical Seyfert galaxy, NGC\,1068, we identified a distinct continuum peak separated by 14\,pc from the nuclear radio component S1 in projection. The continuum flux gives a gas mass of $\sim 2 \times 10^5$ \Msun\ and a bolometric luminosity of $\sim 10^8$ \Lsun, leading to a star formation rate of $\sim$ 0.1 \Msun\ yr$^{-1}$. Subsequent analysis of the line data suggests that the gas has a size of $\sim 10$\,pc, yielding a mean H$_2$ number density of $\sim 10^5$ cm$^{-3}$. We therefore refer to the gas as a ``massive dense gas cloud": the gas density is high enough to form a ``proto starcluster" with a stellar mass of $\sim 10^4$ \Msun. We found that the gas occupies a unique position between galactic and extragalactic clouds in the diagrams of star formation rate (SFR) vs. gas mass proposed by Lada et al. and of surface density of gas vs. SFR density by Krumholz and McKee. All the gaseous and star-formation properties may be understood in terms of the turbulence-regulated star formation scenario. Since there are two stellar populations with ages of 300\,Myr and 30\,Myr in the 100 pc-scale circumnuclear region, we argue that NGC\,1068 has experienced at least three episodic star formation events, with a tendency for the inner star-forming regions to be younger. Together with several lines of evidence that the dynamics of the nuclear region is decoupled from that of the entire galactic disk, we suggest that the gas inflow towards the nuclear region of NGC\,1068 may be driven by a past minor merger.
\label{s:intro} NGC\,1068 is one of the nearest archetypical Seyfert galaxies \citep{Sey43, KW74}, making it an ideal laboratory for understanding active galactic nuclei (AGNs) \citep{AM85}; the distance of 15.9 Mpc \citep{KH13} is adopted throughout this paper. Therefore, a number of observational studies at various wavelengths have been made to understand the nature of AGN phenomena in NGC\,1068 [e.g., \citet{Cecil02, SB12, Mezcua15, LR16, Wang12}].\par Another important issue is the so-called starburst-AGN connection, since a number of Seyfert galaxies have an intense circumnuclear ($\sim 100$\,pc scale) star forming region around their AGNs [e.g., \citet{Sim80, Wilson91, SB96} and references therein]. Although there is no general consensus on this issue, both the nuclear ($\sim 10$\,pc scale) starburst and the AGN activity commonly need efficient gas inflow to the circumnuclear and nuclear regions. Therefore, it is expected that intense star formation events around the nucleus of an AGN-hosting galaxy will provide us with useful hints for understanding the triggering mechanism of AGNs. This issue is also important when we investigate the coevolution between galaxies and supermassive black holes (SMBHs); i.e., the positive correlation between the spheroidal and SMBH masses in galaxies \citep{KH13, HB14}.\par Since NGC\,1068 has intense circumnuclear star forming regions around its AGN, it also provides us with an important laboratory for this issue. It has been suggested that NGC\,1068 has two stellar populations in the 100\,pc-scale circumnuclear region around the nucleus \citep{SB12}: one is a relatively young stellar population with an age of 300\,Myr extending over the 100-pc scale circumnuclear region, and the other is a ring-like structure at $\approx$ 100 pc from the nucleus with an age of 30\,Myr. Since the inner 35 pc region is dominated by an old stellar population with an age of $>$ 2\,Gyr, it is suggested that two episodic intense star formation events occurred in the circumnuclear region of NGC\,1068, although their origins are not yet understood. In the western part of the ring, molecular hydrogen emission, H$_2$ S(1), is detected with a shell-like structure \citep{Sch00, Vale12}. Since this emission often probes shock-heated gas, either a super-bubble or an AGN feedback effect, or both, have been discussed as its origin to date \citep{SB12, GB14a, GB14b}. In either case, a certain asymmetric perturbation could have driven the intense star formation event 30\,Myr ago. \par If there is a physical relationship between circumnuclear and nuclear star formation events and the triggering of the AGN, it is intriguing to investigate the star formation activity in the innermost region of NGC\,1068. For this purpose, it is essential to attain high spatial resolution, down to pc scales, in both dust continuum emission and thermal molecular lines, which allows us to diagnose not only the gas kinematics but also the gas physics. In this context, the Atacama Large Millimeter/Submillimeter Array (ALMA) has been extensively used to study the atomic and molecular gas and dust properties of NGC\,1068 in detail \citep{GB14a, GB14b, GB16, Ima16, Izu16}.\par Among these brand-new ALMA observations, we emphasize the potential importance of the newly detected 689\,GHz continuum source located close to the central engine of NGC\,1068 observed by \citet{GB14a} (project-ID: $\#$2011.0.00083.S), although it was not identified as an independent source by the authors (see their Figure 3). 
In addition, the continuum source was not separately identified as an object in their CO\,(6--5) map. \citet{GB14a} interpreted the molecular gas associated with the continuum source as a portion of the circumnuclear region rather than as an independent source; see their Figure 4c. Taking into account its proximity to the nucleus [the nuclear radio component S1 \citep{G04}], we consider that this source must play an important role in shaping the observed complicated properties of the nuclear region of NGC\,1068. In order to address the nature of the continuum source and its role in the dynamics of the nuclear region, we analyzed their ALMA data.\par
To shed light on the nature of both circumnuclear and nuclear star formation in conjunction with the AGN activity, we analyzed ALMA archival data on both CO\,(6--5) line and 689\,GHz continuum emission towards the archetypical nearby Seyfert galaxy NGC\,1068 ($d = 15.9$\,Mpc). The ALMA data were originally taken by Garcia-Burillo and colleagues [see details of their observations in \citet{GB14a}]. In this work, we focused on the 689\,GHz local continuum peak in the vicinity of the nucleus, located 14\,pc (\timeform{0.18"}) NNE of the nucleus. Although the continuum peak of our interest was already found in the analysis by \citet{GB14a}, no discussion was given in their paper. Since a near-nuclear gas condensation such as the newly identified ``massive dense gas cloud" is generally expected to physically affect the nuclear activity of a galaxy, we thoroughly investigated the physical properties of the source. Our findings can be summarized as follows.\par \begin{enumerate} \item The 689 GHz continuum flux gives a gas mass and bolometric luminosity (see Table \ref{tbl:Ps} for the values), allowing us to estimate an SFR of $\sim$ 0.1 \Msun\ yr$^{-1}$. We estimated the size of the gas to be $\sim 10$\,pc in diameter by means of two methods (\S\ref{ss:DP} and \S\ref{ss:SFandGproperty}). Because both results are reasonably consistent, we obtained a mean H$_2$ number density of $\sim 10^5$ cm$^{-3}$. Therefore, this continuum peak can be identified as a ``massive dense gas cloud". \item The gas density is high enough to form a ``proto starcluster" with a total stellar mass of $M_\ast\sim 10^4$ \Msun. We argue that this gas cloud will evolve into a nuclear star cluster around the nucleus of NGC\,1068. \item The gas cloud is identified as a missing link between galactic and extragalactic gas clouds in the previously known scaling relations of [a] SFR vs. gas mass proposed by \citet{Lada12}, and [b] surface density of gas vs. SFR density by \citet{KM05}. All the gaseous and star-formation properties (Table \ref{tbl:Ps} and Figure \ref{fig:SFR}) may be understood in terms of the turbulence-regulated star formation scenario proposed by \citet{KM05}. \item Since there are two stellar populations with ages of 300\,Myr and 30\,Myr in the 100 pc-scale circumnuclear region, we argue that NGC\,1068 has experienced at least three episodic star formation events, with a tendency for the inner star-forming regions to be younger. Given the evidence from the gas dynamics, the nuclear region of NGC\,1068 is suggested to be dynamically decoupled from the entire galactic disk. We propose that the gas inflow towards the nuclear region of the galaxy may be driven by a past minor merger. \end{enumerate} We sincerely acknowledge the anonymous referee, whose comments significantly helped to improve the quality of our analysis and discussion. The authors sincerely acknowledge Charles Lada, Mark Krumholz, and the Copyright \& Permissions Team of the AAS journal for their kind permission to use their figures in this work (Figure \ref{fig:SFR}). We would also like to thank Michael R. Blanton for providing us with his SDSS color composite image of NGC\,1068 shown in Figure\,\ref{fig:guidemap}a and Fumi Egusa for her generous support in handling the CO (6--5) image data. This work was financially supported in part by JSPS (YT; 23244041 and 16H02166). This paper makes use of the ALMA data of ADS/JAO.ALMA$\#$2011.0.00083.S. 
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \clearpage \begin{figure}[ht!] \includegraphics[angle=0,scale=.54]{f1.eps} \caption{ (a) An optical color image of NGC\,1068 based on the Sloan Digital Sky Survey data (courtesy of Michael R. Blanton). The image size is \timeform{14.22'}$\times$\timeform{12.11'}; north is up and east is to the left. The horizontal white bar at the bottom-right corner indicates a linear scale of 10\,kpc at $d$\,=\,15.9 Mpc. (b) An overlay of the 689\,GHz continuum map (contours) on the velocity centroid map of the CO\,(6--5) emission (color) in the \timeform{7.5"} squared region centered at the previously known AGN, S1. (c) The positions of the 689\,GHz peak (star) and the known AGN, S1, (double circles) shown on the continuum map (yellow contour) and the total integrated intensity map of the CO\,(6--5) emission (color). All the contour intervals are 3\sgm\ steps, starting from the 3\sgm\ level, while the dashed contour shows the $-3$\sgm\ level. See the text in \S\ref{s:dt} for the image noise level. The light-blue dashed circle with a radius of \timeform{0.45"} indicates the region where we performed our position-velocity (PV) diagram analysis shown in Figure \ref{fig:pv}a. The slicing axis for the PV diagrams is shown by the light-blue dashed line. \label{fig:guidemap}} \end{figure} \clearpage \begin{figure}[ht!] \includegraphics[angle=0,scale=.48]{f2.eps} \vspace{0truemm} \caption{ Velocity channel maps of the CO\,(6--5) emission (color contours) in the \timeform{0.5"} square nuclear region of NGC\,1068. The 689\,GHz continuum emission map is also shown in each panel by grey scale plus thin contours. The continuum peak and the nuclear radio source S1 are shown by a white star and black double circles, respectively. The colored square in each panel indicates the peak pixel position of the CO\,(6--5) emission. All the filled squares have S/N $\geq$ 4 while the open ones have 3$<$ S/N $<$4. The dotted ellipse in each panel indicates the aperture adopted for producing the CO spectrum in Figure \ref{fig:sp}. Note that the colors of the contours correspond to those of the three vertical bars in Figures \ref{fig:sp} and \ref{fig:pv}b. Contour intervals are 3\sgm\ steps, starting from the 3\sgm\ level. \label{fig:chmaps}} \end{figure} \clearpage \begin{figure}[ht!] \includegraphics[angle=0,scale=.4]{f3.eps} \vspace{32truemm} \caption{Interferometric spectrum of the CO\,(6--5) emission towards the peak position of the 689\,GHz continuum in flux density, $S_\nu$, obtained by integrating the emission in $I_\nu$ over the beam solid angle. The horizontal blue, green, and red bars indicate the velocity ranges where the detection of signals is assessed either in the spectrum or in the PV diagram (Figure \ref{fig:pv}b). The continuum emission with a mean $S_\nu$ of 15\,mJy was subtracted in the spectrum shown in this figure. Here the mean value was calculated over the line-emission-free channels of the spectrum, and the RMS noise level of the spectrum in $S_\nu$ is 9.9 mJy at the velocity resolution of 14\,\kms. \label{fig:sp}} \end{figure} \clearpage \begin{figure}[hb!] 
\includegraphics[angle=0,scale=.5]{f4.eps} \vspace{0truemm} \caption{(a) PV diagram of the CO (6--5) emission towards the central region of NGC\,1068, produced with the velocity channel maps shown in Figure \ref{fig:chmaps} along the PA$=$\timeform{120D} line passing through the 689\,GHz continuum peak (see the light-blue line in Figure\,\ref{fig:guidemap}c). The upper horizontal axis indicates the position offset from the 689\,GHz continuum peak, and the right vertical axis the velocity offset with respect to the systemic velocity of the galaxy, $v_\mathrm{sys}$(LSR) $= 1127$ \kms\ \citep{GB14a}. Contours are plotted at intervals of 3\sgm\ starting from the 3\sgm\ level. The spatial resolution of \timeform{0.27"}, which corresponds to 21\,pc when projected along the slicing axis, and the velocity resolution are shown by the horizontal and vertical thick bars, respectively, at the top-left corner of the panel. The dashed rectangle indicates the area shown in panel (b). (b) Close-up view of the PV diagram towards the 689\,GHz continuum source. Contours are plotted at intervals of 1.5\sgm\ starting from the 2\sgm\ level. The dashed curves are the Keplerian rotation curves for an enclosed mass of $3\times 10^6$ \Msun, assuming a rotation velocity of 65 \kms\ at a radius of 3\,pc (see text). The vertical red, green, and blue bars at the right-hand side indicate the velocity ranges shown in Figure \ref{fig:sp}. \label{fig:pv}} \end{figure} \clearpage \begin{figure}[b!] \includegraphics[angle=0,scale=.29]{f5.eps} \vspace{0truemm} \caption{Comparison of the peak pixel position of the 689\,GHz continuum (star) and those obtained from the CO (6--5) velocity channel maps. The light-blue, yellow, and magenta squares with values of their LSR velocities show the peak pixel positions of the blue, green, and red components shown in Figure \ref{fig:sp}, respectively. Note that all the squares shown here have S/N ratios higher than 4, as shown by the filled squares in Figure \ref{fig:chmaps}. The double-colored square (light-blue and yellow) with the double labels of 1080 and 1136 indicates that the two velocity channels peak at the same position. Similarly, the light-blue filled square enclosed by the magenta square, labeled with 1094 and 1219, shows the peak positions of the two velocity components. \label{fig:pixpos_summary}} \end{figure} \clearpage \begin{figure}[ht!] \includegraphics[angle=0,scale=.49]{f6.eps} \vspace{0truemm} \caption{Comparisons of the star formation properties of the ``massive dense gas cloud" (star) with galactic and extragalactic clouds in the diagrams of (a) star formation rate (SFR) vs. gas mass, $M_\mathrm{gas}$, and (b) star formation rate density $\dot{\Sigma}_\mathrm{g}$ (\Msun\ yr$^{-1}$ kpc$^{-2}$) vs. surface density of gas, $\Sigma_\mathrm{g}$ (\Msun\ pc$^{-2}$). The plots of (a) and (b) are taken from \citet{Lada12} and \citet{KM05}, respectively, with permission of the AAS. The percentages labeled with the dashed lines in (a) indicate the dense gas fraction, $f_\mathrm{d}$ [see \citet{Lada12} for details]. The dashed and solid lines in (b) are the best-fit curve of \citet{K98} and the model curve of \citet{KM05} [see \citet{KM05} for details]. 
\label{fig:SFR}} \end{figure} \clearpage \begin{center} \begin{table} \tbl{Properties of the ``massive dense gas cloud" in the nuclear region of NGC\,1068}{% \begin{tabular}{@{}cc@{\qquad}ccc@{}} \hline\noalign{\vskip3pt} \multicolumn{1}{c}{Property} & Symbol & Value & \multicolumn{1}{c}{Unit} & Section \\ [2pt] \hline\noalign{\vskip3pt} R.A. & $\alpha_\mathrm{J2000}$ & 02:42:40.714 & h:m:s & \ref{ss:ContPeak} \\ Decl. & $\delta_\mathrm{J2000}$ & $-00$:00:47.79 & \timeform{D}: \timeform{'}: \timeform{"} & \ref{ss:ContPeak} \\ Size & 2\Reff\ & $\sim\,5$ & pc & \ref{ss:DP}, \ref{ss:SFandGproperty} \\ Temperature\footnotemark[$*$] & $T_\mathrm{d},\,T_\mathrm{gas}$ & 50 -- 70 & K & \ref{ss:SFandGproperty} \\ Gas plus dust mass & $M_\mathrm{gas}$ & $(5\pm 3)\times 10^5$ & \Msun\ & \ref{ss:SFandGproperty} \\ Column density & $\langle N_\mathrm{H_2}\rangle$ & $(7\pm 2)\times 10^{23}$ & cm$^{-2}$ & \ref{ss:SFandGproperty} \\ Surface gas density & $\log\,\Sigma_\mathrm{g}$ & $ 3.9\pm 0.2$ & \Msun\ pc$^{-2}$ & \ref{ss:nature} \\ Volume density & $n_\mathrm{H_2}$ & $\sim 1\times 10^5$ & cm$^{-3}$ & \ref{ss:SFandGproperty} \\ Bolometric luminosity & $L_\mathrm{bol}$ & (0.4 -- 4)$\times 10^8$ & \Lsun\ & \ref{ss:SFandGproperty} \\ Stellar mass & $M_\ast$ & a few $\times 10^4$ & \Msun\ & \ref{ss:SFandGproperty} \\ Star formation rate & SFR & (0.4 -- 3.2)$\times 10^{-1}$ & \Msun\ yr$^{-1}$ & \ref{ss:SFandGproperty} \\ Star formation rate density & $\log\,\dot{\Sigma}_\mathrm{g}$ & $2.9\pm 0.2$ & \Msun\ yr$^{-1}$ kpc$^{-2}$& \ref{ss:nature} \\ (Mach number)$^2$ & $M_\mathrm{vir}/M_\mathrm{vir}^\mathrm{thm}$ & (2 -- 3)$\times 10^3$ & $\cdot\cdot\cdot$ & \ref{ss:SFandGproperty}, \ref{ss:origin}\\ \hline\noalign{\vskip3pt} \end{tabular}} \label{tbl:Ps} \begin{tabnote} \hangindent6pt\noindent \hbox to6pt{\footnotemark[$*$]\hss}\unskip% Assumption. \end{tabnote} \end{table} \end{center} \clearpage
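As an order-of-magnitude check of the mean H$_2$ number density quoted in the conclusions and in Table \ref{tbl:Ps}, the sketch below treats the cloud as a uniform sphere with an assumed gas mass of $\sim 5\times 10^{5}$ \Msun\ and a radius of a few pc, adopting an assumed mean mass of 2.8\,$m_{\rm H}$ per H$_2$ molecule to account for helium; the published estimates rest on a more detailed analysis, so the numbers here are purely illustrative.
\begin{verbatim}
# Order-of-magnitude sketch of the mean H2 number density of the "massive
# dense gas cloud": a uniform sphere of gas mass M_gas and radius R, with an
# assumed mean mass per H2 molecule of 2.8 m_H (H2 plus helium).  The input
# values are representative of the numbers quoted above, but the published
# estimates rest on a more detailed analysis -- this is illustrative only.

import math

MSUN_G = 1.989e33     # g
PC_CM = 3.086e18      # cm
M_H_G = 1.673e-24     # g
MU_PER_H2 = 2.8       # assumed mean mass per H2 molecule in units of m_H

def mean_n_h2(m_gas_msun: float, radius_pc: float) -> float:
    """Mean H2 number density (cm^-3) of a uniform sphere."""
    volume = 4.0 / 3.0 * math.pi * (radius_pc * PC_CM) ** 3
    return m_gas_msun * MSUN_G / (MU_PER_H2 * M_H_G * volume)

if __name__ == "__main__":
    for radius in (2.5, 5.0):   # pc; roughly the range implied by the text and table
        print(f"R = {radius} pc  ->  <n_H2> ~ {mean_n_h2(5e5, radius):.1e} cm^-3")
\end{verbatim}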
16
9
1609.02400
1609
1609.04474.txt
We study the statistical properties of synchrotron polarization emitted from media with magnetohydrodynamic (MHD) turbulence. We use both synthetic data and MHD turbulence simulation data for our studies. We obtain the spatial spectrum of synchrotron polarization, and its derivative with respect to wavelength, arising from both synchrotron radiation and Faraday rotation fluctuations. In particular, we investigate how the spectrum changes with frequency. We find that our simulations agree with the theoretical predictions in Lazarian \& Pogosyan (2016). We conclude that the spectrum of synchrotron polarization and its derivative can be very informative tools for obtaining detailed information about the statistical properties of MHD turbulence from radio observations of diffuse synchrotron polarization. In particular, they are useful for recovering the statistics of the turbulent magnetic field as well as the turbulent electron density. We also simulate interferometric observations that incorporate the effects of noise and finite telescope beam size, and demonstrate how we recover the statistics of the underlying MHD turbulence.
\label{sect:intro} Turbulence with an embedded magnetic field is present almost everywhere in the universe, on a wide variety of scales, for example in the interstellar medium (Elmegreen \& Falgarone 1996; Elmegreen \& Scalo 2004) and the intracluster medium (Ensslin \& Vogt 2006; Lazarian 2006). Substantial theoretical progress in relating MHD turbulence to astrophysical processes has been made. The areas include star formation (Larson 1981; McKee \& Tan 2002; Elmegreen 2002; Mac Low \& Klessen 2004; Ballesteros-Paredes et al. 2007; McKee \& Ostriker 2007), accretion disks (Balbus \& Hawley 2002), the solar wind (Hartman \& McGregor 1980; Podesta 2006; Wicks et al. 2012), magnetic reconnection (Lazarian \& Vishniac 1999; Lazarian et al. 2015) and cosmic rays (Schlickeiser 2002). Turbulence is a chaotic phenomenon. However, it allows for a very simple statistical description. The landmark achievements include the famous Kolmogorov statistical theory (Kolmogorov 1941) as well as the Goldreich \& Sridhar MHD turbulence theory (Goldreich \& Sridhar 1995). The latter is the theory relevant to most magnetized astrophysical fluids, including the magnetic fields responsible for most of the Galactic and extragalactic synchrotron emission\footnote{We note parenthetically that the first attempts to formulate the MHD turbulence theory can be traced back to the classical works of Iroshnikov (1964) and Kraichnan (1968). Later advances include Montgomery \& Turner (1981), Shebalin et al. (1983), and Higdon (1984). For further advancements of the theory and its testing one can refer to a number of papers that include Lazarian \& Vishniac (1999), Cho \& Vishniac (2000), Maron \& Goldreich (2001), Cho et al. (2002). The extension of MHD turbulence theory to compressible turbulence can be found in Lithwick \& Goldreich (2001), Cho \& Lazarian (2002a,b, 2003), and Kowal \& Lazarian (2007). Recent reviews on the subject include Brandenburg \& Lazarian (2013) and Beresnyak \& Lazarian (2015).}. As no in situ measurements of turbulence are possible beyond the very limited volume of the interplanetary medium and the solar wind, it is challenging to obtain turbulence statistics from observations. This area has been a focus of intensive theoretical and observational research for a number of decades, with significant progress achieved recently (Munch 1958; Munch \& Wheelon 1958; Burkhart et al. 2012; Brunt \& Heyer 2013; Chepurnov et al. 2010, 2015). Many studies have focused on velocity field statistics (see Lazarian 2009 for a review and references therein). However, for studies of MHD turbulence it is even more important to know the statistics of the complementary measure, namely, the magnetic field. In this paper, we discuss how to obtain the statistical properties of the magnetic field from observations of synchrotron polarization. The simplest possible model of turbulence is the Kolmogorov phenomenology, which provides a simple scaling relation for both the incompressible and the mildly compressible hydrodynamic turbulence cascade (Kolmogorov 1941, 1962). Suppose that energy is injected at a scale $L$, called the energy injection scale or the outer scale of turbulence. Then, the injected energy cascades to smaller scales with negligible energy losses and reaches the dissipation scale. The range between the energy injection scale and the dissipation scale is called the inertial range, and the Kolmogorov model predicts a three-dimensional (3D) power spectrum $P_{3D}(\textbf{k})=P_{3D}(k) \propto k^{-11/3}$ in the inertial range.
Here \textbf{k} is the wave-vector in 3D space, i.e., $\textbf{k}=(k_x,k_y,k_z)$, and $k=\sqrt{k_x^2+k_y^2+k_z^2}$. Many astrophysical quantities exhibit 3D power spectra with power-law indices close to $-11/3$ (Armstrong et al. 1995; Leamon et al. 1998; Chepurnov \& Lazarian 2009; Lazarian et al. 2002), corresponding to the Kolmogorov spectrum. If a quantity $s$ has a 3D power spectrum $k^{m}$ and we observe the quantity $S$ that is integrated along the line of sight (LOS), i.e.,~$S=\int s dl$, then the quantity $S$ has a 2D power spectrum of $P_{2D}(\textbf{K})=P_{2D}(K) \propto K^{m}$ (i.e.,~$|\tilde{S}_{\textbf{K}}|^2 \propto K^{m}$), where $\tilde{S}_{\textbf{K}}$ is the 2D Fourier transform of $S$, and \textbf{K} is the wave-vector in 2D space. If the LOS is along the z-direction, then $\textbf{K}=(k_x,k_y)$ and $K=\left|\textbf{K}\right|=\sqrt{k_x^2+k_y^2}$. In this case, the 1D spectrum for the 2D data, $E_{2D}(K)$ (see Appendix B for the definition), is proportional to $K^{m+1}$. When the eddy motions perturb magnetic field lines and produce magnetic fluctuations, the perturbations leave imprints of the turbulence statistics on the magnetic field. Therefore, we can study turbulence by observing the statistics of the magnetic field. One of the easiest ways to estimate the magnetic field direction and strength in the plane perpendicular to the LOS (i.e., the plane of the sky) is via the polarization of synchrotron radiation. Relativistic electrons interact with the magnetic field and emit synchrotron radiation, which is sensitive to both the strength and the direction of the magnetic field. The total intensity of synchrotron emission depends on the number density of electrons and the strength of the magnetic field perpendicular to the LOS: $I \propto \int N_{0}B_{\perp}^{(p +1)/2}\omega ^{-(p-1)/2} dz$, where the LOS is along the $z$ direction, $\omega$ is the gyro-frequency, and $p$ is the power-law index of the electron energy distribution. In addition, since the synchrotron radiation is linearly polarized and the direction of polarization is perpendicular to the plane-of-the-sky magnetic field, we can infer the statistics of the plane-of-the-sky magnetic field from observations of synchrotron polarization. However, Faraday rotation of the direction of polarization makes the interpretation of synchrotron polarization more complicated. Faraday rotation can be obtained from measurements of the polarization angle $\chi$ at several wavelengths and is quantified by the rotation measure (RM), defined as the integral along the LOS of the electron density times the LOS magnetic field strength. Since polarized synchrotron emission and Faraday rotation are related to the magnetic field components perpendicular and parallel to the LOS, respectively, they can yield information about the corresponding components of the magnetic field in the region. In this paper, we concentrate on the statistical properties of synchrotron polarization. The spectrum of polarized synchrotron emission is of great importance for the understanding of many astrophysical phenomena (see Beck 2015). The observed spectra of synchrotron emission and polarization reveal a range of power-law spectra (de Oliveira-Costa et al. 2003). It is clear that the spectrum of polarized synchrotron emission reflects that of the magnetic fluctuations. This was usually demonstrated for a particular index of Galactic cosmic rays, namely $\gamma~(\equiv (p+1)/2)=2$, for which the synchrotron signal is proportional to the magnetic field squared.
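As a concrete illustration of the relation between an LOS-integrated map and its ring-integrated 1D spectrum, the following Python sketch generates a 2D Gaussian random field with a prescribed power spectrum $P_{2D}(K)\propto K^{m}$ and measures $E_{2D}(K)$ by summing the Fourier power in rings of unit width in $K$. This is only an illustrative example, not the pipeline used in this paper: the field generator, the unit-width $K$ bins, and the fitting range are arbitrary choices, and the ring summation is assumed to correspond to the definition of $E_{2D}(K)$ given in Appendix B. For $m=-11/3$ the recovered slope should be close to $m+1=-8/3$.
\begin{verbatim}
import numpy as np

# Illustrative sketch: build a 2D Gaussian random field S with a prescribed
# power spectrum P_2D(K) ~ K**m (m = -11/3, the slope expected for a
# LOS-integrated Kolmogorov field), then recover the ring-integrated 1D
# spectrum E_2D(K), which should scale as K**(m+1).
N = 512
m = -11.0 / 3.0

k = np.fft.fftfreq(N) * N
KX, KY = np.meshgrid(k, k, indexing="ij")
K = np.sqrt(KX**2 + KY**2)
K[0, 0] = 1.0                      # avoid division by zero at the DC mode

amp = K ** (m / 2.0)               # so that <|S_K|^2> ~ K**m
amp[0, 0] = 0.0                    # zero-mean map
phase = np.exp(2j * np.pi * np.random.rand(N, N))
S = np.fft.ifft2(amp * phase).real

# Ring-integrated 1D spectrum: sum of |S_K|^2 over rings of unit width in K
power = np.abs(np.fft.fft2(S)) ** 2
kbin = np.arange(1, N // 2)
E2D = np.array([power[(K >= x) & (K < x + 1)].sum() for x in kbin])

slope = np.polyfit(np.log10(kbin[4:100]), np.log10(E2D[4:100]), 1)[0]
print("measured E_2D slope = %.2f (expected %.2f)" % (slope, m + 1.0))
\end{verbatim}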
More recently, the statistics of fluctuations for arbitrary $\gamma$ were obtained in Lazarian \& Pogosyan (2012; hereinafter LP12), where it was shown how a change of $\gamma$ affects the spectral amplitude but leaves the spectral slope of the fluctuations unchanged. This opened the possibility of a statistical description of synchrotron emission for arbitrary $\gamma$\footnote{The numerical study in Herron et al. (2015) successfully tested the predictions in LP12 for fluctuating synchrotron intensities.}. Therefore, on the basis of this finding, we can say that, without Faraday rotation, the spectrum of polarized synchrotron emission should reveal the spectrum of the plane-of-the-sky magnetic field integrated along the LOS for any power-law distribution of Galactic cosmic-ray electrons. However, it was not clear how the observed spectrum of synchrotron polarization and the underlying magnetic spectrum are related in the presence of Faraday rotation. This was the focus of the analytical study in Lazarian \& Pogosyan (2016; henceforth LP16). That study invoked a number of inevitable simplifying approximations, and it is therefore important to test the LP16 predictions numerically. With the ability to recover the statistics of the underlying magnetic field as well as the underlying electron density, it is possible to use the wealth of synchrotron polarization data to understand the sources and sinks of turbulent energy in the Milky Way as well as in other galaxies and even clusters of galaxies. This research can help to constrain the driving of turbulence and to understand how energy is injected on large scales and transferred to smaller scales. Furthermore, our understanding of the spectrum can make it possible to produce a precise polarization map arising from magnetized turbulence and to remove the synchrotron foreground in future CMB polarization observations. We expect that the high sensitivity of new-generation telescopes, e.g., the Square Kilometre Array (SKA) and the LOw Frequency ARray (LOFAR), will provide a wealth of information for mapping synchrotron polarization (Beck \& Wielebinski 2013). For example, cosmic-ray electrons with relatively low energies ($\sim$GeV) originating from supernova remnants in the Galactic disk generate synchrotron emission at low frequencies. LOFAR can trace their propagation and evolution at low frequencies, which result in fluctuations of polarization, and relate them to the properties of the turbulent magnetic field. Moreover, observational facilities such as the VLA and the Australian Square Kilometre Array Pathfinder (ASKAP) can produce sensitive polarization images of the entire sky. The Polarization Sky Survey of the Universe's Magnetism (POSSUM; Gaensler 2010), conducted with ASKAP, is an ongoing project on Faraday structure determination covering a frequency range from 1100\,MHz to 1400\,MHz (Sun et al. 2015). In the future we will be able to reconstruct the polarization spectra of synchrotron emission and Faraday rotation by comparison with those observations, and to obtain detailed magnetic field statistics from polarization observations with the SKA at both lower and higher frequencies in the Milky Way and the intracluster medium. In this paper, we investigate the spectral behavior of polarized synchrotron fluctuations in the presence of Faraday rotation. Since the effect of Faraday rotation is proportional to $\lambda^2$, where $\lambda$ is the wavelength, we show how the spectrum changes as the wavelength increases.
We describe our numerical methods in Section \ref{sect:numeric}. We present our results in Section \ref{sect:results}, a discussion in Section \ref{sect:discussion}, and a summary in Section \ref{sect:summary}. In Section \ref{sect:results}, we also include calculations for interferometric observations. %===================================================================== % Numerical Method %=====================================================================
\label{sect:discussion} \subsection{Numerical tests of theory} \subsubsection{Polarization from spatially-coincident synchrotron emission and Faraday rotation regions} In this paper we successfully tested the prediction of LP16 that, using synchrotron polarization fluctuations, it is possible to recover both magnetic field and density statistics. We tested the predictions for both the spectrum of polarization $P$ and its derivative $dP/d\lambda^2$, and showed that these statistics are complementary. In particular, the latter focuses more on the fluctuations of Faraday rotation.\footnote{The measure $dP/d\lambda^{2}$ is shown in LP16 to recover the statistics of Faraday rotation in the case of weak Faraday rotation. In this limit LP16 showed that the correlations of polarization recover only the statistics of the underlying magnetic field responsible for the synchrotron fluctuations.} For our testing we used synthetic data whose spectrum we varied to test the theoretical predictions. We explored the effects of varying the wavelength of the measurements on our studies of turbulence statistics. We found a number of numerical effects related to the finite numerical resolution of our synthetic turbulent datasets. In particular, we found that when $\lambda^{2} \sim \frac{K_{max}}{2\pi\left<n_{e}\left| \textbf{B}_{\parallel}\right|\right>}$, small-scale structures become decorrelated and depolarization sets in, so that the spectrum of polarization becomes proportional to $K$. We tested the theoretical predictions in the most complicated case considered in LP16, namely, when the volume that emits polarized synchrotron radiation also provides Faraday rotation due to the presence of electrons. The cases when only one effect is present in the turbulent volume are limiting cases of the configuration that we tested\footnote{The fact that these cases are simpler does not mean that they are not interesting. In a variety of astrophysically relevant settings the different regions may be dominated either by emission or by Faraday rotation rather than by their mixture. For instance, in a recent paper Xu \& Zhang (2016) successfully used the LP16 theory to re-analyze Faraday polarization data. They provided an interpretation of the data that is different from earlier interpretations based on the ad hoc approximation of a thin Faraday screen.}. In addition, we tested the prediction in the earlier paper LP12 that the slope of the synchrotron spectrum does not depend on the spectral index $\gamma$ of the relativistic electrons. Similar studies for synchrotron intensities were performed in Herron et al. (2016), while our study deals with the fluctuations of synchrotron polarization. This testing is important because $\gamma$ varies among astrophysical objects. Our results show that, using the expressions in LP12, one can study turbulence in synchrotron emitting volumes for arbitrary $\gamma$. \subsubsection{Polarization from spatially-separated synchrotron emission and Faraday rotation regions} \begin{figure}[th] \centering \includegraphics[scale=.52,angle=0]{pw2d_wava_tworegions} \caption{\textit{Ring-integrated} 1D spectrum $E_{2D}(K)$ arising from separated regions of polarized synchrotron emission and Faraday rotation. The 3D power spectrum $P_{3D}(k)$ is proportional to $k^{-11/3}$ for both the electron number density and \textbf{B} in the foreground medium that produces the Faraday rotation. Different curves correspond to different wavelengths.
} \label{fig:tworegion} \end{figure} In the main body of this paper, we did not consider the case in which the synchrotron emitting region and the Faraday rotation region are spatially separated. However, there can be situations in which synchrotron emission originates in one distinct region while Faraday rotation acts on that radiation in another region. Figure \ref{fig:tworegion} shows the spectra arising from separated regions of polarized synchrotron emission and Faraday rotation. We assume that the background polarized synchrotron radiation is wavelength-independent and passes through a foreground medium that produces Faraday rotation. To obtain the background polarized radiation, we first generate 3D density and magnetic field on a grid of $512^{3}$ points (see section \ref{sect:simulation} for details) and calculate the polarized synchrotron emission without Faraday rotation. The result of the calculation is polarized radiation on a grid of $512^{2}$ points, which is used as the background polarized synchrotron radiation. The spectrum of this radiation follows the Kolmogorov one (see the black solid curve). We also generate the foreground density and magnetic field on a grid of $512^{3}$ points using different seeds for the random numbers. We set $B_{0}=1$ along the LOS in the foreground medium, so that Faraday rotation is dominated by the mean field. The spectra of the foreground magnetic field and density also follow Kolmogorov ones. We can clearly see depolarization in Figure \ref{fig:tworegion}. In fact, the overall behavior of the spectra in Figure \ref{fig:tworegion} is very similar to that in Figure \ref{fig:synFR}(a). Therefore, roughly speaking, the results for the spatially-coincident case are also applicable to the case in which the synchrotron emitting region and the Faraday rotation region are spatially separated. Note, however, that the observed spectrum of polarized synchrotron emission arising from spatially-separated regions may be more complicated. For example, the width of the Faraday rotation region may also affect the observed spectrum (see LP16). \subsection{Importance of synchrotron studies} Magnetic turbulence is essential for key astrophysical processes. Thus a number of techniques have been suggested to study it. While some of them are purely empirical, i.e., based on the comparison of synthetic observations and the underlying turbulence within numerical simulations (see Rosolowsky et al. 1999, Padoan et al. 2001, Brunt et al. 2003, Heyer et al. 2008, Gaensler et al. 2011, Toefflemire et al. 2011, Burkhart et al. 2012, Brunt \& Heyer 2013, Burkhart \& Lazarian 2015), others are based on the theoretical description of turbulence statistics. For instance, in a series of papers by Lazarian \& Pogosyan (2000, 2004, 2006, 2008) the statistics of intensity fluctuations within Position-Position-Velocity (PPV) data cubes was described for Doppler-shifted spectral lines from turbulent volumes. These studies provide a way to recover the statistics of density and velocity (see Lazarian 2009 for a review, as well as Padoan et al. 2009, Chepurnov et al. 2010, 2015). The magnetic field statistics provide the complementary essential piece of information, and the studies in LP16 were aimed at obtaining a theory-motivated way to study magnetic turbulence from observations. Our successful testing of some of the suggested techniques paves the way to the application of these techniques to observational data. Our study is complementary to that in a recent paper by Zhang et al. (2016), where other analytical expressions from LP16 were tested.
Indeed, Zhang et al. (2016) tested the fluctuations of the polarization variance with wavelength. The input data for such studies are the polarized synchrotron radiation collected along a single line of sight as the wavelength is changed. In contrast, in this paper we studied the statistics of two-point correlations at a fixed wavelength. Studying synchrotron variations is important not only for astrophysical applications, e.g., for a better understanding of cosmic-ray propagation, heat transport, star formation, etc., but also for observational cosmology. Indeed, synchrotron polarization fluctuations present an important foreground for the search for the enigmatic B-modes of cosmological origin. Thus our tests of how these fluctuations vary with wavelength, as well as with variations of $\gamma$, are of prime relevance to such a search. \subsection{Complementary ways of studying turbulence} Our present work opens ways of studying turbulence using synchrotron polarization. Such studies are complementary to spectroscopic Doppler-shift studies of fluctuations. The latter can be performed using Doppler-shifted lines with the Velocity Channel Analysis (VCA, Lazarian \& Pogosyan 2000, 2004; Kandel, Lazarian \& Pogosyan 2016a), the Velocity Coordinate Spectrum (VCS, Lazarian \& Pogosyan 2006, 2008) and Velocity Centroids (Lazarian \& Esquivel 2003; Esquivel \& Lazarian 2005, 2010, Burkhart et al. 2014; Kandel, Lazarian \& Pogosyan 2016; Cho \& Yoo 2016). Combining the techniques, it is possible to see how the turbulence varies from one medium, which can be sampled by spectral lines, to another, which can be sampled by synchrotron fluctuations. This can answer important questions related to whether the turbulence in the interstellar medium forms one big cascade or whether different phases of the ISM maintain their individual turbulent cascades. %============================================================== % Summary %============================================================== We successfully tested the analytical expressions in LP16 and showed that the techniques suggested there can be used to analyze observed fluctuations of polarized synchrotron radiation in order to recover the statistics of the underlying magnetic turbulence. Our numerical results testify that such a study can be performed \begin{itemize} \item for an arbitrary spectral index of the relativistic electrons, \item in the presence of Faraday rotation and depolarization caused by the turbulent magnetic field, \item in settings where only Faraday rotation is responsible for the polarization fluctuations, \item in the presence of the effects of finite beam size and noise, and \item in the case where the data are obtained with an interferometer with measurements performed for just a few baselines. \end{itemize} We believe that our present study paves the way for the successful use of the LP16 techniques with observational data. %============================================================ % acknowledgements %============================================================
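To make the spatially-separated (two-region) configuration discussed above more concrete, the following two-dimensional toy calculation (not the $512^{3}$ MHD setup used in this paper) multiplies a background complex polarization map $P=Q+iU$ by the Faraday-screen factor $\exp(2i\lambda^{2}\Phi)$ of a foreground medium and measures the ring-integrated spectrum $E_{2D}(K)$ at several wavelengths. The field amplitudes, the mean Faraday depth, and the $\lambda^{2}$ values are arbitrary code units; qualitatively, the large-scale polarization power is scrambled and the spectrum flattens as $\lambda^{2}$ grows, mirroring the depolarization seen in Figure \ref{fig:tworegion}.
\begin{verbatim}
import numpy as np

# Toy model of a background polarized map seen through a foreground Faraday
# screen:  P_obs = P_bg * exp(2i * lambda^2 * Phi).  Illustrative only.

def gaussian_field(N, slope, seed):
    """Unit-variance 2D Gaussian random field with power spectrum ~ K**slope."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(N) * N
    K = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    K[0, 0] = 1.0
    amp = K ** (slope / 2.0)
    amp[0, 0] = 0.0
    f = np.fft.ifft2(amp * np.exp(2j * np.pi * rng.random((N, N)))).real
    return f / f.std()

def ring_spectrum(m):
    """Ring-integrated 1D spectrum E_2D(K) of a (complex) 2D map."""
    N = m.shape[0]
    k = np.fft.fftfreq(N) * N
    K = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    power = np.abs(np.fft.fft2(m)) ** 2
    kbin = np.arange(1, N // 2)
    return kbin, np.array([power[(K >= x) & (K < x + 1)].sum() for x in kbin])

N = 512
P_bg = gaussian_field(N, -11/3, 1) + 1j * gaussian_field(N, -11/3, 2)
Phi = 1.0 + gaussian_field(N, -11/3, 3)   # mean Faraday depth + fluctuations

for lam2 in [0.0, 1.0, 4.0]:              # lambda^2 in arbitrary units
    kbin, E2D = ring_spectrum(P_bg * np.exp(2j * lam2 * Phi))
    slope = np.polyfit(np.log10(kbin[2:60]), np.log10(E2D[2:60]), 1)[0]
    print("lambda^2 = %.1f : E_2D slope = %+.2f, power at K<10 = %.3e"
          % (lam2, slope, E2D[:9].sum()))
\end{verbatim}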
16
9
1609.04474
1609
1609.00319_arXiv.txt
We investigate the orbital and rotational evolution of the CoRoT-7 two-planet system, assuming that the innermost planet behaves like a Maxwell body. We numerically solve the coupled differential equations governing the instantaneous deformation of the inner planet together with the orbital motion of the system. We show that, depending on the relaxation time for the deformation of the planet, the orbital evolution has two distinct behaviours: for relaxation times shorter than the orbital period, we reproduce the results of classical tidal theories, for which the eccentricity is always damped; for longer relaxation times, however, the eccentricity of the inner orbit is secularly excited and can grow to high values. This mechanism provides an explanation for the present high eccentricity observed for CoRoT-7\,b, as well as for other close-in super-Earths in multiple planetary systems.
Close-in planets undergo tidal interactions with the central star, which shrink and circularize the orbits on time-scales that depend on the orbital distances, but also on the physical properties of the interacting bodies. The rotation of short-period planets is also modified and reaches a stationary value on time-scales usually much shorter than those of the orbital evolution \citep[e.g.,][]{Hut_1981, Ferraz-Mello_etal_2008, Correia_2009, Rodriguez_etal_2011}. The tidal interaction ultimately results in synchronous motion (the orbital and rotation periods become equal), \bfx{which is the only possible} state when the orbit is circularized \citep[e.g.,][]{Hut_1981, Ferraz-Mello_etal_2008}. However, as long as the orbit has some eccentricity, the rotation can \bfx{stay} in non-synchronous configurations. In general, planets with a \bfx{primarily rocky} composition have a permanent equatorial deformation or frozen-in figure \citep[e.g.,][]{Goldreich_Peale_1966, Greenberg_Weidenschilling_1984}, which contributes a conservative restoring torque \bfx{on their figures}. In the context of the two-body problem, the gravitational interaction of an asymmetric planet with the star drives the planet rotation into different regimes of motion, including oscillations around exact spin-orbit resonances (SOR). When dissipative effects are taken into account, the oscillations are damped and the planet rotation can be trapped in exact resonance \citep[e.g.,][]{Goldreich_Peale_1966, Correia_Laskar_2009}. Although the orbital and spin evolution are connected through the conservation of the total angular momentum, they are commonly studied separately due to the different time-scales involved in their evolution. However, it has been shown that for close-in planets the tidal evolution of the coupled orbit and rotation is important and should not be dissociated \citep{Correia_etal_2012, Correia_etal_2013, Rodriguez_etal_2012, Rodriguez_etal_2013, Greenberg_etal_2013}. All studies cited above assumed simplified tidal models, usually using constant or linear tidal deformations \citep[e.g.,][]{Darwin_1880, Mignard_1979}, for which the tidal dissipation is constant or proportional to the corresponding frequency of the perturbation. A more realistic approach to deal with the dependence of the phase lag on the tidal frequency is to assume a viscoelastic rheology \citep[e.g.,][]{Efroimsky_2012, Remus_etal_2012b, Ferraz-Mello_2013, Correia_etal_2014}. Such rheologies have been shown to reproduce the main features of tidal dissipation \citep[for a review of the main viscoelastic models see][]{Henning_etal_2009}. One of the simplest models of this kind is to consider that the planet behaves like a Maxwell material\footnote{The Maxwell material is represented by a purely viscous damper and a purely elastic spring connected in series \citep[e.g.,][]{Turcotte_Schubert_2002}.}. In this case, the planet can respond as an elastic solid or as a viscous fluid, depending on the frequency of the perturbation. \citet{Correia_etal_2014} studied the orbital and rotational evolution of a single close-in planet using a Maxwell viscoelastic rheology. However, instead of decomposing the tidal potential into an infinite sum of harmonics of the tidal frequency \citep[e.g.,][]{Kaula_1964, Mathis_Poncin-Lafitte_2009, Efroimsky_2012}, they compute the instantaneous deformation of the planet using a differential equation for its gravity field coefficients.
They have shown that, when the relaxation time of the deformation is larger than the orbital period (which is likely the case for rocky planets), spin-orbit equilibria arise naturally at half-integers of the mean motion, without the need to take into account the permanent equatorial deformation. The method of \citet{Correia_etal_2014} has several advantages for studying the tidal evolution of planetary systems: 1) it works for any kind of perturbation, even non-periodic ones (such as chaotic motions or transient events); 2) the model is valid for any eccentricity and inclination, so we do not need to truncate the equations of motion; 3) it simultaneously reproduces the deformation of and the dissipation within the planet. Therefore, this model seems to be the most appropriate one to also study the impact of gravitational perturbations from companion bodies on the orbit of the inner planet. Indeed, we show here that the eccentricity of the inner body can increase due to a feedback mechanism between the tidal deformation of the planet and the orbital forcing. In this paper we provide a simple model for the coupled orbital and spin evolution of an exoplanet with a companion (Sect.\,\ref{secmodel}), and apply it to the CoRoT-7 planetary system (Sect.\,\ref{corot}). We then give an explanation for the \bfx{presently observed non-zero} eccentricity values (Sect.\,\ref{pumping}), and derive some conclusions (Sect.\,\ref{disc}).
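To illustrate the role played by the relaxation time discussed above, the following one-dimensional caricature (it is not the full set of equations used in this paper) integrates a Maxwell-type relaxation law, $\dot{Z}=(Z_\mathrm{e}(t)-Z)/\tau$, for a deformation variable $Z$ forced by a periodic equilibrium figure $Z_\mathrm{e}(t)=\cos(nt)$. For $n\tau\ll1$ the deformation follows the forcing almost instantaneously, as in classical tidal models, whereas for $n\tau\gg1$ the response is reduced in amplitude and strongly lagged, which is the regime relevant to the eccentricity pumping studied here. The integrator, the forcing, and the parameter values are illustrative choices only.
\begin{verbatim}
import numpy as np

# Minimal caricature of a Maxwell-type relaxation law for a deformation
# variable Z forced by a periodic equilibrium Z_e(t) = cos(n t):
#     dZ/dt = (Z_e(t) - Z) / tau .
# The steady-state response has amplitude 1/sqrt(1+(n tau)^2) and phase lag
# arctan(n tau); the numerical integration below recovers both.

def response(tau, n=2.0 * np.pi, periods=50, steps_per_period=2000):
    dt = (2.0 * np.pi / n) / steps_per_period
    t = np.arange(0.0, periods * 2.0 * np.pi / n, dt)
    Z = np.zeros_like(t)
    for i in range(1, len(t)):                    # simple explicit Euler
        Ze = np.cos(n * t[i - 1])
        Z[i] = Z[i - 1] + dt * (Ze - Z[i - 1]) / tau
    sel = t > (periods - 5) * 2.0 * np.pi / n     # keep the steady state only
    a = 2.0 * np.mean(Z[sel] * np.cos(n * t[sel]))
    b = 2.0 * np.mean(Z[sel] * np.sin(n * t[sel]))
    return np.hypot(a, b), np.arctan2(b, a)

for tau in [0.01, 0.1, 1.0, 10.0]:               # in units of the forcing period
    amp, lag = response(tau)
    print("tau/P = %5.2f : amplitude = %.3f (theory %.3f), lag = %.2f rad"
          % (tau, amp, 1.0 / np.sqrt(1.0 + (2.0 * np.pi * tau)**2), lag))
\end{verbatim}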
\label{disc} In this paper we have studied the coupled orbital and spin evolution of the CoRoT-7 two-planet system using a Maxwell viscoelastic rheology for the inner planet. This rheology is characterized by a viscous relaxation time, $\tau$, that can be seen as the characteristic average time that the planet requires to achieve a new equilibrium shape after being disturbed by an external forcing. We studied the past evolution of the system adopting different values for the relaxation time of CoRoT-7\,b, ranging from a few hours up to one century ($10^{-3}-10^2$\,yr). In all situations, the spin evolves quickly until it is captured in some SOR. It then evolves through successive temporary trappings in SORs, which are progressively destabilized as the eccentricity decays. \bfx{Several works on tidal evolution assume synchronous motion for the rotation of close-in companions, as this is the natural outcome of tidal interactions. Nevertheless, for large values of the relaxation time, which is likely the case for most terrestrial planets, we note that the rotation can remain trapped in high-order SORs for tens of Myr.} We observed that there are two different regimes for the orbital evolution. For small $\tau$ values (0.01$-$0.1\,yr), the eccentricity of both orbits is rapidly damped, in agreement with previous results \citep[e.g.,][]{Ferraz-Mello_etal_2011, Rodriguez_etal_2011, Dong_Ji_2012}. However, for large $\tau$ values ($10-10^2$\,yr), the inner planet eccentricity is pumped to higher values, whereas the outer planet eccentricity is simultaneously damped due to the conservation of the orbital angular momentum. The inner orbit eccentricity pumping was already reported in previous works that used the linear model instead of the Maxwell one \citep{Correia_etal_2012, Correia_etal_2013, Greenberg_etal_2013}. In those works, the effect resulted from a forced excitation of the $J_2$ due to oscillations in the rotation rate. This mechanism works as long as the rotation is close to the pseudo-synchronous state and undergoes variations due to the eccentricity forcing \citep[see][]{Correia_2011}. Although the pseudo-synchronous state can be expected for gaseous planets, for rocky planets the spin always ends up trapped in a SOR due to the permanent equatorial deformation. Thus, for this class of planets, the pumping mechanism identified by \citet{Correia_etal_2012} does not work. The eccentricity pumping described in this paper also results from a forced excitation of the $J_2$ of the planet, but due to the tidal deformation. Indeed, the equilibrium $J_2$ has a rotational (Eq.\,(\ref{j2r})) and a tidal contribution (Eq.\,(\ref{max2})), but inside a SOR the rotational contribution is nearly constant, while the tidal one still undergoes variations due to the term in $r_1^{-3}$. The pumping effect is an important mechanism that may help to explain the non-zero eccentricity presently observed for the orbit of CoRoT-7\,b. Due to the computational cost of the numerical simulations, we were not able to perform here a large set of runs for different planetary systems. However, we have shown that, at least for the CoRoT-7 system, unexpected behaviors can occur when we take into account the coupled orbital and spin evolution.
\bfx{In particular, the non-zero eccentricities observed for many other close-in super-Earths in multiple planetary systems may be explained by similar pumping mechanisms.} Since the Maxwell model is more realistic than the constant-$Q$ and the constant time lag models, the results described in this paper provide a more accurate picture of the diversity of behaviors among planetary systems that interact by tides. Alternative viscoelastic rheologies to the Maxwell model exist, such as the Standard Anelastic Solid model \citep[e.g.,][]{Henning_etal_2009} or the Andrade model \citep[e.g.,][]{Efroimsky_2012}. These rheologies may also be able to reproduce the pumping effect on the inner orbit eccentricity. Note, however, that in order to observe the excitation of $J_2$ due to the eccentricity forcing, we need to use a time-dependent rheological law similar to expression (\ref{max1}) that allows feedback effects. \bfx{In this study we considered coplanar orbits and the spin of the planet orthogonal to the orbits (zero obliquity). Although multi-planet systems usually present low mutual inclinations of about $1^\circ$ on average \citep{Figueira_etal_2012, Tremaine_Dong_2012}, this value can be large enough to perturb the long-term evolution of the obliquity \citep{Laskar_Robutel_1993, Correia_Laskar_2003I}. Our model can be easily extended to non-planar configurations (for planets with some obliquity and evolving on inclined orbits), provided that we additionally take into account the deformation of the $C_{21}$ and $S_{21}$ gravity field coefficients in the gravitational potential, as explained in \citet{Boue_etal_2016}.}
16
9
1609.00319
1609
1609.07155_arXiv.txt
An accretion outburst onto a neutron star transient heats the neutron star's crust out of thermal equilibrium with the core. After the outburst the crust thermally relaxes toward equilibrium with the neutron star core and the surface thermal emission powers the quiescent X-ray light curve. Crust cooling models predict that thermal equilibrium of the crust will be established $\approx 1000 \, \mathrm{d}$ into quiescence. Recent observations of the cooling neutron star transient \mxb, however, suggest that the crust did not reach thermal equilibrium with the core on the predicted timescale and continued to cool after $\approx 2500 \, \mathrm{d}$ into quiescence. Because the quiescent light curve reveals successively deeper layers of the crust, the observed late time cooling of \mxb\ depends on the thermal transport in the inner crust. In particular, the observed late time cooling is consistent with a low thermal conductivity layer near the depth predicted for nuclear pasta that maintains a temperature gradient between the neutron star's inner crust and core for thousands of days into quiescence. As a result, the temperature near the crust-core boundary remains above the critical temperature for neutron superfluidity and a layer of normal neutrons forms in the inner crust. We find that the late time cooling of \mxb\ is consistent with heat release from a normal neutron layer near the crust-core boundary with a long thermal time.
\label{sec:intro} An accretion outburst onto a neutron star transient triggers non-equilibrium nuclear reactions \citep{sato1979,bisnovatyi1979} that deposit heat in the neutron star's crust \citep{haensel1990, haensel2003, haensel2008}. Accretion-driven heating brings the crust out of thermal equilibrium with the core; when accretion ceases, the crust cools toward thermal equilibrium with the core and powers the quiescent light curve \citep{brown1998,ushomirsky2001,rutledge2002}. Brown \& Cumming (2009) discussed the basic idea that observations at successively later times into quiescence probe successively deeper layers in the crust with increasingly longer thermal times. In particular, about a year into quiescence the shape of the cooling light curve is sensitive to the physics of the inner crust at mass densities greater than neutron drip $\rho \gtrsim \rho_{\rm drip} \approx 4 \times 10^{11} \, \mathrm{g \ cm^{-3}}$ \citep{page2012}. Among the modeled cooling transients, \mxb \ \citep{wijnands2003, wijnands2004, cackett2008} was thought to be unique in that its crust appeared to reestablish its long-term thermal equilibrium with the core after $\approx 1000 \, \mathrm{d}$ into quiescence \citep{brown2009}. Recent observations of \mxb \ \citep{cackett2013}, however, indicate that the crust continued to cool after $\approx 2500 \, \mathrm{d}$ to reach a new low temperature when observed $\approx 4000 \, \mathrm{d}$ into quiescence. Although the drop in count rate could be explained by a change in absorption column, for example due to a build-up of an accretion disk in the binary, it is also consistent with a drop in neutron star effective temperature. Horowitz et al.~(2015) show that the late time drop in temperature in \mxb \ could be caused by a low thermal conductivity layer at the base of the inner crust at a mass density $\rho \gtrsim 8 \times 10^{13} \, \mathrm{g \ cm^{-3}}$. The low thermal conductivity may be a consequence of nuclear pasta, which forms when nuclei are distorted into various complex shapes at high densities in the inner crust \citep{ravenhall1983,hashimoto1984,oyamatsu1993}. Nuclear pasta has been studied using quantum molecular dynamics simulations \citep{maruyama1998,watanabe2003} and semi-classical molecular dynamics simulations \citep{horowitz2004,horowitz2008, schneider2013}, but the thermal properties of nuclear pasta remain uncertain. Horowitz et al.~(2015) discovered a possible mechanism for lowering the electrical and thermal conductivity of pasta, finding spiral defects in molecular dynamics simulations of pasta that could act to scatter electrons. They demonstrate that a signature of the low conductivity pasta layer would appear in the thermal behavior of the crust, and they show that models of crust cooling in \mxb\ that include a low conductivity pasta layer can account for the observed drop in count rate. Similarly, a low electrical conductivity layer has been suggested by \citet{pons2013} to explain the puzzling cutoff in the spin period distribution of pulsars at $P\sim 10$ seconds; they suggest that this low electrical conductivity layer may be associated with a nuclear pasta phase deep in the crust. The quasi-free neutrons that coexist with nuclear pasta in the deep inner crust also impact late time crust cooling \citep{page2012}.
The critical temperature $T_c$ of the $^1$S$_0$ neutron singlet pairing gap is expected to increase from zero near neutron drip to a maximum value near $T_c \gtrsim 10^9 \, \mathrm{K}$ before decreasing again at high mass densities where the repulsive core of the neutron interaction removes the tendency to form pairs. Calculation of the critical temperature, however, is complicated by the influence of the nuclear clusters, and a wide range of predictions for $T_c(\rho)$ have been made in the literature (e.g., see the plot in Page \& Reddy 2012 and references therein). One of the uncertain aspects of the pairing gap is whether the $^1$S$_0$ gap closes before or after the crust-core transition \citep{chen1993}. If the gap closes before the crust-core transition and there is a low thermal conductivity pasta layer, a layer of normal neutrons will persist near the base of the crust where $T>T_c$, significantly increasing its heat capacity. Here we show that a normal neutron layer with a large heat capacity leaves a signature in the cooling curve at late times, and that a crust cooling model with normal neutrons gives the best fit to the quiescent cooling observed in \mxb. The months- to years-long flux decays following magnetar outbursts have also been successfully fit with crust thermal relaxation models (e.g.,~\citealt{lyubarsky2002,pons2012,scholz2014}). Many uncertainties remain, including the origin of the X-ray spectrum, the nature of the heating event that drives the outburst, and the role of other heat sources such as magnetospheric currents \citep{Beloborodov2009}. Despite this, magnetar flux decays are interesting because the decay can span a large range of luminosity, and because multiple outbursts from the same source can be studied. The outburst models typically require energy injection into the outer crust of the star, but a significant amount of energy is conducted inward to the core. Late time observations as the magnetar's crust relaxes may then probe the thermal properties of the inner crust. We investigate the role of a low thermal conductivity pasta layer and normal neutrons in cooling neutron stars in more detail in this paper. In Section~\ref{sec:model}, we outline our model of the crust cooling in \mxb, highlighting the important role of the density dependence of the neutron superfluid critical temperature near the crust-core transition. In Section 3, we discuss late time cooling in other sources, including the accreting neutron star \ks\ and the magnetar \sgr. We conclude in Section~\ref{sec:discussion}.
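As a schematic illustration of why a low thermal conductivity layer at the base of the crust prolongs the quiescent cooling, the following dimensionless toy model (it is not the crust microphysics code used in this paper) evolves heat diffusion in a slab that cools toward a cold boundary representing the core, with and without a low-diffusivity layer standing in for the pasta region. The slab geometry, the factor of 20 diffusivity contrast, and the evolution time are arbitrary choices; the point is only that the region above the low-conductivity layer stays hot far longer, maintaining a temperature gradient between the inner crust and the core.
\begin{verbatim}
import numpy as np

# Dimensionless toy: a heated slab (the crust, 0 <= x <= 1) cools toward a
# cold boundary at x = 1 (the core).  We compare a uniform diffusivity D = 1
# with a case where D is reduced by a factor of 20 in the innermost 10% of
# the slab (a stand-in for a low-conductivity "pasta" layer).

def relax(D, t_end=0.3, N=200):
    dx = 1.0 / N
    T = np.ones(N)                         # crust heated out of equilibrium
    dt = 0.2 * dx**2 / D.max()             # explicit-scheme stability limit
    Dh = 0.5 * (D[:-1] + D[1:])            # diffusivity at cell interfaces
    for _ in range(int(t_end / dt)):
        flux = np.zeros(N + 1)
        flux[1:-1] = Dh * (T[1:] - T[:-1]) / dx
        flux[-1] = D[-1] * (0.0 - T[-1]) / (0.5 * dx)   # core held at T = 0
        # flux[0] = 0: the outer boundary is taken as insulating for simplicity
        T += dt * (flux[1:] - flux[:-1]) / dx
    return T

N = 200
D_uniform = np.ones(N)
D_pasta = np.ones(N)
D_pasta[int(0.9 * N):] = 1.0 / 20.0

for label, D in [("uniform crust", D_uniform), ("with low-D layer", D_pasta)]:
    T = relax(D)
    print("%-17s : T just above the deep layer (x = 0.85) = %.2f"
          % (label, T[int(0.85 * N)]))
\end{verbatim}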
\label{sec:discussion} We have examined the late time quiescent cooling of the neutron star transient \mxb, where cooling was observed for $\approx 4000\,\mathrm{d}$ into quiescence prior to its renewed outburst activity \citep{negoro2015}. The quiescent cooling probes successively deeper layers of the neutron star's crust with increasingly longer thermal times, and the late time cooling $\gtrsim 1000\, \mathrm{days}$ into quiescence depends on the thermal transport properties of the inner crust. In particular, late time cooling in \mxb\ requires a low thermal conductivity layer with $\Qimp\ \gtrsim 20$ at mass densities $\rho \gtrsim 8 \times 10^{13} \, \mathrm{g \ cm^{-3}}$ where nuclear pasta is expected to appear \citep{horowitz2015}. The pasta layer maintains a temperature difference of $\Delta T \approx 3 \times 10^7 \, \K$ between the inner crust and core during the outburst. As a consequence, normal neutrons with a long thermal time appear at the base of the crust and cause late time cooling if the neutron singlet pairing gap closes in the crust. Without normal neutrons at the base of the crust, as is the case if the neutron singlet pairing gap closes in the core, the crust reaches thermal equilibrium with the core after $\approx 3000 \, \mathrm{d}$ and the late time cooling is removed. Page \& Reddy (2012) pointed out that differences in $T_c(\rho)$ and the resulting presence or absence of a layer of normal neutrons at the base of the crust could affect the cooling curves at late times $\approx 1000$ days into cooling. We find a much larger effect here, and on a longer timescale, because the low thermal conductivity of the nuclear pasta layer keeps the inner crust much hotter during the outburst. During quiescence, the base of the crust remains at a higher temperature than the core for $\approx 5000 \, \mathrm{days}$ (see Equation~[\ref{eq:deltaT}]). The temperature difference between the crust and core results in a slow decline of the quiescent light curve after $\gtrsim 1000 \, \mathrm{days}$, as can be seen in Figure~\ref{fig:figure1}. We also investigated the late time cooling of \ks, which was observed $\approx 14.5 \, \mathrm{yrs}$ into quiescence \citep{cackett2013}. Although the quiescent light curve in this source can be fit without a low conductivity pasta layer \citep{merritt2016}, we find a comparable fit with a $\Qimp = 20$ pasta layer and using a neutron singlet pairing gap that closes in the crust (see Figure~\ref{fig:ks1731}). That both \mxb\ and \ks\ fits prefer a pasta layer with $\Qimp=20$ suggests that the inner crust composition may be similar in accreting neutron stars regardless of their initial crust composition, as was found in a study of the accreted multi-component crust \citep{gupta2008}. We studied \sgr, a magnetar with late time observations of two outbursts. Based on the previous outburst in 2008, the source may not yet have fully thermally relaxed and could show further cooling. We investigated a low conductivity pasta region as a way to prolong the cooling, but found that the flattening of the luminosity at times $\gtrsim 1000$ days could be explained only if energy was deposited directly into the inner crust. This is because in magnetars the energy is assumed to be deposited rapidly rather than over many thermal times as in accreting neutron stars. Nevertheless, if the core temperature is low ($T_{\rm core}\lesssim 7 \times 10^7 \, \K$), variations in inner crust physics affect the light curve and should be included in models.
Furthermore, the need for energy to be deposited in the inner crust constrains models for transient magnetic energy release in magnetars (e.g.,~\citealt{Li2016,Thompson2016}), and argues against heating the crust only externally (e.g.,~\citealt{li2015}). Late time cooling in \mxb\ requires that the $^1$S$_0$ neutron singlet pairing gap close in the crust. As a result, superfluid neutrons are confined to the part of the inner crust shallower than the pasta layer, at $\rho \lesssim 8 \times 10^{13} \, \mathrm{g \ cm^{-3}}$, where $T \ll T_c$. By contrast, a recent study of pulsar glitches suggests that the neutron superfluid extends from the crust into the core continuously \citep{andersson2012}. Recent calculations of the neutron effective mass in a non-accreted ($\Qimp = 0$) crust suggest that $\mnstar \gg m_n$ at the base of the crust \citep{chamel2005,chamel2012}. In this case, a larger fraction of free neutrons is entrained in the inner crust, and the neutron superfluid must then extend into the core to supply adequate inertia for pulsar glitches \citep{andersson2012}. We note, however, that the above calculation for the neutron effective mass is likely inappropriate for the impure crust compositions found in the accreting transients studied here. Therefore, in the absence of effective mass calculations for an accreted crust, we assume $\mnstar \approx m_n$, as found in \citet{brown2013}.
16
9
1609.07155
1609
1609.09148_arXiv.txt
We develop a model for a possible origin of hard very high energy spectra from a distant blazar. In the model, both the primary photons produced in the source and the secondary photons produced outside the source contribute to the observed high energy $\gamma$-ray emission. That is, the primary photons are produced in the source through the synchrotron self-Compton (SSC) process, and the secondary photons are produced outside the source through the interaction of high energy protons with the background photons along the line of sight. We apply the model to a characteristic case, the very high energy (VHE) $\gamma$-ray emission of the distant blazar 1ES 1101-232. Assuming suitable electron and proton spectra, we obtain excellent fits to the observed spectra of 1ES 1101-232. This indicates that the surprisingly low attenuation of the high energy $\gamma$-rays, especially the shape of the very high energy tail of the observed spectra, can be explained by secondary $\gamma$-rays produced in interactions of cosmic-ray protons with background photons in intergalactic space.
\label{sec:intro} Blazars, a special class of active galactic nuclei (AGN) whose jet axis is closely aligned with the observer's line of sight, exhibit continuum emission that arises from the jet and is dominated by nonthermal radiation as well as by rapid, large-amplitude variability (Urry \& Padovani 1995). The broad spectral energy distribution (SED), from the radio to the $\gamma$-ray band, is dominated by two components, which appear as two humps. It is widely accepted that the first hump is produced by electron synchrotron radiation, with peaks ranging from the infrared-optical up to the X-ray regime for different blazars (Urry 1998). The second hump, which peaks in the GeV to TeV $\gamma$-ray band, is probably produced by inverse Compton scattering of the relativistic electrons either on the synchrotron photons (Maraschi et al. 1992) or on some other photon populations (Dermer et al. 1993; Sikora et al. 1994). As an alternative, still open possibility, the high energy $\gamma$-rays may be produced by mesons and leptons through cascades initiated by proton-proton or proton-photon interactions (e.g. Mannheim \& Biermann 1992; Mannheim 1993; Pohl \& Schlickeiser 2000; Aharonian 2000; M$\ddot{\rm u}$cke \& Protheroe 2001). Observations of very high energy (VHE) $\gamma$-rays indicate that more than 40 blazars radiate $\gamma$-rays in the TeV energy range (e.g. Aharonian et al. 2005; Cui 2007; Wagner 2008). It is believed that the primary TeV photons from distant TeV blazars should exhibit clear signatures of absorption due to their interactions with the extragalactic background light (EBL), which produce electron-positron ($e^{\pm}$) pairs (e.g. Nikishov 1962; Gould \& Schreder 1966). However, the observed spectra do not show a sharp cutoff at energies around 1 TeV (Aharonian et al. 2006; Costamante et al. 2008; Acciari et al. 2009). One characteristic case is the very high energy (VHE) $\gamma$-ray emission of the distant blazar 1ES 1101-232, which was detected by the High Energy Stereoscopic System (HESS) array of Cherenkov telescopes (Aharonian et al. 2006a; 2007a). The VHE $\gamma$-ray data imply very hard intrinsic spectra with a peak in the SED above 3 TeV when corrected for absorption by the lowest-level EBL (Aharonian et al. 2007a). A similar behavior has also been detected in other TeV blazars such as 1ES 0229+200 (Aharonian et al. 2007b), 1ES 0347-121 (Aharonian et al. 2007c) and Mkn 501 (Neronov et al. 2011). Generally, the lack of absorption features is either explained in the simplest way by assuming that there is no absorption (Kifune 1999; Stecker \& Glashow 2001; De Angelis et al. 2009) or attributed to lower levels of the EBL (Aharonian et al. 2006b; Mazin \& Raue 2007; Finke \& Razzaque 2009). Alternatively, hard spectra can be expected if the $\gamma$-rays from distant blazars are dominated by secondary $\gamma$-rays produced along the line of sight by the interactions of cosmic-ray protons with the background photons (Essey \& Kusenko 2010; Essey et al. 2010; Essey et al. 2011). Active galactic nuclei (AGN) are believed to be the most powerful sources of both $\gamma$-rays and cosmic rays. Recent results from Cherenkov telescopes indicate that interactions of cosmic rays emitted by distant blazars with the photon background along the line of sight can produce $\gamma$-rays (Essey \& Kusenko 2009). Motivated by the above arguments, in this paper we study the possible origin of hard spectra in TeV blazars.
The high energy emission from TeV blazars consists of two components: one, the primary $\gamma$-ray component, comes from the source, and the other, the secondary $\gamma$-ray component, comes from proton interactions with the EBL photons along the line of sight. Throughout the paper, we assume the Hubble constant $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, the matter energy density $\Omega_{\rm M}=0.27$, the radiation energy density $\Omega_{\rm r}=0$, and the dimensionless cosmological constant $\Omega_{\Lambda}=0.73$.
\label{sec:discussion} Generally, proton-proton ($p-p$) interactions do not offer an efficient $\gamma$-ray production mechanism in the jet. This mechanism could be effectively realized only in a scenario assuming that the $\gamma$-rays are produced in dense gas clouds that move across the jet (e.g. Morrison et al. 1984; Dar \& Laor 1997). For example, in order to interpret the reported TeV flares of Markarian 501 with $\rm \pi^{0}$-decay $\gamma$-rays produced in $p-p$ interactions, for any reasonable proton acceleration power $\rm L_{p}\le 10^{45}~erg~s^{-1}$ the density of the thermal plasma in the jet should exceed $\rm 10^{6}~cm^{-3}$ (Aharonian 2000). On the other hand, for extremely high proton energies, $\rm E>10^{19}~eV$, and in the presence of a large magnetic field, $\rm B>>1~G$, the synchrotron radiation of the protons becomes a very effective channel for the production of high energy $\gamma$-rays. In our calculations, in order to reproduce the observed spectra of the distant blazar 1ES 1101-232, we adopt lower proton energies and magnetic fields. These assumptions lead to longer proton synchrotron cooling times, $\rm t_{sy}=4.5\times10^{4}B_{100}^{-2}E_{19}^{-1}~s$ (Aharonian 2000), where $\rm B_{100}=B/100~G$ and $\rm E_{19}=E/10^{19}~eV$, and to fainter proton radiation in the jet. The proton-induced cascade process (Mannheim 1993; 1996) is another attractive possibility for the production of high energy $\gamma$-rays. This process relates the observed $\gamma$-ray radiation to the development of pair cascades in the jet triggered by secondary photo-meson products produced in interactions of accelerated protons with the low frequency synchrotron radiation in the source or with EBL photons outside the source. For a low energy target photon field, the photo-meson cooling time of protons can be estimated using the approximate formula $\rm t_{p\gamma}\sim1/<\sigma_{p\gamma}K_{p\gamma}>cn_{ph}(\nu>\nu_{th})$ (Begelman et al. 1990), where $\rm <\sigma_{p\gamma}K_{p\gamma}>\sim 0.7\times10^{-28}~cm^{2}$ is the product of the photo-meson production cross section and the inelasticity parameter, averaged over the resonant energy range (e.g. Stecker 1968; M$\ddot{\rm u}$cke et al. 1999). Approximating the broad synchrotron spectral component by a power-law function with the energy-flux index $\rm \alpha=1$ and denoting its luminosity by $\rm L_{s}$, we have $\rm n_{ph}(\nu>\nu_{th})\sim L_{s}E_{p}/(4\pi m_{\pi}m_{e}c^{5}R^{3}\delta^{4})$ (e.g. Sikora 2010). Thus, for the parameters of 1ES 1101-232, the photo-meson cooling time cannot be significantly shorter than the light-travel timescale $\rm R/c\sim 10^{7}~s$. We argue that the uncooled protons can escape from the emission region and then interact with the background photons along the line of sight. In this paper, we have developed a model for a possible origin of hard very high energy spectra from a distant blazar, although several models that could explain very hard intrinsic blazar spectra in the $\gamma$-ray band have already been proposed (Katarzynski et al. 2006; B$\rm \ddot{o}$ttcher et al. 2008; Aharonian et al. 2008; Lefa et al. 2011; Yan et al. 2012). In our model, both the primary photons produced in the source and the secondary photons produced outside the source contribute to the observed high energy $\gamma$-ray emission. That is, the primary photons are produced in the source through the SSC process, and the secondary photons are produced outside the source through the interaction of high energy protons with the background photons along the line of sight.
Assuming suitable electron and proton spectra, we obtain excellent fits to the observed spectra of the distant blazar 1ES 1101-232. This indicates that the surprisingly low attenuation of the high energy $\gamma$-rays, especially the shape of the very high energy tail of the observed spectra, can be explained by secondary $\gamma$-rays produced in interactions of cosmic-ray protons with background photons in intergalactic space (Essey \& Kusenko 2010; Essey et al. 2010; 2011). The properties of the model described above should be testable with multi-wavelength observations of TeV blazars. Costamante (2012) argues that in several cases we have already seen the superposition of two different emission components at high electron energies, with a new component emerging over a previous SED. Since, in our case, the secondary $\gamma$-rays are produced outside of the host galaxy, we expect harder spectra in the TeV energy band than in the GeV energy band. This prediction should be verifiable by future multi-wavelength observations. In addition, a population of neutrinos is expected from the $\rm p\gamma$ interactions; we leave this possibility for future work and IceCube observations.
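As a rough numerical illustration of the timescale argument given in the discussion, the following sketch evaluates the quoted proton synchrotron cooling time, $\rm t_{sy}=4.5\times10^{4}B_{100}^{-2}E_{19}^{-1}~s$, for a few sample magnetic field strengths and proton energies, and compares it with the light-crossing time of the emission region, $\rm R/c\sim 10^{7}~s$. The sample parameter values are illustrative only and are not the values adopted in our fits.
\begin{verbatim}
# Illustrative evaluation of the proton synchrotron cooling time quoted in
# the discussion,  t_sy = 4.5e4 * (B/100 G)^-2 * (E/1e19 eV)^-1  s,
# compared with the light-crossing time R/c ~ 1e7 s.  The (B, E) values
# below are arbitrary samples, not the parameters adopted in the paper.

R_over_c = 1.0e7                                  # s

for B_gauss in [1.0, 10.0, 100.0]:                # sample magnetic fields [G]
    for E_eV in [1e17, 1e18, 1e19]:               # sample proton energies [eV]
        t_sy = 4.5e4 * (B_gauss / 100.0) ** -2 * (E_eV / 1e19) ** -1
        if t_sy > R_over_c:
            note = "losses negligible over R/c; protons can escape uncooled"
        else:
            note = "strong synchrotron losses inside the source"
        print("B = %5.1f G, E = %.0e eV : t_sy = %.1e s  (%s)"
              % (B_gauss, E_eV, t_sy, note))
\end{verbatim}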
16
9
1609.09148
1609
1609.08652_arXiv.txt
The luminosity of young giant planets can inform us about their formation and accretion history. The directly imaged planets detected so far are consistent with the ``hot-start" scenario of high entropy and luminosity. If nebular gas passes through a shock front before being accreted onto a protoplanet, however, the entropy can be substantially altered. To investigate this, we present high resolution, 3D radiative hydrodynamic simulations of accreting giant planets. The accreted gas is found to fall with supersonic speed through the gap from the circumstellar disk's upper layers onto the surface of the circumplanetary disk and the polar region of the protoplanet. There it shocks, creating an extended, hot, supercritical shock surface. This shock front is optically thick and can therefore conceal the planet's intrinsic luminosity beneath it. The gas in the vertical influx has a high entropy, which decreases significantly when the gas passes through the shock front and becomes part of the disk and the protoplanet. This shows that circumplanetary disks play a key role in regulating a planet's thermodynamic state. Our simulations furthermore indicate that extended regions of atomic -- sometimes ionized -- hydrogen develop around the shock surface. Therefore, circumplanetary disk shock surfaces could significantly influence the observational appearance of forming gas giants.
Giant planets are thought to form either via the core-accretion \citep{Pollack96} or the gravitational instability scenario \citep{Boss97}. The post-formation entropy of the planet was initially thought to provide a handle on which formation scenario led to an observed gas giant \citep{Burrows97,Marley07}. Traditionally, planets formed by core accretion were thought to have a low luminosity and entropy ($\la$9.5 $\mathrm{k_B}$/baryon) -- corresponding to the so-called ``cold-start" scenario -- whereas gravitational instability was thought to lead to giant planets having a high luminosity and entropy -- the ``hot-start" scenario ($\ga$9.5 $\mathrm{k_B}$/baryon, \citealt{Marley07}). Recent studies, however, have pointed out that the situation is more complex. The entropy of the planet is affected by whether the accretion of gas onto the planet happens through a supercritical shock front, as indicated by one-dimensional spherically symmetric models \citep{Marley07}. If the gas forming the planet passes through such an entropy-reducing shock, where a significant part of the accretion luminosity can be radiated away, the planet itself will have a low entropy, consistent with the ``cold-start" scenario, regardless of which formation mechanism builds the gas giant \citep{Mordasini12}. Moreover, the mass of the solid planetary core can alter the post-formation entropy of the planet as well, with higher mass cores leading to hotter planets \citep{Mordasini13,bodenheimerdangelo2013}. Finally, \citet{OM16} showed that the presence of a circumplanetary disk around the planet will funnel hot gas to the planet, which can inflate the outer layers of the gas giant and enhance the planet's entropy. Observations of directly imaged planets make it possible to measure the luminosities of young planets \citep[e.g.,][]{Marois08,lagrangebonnefoy2010} and, recently, also of still-forming embedded planets \citep[e.g.,][]{KI12,Quanz15}. This makes it possible to estimate their entropies \citep{MC14} and to conclude whether they are consistent with the ``cold-start'' or ``hot-start" scenarios. A handful of directly imaged gas giants is available for study today, and most seem to be luminous and consistent \citep{MC14,OM16} with the ``hot-start" scenario (which could in principle also be an observational bias, as fainter planets are more difficult to detect). However, gravitational instability as a formation mechanism appears unlikely in several of these cases due to various factors, such as the rather small semi-major axis or a rather low circumstellar disk mass \citep{FR13}. Furthermore, the luminosity estimates of forming embedded planets from direct imaging observations can be contaminated by the luminosity of the circumplanetary disk or of accretion \citep{Zhu15,Szulagyi16}, enhancing the observed overall luminosity, which can make the planet look like a ``hot-start" case. The observation that the currently known directly imaged planets seem to be consistent with a ``hot-start" scenario, while some of them may have formed via core accretion (e.g., $\beta$ Pic b, \citealt{mordasinimolliere2015}), indicates that the low versus high entropy state of an observed gas giant alone cannot conclusively distinguish between the two planet formation scenarios. This is partially due to the lack of theoretical studies of the thermodynamics of giant planet formation that predict the post-formation entropy based on multi-dimensional, radiation-hydrodynamical simulations.
In this paper we therefore present a thermodynamical study based on 3D, radiative hydrodynamic simulations of forming gas giants of various masses embedded in circumstellar disks. The planets form circumplanetary disks or circumplanetary envelopes around them, depending on the gas temperature in the planet's vicinity \citep{Szulagyi16}. As described, e.g., in \citet{Szulagyi14}, the accretion of the gas happens from the vertical direction through the planetary gap. This is because the top layers of the circumstellar disk try to close the gap opened by the giant planet, and, as gas enters the gap, it falls nearly freely onto the circumplanetary disk's surface and onto the polar regions of the protoplanet. The gas shocks at this surface and then becomes part of the disk, where it eventually spirals down to the planet. In this work we study how important thermodynamic quantities, such as the entropy and the ionization state of the gas, change as the gas passes through the shock front.
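To give a sense of the energy budget involved, the short sketch below evaluates the standard accretion-luminosity estimate, $L_\mathrm{acc}\simeq G M_\mathrm{p}\dot{M}/R_\mathrm{shock}$, for one set of illustrative parameter values. The planet mass, gas accretion rate, and shock radius used here are assumptions chosen only for this order-of-magnitude estimate and are not taken from the simulations presented in this paper.
\begin{verbatim}
# Order-of-magnitude accretion-shock luminosity, L_acc ~ G*M_p*Mdot/R_shock.
# All parameter values below are illustrative assumptions, not simulation output.
G     = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_jup = 1.898e27           # Jupiter mass [kg]
R_jup = 7.149e7            # Jupiter radius [m]
M_sun = 1.989e30           # solar mass [kg]
L_sun = 3.828e26           # solar luminosity [W]
yr    = 3.156e7            # one year [s]

M_p     = 1.0 * M_jup         # assumed protoplanet mass
Mdot    = 1e-8 * M_sun / yr   # assumed gas accretion rate [kg/s]
R_shock = 2.0 * R_jup         # assumed radius where the infall is halted

L_acc = G * M_p * Mdot / R_shock
print(f"L_acc ~ {L_acc:.1e} W = {L_acc / L_sun:.1e} L_sun")
\end{verbatim}
For these inputs the result is of order $10^{-3}~L_\odot$, comparable to or larger than typical model luminosities of young giant planets, which illustrates why accretion- and shock-powered emission can contaminate luminosity-based entropy estimates.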
In this paper we present a study of the thermodynamics found in global three-dimensional radiative hydrodynamical simulations of embedded, accreting giant planets of 1, 3, 5, and 10 $\mathrm{M_{\jup}}$. As described in detail in \citet{Szulagyi14}, the accretional gas flow from the circumstellar disk to the planet is as follows. The gas acts to close the gap opened by the planet in the circumstellar disk, especially in the high co-latitude regions. Gas enters the gap region and then falls nearly freely in a vertical influx onto the circumplanetary disk and protoplanet. Because this vertical inflow is supersonic (Mach numbers of 6.2, 8.1, and 10.3 for the 3, 5, and 10 $\mathrm{M_{\jup}}$ planets, respectively), it shocks on the surface of the circumplanetary disk and on the polar region of the protoplanet before becoming part of the disk and eventually reaching the protoplanet. In this work we showed that the gas undergoes a significant reduction of the specific entropy (typically more than 3 $\mathrm{k_B}$/baryon) while passing through the shock front, which is found to be supercritical. The vertical influx has a very high entropy, which drops after the shock and reaches a minimum value in the disk mid-plane. We found that the circumplanetary disk consists of gas of significantly lower entropy than the vertical influx, and the lowest entropies are found in a small spherical envelope around the planet within the inner parts of the circumplanetary disk. We conclude that shocks play a key role in regulating the post-formation entropy. Because the shock front on the circumplanetary disk is hot, optically thick (in our grey approximation), and extended (100-250 $\mathrm{R_{\jup}}$), this luminous region can contribute strongly to the bolometric luminosity of a directly imaged planet if the gas giant is still accreting. Therefore it is important to disentangle the luminosity of the shock front on the upper layer of the circumplanetary disk from the luminosity of the protoplanet itself beneath the shock surface. Our radiative hydrodynamic simulations compute temperatures taking into account radiative cooling with the inclusion of dust opacities. However, the use of an ideal-gas EOS does not take into account ionization and dissociation. This can lead to overestimated temperatures and means that we cannot yet exactly predict the post-formation entropies. To estimate where dissociation and ionization could occur, we used the CEA code to determine potential H and H$^{+}$ regions. We found that for the 3-10 $\mathrm{M_{\jup}}$ planets dissociation occurs in front of the polar shock surface, indicating that non-ideal effects could be important. In the 10 Jupiter-mass simulation the shock surface is hot enough to also produce H$^{+}$, which means that there could be extended H$\alpha$ emission from this region; such emission may be detectable, as seen in LkCa 15 b \citep{Sallum15}. Shock fronts on the surface of the circumplanetary disk and on the polar region of the protoplanet could therefore also be important for the observational appearance of forming giant planets.
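As a rough plausibility check on the Mach numbers quoted above, the sketch below estimates the speed of gas falling from rest near the Hill radius down to an assumed shock radius above the circumplanetary disk and compares it with the local sound speed. The orbital distance, shock radius, gap temperature, and mean molecular weight are illustrative assumptions rather than values extracted from the simulations, so the output should be read only as an order-of-magnitude estimate.
\begin{verbatim}
# Free-fall estimate of the Mach number of the vertical inflow: gas falls from
# rest near the Hill radius down to an assumed shock radius, and the resulting
# speed is compared with the local sound speed. All inputs are assumptions.
import math

G     = 6.674e-11      # [m^3 kg^-1 s^-2]
k_B   = 1.381e-23      # [J/K]
m_H   = 1.673e-27      # [kg]
M_sun = 1.989e30       # [kg]
M_jup = 1.898e27       # [kg]
R_jup = 7.149e7        # [m]
AU    = 1.496e11       # [m]

M_star    = 1.0 * M_sun
a_orbit   = 5.2 * AU           # assumed orbital distance of the planet
T_gap     = 300.0              # assumed gas temperature in the gap [K]
gamma, mu = 1.4, 2.3           # adiabatic index, mean molecular weight
c_s = math.sqrt(gamma * k_B * T_gap / (mu * m_H))

for M_p in (3 * M_jup, 5 * M_jup, 10 * M_jup):
    r_hill  = a_orbit * (M_p / (3.0 * M_star)) ** (1.0 / 3.0)
    r_shock = 200.0 * R_jup    # assumed height of the shock surface
    v_in = math.sqrt(2.0 * G * M_p * (1.0 / r_shock - 1.0 / r_hill))
    print(f"M_p = {M_p / M_jup:4.1f} M_Jup: v_in = {v_in / 1e3:5.1f} km/s, "
          f"Mach ~ {v_in / c_s:4.1f}")
\end{verbatim}
For these assumed inputs the estimates land within roughly 20\% of the Mach numbers quoted above, which is about as much agreement as such a simple free-fall argument can be expected to provide.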
16
9
1609.08652
1609
1609.01259_arXiv.txt
Stochastic acceleration of particles under a pressure balance condition can accommodate the universal $p^{-5}$ spectra observed under many different conditions in the inner heliosphere. In this model, in order to avoid an infinite build up of particle pressure, a relationship between the momentum diffusion of particles and the adiabatic deceleration in the solar wind must exist. This constrains both the spatial and momentum diffusion coefficients and results in the $p^{-5}$ spectrum in the presence of adiabatic losses in the solar wind. However, this theory cannot explain the presence of such spectra beyond the termination shock, where adiabatic deceleration is negligible. To explain this apparent discrepancy, we include the effect of charge exchange losses, resulting in new forms of both the spatial and momentum diffusion coefficients that have not previously been considered. Assuming that the turbulence is of a large-scale compressible nature, we find that a balance between momentum diffusion and losses can still readily lead to the creation of $p^{-5}$ suprathermal tails, including those found in the outer heliosphere. \\ {\it Keywords\/}{: acceleration of particles - solar wind - turbulence - diffusion}
\label{sec:intro} Within the heliosphere and beyond, particles with energies above their expected thermal energies, so-called suprathermal particles, are ubiquitous. Data from \textit{ACE} \cite{fisk12} and \textit{Wind} \cite{day09}, among others, demonstrate that their spectra commonly take a form close to $f \propto p^{-5}$, where $f$ is the isotropic phase space distribution function and $p$ is the particle momentum. This spectrum is found in both quiet-time and disturbed conditions, near and far from shocks, and in the inner and outer heliosphere. This implies that such a spectrum is independent of local plasma conditions, and that a theory which is not sensitive to the local environment is necessary. As these tail particles are observed both in quiet times and in more extreme conditions \cite{fisk12}, their acceleration is typically attributed to a stochastic process. Various stochastic theories have been considered in the literature as possible explanations for the origin of these tail particles. One of the primary difficulties in any application of a stochastic theory is the treatment of spatial diffusion. In some instances, spatial diffusion is neglected or considered unimportant compared to other transport processes \cite{zhang12}. In other models, spatial diffusion is treated in an atypical manner. For example, in a series of papers by Fisk and Gloeckler \cite{fisk06, fisk07, fisk08, fisk09, fisk10, fisk12, fisk13, fisk14}, a pump mechanism is developed, where tail particles gain their energy from a continuous ``pumping'' of energy from core particles. This approach naturally leads to the creation of $p^{-5}$ spectra; however, it requires approximating spatial diffusion by a loss term of the form $-f/\tau_E$, where $\tau_E$ is the escape time from a compression region. The validity of this approximation has been discussed in the literature (e.g. \cite{jok10}). Recently, a new approach has been adopted by several authors: a so-called ``pressure balance'' condition \cite{zhang10,zhang12,ant13}. As particles stochastically accelerate in the presence of turbulence, their bulk pressure increases. As this turbulence is a finite source of energy and particle pressure, the process cannot continue indefinitely. However, if the increase in particle pressure is ``balanced'' by a source of pressure reduction, such as adiabatic deceleration, then momentum diffusion can be sustained. If we assume that the underlying processes for the excitation and dissipation of plasma turbulence constrain the relationship between spatial and momentum diffusion in the presence of adiabatic losses, this condition allows us to determine the particle spectrum. As an example, consider one of the first applications of this pressure balance condition \cite{zhang12}. Here, the authors considered the stochastic acceleration of particles in a bi-modal plasma, consisting of regions of compressible turbulence and particle acceleration, and regions of no turbulence and no acceleration. Neglecting spatial diffusion, but including charge exchange losses, and assuming a momentum diffusion coefficient of the form $D(p)=D_0 p^2$, the pressure balance condition allows for an estimate of $D_0$. Under many different circumstances, this leads to the creation of power-law momentum spectra in both the turbulent and non-turbulent regions, with power-law indices of $-5$. This pressure balance condition between momentum diffusion and adiabatic cooling was also applied in \cite{ant13}, herein referred to as ASZ2013.
For the first time, pressure balance was applied in the presence of spatial diffusion, albeit in the absence of losses. Once again, this resulted in the creation of power-law spectra with spectral indices of $-5$ at large momenta. However, as was discussed in ASZ2013, this pressure balance cannot be sustained in the outer heliosphere, where adiabatic cooling is considered negligible. While charge exchange losses were included in \cite{zhang12} and spatial diffusion was included in \cite{ant13}, the two effects have not been considered together under pressure balance in the literature. It is the purpose of this paper to examine the role of both processes in shaping the resulting spectrum and, in particular, in explaining the presence of suprathermal tails past the termination shock.
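To illustrate schematically how such a balance singles out the $-5$ index, consider a homogeneous, source-free steady state in which momentum diffusion with $D(p)=D_0 p^2$ is balanced against either adiabatic deceleration in a constant-speed radial wind or a simple loss term $-f/\tau$. This toy calculation ignores convection and spatial diffusion and is not the pressure balance derivation of the works discussed above; it merely shows the type of constraint on $D_0$ that selects $f\propto p^{-5}$. Balancing acceleration against adiabatic deceleration,
\[
\frac{1}{p^{2}}\frac{\partial}{\partial p}\!\left[p^{2}\,D_{0}p^{2}\,\frac{\partial f}{\partial p}\right]
+\frac{\nabla\cdot\mathbf{u}}{3}\,p\,\frac{\partial f}{\partial p}=0,
\qquad f\propto p^{-\gamma}
\quad\Longrightarrow\quad
\gamma=3+\frac{\nabla\cdot\mathbf{u}}{3D_{0}},
\]
so that for a constant-speed radial wind with $\nabla\cdot\mathbf{u}=2u/r$ the index $\gamma=5$ corresponds to $D_{0}=u/(3r)$. If the acceleration is instead balanced against a loss term,
\[
\frac{1}{p^{2}}\frac{\partial}{\partial p}\!\left[p^{2}\,D_{0}p^{2}\,\frac{\partial f}{\partial p}\right]=\frac{f}{\tau},
\qquad f\propto p^{-\gamma}
\quad\Longrightarrow\quad
\gamma\left(\gamma-3\right)D_{0}=\frac{1}{\tau},
\]
so that $\gamma=5$ corresponds to $D_{0}=1/(10\tau)$. In both cases a particular relation between the momentum diffusion coefficient and the cooling or loss rate is required, which is the kind of relation that the pressure balance condition supplies.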
In order to explain the apparently universal $p^{-5}$ tails observed throughout the heliosphere, we have appealed to stochastic acceleration as the possible origin of these particles. Many different forms of stochastic acceleration exist, depending on the type of turbulence involved. However, \cite{zhang13} have demonstrated that if the turbulence is composed of small-scale magnetohydrodynamic waves, stochastic acceleration is not fast enough to overcome the effect of adiabatic cooling. Instead, we have appealed to large-scale modes; in particular, fluctuations of a compressible nature. This form of acceleration is maximised when the spatial and momentum diffusion coefficients are related via equation \eqref{eqn:bal_kappa}. Adopting a so-called ``pressure balance'' concept between momentum diffusion and adiabatic deceleration to obtain these coefficients, we found that power-law spectra with spectral indices of $-5$ naturally arise throughout the heliosphere in environments far from sources and where convection and spatial diffusion are considered negligible. As was seen in Section \ref{sec_loss}, the inclusion of both these mechanisms and charge exchange losses can lead to a steepening of the spectra, particularly at low momenta. However, for sensible choices of the free parameters, boundary conditions, and injection term, we found in Section \ref{sec_in} that these features are suppressed. Except for the unlikely case of very strong turbulence, $p^{-5}$ spectra are still obtained above the injection momentum. Pressure balance between momentum diffusion and adiabatic cooling alone, however, cannot explain the presence of the suprathermal tails in the outer heliosphere, where cooling is negligible. Instead, in Section \ref{sec_down}, we considered a balance between momentum diffusion and charge exchange losses. Again, for realistic values of the solar wind Mach number, $p^{-5}$ spectra are obtained in the outer heliosphere, independent of both the loss rate and the distance from the termination shock. However, in order to obtain the spatial diffusion coefficient of equation \eqref{eqn:bal_kappa}, an unlikely momentum-independent spatial diffusion coefficient had to be assumed. Dropping this assumption results in a complicated integro-differential equation for the particle pressure. It would be interesting to see if a workaround could be found to obtain a spatial diffusion coefficient that depends on both momentum and position using this notion of pressure balance. We have also approximated the more exact spatial dependence of the loss time found in \cite{zhang12}, Figure 9 therein. However, if we assume that losses are by charge exchange, then this loss time is also energy (and therefore momentum) dependent (see \cite{zhang12}, Figure 2 therein). Once again, this leads to similar problems in adopting the pressure balance notion as are found with a momentum-dependent spatial diffusion coefficient. Also, again according to Figure 9 of \cite{zhang12}, the Mach number $M_A$ is not spatially independent as we assumed; rather, it varies throughout the heliosphere. However, as we discovered in Section \ref{sec_in}, the resulting spectra are not sensitive to this choice of $M_A$ except in the unlikely case of very small values corresponding to very strong fluctuations. We therefore do not believe that the inclusion of a spatially dependent Mach number will have much effect on our results.
One particular feature of the suprathermal tail that cannot be explained by our theory is the observed step feature (see \cite{fahr12}, Figure 1 therein). This sharp drop at the injection momentum has not been obtained in any of our analyses. However, the bimodal treatment used in \cite{zhang12} naturally leads to the creation of this feature. It would therefore be interesting to see if a bimodal approach to our work could also produce this step feature. Finally, we have applied this notion of pressure balance to only one particular branch of turbulence, namely large-scale compressions, in only one particular setting, namely the heliosphere. An application of this notion to explain other unresolved cosmic ray phenomena, both within the heliosphere and indeed elsewhere, may lead to interesting insights. \vspace{10mm}\\ C.K. and P.D. acknowledge support of this work by the IRC, formerly IRCSET, through grant R11673. \appendix
16
9
1609.01259
1609
1609.04821_arXiv.txt
{The long-standing anomaly in the positron flux as measured by the {\fontfamily{cmss}\selectfont {PAMELA}} and {\fontfamily{cmss}\selectfont {AMS-02}} experiments could potentially be explained by dark matter (DM) annihilations. This scenario typically requires a large ``boost factor'' to be consistent with a thermal relic dark matter candidate produced via freeze-out. However, such an explanation is disfavored by constraints from CMB observations on energy deposition during the epoch of recombination. We discuss a scenario called late-decaying two-component dark matter (LD2DM), where the entire DM consists of two semi-degenerate species. Within this framework, the heavier species is produced as a thermal relic in the early universe and decays to the lighter species over cosmological timescales. Consequently, the lighter species becomes the DM that populates the universe today. We show that annihilation of the lighter DM species with an enhanced cross-section, produced via such a non-thermal mechanism, can explain the observed {\fontfamily{cmss}\selectfont {AMS-02}} positron flux while avoiding CMB constraints. The observed DM relic density can also be correctly reproduced with simple $s$-wave annihilation cross-sections. We demonstrate that the scenario is safe from CMB constraints on late-time energy depositions during the cosmic ``dark ages''. Interestingly, structure formation constraints force us to consider small mass splittings between the two dark matter species. We explore possible cosmological and particle physics signatures in a toy model that realizes this scenario.}
\label{sec:intro} The cosmic-ray positron flux measured at Earth shows an excess, at energies above 10 GeV, with respect to the secondary positrons produced by cosmic-ray spallation on the interstellar medium (ISM). This was confirmed through a measurement of a rising positron fraction (defined as the ratio of the $e^+$ to $(e^+ + e^-)$ flux) up to 100 GeV by the {\fontfamily{cmss}\selectfont {PAMELA}} \cite{Adriani:2008zr} and {\fontfamily{cmss}\selectfont {FERMI}} satellites \cite{FermiLAT:2011ab}. Interestingly, there has been no observed excess in the corresponding cosmic antiproton flux~\cite{Adriani:2010rc, PhysRevLett.117.091103}. Recently, high-precision data up to 500 GeV for the positron fraction have been released by the {\fontfamily{cmss}\selectfont{AMS-02}} detector onboard the International Space Station (ISS) \cite{Accardo:2014lma}. One of the most intriguing features of the {\fontfamily{cmss}\selectfont{AMS-02}} data is that the positron fraction does not appear to be increasing for energies above $\sim 200$~GeV \cite{Accardo:2014lma}. \vspace{2mm} Quantifying the positron excess from the positron fraction typically requires detailed modelling of the background secondary positrons as well as of the primary electron flux. Regardless of the background modelling uncertainties, it is beyond reasonable doubt that there is a primary positron component in excess of the secondary astrophysical background from spallation \cite{Serpico:2008te}, although there is no consensus yet on our understanding of other astrophysical sources of positrons. \vspace{2mm} Indeed, purely astrophysical interpretations for a new primary positron component have been proposed, such as nearby supernovae \cite{Fujita:2009wk, Kohri:2015mga} or a population of pulsars localized near the Earth \cite{DiMauro:2014iia, Hooper:2017gtd, Joshi:2017ogv}. In fact, the spectra can also be explained without invoking additional sources, by incorporating the spiral structure of the Milky Way into 3-D propagation models \cite{Gaggero:2013rya}, or by secondary production in the shock waves of supernova remnants (SNRs) \cite{Blasi:2009hv, Mertsch:2009ph}. Although the pulsar interpretation provides a good fit to the data for reasonable choices of spectral parameters and spatial distributions (see for example \cite{Cholis:2013psa} or \cite{Boudaud:2014dta} for a recent appraisal), we shall not discuss it further and refer the interested reader to the literature \cite{Hooper:2008kg, Serpico:2011wg, Linden:2013mqa}. Instead, we argue the case for an annihilating dark matter (DM) interpretation of the positron excess\footnote{Several interpretations of the PAMELA positron excess were proposed in the context of annihilating DM in Refs.~\cite{Cholis:2008hb, Cirelli:2008pk, Nomura:2008ru, Harnik:2008uu, Fox:2008kb, Grajek:2008pg} and decaying DM in Refs.~\cite{Ibarra:2008jk, Nardi:2008ix, Arvanitaki:2008hq, Ishiwata:2009vx}. For some recent attempts at explaining the positron excess with a decaying DM interpretation, see e.g. \cite{Belotsky:2015gsa, Cheng:2016slx}.}. \vspace{2mm} Weakly Interacting Massive Particles (WIMPs) are among the most popular DM candidates, as they can arise `naturally' within models extending the Standard Model (SM) beyond the electroweak symmetry breaking (EWSB) scale. The typical scenario for production of WIMPs in the early universe is via the freeze-out mechanism \cite{Kolb:1990vq}.
The velocity-averaged $s$-wave annihilation cross-section (hereafter simply the cross-section) required to achieve a DM thermal relic density consistent with the current value $\Omega_{\text{DM}}h^2 = 0.1193 \pm 0.0014$ \cite{Ade:2015xua} is computed to be $\left<\sigma v\right>_{\text{TR}} = 2.2 \times 10^{-26}\, \text{cm}^3/\text{s}$ \cite{Steigman:2012nb}, which coincidentally is typical for a ${\sim}$~$\text{TeV}$-scale dark matter particle with SM-like weak couplings. This coincidence has been dubbed the ``WIMP miracle''. \vspace{2mm} Indirect detection signals of WIMP annihilation provide a complementary probe to searches for dark matter at colliders and direct detection experiments. An excess in the positron flux has long been touted as a `smoking gun' signature for WIMP dark matter annihilations in the Milky Way galactic halo \cite{Kamionkowski:1990ty}. An analysis of the energy range of the positron excess as well as of the observed spectral shape and normalization allows for a good fit to a ${\sim}$~$\text{TeV}$-scale dark matter particle with a velocity-averaged cross-section $\left<\sigma v\right> \sim 10^{-24} \hyphen 10^{-21} \, \text{cm}^3/\text{s}$ annihilating predominantly into a $W^+ W^-$ pair, or into SM leptons \cite{Cirelli:2008pk}. Note that this cross-section is ${\sim}$~$10^2 \hyphen 10^5$ times larger than the standard freeze-out cross-section. \vspace{2mm} There are two ways to resolve this discrepancy within the standard WIMP paradigm. The first is to assume a velocity-dependent Sommerfeld enhancement of the cross-section. The approximately inverse dependence of this factor on velocity leaves the early universe cross-section (during the epoch of freeze-out) essentially unchanged but leads to a large enhancement in the present-day annihilation cross-section due to the low DM velocities ($v\sim10^{-3}c$) present in our galaxy today. It has been shown that DM interactions mediated by SM bosons \cite{Hisano:2004ds, Cirelli:2007xd}, or by a new light mediator \cite{ArkaniHamed:2008qn, Pospelov:2007mp}, can give rise to this kind of enhancement. The second approach is to consider astrophysical ``boost factors'' to the annihilation flux, where overdense regions in the local galactic halo may enhance the DM annihilation rate \cite{Hooper:2008kv} over the canonical assumption of a smooth dark matter distribution. However, in this case the boost factors have been found to be typically of $\mathcal{O}(10)$ \cite{Pieri:2009je}, which is an insufficient enhancement. \vspace{2mm} The annihilating DM interpretation of the positron excess is subject to two types of constraints: those from present-day astrophysical observations and those from the early universe, arising from cosmic microwave background (CMB) observations. \vspace{2mm} Astrophysical constraints arise from multi-messenger data from gamma-ray detectors and radio telescopes \cite{Bertone:2008xr, Cirelli:2016mrc, Cirelli:2009dv, Ackermann:2012rg, Ackermann:2013yva, Ackermann:2015zua}. Observations of diffuse gamma rays, of gamma rays from dwarf galaxies, and of radio emission give rise to constraints on annihilating dark matter scenarios. Uncertainty in the intervening astrophysics -- DM density profiles \cite{Nesti:2013uwa}, galactic magnetic field modelling \cite{2011AIPC.1381, Jansson:2009ip}, and propagation parameters \cite{Genolini:2015cta} -- weakens most of these indirect detection constraints, allowing some room for a DM interpretation of the positron excess.
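To make the size of the required enhancement concrete, the short sketch below compares a representative best-fit cross-section for the positron excess with the thermal relic value quoted above, and then applies a naive, unsaturated $1/v$ Sommerfeld scaling between galactic and recombination-era velocities. The representative cross-section and the strict $1/v$ scaling are illustrative simplifications (realistic Sommerfeld enhancements saturate at low velocity), so the numbers indicate orders of magnitude only.
\begin{verbatim}
# Illustrative comparison of the cross-section needed for the positron excess
# with the thermal relic value, plus a naive (unsaturated) 1/v Sommerfeld scaling.
sigma_v_thermal = 2.2e-26   # cm^3/s, thermal relic value quoted in the text
sigma_v_fit     = 1.0e-23   # cm^3/s, representative best-fit value (an assumed
                            # number within the 1e-24 -- 1e-21 range quoted above)

boost_needed = sigma_v_fit / sigma_v_thermal
print(f"required enhancement over the thermal relic value: ~{boost_needed:.0f}x")

# If a 1/v Sommerfeld enhancement supplies this boost at galactic velocities
# (v ~ 1e-3 c), the same unsaturated scaling at recombination-era velocities
# (v ~ 1e-8 c) would be vastly larger.
v_galaxy, v_recomb = 1e-3, 1e-8            # in units of c
S_recomb_unsaturated = boost_needed * v_galaxy / v_recomb
print(f"unsaturated 1/v enhancement at recombination: ~{S_recomb_unsaturated:.1e}")
\end{verbatim}
The point is simply that any velocity-dependent enhancement large enough to explain the excess today is already at (or beyond) its saturated value at recombination-era velocities, which is why the CMB constraints discussed below are so restrictive.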
\vspace{2mm} However, observations of the CMB temperature and polarization anisotropy spectra, which reflect conditions near the epoch of recombination \cite{Galli:2009zc, Slatyer:2009yq}, provide far more robust and stringent constraints. Naive $s$-wave DM cross-sections that explain the positron excess are strongly excluded by these CMB observations. For models with Sommerfeld enhancement, the DM velocity near the epoch of recombination ($v \approx 10^{-8} \, c$) implies that the cross-section is saturated at its maximum value -- greatly exceeding the constraint set by the CMB observations. \vspace{2mm} Thus, in light of the CMB constraints, an annihilating DM interpretation of the positron excess has fallen out of favor. In this work we examine the assumptions behind the CMB constraints and show that these constraints can be relaxed in the event of a non-thermal production mechanism for the dark matter particle. We propose a late-decaying two-component DM (LD2DM) scenario in which the dark sector consists of two semi-degenerate particles, which we denote $\chi_1$ (heavier) and $\chi_2$ (lighter)\footnote{Such a class of models has been proposed in the literature \cite{Cen:2000xv, Abdelqader:2008wa, SanchezSalcedo:2003pb, Peter:2010au, Bell:2010fk} primarily to address the departure, on small scales, of Cold Dark Matter (CDM) simulations from observations \cite{deBlok:2009sp, Klypin:1999uc, Moore:1999nt, Moore:1994yx, Flores:1994gz, Salucci:2000ps, BoylanKolchin:2011de}.}. Both species are assumed to have simple $s$-wave annihilation cross-sections. The lighter dark matter particle $\chi_2$ is the stable DM candidate that populates the universe today and annihilates to SM particles with a large annihilation cross-section in order to explain the observed positron excess. The large annihilation cross-section of $\chi_2$ ensures that it was underproduced in the early universe. The heavy species $\chi_1$ is assumed to have the required thermal freeze-out cross-section with which it annihilates to a radiation bath. Thus, in the early universe $\chi_1$ plays the role of dark matter with the correct relic density. The heavy species $\chi_1$ decays on a cosmological timescale through the process \begin{equation} \chi_1 \rightarrow \chi_2 + \phi. \nonumber \end{equation} Here, $\phi$ in general stands for one or more light dark sector particles (dark radiation). The late production of $\chi_2$ ensures that its annihilations to SM particles do not affect the CMB during the epoch of recombination. \vspace{2mm} The rest of the paper is organized as follows: in Sec.~\ref{sec:DMpositron_b} we examine the constraints from astrophysical and cosmological observations on the best-fit dark matter annihilation parameters needed to explain the {\fontfamily{cmss}\selectfont {AMS-02}} positron excess. In Sec.~\ref{sec:problem} we discuss the tension between the standard WIMP paradigm explanation of the positron excess and (1)~the requirement of the correct relic density and (2)~the CMB temperature and polarization anisotropy constraints on DM annihilation in the early universe. In Sec.~\ref{sec:scenario} we motivate and present the basic LD2DM scenario (along with a toy model that realizes this scenario in Sec.~\ref{sec:toymodel}) as a possible reconciliation of this conflict, while explaining the observed positron excess.
In Sec.~\ref{sec:scenario_b} we discuss how this reconciliation is achieved, and in Sec.~\ref{sec:scenario_c} we examine constraints on the basic LD2DM scenario that arise from late-time energy depositions (and their effect on the CMB) and from observations of small-scale structure. In Sec.~\ref{sec:particle} we briefly explore a few phenomenological implications of the toy model. We highlight a few general cosmological features of the LD2DM scenario in Sec.~\ref{sec:features}. Finally, we conclude with an overview of the key results from the paper in Sec.~\ref{sec:discussion}. In the Appendices we revisit the positron flux data from {\fontfamily{cmss}\selectfont {AMS-02}} and find the best-fit dark matter annihilation parameters needed to explain the positron excess. Our results are in general agreement with results found elsewhere in the literature.
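As a simple numerical illustration of why the $\chi_1$ lifetime is the key parameter of the scenario, the sketch below evaluates the fraction of $\chi_1$ that has decayed -- and hence the fraction of the eventual $\chi_2$ population that already exists -- by the epoch of recombination (taken here as $t_\mathrm{rec}\approx 1.2\times10^{13}$~s) for a few assumed lifetimes. This is pure bookkeeping of the exponential decay law; it neglects the subsequent annihilation, the redshifting of the injected energy, and the deposition efficiency that the full analysis in the paper accounts for.
\begin{verbatim}
# Fraction of chi_1 that has decayed (i.e., of chi_2 that exists) by the epoch
# of recombination, for a few assumed chi_1 lifetimes. Exponential decay only;
# annihilation, redshifting, and energy-deposition efficiency are ignored here.
import math

t_rec = 1.2e13                          # s, approximate cosmic time at recombination
for tau in (1e12, 1e13, 1e14, 1e15):    # assumed chi_1 lifetimes [s]
    f_decayed = 1.0 - math.exp(-t_rec / tau)
    print(f"tau = {tau:.0e} s : fraction of chi_2 present at recombination "
          f"= {f_decayed:.3f}")
\end{verbatim}
Lifetimes well above $t_\mathrm{rec}$ sharply limit how much $\chi_2$ is present around recombination, while much shorter lifetimes approach the standard annihilating-WIMP case.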
\label{sec:discussion} \vspace{2mm} The {\fontfamily{cmss}\selectfont {AMS-02}} positron excess continues to be a puzzle several years after the original discovery. In this work, we have explored an annihilating DM interpretation of the positron excess. We compared the best-fit parameters needed to explain the excess with constraints arising from the observed gamma-ray flux from dwarf galaxies and from diffuse gamma-ray measurements. We found that among the four leptophilic DM annihilation channels we considered, i.e., 2$\mu$, 2$\tau$, 4$\mu$, and 4$\tau$, only the $4\mu$ channel was still allowed by these constraints. The most stringent constraint on such a dark matter candidate arises from observations of the CMB. As we have argued, if an $s$-wave annihilating dark matter species existed before the epoch of recombination, it would deposit energy into the thermal plasma in the early universe with observable effects on the CMB temperature and polarization power spectra. The large cross-section needed to explain the positron excess would certainly be ruled out, putting an annihilating DM interpretation of the positron excess in serious jeopardy. \vspace{2mm} The main feature of this work was our proposal of a novel paradigm, which we dubbed the late-decaying 2-component dark matter (LD2DM) scenario. In this scenario, there is a heavy dark matter species, $\chi_1$, that annihilates with a low (thermal relic) cross-section, while decaying on cosmological time scales to a more strongly annihilating lighter DM species, $\chi_2$. We posit that $\chi_2$ plays the role of DM in the present universe and that its annihilations give rise to the observed positron excess. For concreteness, we presented a toy particle physics model that realizes the basic LD2DM scenario. \vspace{2mm} We showed, through an explicit calculation of the yields, that the LD2DM scenario can give rise to the correct relic abundance of DM in the present universe while evading CMB constraints on energy deposition during the epoch of recombination. We then proceeded to examine constraints on the LD2DM scenario that arise from cosmology, specifically (1) constraints arising from \textit{late-time energy depositions} during the cosmic dark ages and their effect on the CMB and (2) structure formation constraints on decaying dark matter. We found that these constraints force us to consider $\chi_1$ to be a long-lived DM candidate with a lifetime greater than $\sim 10^{13}$~s and a very small mass splitting between $\chi_1$ and $\chi_2$. \vspace{2mm} We then discussed possible phenomenological implications of our toy model: potential observations, constraints, and fine-tuning issues in this model. We also explored several possible cosmological implications of this toy model, such as the possibility of solving certain small-scale structure problems and the potential to resolve the tension between the values of $\sigma_8$ and $H_0$ extracted from CMB observations and those from low-redshift measurements. \vspace{3mm} In this work we have left open several avenues that could be pursued if the basic LD2DM paradigm is taken seriously as an explanation of the positron excess. We highlight below some directions for future work: \vspace{2mm} \textbf{DM and the positron excess:} The very first question that needs to be addressed is whether the observed positron excess will continue to favor a dark matter interpretation once more data are collected.
The {\fontfamily{cmss}\selectfont {AMS-02}} detector is designed to continue collecting data through 2020, so future releases of data will improve precision in the high-energy bins and reveal the shape of the positron fraction (flux), which may help in distinguishing between astrophysical and DM interpretations of the excess \cite{AMS:2015icrc}. Also, as advocated by \cite{Cirelli:2015gux}, determining the energy-dependent anisotropy of the incoming positrons may be an indicator of the nature of their source: a high anisotropy would favor a single-source hypothesis, while a DM interpretation is better served by an isotropic positron source. \textbf{Particle physics models of the LD2DM scenario:} We leave a detailed development of the toy model and various extensions, along with a full analysis of the parameter space and particle phenomenology, to future work. One potentially promising phenomenological avenue is to look at scenarios where $\chi_1$ decays to $\chi_2$ and a neutrino. This could lead to observable signatures in the Cosmic Neutrino Background. On the model-building side, it would be interesting to see whether the fine-tunings in our toy model can be explained by embedding it in a UV completion. \textbf{21cm astronomy and Epoch of Reionization:} It is interesting to consider the LD2DM scenario for a decay lifetime of $\chi_1$ of the order of the age of the universe. $\chi_2$ has a significantly larger cross-section than the vanilla WIMP particle and hence can contribute substantially to saturating the optical depth to reionization. On top of that, $\chi_2$ also has a varied abundance history, which would cause its energy deposition record to differ from that of conventional $\Lambda$CDM cosmologies. Hence, we expect the LD2DM scenario to leave a distinct imprint on the reionization history of the universe. Observations of redshifted 21cm radiation from the Epoch of Reionization (EoR) will be essential in constraining the contribution to the reionization process from various sources, including DM annihilation. Ongoing experiments like HERA \cite{Pober:2013jna} are expected to provide exciting information such as the global history of the EoR progression, but they will be limited to $z \approx 11$. In the future, experiments like SKA \cite{Mesinger:2015sha} will have much better sensitivity, redshift coverage (up to $z \approx 30$), and resolution, which will help shed light on the dark ages. This will also play a major role in probing alternative reionization histories, as may be the case in the LD2DM scenario. \newpage
16
9
1609.04821
1609
1609.04010_arXiv.txt
A microlensing survey by \citet{Sumi2011} exhibits an overabundance of short-timescale events ($t_E\lesssim 2~$days) relative to that expected from known stellar populations and a smooth power-law extrapolation down to the brown dwarf regime. This excess has been interpreted as a population of approximately Jupiter-mass objects that outnumber main-sequence stars by nearly twofold; however, the microlensing data alone cannot distinguish between events due to wide-separation ($a \gtrsim 10$~AU) and free-floating planets. Assuming these short-timescale events are indeed due to planetary-mass objects, we aim to constrain the fraction of these events that can be explained by bound but wide-separation planets. We fit the observed timescale distribution with a lens mass function composed of brown dwarfs, main-sequence stars, white dwarfs, neutron stars, and black holes, finding an excess of short-timescale events and thus corroborating their initial identification. Including a population of bound planets with distributions of masses and separations that are consistent with the results from representative microlensing, radial velocity, and direct imaging surveys, we then determine what fraction of these bound planets is expected not to show signatures of the primary lens (host) star in their microlensing light curves, and thus what fraction of the short-timescale event excess can be explained by bound planets alone. We find that, given our model for the distribution of planet parameters, bound planets alone cannot explain the entire excess without violating the constraints from the surveys we consider; thus, if our model for bound planets holds, some fraction of these events must be due to free-floating planets. We estimate the median fraction of short-timescale events due to free-floating planets to be $f = 0.67$ (0.23--0.85 at 95\% confidence) when assuming ``hot-start'' planet evolutionary models and $f = 0.58$ (0.14--0.83 at 95\% confidence) for ``cold-start'' models. Assuming a delta-function distribution of free-floating planets of mass $m_p=2~M_\mathrm{Jup}$ yields a number of free-floating planets per main-sequence star of $N = 1.4$ (0.48--1.8 at 95\% confidence) in the ``hot-start'' case and $N = 1.2$ (0.29--1.8 at 95\% confidence) in the ``cold-start'' case.
\label{sec:sec1} Deep optical and near-infrared photometric surveys to characterize the low-mass end of the substellar initial mass function (IMF) have identified populations of isolated, planetary-mass candidates in several nearby, young star-forming regions and clusters \citep{Comeron1993,Nordh1996,Itoh1996,Tamura1998,Lucas2000,ZapateroOsorio2000, ZapateroOsorio2002,McGovern2004,Kirkpatrick2006,Luhman2006,Bihain2009, Burgess2009,Scholz2009,Scholz2012a,Scholz2012b,Weights2009,Marsh2010,Quanz2010, Muzic2011,Muzic2012,Muzic2014,Muzic2015}. While a number of these photometrically-identified candidates have been met with some controversy in the literature concerning their youth (and thus low masses) or cluster membership \citep[see e.g.][]{Hillenbrand2000,Allers2007,Luhman2007a}, recent studies, in particular those by the Substellar Objects in Nearby Young Clusters (SONYC) group, have focused on obtaining confirmation spectra of such candidates and have verified several free-floating, planetary-mass objects with masses as low as a few Jupiter masses \citep{Scholz2009,Scholz2012a,Scholz2012b,Muzic2011,Muzic2012,Muzic2015}. \citet{Sumi2011} also present evidence for a large population of $\sim$Jupiter-mass objects that are either wide-separation ($a\gtrsim 10~$AU) or free-floating planets inferred from an excess of short events in the observed timescale distribution of a sample of microlensing events collected by the second phase of the Microlensing Observations in Astrophysics group \citep[MOA-II;][]{Sumi2003,Sako2008}. \citet{Wyrzykowski2015} report that data from the third phase of the Optical Gravitational Lensing Experiment \citep[OGLE-III;][]{Udalski2003} show a flattening in the slope of the observed event timescale distribution towards shorter timescales that is suggestive of a population of lenses similar to that reported by \citet{Sumi2011}, although this flattening is only marginally significant due to uncertainties resulting from small-number statistics and a low detection efficiency to such short-timescale events. Comparing the occurrence rates of free-floating planets inferred by imaging surveys with those from microlensing is difficult, as the imaging surveys have sensitivities that cut off around 1--3~$M_\mathrm{Jup}$ (and depend on the exact evolutionary model adopted), while \citet{Sumi2011} found that these objects (regardless of their boundedness) most likely have masses near (and probably below) the sensitivity limit of the imaging surveys at $1.2^{+1.2}_{-0.7}~M_\mathrm{Jup}$. Nevertheless, in the SONYC survey of the young cluster NGC 1333, \citet{Scholz2012b} find that the occurrence rate of (photometrically-identified, spectroscopically confirmed) free-floating, planetary-mass objects relative to main-sequence stars is smaller than that inferred by the \citet{Sumi2011} microlensing study by a very large factor of some 20--50. \citet{Scholz2012b} argue that the star formation process extends into the planetary-mass regime, down to the planetary masses they are able to probe, and thus this large difference in inferred occurrence rates of free-floating planets must be due to a very large upturn in the mass function of compact objects below $\sim 3~M_\mathrm{Jup}$ that is perhaps indicative of a different formation channel (assuming that microlensing and direct imaging surveys are probing an analogous population of compact objects). 
Alternatively, one might argue that young open clusters may have a different mass function than the objects in the Galactic disk and bulge that give rise to microlensing events. On the other hand, the photometric survey of $\rho$ Oph by \citet{Marsh2010} finds a much larger number of isolated, planetary-mass objects per main-sequence star. Integrating their inferred mass function (shown in their Figure~8) over the lowest two bins in the planetary regime, between $7\times 10^{-4} \lesssim M/M_{\odot} \lesssim 6\times10^{-3}$ (corresponding to roughly $0.7\lesssim M/M_\mathrm{Jup} \lesssim 6$), and over the highest three bins in the stellar regime, between $0.08 \lesssim M/M_{\odot} \lesssim 1.0$, and dividing these values, we estimate an implied number of free-floating planets per main-sequence star of $\sim 30$. This number is over an order of magnitude larger than that of \citet{Sumi2011} and larger than the results of \citet{Scholz2012b} by an even greater factor. This seems to suggest that either the formation of free-floating planets is extremely sensitive to the local environment, the \citet{Marsh2010} sample is contaminated (with background stars or due to mis-estimates of the ages and/or masses of the candidate free-floating planets; see e.g. \citealt{Luhman2007a} and \citealt{Allers2007}), since they lack spectroscopic validation for many of their candidates, or some combination thereof. Broadly, there are two formation channels for free-floating planets, but there are issues with the theory and observations behind each. The first, as \citet{Scholz2012b} claim, is that these objects form as an extension of the star formation process; however, the lower-mass fragmentation limit predicted by models of collapsing clouds is uncertain \citep[e.g.][]{Silk1977,Padoan1997,Adams1996} and may, in fact, be dependent on environment (e.g. \citealt{Bate2005}; also see \citealt{Luhman2007b} and \citealt{Bastian2010} for a discussion of the substellar IMF and its universality). Secondly, if free-floating planets initially form from material in circumstellar disks (either by disk fragmentation or core accretion), they must subsequently be ejected from the system via dynamical processes such as planet-planet scattering, mass loss during post-main-sequence evolution, or ionization by interloping stars. The ejection of a $\sim$~Jupiter-mass planet via planet-planet scattering requires a close encounter with another planet of at least a Jupiter mass, as the least massive body in such an encounter is nearly always the one ejected \citep[see e.g. ][]{Ford2003,Raymond2008}. Thus, if planet-planet scattering were the dominant channel for the formation of the population of (presumably) free-floating, Jupiter-mass planets inferred by \citet{Sumi2011}, the frequency of Jupiter- and super-Jupiter-mass planets around low-mass stars would necessarily have to be high ($\sim 50\%$; \citealt{Veras2012}), which is in significant disagreement with the predictions of core accretion theory \citep{Laughlin2004} as well as with observational results from microlensing \citep{Gould2010,Cassan2012,Clanton2014b,Clanton2016}, radial velocity \citep{Bonfils2013,Montet2014}, and direct imaging \citep{Lafreniere2007,Bowler2015} surveys.
Additionally, ejection due to mass loss during post-main-sequence evolution only works for planets with very wide orbital separations ($\sim$ several hundred AU) and requires (initial) host masses $\gtrsim 2~M_{\odot}$, and thus is not expected to produce free-floating planets at the required rate \citep{Veras2011,Mustill2014}. Similarly, ionization by interloping stars requires initially wide planetary orbits and a dense stellar environment since the ionization time scales as $t_\mathrm{ion}\propto \nu^{-1}a^{-2}$, where $\nu$ is the local stellar number density and $a$ is the semimajor axis \citep[see][and references therein]{Antognini2016}. \citet{Antognini2016} demonstrate that even in the case of the most optimistic interaction cross sections, $t_\mathrm{ion}\sim 2~$Gyr, implying that $\sim 10\%$ of systems with planets on wide orbits would have been ionized in a cluster with an age of 200~Myr. In the field, these authors find $t_\mathrm{ion}\sim 4\times 10^{12}~$yr and therefore $\lesssim 1\%$ of wide-separation planetary systems would have been ionized in the lifetime of the Galaxy. Given current measurements of upper limits on the frequency of Jupiter- and super-Jupiter-mass planets with $a\gtrsim 10~$AU from direct imaging surveys of young FGK stars of $\lesssim 20-30\%$ \citep{Lafreniere2007,Biller2013} and young M stars of $\lesssim 16\%$ \citep{Bowler2015}, it does not seem likely that ionization (even in clusters) is able to produce the large numbers of free-floating planets inferred by \citet{Sumi2011}, although (to the best of our knowledge) a robust, quantitative analysis has yet to be performed. Thus, while it may be possible to explain the formation of the smaller population of free-floating, planetary-mass objects observed by the SONYC group, the origin of the much larger population inferred by the \citet{Sumi2011} study remains elusive. One possible (and simple) solution could be that a majority of the planetary-mass objects needed to reproduce the over-abundance of short-timescale microlensing events seen in the MOA-II data \citep{Sumi2011} are not actually free-floating, but are gravitationally bound to host stars at wide enough orbital separations ($a\gtrsim 10~$AU) that we do not expect to see signatures of the primaries (i.e. host stars) in a majority of their microlensing light curves and we do not expect them to be detected by direct imaging surveys (due to either lying outside the outer-working angles of such surveys, and/or having masses less than $\sim$~few Jupiter masses, below their detection limits). In this study, we attempt to fit the observed timescale distribution with a standard lens mass function (hereafter LMF) comprised of brown dwarfs, main-sequence stars, white dwarfs, neutron stars, and black holes, along with a population of wide-separation, bound planets that is known to be consistent with the results of microlensing, radial velocity, and direct imaging surveys. In \citet{Clanton2016}, we demonstrated that there is a single planet population, modeled by a simple, joint power-law distribution function in planet mass and semimajor axis, that is simultaneously consistent with several representative surveys employing these three distinct detection techniques. Some fraction of such a planet population would produce detectable, short-timescale microlensing events that are well-fit by a single lens model, similar in nature to the 10 observed events with $t_E<2~$days in the MOA-II data that \citet{Sumi2011} present. 
We determine the expected timescale distribution for the combination of our adopted LMF and our planet population model and compare with the observed distribution to estimate the fraction of short-timescale events that are due to free-floating planets. The remainder of the paper is organized as follows. We detail the properties of the \citet{Sumi2011} microlensing event sample and review their analysis to infer the existence of an abundant population of either wide-separation or free-floating planets in Section~\ref{sec:sec2}. We describe the different channels for distinguishing microlensing events due to free-floating planets from those due to bound planets in Section~\ref{sec:sec3}. We detail the methodologies we employ in this study in Section~\ref{sec:sec4} and present our results, together with discussion, in Section~\ref{sec:sec5}. Finally, we provide a summary of this work in Section~\ref{sec:sec6}.
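For orientation, the sketch below evaluates the Einstein-radius crossing time, $t_E=\theta_E D_L/v_\perp$ with $\theta_E=\sqrt{(4GM_L/c^2)\,(D_S-D_L)/(D_L D_S)}$, for a point lens with a representative bulge-survey geometry. The lens and source distances and the lens-source transverse velocity are illustrative assumptions, chosen only to show why $\sim$Jupiter-mass lenses naturally give $t_E\lesssim 2$~days while stellar lenses give timescales of weeks.
\begin{verbatim}
# Einstein radius and timescale for a point lens; the distances and transverse
# velocity below are illustrative assumptions for a bulge microlensing survey.
import math

G, c  = 6.674e-11, 2.998e8      # SI units
M_jup = 1.898e27                # kg
M_sun = 1.989e30                # kg
kpc   = 3.086e19                # m
day   = 86400.0                 # s

D_L, D_S = 4.0 * kpc, 8.0 * kpc   # assumed lens and source distances
v_perp   = 200e3                  # assumed lens-source transverse velocity [m/s]

def t_einstein(M_lens):
    theta_E = math.sqrt(4.0 * G * M_lens / c**2 * (D_S - D_L) / (D_L * D_S))
    return theta_E * D_L / v_perp   # Einstein radius crossing time [s]

for label, M in (("2 M_Jup", 2 * M_jup), ("0.3 M_Sun", 0.3 * M_sun)):
    print(f"{label:9s}: t_E ~ {t_einstein(M) / day:5.1f} days")
\end{verbatim}
Since $\theta_E\propto M_L^{1/2}$, both the timescale and the per-lens event rate scale as the square root of the lens mass, consistent with the event-rate scaling $\Gamma\propto M_L^{1/2}$ noted later in the paper.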
\label{sec:sec5} \begin{figure}[!t] \epsscale{1.15} \plotone{max_lhood_te_dist_fit.eps} \caption{Maximum likelihood fits to the observed timescale distribution \citep[black histogram;][]{Sumi2011} for our canonical LMF and a population of bound, wide-separation planets that is consistent with results from radial velocity, microlensing, and direct imaging surveys \citep{Clanton2016}, assuming either ``hot-start'' \citep[blue lines;][]{Baraffe2003} or ``cold-start'' \citep[red lines;][]{Fortney2008} planet evolutionary models. The thick lines show the expected timescale distribution from all lenses, while the thin lines show the expected contributions from planets (the curves peaking at shorter timescales) and brown dwarfs, main-sequence stars, and remnants (the curves peaking at longer timescales). For these maximum likelihood fits, wide-separation, bound planets account for roughly 2.9 of the 10 observed short-timescale ($t_E < 2~$days) events in both the ``hot-start'' and ``cold-start'' cases, and brown dwarfs account for about one event. \label{fig:max_lhood_te_dist_fit}} \end{figure} Figure~\ref{fig:max_lhood_te_dist_fit} shows the best-fit (i.e. maximum likelihood) expected timescale distribution for the combination of our canonical LMF described in the previous section with populations of wide-separation, bound planets found by \citet{Clanton2016} to be consistent with results from radial velocity, microlensing, and direct imaging surveys for either ``hot-start'' \citep{Baraffe2003} or ``cold-start'' \citep{Fortney2008} planet evolutionary models. Given that the parameters of these planet populations (i.e. the slopes of the mass function, $\alpha$ and semimajor axis function, $\beta$, normalizations, $\mathcal{A}$, and outer cutoff radii, $a_\mathrm{out}$) for the ``hot-start'' and ``cold-start'' models are not too different \citep[see Section~5.2 of][]{Clanton2016}, it is not surprising that the fits for these different models shown in Figure~\ref{fig:max_lhood_te_dist_fit} are so similar. The parameter values for the best-fit ``hot-start'' planet population are $\alpha=-0.85$, $\beta=0.091$, $\mathcal{A}=0.26~\mathrm{dex^{-2}}$, and $a_\mathrm{out}=740~$AU, and those for the best-fit ``cold-start'' population are similar. The value of $a_\mathrm{out}$ for this best fit is quite large due to the fact that the number of planets for which we do not expect to see signatures of a primary lens (host star) in the microlensing light curves (which are needed to explain the overabundance of short-timescale events) increases with this outer cutoff radius. For large $a_\mathrm{out}$, planets are allowed to be in very wide-separation orbits which lead to smaller planetary caustic sizes (and thus small rates of planetary caustic events, since at fixed $q$, $R_\mathrm{pc} \sim \theta_c \propto s^{-2}$ for $s\gg 1$) and which have low probability for source trajectories that pass near the primary ($\propto s^{-1}$ for $s\gg 1$). However, in order for a planet population with a large value of $a_\mathrm{out}$ to be consistent with the non-detections from direct imaging surveys (i.e. \citealt{Lafreniere2007} and \citealt{Bowler2015}), the slope of the semimajor axis distribution function must be shallow, and indeed, the best-fit population has $\beta$ near zero (corresponding to \"Opik's law; \citealt{Opik1924}). In Figure~\ref{fig:te_dist_fit_range}, we display the best-fit to the observed timescale distribution along with the range of fits in the 68\% confidence interval. 
It is clear from this figure that while we can explain some fraction of the short-timescale events with bound, wide-separation planets, an overabundance remains (particularly at timescales between 1--2~days). This suggests that either our assumed planet population model is incorrect in regions of parameter space where we currently have no observational constraints ($m_p \lesssim M_\mathrm{Jup}$ at separations $a\gtrsim 10~$AU), or free-floating planets are responsible for the remaining short-timescale events. We have no way of testing the former, but for the latter case we can constrain the fraction of short-timescale events that would be due to free-floating planets given our assumed planet model. \begin{figure} \epsscale{1.15} \plotone{te_dist_fit_range.eps} \caption{Maximum likelihood and 68\% confidence interval fits to the observed timescale distribution \citep[black histograms;][]{Sumi2011} for our canonical LMF and a population of bound, wide-separation planets that is consistent with results from radial velocity, microlensing, and direct imaging surveys \citep{Clanton2016}, assuming either ``hot-start'' \citep[top panel;][]{Baraffe2003} or ``cold-start'' \citep[bottom panel;][]{Fortney2008} planet evolutionary models. \label{fig:te_dist_fit_range}} \end{figure} For each planet population we fit to the observed timescale distribution, we determine the number of residual events with $0.3< t_E / \mathrm{days} < 2$ and divide by the number of observed events in this same range of $t_E$ to compute the fraction of such events which are expected to be due to free-floating planets, $f_\mathrm{ff}$. We plot the posterior distribution of $f_\mathrm{ff}$ in Figure~\ref{fig:frac_ste_ffp} and report the corresponding median values, 68\%, and 95\% confidence intervals in Table~\ref{tab:tab2}. The posterior for the ``cold-start'' case is shifted slightly towards lower $f_\mathrm{ff}$, as expected, but it is not significantly different from the ``hot-start'' case. \begin{figure} \epsscale{1.15} \plotone{frac_ste_ffp.eps} \caption{The fraction of short-timescale ($t_E<2~$days) microlensing events that must be due to free-floating planets, $f_\mathrm{ff}$, for our analyses that assume either ``hot-start'' \citep[blue;][]{Baraffe2003} or ``cold-start'' \citep[red;][]{Fortney2008} planet evolutionary models. The vertical, black lines mark the median values of these posterior distributions. \label{fig:frac_ste_ffp}} \end{figure} \begin{table*} \caption{\label{tab:tab2} Median values, 68\%, and 95\% confidence intervals on both the fraction of short-timescale events due to free-floating planets, $f_\mathrm{ff}$, and the number of free-floating planets relative to main-sequence stars, $N_\mathrm{ff}$. 
We report these values for our analyses that assume either ``hot-start'' \citep{Baraffe2003} or ``cold-start'' \citep{Fortney2008} planet evolutionary models.} \centering \begin{tabular}{c||c|c|c|c} \hline \hline & Planet Evolutionary & Median & 68\% Confidence & 95\% Confidence\\ & Model & Value & Interval & Interval \\ \hline \multirow{2}{*}{$f_\mathrm{ff}$} & ``Hot-Start'' & $0.67$ & $0.44-0.78$ & $0.23-0.85$ \\ \cline{2-5} & ``Cold-Start'' & $0.58$ & $0.40-0.74$ & $0.14-0.83$ \\ \hline \multirow{2}{*}{$N_\mathrm{ff}$} & ``Hot-Start'' & $1.4$ & $0.94-1.7$ & $0.48-1.8$ \\ \cline{2-5} & ``Cold-Start'' & $1.2$ & $0.86-1.6$ & $0.29-1.8$ \\ \hline\hline \end{tabular} \end{table*} In order to turn this fraction, $f_\mathrm{ff}$, into an actual number of free-floating planets (relative to main-sequence stars, for example), we must assume a form for their mass function. To this end, we assume that the free-floating planet mass function is given by a Dirac delta function, $\delta (m_\mathrm{p,\; ff} / M_\mathrm{Jup} - 2)$. We chose to center the delta function at $2~M_\mathrm{Jup}$ because such a free-floating planet population leads to a timescale distribution that most closely matches (by eye) the residuals obtained from subtracting off our LMF and the population of wide-separation, bound planets as described in the previous section. Admittedly, this is a rough calculation; however, given the level of precision of this study, we do not believe a more careful analysis is currently warranted (especially given the fact that we currently have no constraints on the actual form of the free-floating planet mass function that we must adopt). The resultant posterior on the number of free-floating planets per main-sequence star is plotted in Figure~\ref{fig:num_ste_ffp} and the corresponding median values, 68\%, and 95\% confidence intervals are reported in Table~\ref{tab:tab2}. We plot the maximum likelihood fits for the ``hot-'' and ``cold-start'' analyses, including the contribution from free-floating planets, under this assumption of a delta-function mass distribution at $2~M_\mathrm{Jup}$ in Figure~\ref{fig:max_lhood_te_dist_fit_with_ffp}, and we show the range of fits in the 68\% confidence interval in Figure~\ref{fig:te_dist_fit_with_ffp_range}. \begin{figure} \epsscale{1.15} \plotone{num_ste_ffp.eps} \caption{The number of free-floating planets per main-sequence star, $N_\mathrm{ff}$, required to explain the residual short-timescale ($t_E<2~$days) microlensing events after fits of our canonical LMF and populations of wide-separation, bound planets are subtracted for our analyses that assume either ``hot-start'' \citep[blue;][]{Baraffe2003} or ``cold-start'' \citep[red;][]{Fortney2008} planet evolutionary models. The vertical, black lines mark the median values of these posterior distributions. Estimating this quantity requires an assumption about the mass function of free-floating planets. Here, we have chosen a delta function at a mass of $2~M_\mathrm{Jup}$ (see text for discussion).
\label{fig:num_ste_ffp}} \end{figure} \begin{figure}[!t] \epsscale{1.15} \plotone{max_lhood_te_dist_fit_with_ffp.eps} \caption{Maximum likelihood fits to the observed timescale distribution \citep[black histogram;][]{Sumi2011} for our canonical LMF, a population of bound, wide-separation planets that is consistent with results from radial velocity, microlensing, and direct imaging surveys \citep{Clanton2016}, assuming either ``hot-start'' \citep[blue lines;][]{Baraffe2003} or ``cold-start'' \citep[red lines;][]{Fortney2008} planet evolutionary models, and a population of free-floating planets whose mass function is a $\delta$ function at $2~M_\mathrm{Jup}$ (black dashed line). The thick lines show the expected timescale distribution from all lenses, while the thin lines show the expected contributions from bound planets (the curves peaking at shorter timescales) and brown dwarfs, main-sequence stars, and remnants (the curves peaking at longer timescales). For these maximum likelihood fits, wide-separation, bound planets account for roughly 2.9 of the 10 observed short-timescale ($t_E < 2~$days) events in both the ``hot-start'' and ``cold-start'' cases, brown dwarfs account for about one event, and free-floating planets make up the difference. \label{fig:max_lhood_te_dist_fit_with_ffp}} \end{figure} \begin{figure} \epsscale{1.15} \plotone{te_dist_fit_with_ffp_range.eps} \caption{Maximum likelihood and 68\% confidence interval fits to the observed timescale distribution \citep[black histograms;][]{Sumi2011} for our canonical LMF, a population of bound, wide-separation planets that is consistent with results from radial velocity, microlensing, and direct imaging surveys \citep{Clanton2016}, assuming either ``hot-start'' \citep[top panel;][]{Baraffe2003} or ``cold-start'' \citep[bottom panel;][]{Fortney2008} planet evolutionary models, and a population of free-floating planets whose mass function is a $\delta$ function at $2~M_\mathrm{Jup}$. \label{fig:te_dist_fit_with_ffp_range}} \end{figure} The median number of free-floating planets per main-sequence star we find, $N_\mathrm{ff}=1.4^{+0.30}_{-0.46}$ ($N_\mathrm{ff}=1.2^{+0.40}_{-0.34}$) for the ``hot-start'' (``cold-start'') case, is quite a large number that seems difficult to explain with any known formation mechanism (see Section~\ref{sec:sec1} for discussion on the formation channels for free-floating planets). However, more ``comfortable'' values of $N_\mathrm{ff}=0.48$ ($N_\mathrm{ff}=0.29$; ``cold-start'') are allowed to within 95\% confidence and could perhaps be easier to explain. Furthermore, these values are sensitive to a number of assumptions, most notably the free-floating planet mass function and the model for the population of wide-separation, bound planets. The remainder of this section is devoted to discussion of these two primary sources of uncertainty. \textit{Free-Floating Planet Mass Function:} Without direct lens mass measurements for each of the short-timescale events, the only constraining power currently available on the free-floating planet mass function is, in fact, contained in the observed microlensing event timescale distribution. In order to constrain the free-floating planet mass function using the timescale distribution, prior knowledge of which of the short-timescale events are actually due to free-floating planets would be required. With such knowledge, one could perform a model comparison of fits to the observed timescale distribution assuming different forms for the free-floating planet mass function. 
Unfortunately, we do not know exactly which events are caused by truly unbound planets (due to fundamental degeneracies that affect a majority of microlensing observations; see \citealt{Gaudi2012} and references therein) and the number of short-timescale $t_E\lesssim 2~$days is small, making such a study difficult. This paper is an attempt to address the first of these issues by simulating microlensing events of wide-separation, bound planets to determine (statistically) the fraction of the short-timescale events that are caused by free-floating planets. Of course, the results presented herein are therefore dependent on our assumed model of bound planets. Data from ongoing and future microlensing surveys will allow direct measurements of both the frequency and mass function of free-floating planets, as well as their spatial distribution within our Galaxy. The recent \textit{K2} Campaign 9 (\textit{K2}C9) consisted of a survey toward the Galactic bulge \citep{Henderson2015}. For short-timescale microlensing events observed simultaneously from \textit{Kepler} and ground-based observatories (such that we see two distinct source trajectories), it is possible (for some events) to directly measure the lens mass and distance and obtain better constraints on the existence of a primary (i.e. host star) since \textit{Kepler} provides precise, continuous observations \citep[see][]{Henderson2015,Henderson2016}. However, given the short, $\sim 80~$day duration of \textit{K2}C9, the sample size will likely be too small to make population-level inferences about free-floating planets other than (at least limits) on their occurrence rates (recall that the event rate scales as $\Gamma \propto M_L^{1/2}$). Indeed, \citet{Penny2016} predict that \textit{K2}C9 will detect between 1.4 and 7.9 microlensing events due to free-floating planets (assuming 1.9 free-floating planets per main-sequence star per the \citealt{Sumi2011} result). Of these expected detections, \citet{Penny2016} predict that for between 0.42 and 0.98 it will be possible to gain a complete solution (i.e. to measure both finite-source effects and microlens parallax). Given the results we present in this paper, these numbers would be smaller by a factor of $\sim0.6$ (refer to Table~\ref{tab:tab2}), and thus it is unlikely \textit{K2}C9 will actually directly measure the lens mass in a short-timescale event. Fortunately, the microlensing survey of the \textit{Wide-Field InfraRed Survey Telescope} \citep[hereafter \textit{WFIRST};][]{Spergel2015} will ultimately, when combined with ground-based observations, provide the necessary data to directly measure frequencies, masses, and distances for a large sample of free-floating planets with masses down to that of Mars (see \citealt{Gould2003} and \citealt{Yee2013}, who demonstrate that simultaneous observations from the ground and \textit{WFIRST} at L2 will enable the measurement of the parallax of planetary events). Depending on the exact occurrence rates, \textit{WFIRST} will detect $\sim$hundreds to $\sim$thousands of free-floating planets \citep[see Table~2-6 of ][]{Spergel2015}. \textit{The Population of Wide-Separation, Bound Planets:} The model we assume in this paper is a joint power-law distribution function in planet mass and semimajor axis that \citet{Clanton2016} demonstrate to be consistent with results from radial velocity, microlensing, and direct imaging surveys (the caveats and uncertainties of which are laid bare in Section~6 of \citealt{Clanton2016}). 
However, the region of planet parameter space we examine in this paper ($m_p\lesssim M_\mathrm{Jup}$; $a\gtrsim 10~$AU) is not directly constrained by any observations. We have implicitly assumed that our distribution function extrapolates into this region of parameter space. It could be the case that the form of the planet mass function depends on semimajor axes for $a\gtrsim 10~$AU, which could significantly alter our conclusions. For example, if no planets with masses $m_p\gtrsim M_\mathrm{Jup}$ form beyond $\sim 10~$AU but there is an abundance of slightly less massive planets, we could easily explain most, if not all, the short-timescale microlensing events with bound planets and still satisfy results from all radial velocity, microlensing, and direct imaging surveys of M stars. Future observations will provide the necessary sensitivity to test the planetary mass function at wide-separations and determine whether or not the mass function measured by microlensing surveys extends further out (as we have assumed to be the case in this paper). The \textit{James Webb Space Telescope} \citep[\textit{JWST};][]{Gardner2006} is expected to have the capability to achieve contrasts of $\sim 10^{-5}$ at angular separations $\gtrsim 0.6~$arcseconds for observations at $\sim4.5~\mu$m with NIRCam \citep[and even greater sensitivity at larger separations;][]{Horner2004,Krist2007}. A survey of nearby, young M stars with \textit{JWST}/NIRCam as proposed by \citet{Schlieder2016} has the potential to probe down to masses of $\sim 0.1~M_\mathrm{Jup}$ at separations of $\sim 10~$AU, complementary (and perhaps with some overlap) to microlensing surveys.
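Returning to the order-of-magnitude arguments above, the short sketch below evaluates the Einstein-radius crossing time for a stellar lens and for a $2~M_\mathrm{Jup}$ lens, and notes the $\Gamma \propto M_{L}^{1/2}$ rate scaling that enters when an event fraction is converted into a number of free-floating planets per star. The lens and source distances and the transverse velocity are generic bulge-lensing values assumed purely for illustration; they are not the values used in the simulations described above.
\begin{verbatim}
# Back-of-the-envelope Einstein-timescale estimate (illustrative only).
# Assumed geometry: D_L = 4 kpc, D_S = 8 kpc, v_perp = 200 km/s; these
# round numbers are not taken from the simulations in this paper.
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_SUN, M_JUP = 1.989e30, 1.898e27  # kg
KPC, DAY = 3.086e19, 86400.0       # m, s

def einstein_timescale(m_lens, d_l=4 * KPC, d_s=8 * KPC, v_perp=200e3):
    """t_E = R_E / v_perp, with R_E the Einstein radius in the lens plane."""
    r_e = math.sqrt(4 * G * m_lens / c**2 * d_l * (d_s - d_l) / d_s)
    return r_e / v_perp / DAY      # days

print(f"t_E(0.5 Msun) ~ {einstein_timescale(0.5 * M_SUN):.1f} d")
print(f"t_E(2 MJup)   ~ {einstein_timescale(2.0 * M_JUP):.1f} d")

# Since the event rate scales as Gamma ~ M_L^(1/2), a population of N_ff
# planets of mass m_p per star contributes roughly N_ff * sqrt(m_p / <M_*>)
# as many events as the stars themselves (for a common spatial and
# kinematic distribution); this is the scaling behind turning a fitted
# short-timescale event fraction into N_ff.
\end{verbatim}
With these assumed numbers a $2~M_\mathrm{Jup}$ lens indeed yields $t_{E}\sim1$--$2$ days, consistent with the short-timescale events discussed above.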
16
9
1609.04010
1609
1609.01918_arXiv.txt
We study the dependence of kHz quasi-periodic oscillation (QPO) frequency on accretion-related parameters in the ensemble of neutron star low-mass X-ray binaries. Based on the mass accretion rate, $\dot{M}$, and the magnetic field strength, $B$, on the surface of the neutron star, we find a correlation between the lower kHz QPO frequency and $\dot{M}/B^{2}$. The correlation holds in the current ensemble of Z and atoll sources and therefore can explain the lack of correlation between the kHz QPO frequency and X-ray luminosity in the same ensemble. The average run of lower kHz QPO frequencies throughout the correlation can be described by a power-law fit to source data. The simple power-law, however, cannot describe the frequency distribution in an individual source. The model function fit to frequency data, on the other hand, can account for the observed distribution of lower kHz QPO frequencies in the case of individual sources as well as the ensemble of sources. The model function depends on the basic length scales such as the magnetospheric radius and the radial width of the boundary region, both of which are expected to vary with $\dot{M}$ to determine the QPO frequencies. In addition to modifying the length scales and hence the QPO frequencies, the variation in $\dot{M}$, being sufficiently large, may also lead to distinct accretion regimes, which would be characterized by Z and atoll phases.
\label{intr} The kilohertz quasi-periodic oscillations (kHz QPOs) usually appear as two simultaneous peaks in the $200-1300$ Hz range in the power spectra of low-mass X-ray binaries (LMXBs) harboring neutron stars with spin (or burst) frequencies in the $\sim 200-600$ Hz range \citep{Klis2000}. The separation between the two kHz QPO peaks is roughly $200-350$ Hz for almost all sources of different spectral type and X-ray luminosity \citep{MB2007}. There are two main spectral types of LMXB sources, the so-called Z and atoll sources, which can be identified from the particular shapes of their tracks in X-ray color-color and hardness-intensity diagrams \citep{HK1989}. Among them, Z sources seem to be the brightest sources in X-rays, with the X-ray luminosity, $L_{\mathrm{X}}$, close to the Eddington limit, $L_{\mathrm{E}}$, whereas atoll sources are in the $0.005-0.2$ $L_{\mathrm{E}}$ range \citep{Ford2000}. The frequency range of kHz QPOs has been observed to be similar in sources with different X-ray luminosities. The distribution of sources in the form of parallel-like groups or \emph{parallel tracks} can be seen in the QPO frequency versus X-ray luminosity plot, where sources with $L_{\mathrm{X}}$ close to the Eddington luminosity $L_{\mathrm{E}}$ may cover the same frequency range, e.g., the $500-1000$ Hz range, as sources with $L_{\mathrm{X}}\approx 10^{-2}L_{\mathrm{E}}$ \citep{Ford2000}. The \emph{parallel tracks} phenomenon has also been observed in a plot of kHz QPO frequency versus X-ray count rate for individual sources \citep{Zhang1998,Mendez1999,MK1999,Mendez2000,Klis2000}. The correlation between X-ray flux and kHz QPO frequencies can be clearly seen on short time scales such as hours or less than a day. On longer time scales (more than a day), however, kHz QPO sources are observed to follow different correlations, which appear as \emph{parallel tracks} in the plane of kHz QPO frequency versus X-ray flux. Effects such as anisotropic emission, source inclination, outflows, and a two-component flow in a given source may play a role in the decoupling between $L_{\mathrm{X}}$ and the mass accretion rate, $\dot{M}$, and therefore between the kHz QPO frequencies, $\nu _{\mathrm{kHz}}$, and $L_{\mathrm{X}}$ if $\nu _{\mathrm{kHz}}$ is only tuned by $\dot{M}$ \citep{Wijnands1996,Ford2000,Mendez2001}. An explanation for the \emph{parallel tracks} phenomenon was proposed by \citet{Klis2001}. According to this scenario, the QPO frequency is set by the mass inflow rate in the inner disk and its long-term average. The frequency-luminosity correlation, for individual sources, seems to depend strongly on a characteristic timescale associated with the $\dot{M}$ variation. Estimation of the source distance, and therefore of the $\dot{M}$ range, can be very important in modelling the \emph{parallel tracks} of a single source. In the case of many sources, however, each track in the frequency-luminosity plane corresponds to a different source with its own intrinsic properties, such as its mass, radius, spin, and magnetic field, in addition to its $\dot{M}$ range. For the ensemble of neutron star LMXBs, the huge luminosity differences between sources sharing similar frequency ranges for kHz QPOs can be accounted for if a parameter determining the QPO frequency in addition to $\dot{M}$ differs from one source to another \citep{Mendez2001}. 
As the QPO frequencies are usually associated with certain characteristic radii at which the neutron star interacts with the accretion flow, it is plausible to consider the stellar magnetic field strength as the relevant parameter beside $\dot{M}$. In this paper, we search for possible correlations between QPO frequency and accretion-related parameters such as the mass accretion rate, $\dot{M}$, inferred from $L_{\mathrm{X}}$ and $\dot{M}/B^{2}$, where $B$ is the magnetic field strength on the surface of the neutron star. We find a correlation between the lower kHz QPO frequency, $\nu_{1}$, and $\dot{M}/B^{2}$. We consider the scaling of $\nu_{1}$ with the Keplerian frequency at the magnetopause and conclude that the new correlation suggests the magnetic boundary region as the origin of lower kHz QPOs. In Section~\ref{analys}, we describe the analysis and results. Our analysis consists of two parts: (1) estimation of source distances in comparison with the distance values used in \citet{Ford2000}, and (2) search for a possible correlation between the kHz QPO frequency and the accretion-related parameters, $\dot{M}$ and $\dot{M}/B^{2}$, based on source luminosities, which we calculate according to the up-to-date source distances. In Section~\ref{disc}, we discuss our results and present our conclusions.
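As a reference point for the analysis described above, the textbook magnetospheric relations already suggest that the natural accretion variable is $\dot{M}/B^{2}$ rather than $\dot{M}$ alone. For a dipole moment $\mu = BR^{3}$, the Alfv\'{e}n radius and the Keplerian frequency evaluated at that radius are
\begin{equation}
r_{\mathrm{A}}=\left(\frac{\mu^{4}}{2GM\dot{M}^{2}}\right)^{1/7},
\qquad
\nu_{\mathrm{K}}(r_{\mathrm{A}})=\frac{1}{2\pi}\sqrt{\frac{GM}{r_{\mathrm{A}}^{3}}}
\;\propto\;(GM)^{5/7}\,R^{-18/7}\left(\frac{\dot{M}}{B^{2}}\right)^{3/7}.
\end{equation}
We quote these standard expressions only for orientation; they are not the model function adopted in Section~\ref{mffit}. In particular, the implied index of $3/7\simeq0.43$ is considerably steeper than the power-law index $\alpha\approx0.1$ obtained in Section~\ref{plfit}, which is another way of seeing why a second length scale, the boundary-region width $\delta$, is required to describe the observed frequency distribution.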
\label{disc} The lack of correlation between kHz QPO frequencies and source luminosities in the ensemble of neutron star LMXBs is the main motivation of our present search to reveal the existence of a possible link between the lower kHz QPO frequency, $\nu_{1}$ and the accretion-related parameter, $\dot{M}/B^2$. The frequency distribution in the plane of $\nu_{1}$ versus mass accretion rate, $\dot{M}$, cannot account for a correlation between these two quantities in the presence of 15 LMXB sources (Figure~\ref{fig2}b). On the other hand, the lower kHz QPO frequency seems to be correlated with $\dot{M}/B^2$ in the ensemble of LMXBs if different neutron-star sources have similar radii. The comparison of Figure~\ref{fig2}c and Table~\ref{table2} with Figure~\ref{fig4} and Table~\ref{table3} is a clear evidence of the fact that the correlation between $\nu_{1}$ and $\dot{M}/B^2$ is a result of the cumulative effect of the model function fit to individual source data (Figure~\ref{fig3}). The existence of a possible correlation between $\nu_{1}$ and $\dot{M}/B^2$ in the collection of Z and atoll sources, if confirmed by future observations regarding the measurement of masses, radii, and magnetic field strengths of the neutron stars in LMXBs, can shed light on the nature of interaction between neutron star and accretion flow onto its surface. The observational constraints on the values of $M$, $R$, and $B$ together with the new QPO data, which might be available thanks to the forthcoming missions, could be used to test the model function as far as the distribution of kHz QPO frequencies is concerned. As we have shown in Section~\ref{mffit}, the model function depends on the mass and radius of the neutron star, which in turn define a certain region in the frequency versus $\dot{M}/B^2$ plane where kHz QPOs can be generated only for a specific value of magnetic field strength. If the correlation holds in the current ensemble as suggested by the analyses in Sections~\ref{plfit} and \ref{mffit}, the interaction is then magnetospheric in origin with the Alfv\'{e}n radius, $r_{\mathrm{A}}$, being only one of the basic length scales, which determine the QPO frequencies. The usual assumption that kHz QPO frequencies correspond to the Keplerian frequency or the square of it at the Alfv\'{e}n radius \citep{Zhang2004}, cannot account, however, for the \emph{parallel tracks} in the frequency distribution of an individual source. Moreover, neither $\nu_{1}\propto \nu_{\mathrm{A}}$ nor $\nu_{1}\propto \nu_{\mathrm{A}}^2$ can explain the power-law fit as a fair representative of the average run of lower kHz QPO frequency in the ensemble of sources (Figure~\ref{fig2}c). The presence of another length scale, such as the width of the boundary region where the neutron-star magnetosphere interacts with the accretion disk, can provide us with a possible explanation for the observed distribution of frequencies in an individual source as well as the ensemble of sources (Figures~\ref{fig3} and \ref{fig4}). The model parameter, $\delta$, represents the radial width of the boundary region in units of $r_{\mathrm{A}}$. As can be seen in Figure~\ref{fig3}, the model function fit to individual source data for the given mass and radius of the neutron star could be obtained if the change in $\delta$ is taken into account. We expect $\delta$ to vary as a function of $\dot{M}$ and therefore $\dot{M}/B^2$ in a source for which the magnetic field strength, $B$, cannot change. 
The region of the model function fit to data depends on the mass and radius of the neutron star as well as on $\delta$. Once $M$ and $R$ are chosen for the source, a unique value of the magnetic field strength on the neutron-star surface can be assigned for a certain range of $\delta$ (Figure~\ref{fig3}). As we have also addressed in the present work, $\delta$ is expected to change as $\dot{M}$ varies in time; in particular, we expect $\delta$ to increase with $\dot{M}$ (Figure~\ref{fig5}). Otherwise, if $\delta$ were kept constant at a sufficiently small value, the lower kHz QPO frequencies would monotonically increase to attain extremely high values in the range of 1500--2000 Hz (see Figure~\ref{fig3}). It is not so difficult to understand why $\delta$ has to increase with $\dot{M}$, at least for a wide range of $\dot{M}$. The radial width of the boundary region at the magnetopause, as we will discuss in a subsequent paper, is determined by the typical half-thickness of the inner disk, which increases with $\dot{M}$ as predicted by the standard disk model \citep{SS73}. A relatively more elaborate description of the so-called \emph{parallel tracks} in terms of variations in $\delta$ and other physical parameters such as $\Delta \nu$ can be realized within the context of kHz QPO frequencies versus X-ray flux in individual sources \citep{Mendez2000}, provided that the variation of $\delta$ with $\dot{M}$ can be simulated. In a forthcoming paper, we will address the reproduction of kHz QPO data, i.e., the \emph{parallel tracks} of individual sources, through modelling the change of $\delta$ with $\dot{M}$. The additive effect of the individual frequency distributions (Section~\ref{mffit}) on the possible correlation between $\nu_{1}$ and $\dot{M}/B^2$ in the ensemble of sources reveals itself in frequency data that are scattered along a simple power law (Figure~\ref{fig2}c), with the power-law coefficient and index being $A\approx 1$ and $\alpha \approx 0.1$, respectively (Section~\ref{plfit}). Our analysis in Section~\ref{mffit} shows that the source distribution in the $\nu_{1}$ versus $\dot{M}/B^2$ plane also depends on the neutron-star masses and radii (Figure~\ref{fig4}). The remarkable similarity between the source distribution in Figure~\ref{fig2}c and the one in Figure~\ref{fig4}b could be an indication of similar neutron-star masses and radii ($M\gtrsim 1M_{\odot }$ and $R\simeq 11$~km) for all sources in the ensemble. It is very likely, on the other hand, that the magnetic field strength varies from one source to another (Table~\ref{table3}). Regardless of the source classification as Z or atoll, we expect to find the frequency tracks with the highest slopes, in general, close to the leftmost boundary of the distribution, which is determined by relatively small values of $\delta$ (see, e.g., Figures~\ref{fig3} and \ref{fig4}). Although there is no strict range for $\delta$, the typical values for its upper and lower limits estimated by the model function fit to data are compatible with those suggested by earlier studies on magnetically threaded boundary regions \citep{EA2004}. The transient source XTE~J1701--462 is unique among all sources in the ensemble, as it helps us identify the largest possible range for $\delta$. We could perform the model function fit to the data of the other sources using different values of $\delta$, which all lie within this range (Figure~\ref{fig4}). 
The exclusion of the single data point of a lower kHz QPO seen at $\sim 500$~Hz during the Z phase of XTE~J1701--462 from the analysis in Section~\ref{nu1cor} is due to the lack of detailed information about the variation of kHz QPO frequencies throughout the Z phase with the source intensity \citep{Sanna2010}. Although the luminosity difference between Z and atoll phases can be deduced from the variation of maximum rms amplitude or quality factor with the luminosity of the source, details of variation of kHz QPO frequencies with luminosity are absent in the Z phase unlike the atoll phase of the same source \citep{Sanna2010}. Despite this uncertainty, the lower kHz QPOs in the Z phase are observed within an extremely narrow range of frequency ($\sim 600-650$~Hz) apart from the QPO at $\sim 500$~Hz. In Section~\ref{nu1cor}, we include all data in the Z phase except this $\sim 500$~Hz QPO. For simplicity, we assume a common luminosity for each data point in this narrow range. The inclusion of the QPO at $\sim 500$~Hz with the same luminosity would be the extreme case with infinite slope for the Z track from $\sim 500$~Hz to $\sim 650$~Hz. Even in this extreme case, it would be possible to describe both Z and atoll phases of XTE~J1701--462 within the region of model function fit to source data. Choosing a smaller value for $\delta$, such as $0.01$ instead of $0.03$ while shifting the data of the source towards lower values of $\dot{M}/B^2$, we would be able to end up with a similar distribution as in Figure~\ref{fig4}. This would only imply slightly higher magnetic field strength for the same mass and radius of the neutron star. In a more realistic case, we would expect to find the QPO at $\sim 500$~Hz with a lower luminosity as compared to others in the $\sim 600-650$~Hz range. The slope of the Z track would then be finite. The present region of the model function fit to data would remain almost the same and yet comprise both Z and atoll phases as in Figure~\ref{fig4}. In any case, the inclusion of the single data point of the QPO at $\sim 500$~Hz wouldn't alter at all the basic outcome of our present analysis. The existence of the peculiar LMXB XTE~J1701--462 reveals the fact that both Z and atoll characteristics can coexist in the same source provided that the range of luminosity variation is sufficiently large to include values of luminosity as high (low) as those in Z (atoll) sources. The manifestation of both classes in the same source rules out the possibility that the intrinsic source properties such as the mass, radius, spin and magnetic field of the neutron star can play a major role in determining the spectral evolution of the source. The early idea by \citet{HK1989} that the difference between the two spectral classes can be due to the absence (presence) of a magnetosphere in atoll (Z) sources cannot account for the present case of XTE~J1701--462. The transition between high- and low-luminosity phases in the same source suggests that the large variation in $\dot{M}$ can be responsible for the differences between Z and atoll characteristics. The low frequency QPOs, which were detected in Z sources and hence interpreted as the evidence of a magnetosphere, have also been observed in atoll sources \citep{Altamirano2005}. According to the early arguments in the literature, the magnetic field strength, $B$, on the surface of the neutron star has usually been considered to be the main agent that can account for the differences between Z and atoll classes. 
In this paper, we claim that the presence of a magnetosphere is not unique to the Z phase. In accordance with our present findings regarding the frequency distribution of kHz QPOs, the magnetosphere-disk interaction is also efficient in the atoll phase. The kHz QPOs can therefore be generated within the magnetic boundary region, which is presumably stable with respect to specific accretion phases such as Z and atoll. These phases are mainly determined by $\dot{M}$. As shown in Table~\ref{table3}, the surface magnetic field strength of XTE~J1701--462 is less than or close to the value of $B$ in atoll sources such as 4U~1820--30, 4U~1735--44, 4U~1728--34, 4U~1705--44, and 4U~1702--42. This is very clear evidence that $B$ has nothing to do with the X-ray spectral state of the source. In addition to XTE~J1701--462, all sources mentioned above are candidates for exhibiting both Z and atoll characteristics if their range of variation in $\dot{M}$ becomes large enough to accommodate the properties of both classes. The change in the properties of the accretion flow around the neutron star can be the reason for the difference in QPO coherence and rms amplitude between the Z and atoll phases \citep{Mendez2006,Sanna2010}. Our analysis suggests that both $\dot{M}$ and $B$ are important in determining the range of kHz QPO frequencies. Unlike the previous arguments in the same context \citep{WZ1997,Zhang2004}, however, we claim that the frequency distribution in an individual source, in addition to the Alfv\'{e}n radius, strongly depends on the length scale of the boundary region, $\delta$, which is also expected to vary with $\dot{M}$. It is highly likely within the model we propose here that $\dot{M}$ has a two-sided effect on QPOs. On the one hand, $\dot{M}$ modifies the length scales, which are mainly responsible for the range of QPO frequencies; on the other hand, it leads to a number of changes in the physical conditions of the accretion flow through the magnetosphere threading the boundary region. The density and temperature of the matter in the magnetosphere can be different in accretion regimes that differ from each other in $\dot{M}$. In this sense, the Z and atoll phases may reflect different accretion regimes. In our picture, the modulation mechanism takes place in the boundary region, which forms the base of the funnel flow extending from the innermost disk radius to the surface of the neutron star. As compared to the soft X-ray emission from the inner disk, the relatively hard X-ray emission associated with the funnel flow may then appear as the frequency-resolved energy spectrum characterizing the X-ray variability in LMXBs \citep{Gilfanov2003}. The possible correlation of the lower kHz QPO frequencies with $\dot{M}/B^2$ (Section~\ref{nu1cor}) indicates relatively strong magnetic fields for Z sources in comparison with atoll sources (Table~\ref{table3}). Apart from XTE~J1701--462, which exhibits both Z and atoll characteristics, this tendency may have a simple explanation (different from the arguments that invoked the magnetic field as a direct explanation of the Z and atoll characteristics before the emergence of XTE~J1701--462). The fact that some sources, such as Cyg~X-2, GX~5--1, GX~17+2, and Sco~X-1, are observed only in the Z phase (high $\dot{M}$) and never in the atoll phase could be due to the propeller effect that would arise when the Alfv\'{e}n radius exceeds the co-rotation radius at sufficiently low mass accretion rates. 
If so, we would expect to observe most of the high-$B$ sources in the high-$\dot{M}$ regime, in accordance with the present statistics on the number of Z sources in comparison with that of atoll sources within the population of neutron star LMXBs.
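The propeller argument above can be made quantitative with the standard condition $r_{\mathrm{A}}>r_{\mathrm{co}}$. The sketch below evaluates the critical mass accretion rate below which the propeller would set in, using the textbook Alfv\'{e}n and co-rotation radii; the neutron-star mass, radius, spin frequency, and the two field strengths are illustrative placeholder values, not parameters derived in this work.
\begin{verbatim}
# Illustrative propeller-onset estimate (CGS units); all stellar
# parameters below are placeholders, not values fitted in this paper.
import math

G = 6.674e-8                      # cm^3 g^-1 s^-2
M = 1.4 * 1.989e33                # g,  assumed neutron-star mass
R = 1.0e6                         # cm, assumed radius
NU_SPIN = 300.0                   # Hz, assumed spin frequency

def alfven_radius(mdot, b_gauss):
    """r_A = (mu^4 / (2 G M Mdot^2))^(1/7), with mu = B R^3."""
    mu = b_gauss * R**3
    return (mu**4 / (2.0 * G * M * mdot**2))**(1.0 / 7.0)

def corotation_radius():
    """r_co = (G M / Omega^2)^(1/3)."""
    omega = 2.0 * math.pi * NU_SPIN
    return (G * M / omega**2)**(1.0 / 3.0)

def mdot_propeller(b_gauss):
    """Mdot below which r_A > r_co and the propeller may halt accretion."""
    mu = b_gauss * R**3
    return mu**2 / math.sqrt(2.0 * G * M * corotation_radius()**7)

for B in (1e8, 1e9):              # atoll-like vs Z-like field strengths
    print(f"B = {B:.0e} G: r_A(1e17 g/s) = {alfven_radius(1e17, B)/1e5:.0f} km, "
          f"r_co = {corotation_radius()/1e5:.0f} km, "
          f"Mdot_prop ~ {mdot_propeller(B):.2e} g/s")
\end{verbatim}
For these assumed values, the higher field strength pushes the propeller threshold up by two orders of magnitude in $\dot{M}$, in line with the expectation that high-$B$ sources should preferentially be observed in the high-$\dot{M}$ (Z) regime.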
16
9
1609.01918
1609
1609.03909_arXiv.txt
{} {Using a generalized Bayesian inference method, we aim to explore the possible interior structures of six selected exoplanets for which planetary mass and radius measurements are available in addition to stellar host abundances: HD~219134b, Kepler-10b, Kepler-93b, CoRoT-7b, 55~Cnc~e, and HD~97658b. We aim to investigate the importance of stellar abundance proxies for the planetary bulk composition (namely Fe/Si and Mg/Si) in the prediction of planetary interiors.} {We performed a full probabilistic Bayesian inference analysis to formally account for observational and model uncertainties while obtaining confidence regions of structural and compositional parameters of core, mantle, ice layer, ocean, and atmosphere. We determined how sensitively our parameter predictions depend on (1) different estimates of bulk abundance constraints and (2) different correlations of bulk abundances between planet and host star.} {The possible interior structures and correlations between structural parameters differ depending on data and data uncertainty. The strongest correlation is generally found between the size of the rocky interior and the water mass fraction. Given the data, possible water mass fractions are high, even for the potentially rocky planets (\mbox{HD~219134b}, Kepler-93b, CoRoT-7b, and 55~Cnc~e, with estimates up to 35~\%, depending on the planet). The interior of Kepler-10b is the best constrained, with possible interiors similar to that of Earth. Among all tested planets, only the data of Kepler-10b and Kepler-93b allow us to assign a higher probability to the planetary bulk Fe/Si being stellar rather than extremely sub-stellar.} {Although the possible ranges of interior structures are large, structural parameters and their correlations are constrained by the sparse data. The probability for the tested exoplanets to be Earth-like is generally very low. Furthermore, we conclude that different estimates of planet bulk abundance constraints mainly affect mantle composition and core size.}
\label{Intro} \sloppy The characterization of exoplanet interiors is key to understanding planet diversity. Here we focus on characterizing six exoplanets for which mass, radius, and also refractory element abundances of the stellar hosts are known. These exoplanets are \mbox{HD~219134b}, Kepler-10b, Kepler-93b, CoRoT-7b, 55~Cnc~e, and \mbox{HD~97658b}. In order to make meaningful statements about their interior structures, it is mandatory not only to find a few interior realizations that explain the data, but also to quantify the spread in possible interior structures that are in agreement with the data and with the data and model uncertainties \citep{rogers2010,dorn,dornA}. This spread is generally large, due to the inherent degeneracy, i.e., different interior realizations can have identical mass and radius. We therefore apply the methodology presented by \citet{dornA} that employs a full probabilistic Bayesian inference analysis using a Markov chain Monte Carlo (McMC) method. This method is validated against Neptune and was previously validated by \citet{dorn} against the terrestrial solar system planets. It allows us to determine confidence regions for structural and compositional parameters of core, mantle, ice layer, ocean, and atmosphere. Planet bulk abundance constraints, in addition to mass and radius, are crucial to reduce the generally high degeneracy in interior structures \citep{dorn,grasset09}. Stellar abundances in relative refractory elements (Fe/Si and Mg/Si) were suggested to serve as a proxy for the planetary abundance \citep[e.g.,][]{bond,elser,johnson,thiabaud}. Spectroscopic observations of relative photospheric abundances of host stars may allow us to determine their importance for the interior characterization of planetary companions, as outlined by \citet{santos}. We proceed along these lines and study the sensitivity of interior parameter predictions to different planet bulk abundances by testing (1) different stellar abundance estimates and (2) different correlations between stellar and planetary abundance ratios. Over the last few decades, a variety of telescopes and spectrographs have been utilized to measure stellar abundances, which affects the resolution and signal-to-noise (S/N) ratios of the data. In addition, the techniques used to determine the element abundances differ. Moreover, each research group chooses its own line list, number of ionization stages included, solar atmosphere models, and adopted solar abundances. All of these differences result in element abundance measurements that are fundamentally on different baselines and which do not overlap to within error (Hinkel et al. 2014). Here, we investigate whether or not these discrepancies significantly affect the determination of interior structure. The six selected exoplanets for which host star abundances are available have bulk densities in the range from 6.8 g/cm$^{3}$ (Kepler-93b) to 3.4 g/cm$^{3}$ (HD~97658b) and span across the proposed transition range between mostly rocky and mostly non-rocky exoplanets \citep[e.g.,][]{lopez,rogers15}. Terrestrial-type planets are generally thought to be differentiated with an iron-rich core, a silicate mantle, and a crust, whereas volatile-rich planets are thought to also contain significant amounts of ices and/or atmospheres \citep[e.g., see][]{howe}. Direct atmosphere characterization for the selected planets is not available, except for the only recently characterized atmosphere of 55~Cnc~e \citep{Tsiaras, demory}. 
For super Earths in general, the diversity of atmospheric structures and compositions are mostly theoretical \citep[e.g., see][for a review]{burrows}. Studies of mass-radius relations generally consider H$_{2}$-He atmospheres \citep[e.g.,][]{rogers2011, fortney} and H$_2$O as liquid and high pressure ice \citep[e.g.,][]{valencia07a, seager2007}. However, compositional diversity of atmospheres and ices in planets may exceed the one found in our solar system \citep[e.g.,][]{newsom}. But the sparse observational data on exoplanets will only allow us to constrain few structural and compositional parameters. In that light, we assume a general planetary structure of a pure iron core, a silicate mantle, a water layer, and an atmosphere made of H, He, C, and O. Given the data, we constrain core size, mantle thickness and composition, mass of water, and key characteristics of the atmosphere (mass fraction, luminosity, and metallicity). Here, we use the term atmosphere synonymously with gas layer, for which there is commonly a distinction between a convective (i.e., envelope) and a radiative part (i.e., atmosphere). In order to compute the structure for a planet of a given interior, we employ model I from \citet[][]{dornA}. Namely, we employ state-of-the-art structural models that compute irradiated atmospheres and self-consistent thermodynamics of core, mantle, high-pressure ice and ocean. This paper is organized as follows: in Section \ref{prev} we review previous interior studies on the planets of interest. In Section \ref{Methodology} we introduce the inference strategy, model parameters, data and the physical model that links data and model parameters. In Section \ref{Results}, we apply our method on the six selected exoplanets and test different stellar abundance proxies for constraints on planet bulk abundance. We end with discussion and conclusions. \begin{figure}[] \centering \includegraphics[width = .5\textwidth]{plot_HINKEL_MR.png} \caption{Masses and radii of the six exoplanets of interest, including two scenarios for 55Cnc e (see main text). The terrestrial solar system planets are also plotted against idealized mass-radius curves of pure iron, pure magnesium silicate, and a 2-layered composition of 5~\% magnesium silicate and 95~\% water. \label{MR}} \end{figure}
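Before turning to the results, a deliberately oversimplified sketch may help to illustrate the nature of the inference problem: even for a toy two-layer planet (a constant-density iron core plus a silicate mantle, with no compression and no ice or gas layer), a mass and a radius with realistic uncertainties constrain the core mass fraction only within a broad posterior range. The densities, the synthetic ``observed'' planet, and the rejection-sampling scheme below are illustrative assumptions; they have nothing to do with the equations of state, priors, or data used in the actual analysis.
\begin{verbatim}
# Toy two-layer interior inference: constant-density core and mantle,
# constrained only by mass and radius.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
RHO_CORE, RHO_MANTLE = 8.0, 3.3          # g/cm^3, toy uncompressed densities
M_EARTH = 5.97e27                        # g

def toy_radius(mass_g, cmf):
    """Radius (cm) of a constant-density core + mantle sphere."""
    volume = mass_g * (cmf / RHO_CORE + (1.0 - cmf) / RHO_MANTLE)
    return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)

# Synthetic "observations": a 5 M_Earth planet with a 30% core, observed
# with 5% mass and 2% radius uncertainties (roughly the best current data).
m_true, cmf_true = 5.0 * M_EARTH, 0.30
r_true = toy_radius(m_true, cmf_true)
sig_m, sig_r = 0.05 * m_true, 0.02 * r_true

# Rejection sampling of the core-mass-fraction posterior (uniform prior).
n = 200_000
cmf = rng.uniform(0.0, 1.0, n)
mass = rng.normal(m_true, sig_m, n)
like = np.exp(-0.5 * ((toy_radius(mass, cmf) - r_true) / sig_r) ** 2)
post = cmf[rng.uniform(0.0, 1.0, n) < like / like.max()]
lo, med, hi = np.percentile(post, [16, 50, 84])
print(f"core mass fraction = {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f})")
\end{verbatim}
In the full analysis this degeneracy is reduced by the additional abundance constraints, which is precisely the sensitivity explored in the following sections.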
\label{Discussion} The success of an inference method is subject to the limitations and assumptions of the forward function $g(\cdot)$. Firstly, we consider pure iron cores, which may lead to a systematic overestimation of the core density and thus an underestimation of the core size, if volatile compounds in the core are significant. The trade-off between predicted and independently inferred core sizes can be on the order of a few to tens of percent, as shown for the terrestrial solar system planets in \citet{dorn}. Secondly, our approach assumes sub-solidus conditions for core and mantle, as well as perfectly known EoS parameters for all considered compositions. For the six exoplanets considered here, the pressures and temperatures exceed the ranges for which these parameters can be measured in the laboratory. Clearly, extrapolations introduce uncertainty. Thirdly, we have assumed a pure water composition for the ice layer, but other compounds such as CO, CO$_2$, CH$_4$, NH$_3$, etc. are also possible, since the additional elements are relatively abundant. We have used water as a proxy for these volatiles in general because (1) oxygen is more abundant than carbon and nitrogen in the universe, and (2) water condenses at higher temperatures than ammonia and methane. Fourthly, our atmospheric model uses the prescribed opacities of \citet{JIN2014}, which are suitable for solar abundances. In this respect, the prescription is not self-consistent with the non-solar metallicity cases we present in this work. We have checked that using different infrared and/or visible opacities can introduce errors in radius of $\sim$5~\%. Models that compute line-by-line opacities with their corresponding atmospheric abundances should be developed in the future to compute planetary radii in a self-consistent way. Another aspect is our assumption of ideal gas behavior. We have checked that for atmospheric mass fractions (\menv/$M$) of up to 1~\%, the difference in radius between using the \citet{saumon} EoS and the ideal gas law (for H-He atmospheres) can reach 10~\%. For the vast majority of our results, atmospheric mass fractions are smaller than 0.1~\%, and thus the ideal gas assumption does not introduce a significant error. Since our interior models are static, evolutionary effects on the interior structure are not accounted for in the forward model. As demonstrated in Section~\ref{evoloss}, constraints on evolutionary processes such as thermal mass loss of gaseous layers can be roughly approximated by post-analysis considerations. We have used very approximate analytic scaling laws from \citet{lopez} and \citet{JIN2014} to rule out gas-rich planet realizations.
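As a rough cross-check of the atmospheric assumptions discussed above, the snippet below evaluates the isothermal pressure scale height of a warm super-Earth and the fractional radius contribution of a thin H/He layer spanning a handful of scale heights. All inputs are assumed round numbers for illustration; they are not the atmospheric models adopted in this work.
\begin{verbatim}
# Order-of-magnitude scale-height check (SI units); illustrative values only.
G = 6.674e-11
K_B, M_H = 1.381e-23, 1.673e-27
M_EARTH, R_EARTH = 5.972e24, 6.371e6

m_p, r_p = 5.0 * M_EARTH, 1.5 * R_EARTH   # assumed bulk planet
t_eq, mu = 1000.0, 2.35                   # K; mean molecular weight of H/He

g = G * m_p / r_p**2
h = K_B * t_eq / (mu * M_H * g)           # isothermal pressure scale height
n_sh = 7.0                                # ~ln(P_base/P_photosphere), assumed
print(f"g = {g:.1f} m/s^2, H = {h / 1e3:.0f} km, "
      f"dR/R ~ {n_sh * h / r_p:.1%}")
\end{verbatim}
Even a dynamically negligible gas mass can thus contribute of order ten per cent to the apparent radius, which is why the opacity and equation-of-state choices above matter at the quoted 5--10\% level.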
16
9
1609.03909
1609
1609.08700_arXiv.txt
We infer the central mass distributions within 0.4--1.2 disc scale lengths of 18 late-type spiral galaxies using two different dynamical modelling approaches -- the Asymmetric Drift Correction (ADC) and the axisymmetric Jeans Anisotropic Multi-Gaussian expansion (JAM) model. ADC adopts a thin-disc assumption, whereas JAM performs a full line-of-sight velocity integration. We use stellar kinematic maps obtained with the integral-field spectrograph $\sauron$ to derive the corresponding circular velocity curves from the two models. To find their best-fit values, we apply a Markov Chain Monte Carlo (MCMC) method. The ADC and JAM modelling approaches are consistent to within a 5\% uncertainty when the ordered motions are significant compared to the random motions, i.e., when $\overline{v_{\phi}}/\sigma_R$ is locally greater than 1.5. Below this value, the ratio $v_\mathrm{c,JAM}/v_\mathrm{c,ADC}$ gradually increases with decreasing $\overline{v_{\phi}}/\sigma_R$, reaching $v_\mathrm{c,JAM}\approx 2 \times v_\mathrm{c,ADC}$. Such conditions indicate that the stellar masses of the galaxies in our sample are not confined to their disc planes and likely have a non-negligible contribution from their bulges and thick discs.
\label{S:intro} The non-Keplerian rotation curves of spiral galaxies provided the first observational evidence that galaxies are embedded in extensive dark matter haloes (\citealt{Bosma1978, Rubin1983, vanAlbada1985, Begeman1987}). Historically, the 21-cm emission from atomic neutral hydrogen gas (HI) has been the main tool to derive galaxy rotation curves, because of its capability to trace the gravitational field beyond the optical stellar disc. However, HI is less useful in constraining the rotation curve in the central parts of discs, usually due to insufficient spatial resolution and the lack of HI gas in the inner parts of galaxies (e.g., \citealt{Noordermeer2007b}). The interstellar medium in the centre is instead dominated by gas in the molecular and ionised phases (e.g., \citealt{Leroy2008}). Unfortunately, the rotation curves derived through CO emission, the common tracer for the molecular gas distribution, often show non-axisymmetric signatures such as wiggles (\citealt{Wada2004,Colombo2014b}). Hot ionised gas has the additional disadvantage that the observed rotation alone is often insufficient to trace the total mass distribution and that its velocity dispersion needs to be included. This velocity dispersion is generally influenced by a typically unknown contribution from non-gravitational effects such as stellar winds and shocks (e.g., \citealt{weijmans2008}). Some of these shortcomings of gas tracers can be overcome through careful correction and analysis. However, a broader shortcoming of gas tracers becomes apparent because of their dissipative nature. The gas is easily disturbed by perturbations in the plane from, for example, a bar or spiral arm (\citealt{Englmaier1999, Weiner2001,Kranz2003}). Gas also settles in the galaxy disc plane (or polar plane) and is thus less sensitive to the mass distribution perpendicular to it. The typical velocity dispersions of the neutral and ionised gas, respectively, are $\sim$10 \kms (\citealt{Casertano1983,Caldu-Primo2013}) and $\sim$20 \kms (e.g. \citealt{Fathi2007}). Instead of gas, stars can be used as a tracer of the underlying gravitational potential. Stars are present in all galaxy types, are distributed in all three dimensions, and are collisionless, making them less susceptible to perturbations. The random motion of the collisionless stars typically ranges from 20 \kms to above 300 \kms (\citealt{Dehnen1998,Martinsson2013b,Cappellari2013,Rys2013,Ganda2006}). This implies a scale height difference of over an order of magnitude between the gaseous and stellar components (e.g., \citealt{Koyama2009,Bottema1993}). However, we need to measure both the ordered and random motions of stars before they can be used as a dynamical tracer. Their random motion can be different in all three directions, an effect known as the velocity anisotropy. This anisotropy requires more challenging observational and modelling techniques to uncover the total mass distribution. Nonetheless, these techniques are becoming available. Integral-field spectrographs like \sauron\ (\citealt{Bacon2001}), which is used in this study, allow us to extract high-quality stellar kinematic maps and enable us to perform dynamical modelling. A common approach for spiral galaxies is to apply the asymmetric drift correction (ADC; \citealt{Binney2008}) to infer the circular velocity curve from the measured stellar mean velocity and velocity dispersion profiles. 
This approach is straightforward since the velocity and dispersion profiles can be obtained from long-slit spectroscopy, and no line-of-sight integration is required due to an underlying thin-disc assumption. Using this method requires assuming both the disc inclination and the magnitude of the velocity anisotropy. As such, the ADC approach is widely adopted in studies that use the stellar kinematics to infer the circular velocity curve. For example, when investigating the Tully-Fisher relation for earlier-type spirals (e.g., \citealt{Bottema1993}, \citealt{Neistein1999, Williams2010}), the speed of bars (e.g., \citealt{Aguerri1998, ButaZhang2009}), as well as the inner distribution of dark matter (e.g., \citealt{Kregel2005b,weijmans2008}). In this paper, we adopt an alternative approach, namely fitting a solution of the axisymmetric Jeans equations to stellar mean velocity and velocity dispersion fields to infer mass distributions. We apply this approach to the data acquired with the integral-field spectrograph SAURON from inner parts of 18 late-type Sb--Sd spiral galaxies. These Jeans models are less general than orbit-based models (e.g., Schwarzschild's method; \citealt{vandenBoschetal2008}) but are far less computationally expensive while still providing a good description of galaxies dominated by stars on disc-like orbits (e.g., \citealt{Cappellari2008}). The applicability of these fitted Jeans solutions even applies to dynamically hot systems such as lenticular galaxies. The Jeans models take into account the two-dimensional information in the stellar kinematic maps as well as integration along the line-of-sight. We also compare the resulting circular velocity curves with those obtained through ADC to investigate the validity of the assumptions underlying the simpler ADC approach. In Section~\ref{S:observations}, we summarise the sample including the \sauron\ observations and near-infrared imaging. The Jeans and ADC modelling approaches are described in Section~\ref{S:method} and applied via Markov Chain Monte Carlo technique in Section~\ref{S:analysis}. We then compare the circular velocity curves from both modelling approaches in Section~\ref{S:vcirc} and discuss the possible reasons for the significant differences we find in Section~\ref{S:discussion}. We draw our conclusions in Section~\ref{S:concl}.
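For reference, the asymmetric drift correction discussed above follows from the radial Jeans equation evaluated in the disc plane. A commonly quoted form is
\begin{equation}
v_{\mathrm{c}}^{2} \;=\; \overline{v_{\phi}}^{2}
\;+\; \sigma_{R}^{2}\left[\frac{\sigma_{\phi}^{2}}{\sigma_{R}^{2}} - 1
- \frac{\partial \ln \left(\nu\,\sigma_{R}^{2}\right)}{\partial \ln R}\right]
\;-\; \frac{R}{\nu}\,\frac{\partial\left(\nu\,\overline{v_{R}v_{z}}\right)}{\partial z},
\end{equation}
where $\nu$ is the density of the tracer and the last term vanishes for a velocity ellipsoid aligned with the cylindrical coordinate system ($\overline{v_{R}v_{z}}=0$). This generic expression is given here only for orientation; the exact implementation adopted in this work, including the treatment of the anisotropy and of the tilt of the velocity ellipsoid, is described in Section~\ref{S:method}.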
\label{S:discussion} \subsection{Assumptions in the mass models} \label{SS:assumptions} \emph{Mass-follows-light (JAM):} Within the small radial range covered by the stellar kinematics, typically at $< R_e/3$, we expect the variations in the mass-to-light ratio to be small. \cite{Kent1986} provides evidence that in a sample of 37 Sb-Sc galaxies, most of their rotation curves computes from the luminosity profiles assuming constant mass-to-light ratio provide a good match to the observed curve out to the radius where the predicted curve turns over. Additionally, \citet{Kregel2005a} investigate the effects of realistic radially varying mass-to-light ratios and find the overall effect to only be $\sim$ 10\% variations on the derived kinematic properties within $R_e/2$. \emph{Constant anisotropy in the meridional plane (JAM):} Whereas the velocity anisotropy $\sigma_\phi/\sigma_R$ in the equatorial plane is inherent in the solution of the axisymmetric Jeans equations (Section~\ref{SS:axijeans}), the velocity anisotropy $\sigma_z/\sigma_R$ in the meridional plane is a free parameter, given as $\beta_z = 1 - \sigma^2_z/\sigma^2_R$. There is little evidence for strong variation of $\beta_z$. For example, \citet{Bottema1993} already argued that for spirals the $\sigma_z/\sigma_R$ is constant at approximately 0.6, close to the measured value of $0.53\pm0.07$ in the solar neighbourhood \citep{Dehnen1998, Mignard2000}. This ratio of the anisotropy is measured in a few spirals of type Sa to Sbc \citep{Gerssen1997,Gerssen2000, Shapiro2003}, yielding slightly larger constant values between 0.6 and 0.8. Based on long-slit spectra for a sample of 17 edge-on Sb--Scd spirals, \citet{Kregel2005a} also adopt constant values, although slightly lower: 0.5 to 0.7. While radial variation of the anisotropy is not excluded, a constant value of $\beta_z$ should be sufficient for our analysis. \emph{Shape of the velocity ellipsoid (JAM, ADC):} The ADC and JAM models in our study assume that the velocity ellipsoid in the meridional plane is aligned with the cylindrical coordinate system so that $\overline{v_R v_z}=0$. Additionally, the ADC approach may allow for a tilt of the velocity ellipsoid through the parameter $\kappa$ having intermediate values between $\kappa=0$ and $\kappa=1$, corresponding to the cylindrical and spherical coordinate system, respectively. However, we expect that the resulting circular velocity curve is only weakly dependent on this tilt, because most of the stellar mass is concentrated toward the equatorial plane, particularly for late-type spiral galaxies. In that case, the assumption of axisymmetry gives $\overline{v_R v_z}=0$. \emph{Dust (JAM, ADC):} The surface brightness distribution of the spiral galaxies that is used in both JAM and ADC modelling approaches can be strongly affected by extinction due to dust. We have tried to minimise the effects of dust in various ways (G09): (i) selecting galaxies with intermediate inclinations so that they are not edge-on (where dust extinction is strongest) or face-on (where the stellar velocity dispersion would be significantly below the spectral resolution), (ii) inferring the surface brightness distribution from images in the near-infrared where the extinction is significantly smaller than in the optical, and (iii) fitting smooth, analytical MGE profiles to the radial surface brightness profile after azimuthally averaging over annuli to suppress deviations caused by bars, spiral arms, and regions obscured by dust. 
The stellar kinematics are obtained from integral-field spectroscopy in the optical and thus could also be affected if the (giant) stars that contribute along the line of sight with different motions are affected by the dust in different ways. For example, if dynamically colder stars closer to the disc plane are relatively more obscured than dynamically hotter stars above the disc plane, the resulting combined ordered-over-random motion could be biased to lower values. The effects of dust are strongly reduced for lines of sight only a few degrees away from edge-on (\citealt{Baes2003b}), and the effects of extinction on the line-of-sight velocity are negligible: $\sigma_z$ is only 1.3\% higher than in the dustless case (\citealt{Bershady2010}). Given the selection in our sample, the effects of dust on the inferred circular velocity curves from both modelling approaches are expected to be minimal. \emph{Thin-disc (ADC):} Circular velocity curves of spirals nearly always come from (cold) gas, which is naturally in a thin disc. The stellar discs of these late-type spiral galaxies are also believed to be thin, with inferred intrinsic flattening $q \sim 0.14$ \citep[e.g.,][]{Kregel2002}. Even the bulges in late-type spiral galaxies are different from the 'classical' bulges in lenticular galaxies. S\'ersic profile fits to their surface brightness ($I(R) \propto \exp(-R^{1/n})$) show that towards later types, the bulges are smaller in size, have profiles closer to exponential ($n=1$) than de Vaucouleurs ($n=4$), and are flatter (e.g., G09). Hence, the thin-disc assumption adopted in the ADC approach also seems to be reasonable. \subsection{Bulges and thick discs} \label{SS:bulgesandthickdiscs} Independent of the nature of the dynamically hot stellar (sub)system, it seems that the local value of the ordered-over-random motion is the key to understanding the ADC and JAM discrepancy. To explore this, in Fig.~\ref{fig:resRatio}, we plot the radial velocity ratio $v_{\mathrm{c,JAM}}/v_{\mathrm{c,ADC}}$ of the galaxies, as well as the local luminosity fraction of the fitted exponential disc compared to the total luminosity as a thin solid curve. For the Sb--Sbc galaxies, the discrepancy in the estimated velocity ratio between the JAM and ADC models seems to be larger in the inner parts ($R<R_e/5$), where the luminosity is dominated by the presence of the bulge, and decreases outwards. For the Scd--Sd galaxies the velocity ratio stays roughly constant and in a few cases increases towards larger radii, which could be due to the presence of a thick(er) disc component. In the left panel of Fig.~\ref{fig:VSrel}, we plot the velocity ratios $v_{\mathrm{c,JAM}}/v_{\mathrm{c,ADC}}$ obtained from Fig.~\ref{fig:resRatio} for all galaxies as measured at different radii versus the ordered-over-random motion $\overline{v_{\phi}}/\sigma_R$ (the deprojected rotation vs. radial velocity dispersion) at the same radii. In the right panel of Fig.~\ref{fig:VSrel}, we also show the ratio $\ensuremath{\sigma^{2}_R/\overline{v_\phi}^2}$ (from the radial velocity dispersion profiles and deprojected rotation) as measured at different radii versus the mass ratio $M_{\mathrm{JAM}}/M_{\mathrm{ADC}}=\vcjam^2/\vcadc^2$ at the same radii. The discrepancy between the ADC- and JAM-inferred enclosed masses becomes gradually larger for increasing $\ensuremath{\sigma^{2}_R/\overline{v_\phi}^2}$. The impact of the models' discrepancy is larger when it is converted into a mass ratio. 
It also has the same order of magnitude as the expected uncertainty of the epicycle approximation ($\Delta\sim \ensuremath{\sigma^{2}_R/\overline{v_\phi}^2}$, \citealt{Vandervoort1975}), which is actually applied in ADC (but not in JAM) approach. We see that the circular velocity measurement of both modelling approaches are consistent for $\overline{v_{\phi}}/\sigma_R$ $\gtrsim 1.5$, corresponding to those radii in the Sb--Sbc galaxies where the disc luminosity is dominating over the bulge luminosity, i.e., where the thin solid curve in Fig.~\ref{fig:resRatio} approaches unity. Since some of our galaxies do not have well fitting models, we mark their data with a different symbol in order to check for possible biases. However, even after their removal, the general trends of the velocity and mass ratios are preserved. The grey filled circles correspond to high-quality ADC and JAM model fits data, while the open pentagon and the open star correspond to low-quality ADC and JAM data, respectively. The black filled squares show the median of the distribution of the data per bin. The error bar corresponds to the uncertainties of the median value at the 25th and 75th percentiles of the distribution at each bin (see also Table \ref{tab:calc5}). \begin{table} \begin{center} \caption{Median values of the velocity and mass ratios with corresponding (25th and 75th percentile) uncertainties at each bin in Fig.\ref{fig:VSrel}: (1) Ratio between the deprojected rotation and radial velocity dispersion; (2) Velocity ratio between JAM and ADC model and their uncertainty at fixed $\overline{v_{\phi}}/\sigma_R$ ratio; (3) Ratio between the deprojected velocity and radial velocity dispersion; (4) Mass ratio between JAM and ADC model and their uncertainty at fixed $\ensuremath{\sigma^{2}_R/\overline{v_\phi}^2}$ ratio; } \tabcolsep=1.2mm \begin{tabular}{|*{2}{l}|*{2}{l}|} \hline \multicolumn{4}{|c|}{JAM-ADC conversion factors} \\ \hline $\overline{v_{\phi}}/\sigma_R$ & $\frac{v_{\mathrm{c,JAM}}}{v_{\mathrm{c,ADC}}}$ & $\sigma^2_{R}/\overline{v_{\phi}}^2$ & $\frac{M_{\mathrm{JAM}}}{M_{\mathrm{ADC}}}$\\ (1) & (2) & (3) & (4) \\ \hline 0.1 & $1.550_{-0.089}^{+0.168}$ & 0.4 & $1.187_{-0.123}^{+0.055}$ \\ 0.3 & $1.666_{-0.123}^{+0.147}$ & 0.8 & $1.425_{-0.224}^{+0.424}$ \\ 0.5 & $1.593_{-0.146}^{+0.025}$ & 1.3 & $1.809_{-0.184}^{+0.227}$ \\ 0.7 & $1.370_{-0.073}^{+0.157}$ & 2.4 & $1.959_{-0.243}^{+0.551}$ \\ 0.9 & $1.345_{-0.089}^{+0.072}$ & 4.2 & $2.446_{-0.536}^{+0.162}$ \\ 1.1 & $1.244_{-0.112}^{+0.184}$ & 8.3 & $2.606_{-0.414}^{+0.603}$ \\ 1.3 & $1.116_{-0.031}^{+0.033}$ & 14.1 & $2.948_{-0.455}^{+0.404}$ \\ 1.5 & $1.090_{-0.036}^{+0.026}$ & 24.8 & $2.648_{-0.504}^{+0.215}$ \\ 1.7 & $1.004_{-0.044}^{+0.092}$ & 49.6 & $2.627_{-0.349}^{+0.414}$ \\ 1.9 & $0.933_{-0.020}^{+0.037}$ & 85.5 & $2.448_{-0.347}^{+0.582}$ \\ 2.0 & $0.953_{-0.007}^{+0.030}$ & 150.1 & $2.982_{-0.730}^{+0.719}$ \\ \hline \end{tabular} \label{tab:calc5} \end{center} \end{table} Since we are probing the inner parts of these spiral galaxies it might well be the presence of bulges and/or thick stellar discs that causes a break-down of the thin-disc/epicycle approximation in ADC. As can be seen from Fig.~\ref{fig:Vc_a}, the stellar rotation (first column) is of the same order and often even lower than the stellar dispersion (second column). The small ordered-over-random motion values implies that the stars are far from dynamically cold. 
Detailed photometric studies indicate that most disc galaxies contain a thick disc \citep[e.g.][]{Dalcanton2002,Seth2005, Comeron2011}, and in low-mass galaxies with circular velocities $<$\,120\,\kms, like the Scd--Sd galaxies in this sample, thick disc stars can contribute nearly half the luminosity and dominate the stellar mass (\citealt{Yoachim2006}). Moreover, recent hydrodynamical simulations that reproduce thick discs show that their typical scale lengths are around 3--5 kpc \citep[e.g.][]{Domenech-Moral2012}, i.e., twice the range typically covered by our analysis. Additionally, various earlier studies have qualitatively indicated that the ADC approach might not be suitable in the case of stellar systems that are (locally) not dynamically cold \citep[e.g.,][]{Neistein1999, Bedregal2006,Williams2010}. We also find that the discrepancy between the two models is proportional to the error of the epicycle approximation in ADC (see \citealt{Vandervoort1975}), where $\ensuremath{\sigma^{2}_R/\overline{v_\phi}^2} \sim M_{\mathrm{JAM}}/M_{\mathrm{ADC}}$. \cite{Davis2013} show that rotation curves of early-type galaxies based on the dynamically cold molecular gas are well traced by $\vcjam$, and that the $H_{\beta}/\mathrm{OIII}$ rotation curves from their \sauron\ data exhibit noticeable asymmetric drift. They also find that the correction of the stellar rotation curve using the ADC approach is consistent with $\vcjam$. However, they note that there is some indication that this becomes less true as the velocity dispersion approaches the rotation speed, as we find in this study. It is not excluded that there are strong streaming motions in the centres of the \sauron\ galaxies due to the presence of non-axisymmetric features, which would prevent an accurate asymmetric-drift correction. Nevertheless, some of the discrepancy might arise from a JAM overestimation of the M/L ratio. \cite{Lablanche2012} showed that the overestimation can be severe in face-on barred systems. \cite{Davis2013} also discussed the possibility of an overestimation of the $M/L$ ratio due to disc averaging over various stellar populations. We do not exclude the latter, since the region probed by our data is in the central part of the galaxies, where multiple stellar populations can be present. The former case should not influence our measurements, since all of our galaxies have intermediate inclinations. In future studies, we will explore this problem further using a large statistical sample of galaxies and morphologies.
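For readers who wish to apply these results, the sketch below shows one way in which the median conversion factors tabulated above could be used to rescale an ADC-based circular velocity at a measured $\overline{v_{\phi}}/\sigma_R$. It is a schematic helper rather than code used in this work, and it ignores the quoted percentile uncertainties.
\begin{verbatim}
# Interpolate the median v_c,JAM / v_c,ADC conversion factors (Table above)
# at a measured v_phi / sigma_R; a convenience sketch, not code from this work.
import numpy as np

VPHI_OVER_SIGR = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1,
                           1.3, 1.5, 1.7, 1.9, 2.0])
VC_JAM_OVER_ADC = np.array([1.550, 1.666, 1.593, 1.370, 1.345, 1.244,
                            1.116, 1.090, 1.004, 0.933, 0.953])

def correct_vc_adc(vc_adc, vphi_over_sigr):
    """Rescale an ADC circular velocity using the tabulated median ratios."""
    factor = np.interp(vphi_over_sigr, VPHI_OVER_SIGR, VC_JAM_OVER_ADC)
    return vc_adc * factor

# Example: a dynamically hot region with v_phi/sigma_R = 0.8
print(correct_vc_adc(vc_adc=60.0, vphi_over_sigr=0.8))   # ~ 60 km/s * 1.36
\end{verbatim}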
16
9
1609.08700
1609
1609.03554_arXiv.txt
We present a detailed investigation of the Large Magellanic Cloud (LMC) disk using classical Cepheids. Our analysis is based on optical ($I$,$V$; OGLE-IV), near-infrared (NIR: $J$,$H$,$K_{\rm{S}}$) and mid-infrared (MIR: $w1$; WISE) mean magnitudes. By adopting new templates to estimate the NIR mean magnitudes from single-epoch measurements, we build the currently most accurate, largest and homogeneous multi-band dataset of LMC Cepheids. We determine Cepheid individual distances using optical and NIR Period-Wesenheit relations (PWRs), to measure the geometry of the LMC disk and its viewing angles. Cepheid distances based on optical PWRs are precise at 3\%, but accurate to 7\%, while the ones based on NIR PWRs are more accurate (to 3\%), but less precise (2\%--15\%), given the higher photometric error on the observed magnitudes. We found an inclination $i$=25.05~$\pm$~0.02~(stat.)~$\pm$~0.55~(syst.) deg, and a position angle of the lines of nodes P.A.=150.76~$\pm$~0.02~(stat.)~$\pm$~0.07~(syst.) deg. These values agree well with estimates based either on young (Red Supergiants) or on intermediate-age (Asymptotic Giant Branch, Red Clump) stellar tracers, but they significantly differ from evaluations based on old (RR Lyrae) stellar tracers. This indicates that young/intermediate and old stellar populations have different spatial distributions. Finally, by using the reddening-law fitting approach, we provide a reddening map of the LMC disk which is ten times more accurate and two times larger than similar maps in the literature. We also found an LMC true distance modulus of $\mu_{0,LMC}=18.48 \pm 0.10$ (stat. and syst.) mag, in excellent agreement with the currently most accurate measurement \citep{pietrzynski13}.
The Large and Small Magellanic Clouds (LMC and SMC) represent a unique example of star-forming, interacting dwarf galaxies in the Local Group. Moreover, the Magellanic Clouds (MCs) system is embedded in the Milky Way gravitational potential, and thus their dynamical history strongly affects the evolution of our own Galaxy. Yet, we still lack a comprehensive understanding of the dynamical history of the complex MCs-Milky Way system. From the theoretical point of view, two scenarios have emerged: the first-infall (unbound) scenario \citep{besla07,besla16}, and the multiple-passage (bound) scenario \citep{diaz11}. In the former, the MCs have been interacting with each other for most of the Hubble time, also experiencing at least one close encounter ($\sim$ 500 Myr ago), while they are just past their first pericentric passage around the Milky Way. In the more classical bound scenario, the Milky Way potential determines the orbits of the MCs, which formed as independent satellites and only recently ($\sim$2 Gyr ago) became a binary system of galaxies \citep[see ][for a thorough review]{donghia15}. Even though it has proved to be challenging to distinguish between the two scenarios on the basis of observational constraints, evidence is mounting that the Clouds are now approaching the Milky Way for the first time \citep{besla16}. In particular, a detailed three-dimensional study of the LMC kinematics obtained from the Hubble Space Telescope shows that the relative orientation of the velocity vectors implies at least one close encounter in the past 500 Myr \citep{kallivayalil13}. This is further supported by the distribution of OB stars in the Clouds and the Bridge \citep[a stream of neutral hydrogen that connects the MCs,][]{mathewson74}, which suggests a recent ($\sim$200 Myr ago) exchange of material \citep{casettidinescu14}. In this context, the irregular morphology of the Clouds is shaped by their reciprocal interaction. Recent dynamical simulations \citep{besla16,besla12,diaz12} show that the off-centre, warped stellar bar of the LMC and its one-armed spiral naturally arise from a direct collision with the SMC. Thus, the observed morphology of the LMC can be directly related to its dynamical history. This is the first paper of a series aimed at investigating the morphology, the kinematics and the chemical abundances of the Large and Small Magellanic Clouds by adopting Classical Cepheids as tracers of the young stellar populations in these galaxies. In the current investigation, we focus our attention on the LMC geometry and three-dimensional structure, by using Cepheid optical ($V$,$I$) and near-infrared (NIR, $J$,$H$ and $K_{\rm{S}}$) period-luminosity (PL) and period-Wesenheit (PW) relations. The LMC viewing angles, the inclination $i$ and the position angle P.A. of the lines of nodes (the intersection of the galaxy plane and the sky plane), are basic parameters that describe the directions towards which we observe the LMC disk. The determination of such angles has major implications for the determination of the dynamical state of the Milky-Way--MCs system. For instance, the uncertainty in the determination of these angles affects the quoted results on the LMC kinematics, because they are needed to transform the line-of-sight velocities and proper motions into circular velocities, and, in turn, to determine the orbits of the stars. The LMC viewing angle estimates available in the literature span a wide range of values.
It is somewhat expected that different stellar tracers and methods will provide different results because $a)$ the old and young stellar populations in the LMC show different geometrical distributions \citep{devauc72,vandermarel01a, cioni00,weinberg01}, and $b)$ these distributions are non-axisymmetric, so results also depend on the fraction of the galaxy covered by the adopted tracer. Viewing angles based on studies of Red Giants \citep[RG, $i= 34^{\circ}.7 \pm 6^{\circ}.2$, P.A.= 122$^{\circ}.5 \pm 8^{\circ}.3$,][]{vandermarel01a} are consistent with the values found on the basis of RR Lyrae variable stars from the OGLE-III catalog \citep[$i = 32^{\circ}.4 \pm 4^{\circ}$, P.A. = 115$^{\circ} \pm 15^{\circ}$,][]{haschke12}. New estimates based only on ab-type RR Lyrae stars \citep[$i = 22^{\circ}.25 \pm 0^{\circ}.01$, P.A.= 175$^{\circ}.22 \pm 0^{\circ}.01$,][]{deb14} do not support previous findings based on the same tracers, but they agree with the values based on HI kinematics \citep{HI98}. The quoted uncertainties and the limited precision in dating individual Red Clump (RC) stars do not allow us to single out whether old and intermediate-age stellar tracers display different spatial distributions \citep[$i = 26^{\circ}.6 \pm 1^{\circ}.3$, P.A. = 148$^{\circ}.3 \pm 3^{\circ}.8$,][]{subramanian13}. Moreover, LMC viewing angles based on stellar tracers younger than $\lesssim$600 Myr display conflicting values. Using optical \citep[from MACHO,][]{allsman00} and NIR \citep[from DENIS,][]{epchtein98} data for $\sim$2,000 Cepheids, \citet{nikolaev04} found a position angle of P.A.=150$^{\circ}$.2 $\pm$ 2$^{\circ}$.4 and an inclination of $i=31^{\circ}\pm1^{\circ}$. On the other hand, \citet{rubele12}, using NIR measurements from the Vista Survey for the Magellanic Clouds \citep[VMC,][]{cioni11}, found a smaller position angle, P.A.= 129$^{\circ}.2 \pm$13$^{\circ}$, and a smaller inclination, $i$ =26$^{\circ}.2 \pm$ 2$^{\circ}$. Interestingly enough, the position angle found by \citet{nikolaev04} agrees quite well with the value estimated by \citet{vandermarel14} using the kinematics of young stars ($\lesssim$50 Myr). More recently, \citet{jac16} used optical mean magnitudes for Cepheids from the OGLE-IV Collection of Classical Cepheids \citep[CCs,][hereinafter S15]{sos2015}, including more than 4,600 LMC Cepheids. They found a smaller inclination ($i$ =24$^{\circ}.2 \pm$ 0$^{\circ}.6$) and a larger position angle (P.A.=151$^{\circ}.4 \pm$1$^{\circ}.5$) when compared with \citet{nikolaev04}. The large spread of the values summarised above shows how complex it is to estimate the LMC viewing angles, and how difficult it is to correctly estimate both the statistical and the systematic errors associated with the measurements. In this paper we provide a new estimate of the LMC viewing angles, taking advantage of the opportunity to complement the large sample of LMC Cepheids recently released by the OGLE-IV survey with NIR observations. In particular, we rely on single-epoch observations from the IRSF/SIRIUS survey \citep{kato07} and from 2MASS \citep{skrutskie06}, transformed into accurate NIR mean magnitudes by adopting the new NIR templates by \citet[][hereinafter I15]{inno15}. We also provide new mid-infrared (MIR) mean magnitudes from light-curves collected in the ALLWISE Multi-epoch catalog \citep[Wide-field Infrared Survey Explorer,][]{cutri13}. The complete photometric data-set is presented in Section~2.
In Section~3 we derive Cepheid individual distances with an unprecedented precision (0.5\% from optical bands, 0.5--15\% from NIR bands) and accuracy (7\% from optical bands, 3\% from NIR bands). We then use the Cepheid individual distances to derive the LMC viewing angles by using the geometrical methods described in Section~4. Results are presented in Section~5, and discussed in Section~6. The use of multi-wavelength magnitudes also allows us to compute the most accurate and extended reddening map towards the LMC disk available to date. A description of the method and the map can be found in Section~7. Moreover, we use the Cepheids' individual reddenings to compute new period-luminosity relations corrected for extinction. The summary of the main results of this investigation and an outline of the future developments of this project are given in Section~8.
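As a rough illustration of the geometrical step referred to above (Section 4), the sketch below fits a plane to Cepheid 3-D positions and converts the plane coefficients into an inclination and a position angle. It is only a schematic stand-in for the method actually adopted in this work: the tangent-plane projection, the assumed centre coordinates and the P.A. zero-point convention may differ from the paper's definitions by an offset or a sign.
\begin{verbatim}
import numpy as np

def fit_viewing_angles(ra_deg, dec_deg, dist_kpc, ra0=80.05, dec0=-69.30):
    """Least-squares plane fit to individual Cepheid positions.
    ra0, dec0: assumed LMC centre (deg). Returns (inclination, P.A.) in deg,
    with a schematic angle convention (P.A. measured from north towards east)."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    a0, d0 = np.radians(ra0), np.radians(dec0)
    # orthographic (tangent-plane) coordinates, x towards east, y towards north
    x = dist_kpc * np.cos(dec) * np.sin(ra - a0)
    y = dist_kpc * (np.sin(dec) * np.cos(d0) -
                    np.cos(dec) * np.sin(d0) * np.cos(ra - a0))
    z = dist_kpc          # line-of-sight coordinate (small-angle approximation)
    # fit the plane z = A x + B y + C
    design = np.column_stack([x, y, np.ones_like(x)])
    (A, B, _), *_ = np.linalg.lstsq(design, z, rcond=None)
    inclination = np.degrees(np.arctan(np.hypot(A, B)))
    position_angle = (np.degrees(np.arctan2(A, B)) + 90.0) % 360.0
    return inclination, position_angle
\end{verbatim}
Fed with the individual Cepheid distances of Section 3, a fit of this kind returns the two viewing angles directly; the paper's actual analysis additionally propagates the statistical and systematic distance errors.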
We collected the largest ($\sim$~4,000) sample of optical-NIR-MIR measurements for LMC Cepheids. The use of multi-wavelength observations and of accurate NIR templates allowed us to determine individual distances precise to 3\% (optical) -- 15\% (NIR) for the entire sample of Cepheids. Moreover, we adopted theoretical predictions based on up-to-date pulsation models to quantify possible systematic errors on individual Cepheid distances. We found that individual distances based on optical PW relations are more affected by systematics (uncertainty on the adopted reddening law, intrinsic dispersion) when compared with distances based on NIR PW relations. Using the predicted intrinsic dispersion for the PW$_{VI}$ relation ($\sim$0.06 mag), we found that individual distances based \emph{only} on optical mean magnitudes cannot have an accuracy better than 7\%. On the other hand, the simultaneous use of the three NIR bands $J$, $H$ and $K_{\rm{S}}$ allows us to nail down the systematics and to provide individual distances with an accuracy better than 1.5\%. However, the uncertainty on the W$_{HJK}$ mean magnitudes due to the photometric errors ($\sigma_{J=17}\sim$0.05--0.15 mag) on single observations effectively limits the above accuracy, for some samples, to 15\%. The error budget on individual Cepheid distances, when moving from optical to NIR bands, is dominated by different uncertainties (systematics vs measurement errors). This gives us the unique opportunity to internally validate the distances together with their errors and reddenings. Our main results are summarised in the following. $\bullet${\em Viewing angles --} We find that the disk of the LMC is oriented with an inclination of $i=25.05 \pm 0.02$~(stat.)~$\pm 0.55$~(syst.)~deg and a position angle of P.A.=150.76$\pm$0.02~(stat.)~$\pm$ 0.07~(syst.)~deg. These values are in excellent agreement with recent estimates based on stellar tracers of similar age \citep[RSG stars,][]{vandermarel14}. On the other hand, previous investigations based on Cepheids found larger inclinations \citep[][P04]{nikolaev04,haschke12} and smaller position angles \citep[][P04]{haschke12}. The difference is caused by the different spatial distribution of the adopted Cepheid samples. The LMC viewing angles depend, owing to its non-axisymmetric shape, on the sky coverage of the adopted stellar tracers. Moreover, the dependence of the inclination on the distance from the center of the distribution is mainly due to a limited mapping of the disk. Using Cepheids that are only located in the central fields, i.e., the LMC bar according to the definition by \citet{nikolaev04}, we found $i=35^{\circ}.8 \pm 5^{\circ}$. Note that the error reported here and in the following is the difference between the angles found by adopting the PW$_{VI}$ and the PW$_{HJK}$ relations. If we extend the region covered by Cepheids towards the western part, i.e., up to the edge of the north-western arm, we found $i=30^{\circ}\pm10^{\circ}$, which is very close to the values found by \citet{weinberg01} and by \citet{nikolaev04}. If we exclude the Cepheids located across the bar, the inclination is $\sim$24$^{\circ}$.2, thus perfectly consistent with what we have already found using the entire sample. This finding further supports the evidence that the current Cepheid sample allows us to precisely determine the LMC geometry, since it traces the whole LMC disk, i.e., the bar plus the spiral arms.
There is mounting empirical evidence that stellar tracers ranging from old, low-mass stars (RR Lyraes) to intermediate-mass stars (planetary nebulae, red clump, AGB) and evolved young massive stars (RSGs) do provide different viewing angles. The comparison between old and young stellar tracers indicates that the former are inclined by $\sim3^{\circ}$ less and have a position angle $\sim20^{\circ}$ larger than the values based on Cepheids. The above evidence needs to be supported by radial velocity measurements for large samples of the quoted stellar tracers. A few years ago, \citet{minniti03}, using accurate individual distances of 43 LMC RR Lyrae stars based on the K-band PL relation together with kinematic measurements, proposed the possible existence of a dynamically hot spherical halo surrounding the LMC. However, subsequent estimates based on star counts covering a broader area up to 20$^{\circ}$ from the LMC center \citep{saha10} and on RG kinematics \citep{gallart04,carrera11} did not support this finding. The possible occurrence of an extended disk is also still controversial \citep{majewski09,saha10,besla16}. The same outcome applies to the possible occurrence of metallicity gradients among the individual stellar components. The study of radial gradients in the metallicity distribution appears even more promising, since it will allow us to couple the different star formation events with their own chemical enrichment and radial migration. The MCs play a crucial role in this context, since the difference in radial distance among the different stellar tracers is negligible. $\bullet${\em Distance to the LMC --} Taking advantage of the multi-band (optical-NIR-MIR) data-set, we adopted the reddening-law fitting method \citep{freedman85} to determine simultaneously the true distance modulus and the reddening of the entire Cepheid sample. We account for both the measurement and the systematic errors on the individual distance moduli, and we found that the final error ranges from 0.1\% to 0.7\%. We computed the LMC distance distribution and we found that the median is $\mu_0=18.48\pm 0.10$ mag using both fundamental and first overtone Cepheids. The above error, estimated as the standard deviation around the median, accounts for both statistical and systematic effects, but neglects the error on the zero-point of the photometric calibration ($\sim$0.02 mag). The excellent agreement between the distances based on fundamental and first overtone Cepheids further supports the use of FO Cepheids as solid distance indicators \citep{bono10,inno13}. Moreover, our estimate of the mean distance to the LMC is also in excellent agreement with similar estimates based on a smaller Cepheid sample \citep{inno13}, and with the geometrical distance obtained by \citet{pietrzynski13} on the basis of eclipsing binary systems. $\bullet${\em Reddening towards the LMC disk --} The reddening-law fitting method provides individual reddening estimates for each Cepheid in our sample, with an accuracy better than 20\%. We compared the current reddening values with those available in the literature and we found that the current reddening map agrees quite well with the one provided by \citet{haschke11} using RC stars, but it is one order of magnitude more accurate and a factor of two larger. We provide the entire Cepheid catalog with mean NIR and MIR magnitudes, together with the individual distances and extinction values.
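For concreteness, the reddening-law fit used above for the simultaneous distance and reddening estimate can be written as a two-parameter linear problem, $\mu_\lambda = \mu_0 + R_\lambda\,E(B-V)$, one equation per band. The sketch below is a bare-bones version with a plain least-squares solver; the extinction coefficients and the toy numbers are placeholders, not the calibration adopted in this work.
\begin{verbatim}
import numpy as np

def fit_mu0_and_reddening(mu_apparent, R_lambda):
    """Freedman (1985)-style reddening-law fit:
    mu_lambda = mu_0 + R_lambda * E(B-V).
    mu_apparent : apparent distance moduli in each band (mag)
    R_lambda    : total-to-selective extinction coefficients A_lambda/E(B-V)
                  for the same bands, from an assumed reddening law.
    Returns (mu_0, E(B-V)); no outlier rejection or error weighting."""
    design = np.column_stack([np.ones_like(R_lambda), R_lambda])
    (mu0, ebv), *_ = np.linalg.lstsq(design, mu_apparent, rcond=None)
    return mu0, ebv

# illustrative coefficients for V, I, J, H, Ks, w1 (assumed, not the paper's law)
R_lam = np.array([3.32, 1.94, 0.87, 0.56, 0.35, 0.20])
mu_obs = 18.49 + 0.12 * R_lam + np.random.default_rng(0).normal(0.0, 0.02, R_lam.size)
print(fit_mu0_and_reddening(mu_obs, R_lam))   # recovers roughly (18.49, 0.12)
\end{verbatim}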
We demonstrated that the use of NIR PW relations to determine Cepheid individual distances is extremely promising. However, NIR surveys towards the LMC with modest photometric precision ($\sigma_{J=17}\sim$0.05--0.15 mag on single observations) do not allow us to fully exploit the intrinsic accuracy of NIR distance diagnostics. Nevertheless, accurate NIR templates allow us to use the highly-accurate single-epoch photometric measurements available in the literature to match the precision typical of distance determinations based on optical bands. The current approach, based on measurements ranging from optical to mid-infrared observations of classical Cepheids, appears very promising for accurate individual distance determinations and paves the way to accurate estimates of their intrinsic properties as a function of the radial distribution. We will complement the accurate information on the three-dimensional distribution presented here with individual radial velocities and chemical abundances for a significant fraction of the Cepheids in our sample. The kinematic and chemical tagging of a significant fraction of LMC Cepheids will allow us to further constrain the physical properties of the young stellar population in the LMC disk.
16
9
1609.03554
1609
1609.00982.txt
We present an analysis of the IRAS\,08589$-$4714 star-forming region. This region harbors candidate young stellar objects identified in the WISE and {\textit{Herschel}} images using color index criteria and spectral energy distributions (SEDs). The SEDs of some of the infrared sources and the 70 $\mu$m radial intensity profile of the brightest source (IRS 1) are modeled using the one-dimensional radiative transfer DUSTY code. For these objects, we estimate the envelope masses, sizes, densities, and luminosities, which suggest that they are very young, massive and luminous objects at early stages of the formation process. Color-color diagrams in the bands of WISE and 2MASS are used to identify potential young objects in the region. Those identified in the bands of WISE would be contaminated by the emission of PAHs. We use the emission distribution in the infrared at 70 and 160 $\mu$m to estimate the dust temperature gradient. This suggests that the nearby massive star-forming region RCW\,\,38, located $\sim$10 pc from the IRAS source position, may be contributing to the photodissociation of the molecular gas and to the heating of the interstellar dust in the environs of the IRAS source. \addkeyword{Stars: circumstellar matter} \addkeyword{Stars: formation} \addkeyword{Stars: massive} \addkeyword{ISM: dust, extinction} \addkeyword{ISM: individual objects (IRAS 08589$-$4714)} \addkeyword{Infrared: stars}
\label{sec:intro} Massive stars are believed to form in dense ($n$ $\sim 10^{3-8}$ cm$^{-3}$), cold ($T$\,\,$<$\,\,30\,\,K), very opaque ($\tau_{\rm 100}\footnote{Optical depth at 100 $\mu$m.} \sim$ 1\,--\,6) and massive ($M$ $\sim$ 8\,--\,2000 $M_{\sun}$) regions, known as pre-stellar cores \citep{2002A&A...390.1001S,2009A&A...494..157D,2010A&A...516A.102C,2014ApJ...787..113B}. \citet{2009ApJS..181..360C} analyzed a significant number of objects having these characteristics and classified them as active cores if they were detected in the mid-infrared ($\sim$ 24 $\mu$m) and as quiescent cores if no emission at these wavelengths was measured. These authors concluded that active cores host and form massive stars, whereas quiescent cores are excellent candidates for starless massive cores, prior to the onset of the star formation process. However, unlike low-mass stars ($M <$ 2 M$_{\sun}$), high-mass stars lack a detailed formation scenario. For isolated low-mass stars, the shape of the SEDs allows them to be classified into four evolutionary classes \citep{1987ARA&A..25...23S,1987IAUS..115....1L,1993ApJ...406..122A}, widely used in the literature. Several factors can be invoked to account for our relatively less detailed knowledge of the formation process(es) of high-mass stars, such as the distances, the relatively small number of massive stars, the amount of energy and winds emitted by these objects, their short evolutionary time, and the fact that they are deeply embedded in the cloud material. One way to improve our understanding of the formation scenario of massive stars is to increase the number of young stars with well-determined parameters and to study their environs. \citet{2006A&A...447..221B} cataloged a large number of massive clumps in the southern hemisphere observed in the infrared (IR) continuum at 1.2 mm. From their list, we selected the source IRAS 08589$-$4714 (RA,\,\,DEC(J2000) $=$ 09:00:40.5, $-$47:25:55) with the aim of finding massive young stellar objects (YSOs) possibly associated with this source, analyzing their evolutionary stage, and deriving their physical parameters.
IRAS\,08589$-$4714 is located in the giant molecular cloud {\it Vela Molecular Ridge} (see the zoomed region in the upper panel of Figure 1), which harbors hundreds of low-mass Class I objects and a large number of young massive stars \citep{1993A&A...275..489L}. \citet{2006A&A...447..221B} estimated a luminosity of 1.8$\times$10$^{3}$ $L_{\sun}$ \citep[compatible with a previous estimate by][]{1989A&AS...80..149W} and a mass of 40 $M_{\sun}$ for this IRAS source. These authors classify the source as an ultracompact HII region (UCHII) since it satisfies the criteria by \citet{1989ApJ...340..265W}\footnote{These criteria are based on the IRAS satellite fluxes of a sample of $\sim$ 1650 UC HII regions previously detected in the radio, and state that for these regions: $S_{12 \mu m}$ and $S_{25 \mu m}$ $\geq$ 10 Jy, $\log(S_{60 \mu m}\,/\,S_{12 \mu m}$) $\geq$ 1.30 and $\log(S_{25 \mu m}\,/\,S_{12 \mu m}$) $\geq$ 0.57, where $S$ indicates the flux and the subscript the corresponding wavelength.}, although no compact radio-continuum source has been detected \citep{2013A&A...550A..21S}. In spite of its high mass, no CH$_{3}$OH maser emission was found towards this source \citep{1993MNRAS.261..783S}. However, \citet{2013A&A...550A..21S} reported water maser emission in the region of the IRAS source. \citet{1996A&AS..115...81B} observed the region in the CS(2-1) molecular line. They found that the line has a central velocity $V_{LSR} = +4.3$ km\,s$^{-1}$ and a velocity width at half-maximum $\Delta V = 2.0$ km\,s$^{-1}$. In a recent survey, \citet{2014MNRAS.437.1791U} detected emission from the ammonia molecular tracer NH$_{3}$ in the (1,1) and (2,2) transitions towards the IRAS source. The central velocity coincides with that of the CS line. Bearing in mind velocities in the range 4\,--\,5 km\,s$^{-1}$ for IRAS\,08589$-$4714, the circular galactic rotation model by \citet{1993A&A...275...67B} predicts a kinematical distance of 2.0 kpc, with an uncertainty of 0.5 kpc adopting a velocity dispersion of 2.5 km\,s$^{-1}$ for the interstellar molecular gas. \citet{2000A&A...363..744G} and \citet{2008A&A...481..345M} observed this IRAS source in the mid and far infrared and in the sub-millimeter range; using the IRAS fluxes and those in the J, H, and K bands measured by \citet{1993A&A...275..489L}, they constructed and modeled the SED, and found that the IRAS source has $L \sim 10^{3}$ $L_{\sun}$, masses between 20\,--\,55 $M_{\sun}$, and a B5 spectral type. The detection of the IRAS 08589$-$4714 source in the dust continuum, as well as its identification as a UCHII, suggests that it may harbor embedded candidate YSOs. These characteristics make this source an interesting target to investigate the presence of YSOs and their physical properties. In \S\,\ref{sec:infrared_sources} we present {\textit{Herschel}} and WISE data and use the WISE color-color diagram to identify YSOs in the region. In \S\,\ref{sec:analysis} we model the corresponding SEDs to derive infalling envelope parameters, such as mass, size, density and luminosity. Some of the newly detected young stars show an arc-like structure, particularly at 12 $\mu$m, pointing to the west, in the direction towards the massive star-forming region RCW\,\,38.
In \S\,\ref{sec:arc-like-structures} we suggest that RCW\,\,38 may be contributing to the photodissociation of the IRAS source region, shaping the molecular cloud material around the identified young stars. Finally, we present the summary and conclusions in \S\,5.
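As a quick cross-check of the envelope parameters derived from the full DUSTY modelling of \S\,\ref{sec:analysis}, an optically thin, single-temperature greybody gives a first-order mass estimate from a single far-IR/millimetre flux, $M = F_\nu d^2 / (\kappa_\nu B_\nu(T_d))$ times a gas-to-dust ratio. The sketch below is only such a rough estimate; the opacity, dust temperature, gas-to-dust ratio and the example numbers are assumed values, not the results of this work.
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy import constants as const

def greybody_envelope_mass(flux_jy, wav_um, dist_kpc,
                           T_dust=25.0, kappa_cm2_g=1.0, gas_to_dust=100.0):
    """Optically thin envelope mass: M = F_nu d^2 / (kappa_nu B_nu(T_d)) * g2d.
    kappa_cm2_g is the dust opacity at the given wavelength (assumed value)."""
    nu = (const.c / (wav_um * u.micron)).to(u.Hz)
    T = T_dust * u.K
    planck = (2.0 * const.h * nu**3 / const.c**2 /
              np.expm1((const.h * nu / (const.k_B * T)).decompose().value))
    flux = flux_jy * u.Jy
    dist = dist_kpc * u.kpc
    kappa = kappa_cm2_g * u.cm**2 / u.g
    return (flux * dist**2 / (kappa * planck) * gas_to_dust).to(u.Msun)

# illustrative call: a 1.2 mm clump flux of a few Jy at the adopted 2 kpc distance
print(greybody_envelope_mass(flux_jy=5.0, wav_um=1200.0, dist_kpc=2.0))
\end{verbatim}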
\label{sec:summary} We use WISE 3.4, 4.6, 12 and 22 $\mu$m and {\textit{Herschel}} 70, 160, 250, 350 and 500 $\mu$m fluxes to analyze 7 sources identified in the IRAS 08589$-$4714 region. Four of these sources (called IRS 1, 2, 3 and 7) have WISE colors of Class I and II objects, according to the criteria of \citet{2012ApJ...744..130K}. The other three (IRS 4, 5 and 6) have no mid-infrared counterparts and are likely younger objects \citep{2009ApJS..181..360C}. We model the SEDs in the range 20--1200 $\mu$m of the four brightest sources at these wavelengths (IRS 1, 4, 5 and 6), deriving physical parameters for the associated envelopes, such as the envelope masses, sizes, densities, and luminosities. These parameters range from 16 to 68 $M_{\sun}$, 0.06\,--\,0.12 pc, 9.6$\times$10$^{4}$\,--\, 9.2$\times$10$^{6}$ cm$^{-3}$, and 0.12\,--\,2.6$\times$10$^{3}$\,\,$L_{\sun}$, suggesting that these sources are very young, massive and luminous objects at early stages of the formation process. We constructed the color-color diagrams in the bands of WISE and 2MASS to identify potential young objects in the region. Those identified in the bands of WISE are contaminated by the emission of PAHs, except four sources (IRS\,\,1, 2, 3 and 7). These faint candidates require more sensitive observations to confirm their nature and evolutionary status. In the WISE 12 $\mu$m band, we identify an arc-like structure linked to IRS\,\,1, pointing to the west, and two other smaller structures coinciding with the positions of two other infrared sources (IRS 5 and 6) lying in the region, with similar shapes and pointing in the same direction. The massive star-forming region RCW 38, harboring a dozen O-type stars, is located $\sim$ 16\farcm 7 from these sources to the west and roughly at the same distance. The ultraviolet photon flux from the exciting stars of RCW 38 is probably photodissociating the material in the IRAS 08589$-$4714 region and creating a photodissociation region.
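The dust heating invoked here is traced in this work by the temperature gradient estimated from the 70 and 160 $\mu$m emission (see the abstract). A schematic version of that estimate, a single modified blackbody fitted to the two-band ratio, is sketched below; the emissivity index $\beta$, the neglect of colour corrections and the example ratio are assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

H_OVER_K = 4.799243e-11          # h / k_B in K per Hz

def colour_temperature(F70, F160, beta=2.0):
    """Dust colour temperature from the 70/160 micron flux-density ratio,
    assuming F_nu proportional to nu^(3+beta) / (exp(h nu / k T) - 1)."""
    nu70, nu160 = 2.998e14 / 70.0, 2.998e14 / 160.0     # Hz
    def ratio_model(T):
        return ((nu70 / nu160) ** (3.0 + beta) *
                np.expm1(H_OVER_K * nu160 / T) / np.expm1(H_OVER_K * nu70 / T))
    return brentq(lambda T: ratio_model(T) - F70 / F160, 5.0, 300.0)

# applying this pixel by pixel to the 70 and 160 micron maps gives a T_d map;
# a flux-density ratio of unity corresponds to roughly 25-30 K for beta = 2
print(round(colour_temperature(1.0, 1.0), 1), "K")
\end{verbatim}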
16
9
1609.00982
1609
1609.01284_arXiv.txt
Using Monte Carlo random walks of a log-normal distribution, we show how to qualitatively study void properties for non-standard cosmologies. We apply this method to an $f(R)$ modified gravity model and recover the N-body simulation results of \citet{ABPW} for the void profiles and their deviation from GR. This method can potentially be extended to study other properties of the large-scale structure, such as the abundance of voids or overdense environments. We also introduce a new way to identify voids in the cosmic web, using only a few measurements of the density fluctuations around random positions. This algorithm allows us to select voids with specific profiles and radii. As a consequence, we can target classes of voids with larger differences between $f(R)$ and standard gravity void profiles. Finally, we apply our void criteria to galaxy mock catalogues and discuss how the flexibility of our void finder can be used to reduce systematic errors when probing the growth rate in the galaxy-void correlation function.
Over the past decade, galaxy surveys have revealed cosmic voids that are an essential component of the cosmic web (e.g.\ \citep{BondKofmanPogosyan96,Kirshneretal1981,Kauffmann&Fairall1991,Hoyle&Vogeley2002, Crotonetal2004, Panetal2012,Sutteretal2012,AB_voids2016}). Their dynamical formation carries information of the background expansion and the non-linear gravitational interactions, as matter flows out from underdense patches leading to the mass assembly of halos \citep{Dekel&Rees1994,Bernardeau&vandeWeygaert1997}. Therefore their statistical properties can be used to constrain cosmology, for instance using the integrated Sachs-Wolfe effect (e.g.\ \cite{Granettetal2009}), performing an Alcock-Paczynski test (e.g.\ \cite{Lavaux2012}), measuring their abundance or their density profiles (e.g.\ \cite{ANP,ABPW,Clampitt2013,Zivick2015}) or looking at the clustering of matter in underdense environments \citep{AB_BAO,Kirauta16}. \medskip Furthermore, void statistics are promising to probe dark energy models such as coupled dark energy \cite{coupledDE} or modified gravity models that rely on screening mechanism, such as f(R) gravity \cite{HuSaw,chame}. In particular, the abundance of voids and void profiles can be used to test the cosmic expansion and the growth rate, where the fifth force is unscreened in underdense environments (e.g.\ \citep{HuSaw,ABPW,Cai2015,Li2012,Zivick2015}). However, precise theoretical predictions of how void abundance and void profiles change for modified gravity models is still lacking. This has driven intensive N-body numerical analyses of such properties (e.g.\ \cite{Zhaosim,Puchweinsim,Raserasim}). In this work, we introduce a fast estimate of how void profiles can vary for non-standard cosmologies, based on Monte Carlo Random Walks (MCRW). This method can potentially be explored to study other quantities such as the abundance of voids or the statistical properties of overdense peaks in the non-linear matter density field. \medskip As an application, we show how different ways of selecting voids can enhance the imprint of $f(R)$ modified gravity, which has not been studied before. In fact, there is no one single approach to identify voids, nor should there be, as different techniques can highlight different properties of what each calls a void. For instance, many void finders define voids based on density criteria inside a sphere (e.g.\ \citep{Kauffmann&Fairall1991,Mulleretal2000,Hoyle&Vogeley2002, Colbergetal2005}). This definition is very helpful when trying to link the theory of an expanding underdense patch to the prediction of void abundance (for instance \cite{SVdW,ANP}). Other techniques based on watershed transforms or dynamical properties around voids (e.g.\ velocity field) have also been very useful to identify voids without imposing a particular shape for them (e.g.\ \citep{ElAd&Piran1997,Aikio&Mahonen1998,Plionis&Basilakos2002,Shandarinetal2006,AragonCalvoetal2007,Hahnetal2007, PlatenVdWJones2007}). With cosmological observations, ZOBOV/VIDE \citep{N08,VIDE} has also been quite successful in identifying density minima using Voronoi tessellation, leading to voids with interesting properties and that are not necessary spherical. In this case it is however more challenging to link the initial under-dense patches of matter to the identified void \cite{ANP}. \medskip In this work we study how a measurement of the density fluctuations at a limited number of scales is enough to identify voids, which would not be the case if the matter was not clustered. 
We will also show how the selection of voids with specific ridges can be important when probing the growth rate. \medskip This paper is organized as follows: in Sec.~\ref{sec1}, we show how we can test the imprint of a non-standard cosmology on large-scale structure using MCRW, focusing on void profiles. In Sec.~\ref{Appli1} we apply this technique to test the deviation of the void profiles for $f(R)$ gravity and discuss the importance of the void identification when testing for deviations from $\Lambda$CDM. In Sec.~\ref{sec2}, we apply the void finder criteria to mock catalogues, and show how a few measurements of the density fluctuations are enough to identify voids in a low-density survey. Finally, in Sec.~\ref{secAppli2}, we test the effect of the identification criteria on the measurement of the growth rate, and highlight the advantages of having flexible void profiles (at a fixed void radius) when probing the growth rate. In Sec.~\ref{conclu} we present our conclusions.
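To make the MCRW idea of Sec.~\ref{sec1} concrete, the toy sketch below draws the smoothed overdensity of many random "walks" from a log-normal one-point PDF and keeps those satisfying a simple void criterion (empty on the smallest scale, overdense ridge on the largest). It deliberately ignores the scale-to-scale correlations used in the actual implementation, and the smoothing variances and thresholds are placeholder values, not those adopted in this work.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def lognormal_delta(sigma, size):
    """Zero-mean log-normal overdensity: delta = exp(g - sigma^2/2) - 1,
    with g ~ N(0, sigma^2)."""
    g = rng.normal(0.0, sigma, size)
    return np.exp(g - 0.5 * sigma ** 2) - 1.0

def select_void_walks(n_walks=100000, sigmas=(0.9, 0.6, 0.45, 0.35),
                      delta_inner=-0.6, ridge_min=0.0):
    """Columns correspond to increasing smoothing radii; keep the 'walks'
    that are underdense on the smallest scale and show a positive ridge
    on the largest one."""
    deltas = np.column_stack([lognormal_delta(s, n_walks) for s in sigmas])
    keep = (deltas[:, 0] < delta_inner) & (deltas[:, -1] > ridge_min)
    return deltas[keep]

profiles = select_void_walks()
print(len(profiles), "void-like walks; mean profile:", profiles.mean(axis=0).round(2))
\end{verbatim}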
In this work we present a new method to quickly test the imprint of non-standard cosmologies using MCRW for a log-normal distribution. We focus on the departure from GR in the void density profiles for an $f(R)$ gravity model. In order to do so, we introduce flexible criteria to identify the random walks that mimic void profiles. \medskip We find interesting results: our method can reproduce the qualitative features of the $f(R)$ gravity imprints in the void profiles found by \citet{ABPW}, without using a full N-body simulation. Furthermore, the departure from GR is sensitive to the type of voids, highlighting the importance of the ridge when identifying voids. \medskip In addition, we test how flexible density criteria can be used to identify voids in real galaxy surveys. This would not be possible if the matter in our Universe were randomly distributed. We also show how the flexibility to identify voids is important when probing the growth rate using redshift-space distortions around voids. \medskip Finally, this work can be extended in different directions; for instance, it would be interesting to explore further the correspondence between the initial density criteria that lead to voids identified in the MCRW and in N-body simulations. We could also test which initial density fluctuations result in voids for different cosmologies. This could potentially be used to model the abundance of voids. It would also be interesting to test how the voids identified with the criteria used in Sec.~\ref{sec2} change in a non-standard cosmology where the clustering properties of the matter are different (e.g.\ warm dark matter). Our MCRW method can also be used to study the statistical properties of overdense fluctuations in the cosmic web, such as peaks. \medskip The code used to find the voids in this work is available on demand if the reader is interested in using the method described in Sec.~\ref{sec2} and is not afraid of using Fortran.
16
9
1609.01284
1609
1609.05691_arXiv.txt
Weak emission-line active galactic nuclei (WLAGNs) are radio-quiet active galactic nuclei (AGNs) that have nearly featureless optical spectra. We investigate the ultraviolet to mid-infrared spectral energy distributions of 73 WLAGNs (0.4 $<$ \emph{z} $<$ 3) and find that most of them are similar to normal AGNs. We also calculate the covering factor of warm dust of these 73 WLAGNs. No significant difference is indicated by a KS test between the covering factors of WLAGNs and normal AGNs in the common range of bolometric luminosity. The implications for several models of WLAGNs are discussed. Super-Eddington accretion is unlikely to be the dominant reason for the featureless spectrum of a WLAGN. The present results favor the evolution scenario, i.e., that WLAGNs are in a special evolutionary stage of AGNs.
The unified model of active galactic nuclei (AGNs) includes multiple structures around the central black hole, e.g., accretion disk, broad-line region (BLR), and dusty torus. In this model, the UV/X-ray photons from the accretion disk and corona ionize the gas in the BLR, giving rise to broad emission lines in the optical spectrum. The dust in the torus absorbs the radiation from the central engine and re-emits it in the infrared (IR) band, leading to the IR bump in the spectral energy distribution (SED). The AGNs would exhibit different characteristics with different orientations of the line of sight, which is the most important assumption in AGN unification (Antonucci 1993). As a subclass of AGNs, BL Lac objects are characterized by strong radio emission, polarized continua, rapid variation, and nearly featureless spectra (Urry $\&$ Padovani 1995). According to the unified model of AGNs, they are explained by the small angle between the jet and the line of sight (Antonucci 1993). Because of the dilution by the Doppler$-$boosted relativistic jet, there will be weak or even no emission lines. In the last decade, a handful of radio-quiet AGNs with weak emission lines (WLAGNs\footnote{Because it is still unclear whether those objects are weak emission-line quasars or radio-quiet BL Lac objects, we call them WLAGNs as a whole.}) are discovered (McDowell et al. 1995; Collinge et al. 2005; Diamond$-$Stanic et al. 2009; Shemmer et al. 2009; Plotkin et al. 2010a (P10a hereafter)). Diamond$-$Stanic et al. (2009) reported the optical polarization of seven WLAGNs and found that the largest degree of polarization of these sources is 2.43\symbol{37}, which is lower than the nominal level of high polarization (degree of polarization $\geq$ 3\symbol{37}; Impey $\&$ Tapia 1990). Heidt \& Nilsson (2011) measured the optical polarization of 25 WLAGN candidates and found 23 of them are unpolarized. Gopal-Krishna et al. (2013), Chand et al. (2014), and Kumar et al. (2015) have monitored 15 WLAGNs and found all of them display intranight optical variability. They claimed that their variability duty cycles are higher than for radio-quiet quasars and radio lobe-dominated quasars. However, Liu et al. (2015) have not detected significant microvariations in eight WLAGNs. The properties of WLAGNs are different from those of BL Lac objects, but consistent with normal AGNs in most respects except for their featureless spectra. A radiatively inefficient accretion flow could cause weak emission lines (Yuan \& Narayan 2004), but the high luminosity of WLAGNs excludes this possibility. Plotkin et al. (2010b) investigated 13 radio-quiet BL Lac candidates and found that the relativistic beaming seems not to be the cause of weak lines. However, they cannot exclude the possibility of the existence of weak jets in some of these sources. Several explanations have been proposed for the nearly featureless optical spectra of WLAGNs. Leighly et al. (2007a,b) considered that an extremely high accretion rate in WLAGNs will induce a soft ionizing continuum, which cannot produce enough high-energy photons for the BLR. Besides that, Wu et al. (2011) assumed that there is some shielding gas with a high covering factor between the accretion disk and BLR. It would absorb the high-energy ionizing photons from the accretion disk and prevent them from generating the emission lines in the BLR. This kind of ``shielding gas'' is further considered as a geometrically thick accretion disk formed by super-Eddington accretion (Luo et al. 2015). 
Also, a cold accretion disk, formed by a massive black hole and a relatively low accretion rate, will not produce enough ionizing photons either (Laor $\&$ Davis 2011). In these models, the weak emission lines are caused by insufficient ionizing photons. Thus, the IR luminosity of WLAGNs might be lower than that of normal quasars. Another model for the featureless optical spectra of WLAGNs is based on an unusual BLR, e.g., an ``anemic'' BLR (Shemmer et al. 2010). Niko{\l}ajuk \& Walter (2012) estimated the covering factor of the BLR in WLAGNs from the equivalent width (EW) of the C \uppercase\expandafter{\romannumeral4} emission line and the ratio of X-ray to optical luminosity. They found that the covering factors of BLRs in WLAGNs are lower than those in normal quasars by a factor of 10. Niko{\l}ajuk \& Walter (2012) argued that WLAGNs are the consequence of a low covering factor of the BLR. WLAGNs can also be in the early stage of AGN evolution. In this stage, the radiative feedback blows off the gas in the BLR and there will be no emission lines (Hryniewicz et al. 2010; Liu $\&$ Zhang 2011; Banados et al. 2014; Meusinger \& Balafkan 2014). In the case of an unusual BLR, the covering factors of the tori of WLAGNs are unaffected and expected to be consistent with those of normal quasars. The UV to mid-infrared (MIR) SEDs of WLAGNs are generally consistent with those of normal radio-quiet AGNs (Lane et al. 2011; Wu et al. 2012; Luo et al. 2015), which are dominated by two bumps corresponding to the emission from the accretion disk and the torus. We will expand the sample and focus on the UV to MIR SED, comparing the temperature of the torus of low-redshift WLAGNs with that of the high-redshift WLAGNs in Lane et al. (2011). Many works suggest that there are correlations between the covering factor of the torus and AGN properties, e.g., bolometric luminosity ($L_{\rm Bol}$) and black hole mass (\emph{M}$_{\rm BH}$) (e.g., Maiolino et al. 2007; Treister et al. 2008; Mor $\&$ Trakhtenbrot 2011; Ma $\&$ Wang 2013). This work will investigate the ratio of the MIR luminosity to the bolometric luminosity of WLAGNs, which corresponds to the warm-dust covering factor of the torus (CF$_{\rm WD}$). It is insightful to test whether WLAGNs follow the same correlations as normal radio-quiet AGNs. Furthermore, the value and distribution of CF$_{\rm WD}$ of WLAGNs will be helpful in understanding their origin. Sample selection is described in Section 2, and the SEDs and the investigation of the covering factor are described in Section 3. We discuss the findings in Section 4.
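Operationally, the quantity investigated in the following sections reduces to a luminosity ratio per object plus a two-sample comparison. The sketch below shows that step with mock luminosities standing in for the measured ones; the toy covering factor, scatter and sample sizes are assumptions, not the values derived in this paper.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def warm_dust_covering_factor(L_mir, L_bol):
    """CF_WD proxy: ratio of MIR (warm-dust) luminosity to bolometric
    luminosity, both in the same units (e.g. erg/s)."""
    return np.asarray(L_mir) / np.asarray(L_bol)

# mock, luminosity-matched samples (placeholders for the measured values)
rng = np.random.default_rng(1)
L_bol_wlagn = 10.0 ** rng.uniform(46.3, 47.0, 73)
L_bol_norm = 10.0 ** rng.uniform(46.3, 47.0, 300)
L_mir_wlagn = 0.2 * L_bol_wlagn * rng.lognormal(0.0, 0.3, 73)
L_mir_norm = 0.2 * L_bol_norm * rng.lognormal(0.0, 0.3, 300)

cf_w = warm_dust_covering_factor(L_mir_wlagn, L_bol_wlagn)
cf_n = warm_dust_covering_factor(L_mir_norm, L_bol_norm)
stat, pval = ks_2samp(cf_w, cf_n)
print("KS statistic = %.3f, p-value = %.3f" % (stat, pval))
\end{verbatim}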
Lane et al. (2011) and Wu et al. (2012) have investigated the UV to MIR SEDs of 18 and 36 WLAGNs, respectively, and found that the SEDs of WLAGNs are generally consistent with the mean SED of typical quasars in the UV to MIR band. We have constructed the SEDs of 73 WLAGNs with 0.4 $\leq$ \emph{z} $\leq$ 3 and confirmed their conclusions. The SEDs of WLAGNs are significantly different from the SEDs of the blazar sequence\footnote{The blazar sequence shows a systematic trend from low-peaked, high-luminosity sources with strong emission lines to high-peaked, low-luminosity sources with weak or even no emission lines (Finke 2013).} (Figure \ref{fig6}). This suggests that the origin of their continua is likely to be the accretion disk. The composite SED of 70 WLAGNs has been well fitted by a power law ($\alpha_{\rm \lambda}$ $=$ 0.76) and a single-temperature black body ($T=$ 960 K). These parameters are consistent with those of the high-redshift WLAGNs (2.7 $\leq$ $z$ $\leq$ 5.9; 0.42 $\leq$ $\alpha_{\rm \lambda}$ $\leq$ 1.21 and \emph{T} $\sim$ 1000 K) in Lane et al. (2011), suggesting that there is no significant evolution of the SEDs of WLAGNs with redshift. We have also calculated the covering factor of warm dust and found that there is no significant difference in CF$_{\rm WD}$ between normal AGNs and WLAGNs. Plotkin et al. (2015) found that, although the high-ionization emission lines (e.g., C \uppercase\expandafter{\romannumeral4}) of WLAGNs are weak, their low-ionization lines (e.g., H $\beta$, Mg \uppercase\expandafter{\romannumeral2}) remain relatively normal. Since the torus is located outside the BLR, its properties (e.g., covering factors) are likely to be similar to those of normal quasars. Our results can provide some constraints on the models of WLAGNs. \begin{figure} \includegraphics[width=85mm]{f6.eps} \caption{Comparison between the composite SED of WLAGNs (filled circles) and the blazar sequence (solid curves). The SEDs of the blazar sequence are adopted from Pfrommer (2013).\label{fig6}} \end{figure} \subsection{Insufficient Ionization Photons? \label{sec41}} Luo et al. (2015) assumed that the slim disk can play the role of the shielding gas. However, the illumination of the torus will be suppressed by the self-occultation of the optically and geometrically thick disk (Kawaguchi \& Mori 2011; Kawakatu \& Ohsuga 2011).\footnote{For the slim disk, $L_{\rm MIR}$/$L_{\rm Bol}$ will not represent the covering factor of warm dust, because the radiation from the accretion disk is anisotropic.} Kawakatu \& Ohsuga (2011) investigated the strength of the IR emission from AGNs with sub- or super-Eddington accretion flows, by considering the anisotropic radiation flux of the accretion disk. They showed that for sub- and super-Eddington AGNs with the same torus covering factor, the ratio of IR luminosity to bolometric luminosity ($L_{\rm IR}/L_{\rm Bol}$) of super-Eddington AGNs is significantly less than that of sub-Eddington AGNs (see Figure 2 in Kawakatu \& Ohsuga 2011). In order to increase the observed $L_{\rm IR}/L_{\rm Bol}$ of super-Eddington AGNs to be comparable with that of sub-Eddington AGNs, the torus of a super-Eddington AGN should have a large scale height. However, under the radiative feedback from the super-Eddington accretion disk, the scale height of the torus will be even smaller than that of sub-Eddington ones (combining the anisotropic radiation of super-Eddington accretion in Kawakatu \& Ohsuga [2011] and the equation of radial motion in Liu \& Zhang [2011]).
Actually, Castell{\'o}-Mor et al. (2016) constructed a sample of AGNs with similar luminosities and found that the $L_{\rm IR}$/$L_{\rm Bol}$ of super-Eddington AGNs is significantly smaller than that of sub-Eddington AGNs. However, our results show that the mean $L_{\rm MIR}$/$L_{\rm Bol}$ of WLAGNs and normal AGNs are 0.23 and 0.19, respectively, in the luminosity range from log($L_{\rm Bol}$/erg s$^{-1}$) $=$ 46.3 to 47.0. The strength of the IR hump in WLAGNs is even slightly higher, though the difference is not significant. Leighly et al. (2007a,b) considered that the featureless spectrum is a consequence of a soft continuum induced by super-Eddington accretion. Plotkin et al. (2015) and Shemmer \& Lieber (2015) calculated the Eddington ratios of 10 WLAGNs in total (0.3 $\leq$ $L$/$L_{\rm Edd}$ $\leq$ 1.3), based on the FWHM of the H$\beta$ line and the monochromatic luminosity at rest-frame 5100 ${\rm \AA}$. However, only two of them are super-Eddington, and the uncertainties of the Eddington ratios are large (the errors of log($L$/$L_{\rm Edd}$) are larger than 0.3 dex in Shemmer \& Lieber [2015]). Niko{\l}ajuk \& Walter (2012) calculated the Eddington ratios of 76 WLAGNs from FWHM(C \uppercase\expandafter{\romannumeral4}). Only 12 sources are super-Eddington ones and the errors are large as well. Table 2 shows the $L_{\rm MIR}$/$L_{\rm Bol}$ of the 5 WLAGNs in common with our sample. Even for the source (SDSS J144741.76-020339.1) with the highest Eddington ratio ($L$/$L_{\rm Edd}$ $=$ 1.3), $L_{\rm MIR}$/$L_{\rm Bol}$ is still normal. It seems that the accretion rate of WLAGNs is higher than that of normal AGNs, but this is unlikely to be the dominant reason for the featureless spectrum of WLAGNs. \begin{deluxetable}{rcc} \tablewidth{0pt} \tablecaption{Eddington Ratios and Covering Factors.} \tablehead{ \colhead{Name } & \colhead{$L$/$L_{\rm Edd}$} & \colhead{$L_{\rm MIR}$/$L_{\rm Bol}$} \\ \colhead{SDSS J} & \colhead{} & \colhead{} } \startdata 083650.86+142539.0 & $0.87^{+1.36}_{-0.65}$ & 0.31$\pm$ 0.02 \\ 132138.86+010846.3 & $0.63^{+0.23}_{-0.37}$ & 0.07$\pm$ 0.01 \\ 141141.96+140233.9 & $0.34^{+0.42}_{-0.17}$ & 0.18$\pm$ 0.01 \\ 141730.92+073320.7 & 0.92$\pm$ 0.50 & 0.11$\pm$ 0.01 \\ 144741.76-020339.1 & $1.30^{+0.17}_{-0.78}$ & 0.14$\pm$ 0.01 \\ \enddata \tablecomments{The values of $L$/$L_{\rm Edd}$ here are obtained from Plotkin et al. (2015).} \end{deluxetable} A cold accretion disk, proposed by Laor $\&$ Davis (2011), will induce an SED with a steep fall in the UV. This type of SED is observed in SDSS J094533.99+100950.1 by Hryniewicz et al. (2010). We have not found this special shape of SED in our sample. In fact, Plotkin et al. (2015) estimated the black hole masses and Eddington ratios of SDSS J094533.99+100950.1 and some other WLAGNs from the H$\beta$ line and found that these sources do not satisfy the criteria for a cold accretion disk. \subsection{An Unusual BLR \label{sec42}} An unusual BLR of WLAGNs can still explain the present results. Since the scale of the torus is much larger than that of the accretion disk, it should form first and is likely to feed the accretion disk. Therefore, the BLR should be formed after the torus and the accretion disk. Hryniewicz et al. (2010) proposed a phase ($\sim$ $10^{3}$ yr) that exists in the early stage of AGN evolution. In this stage, the BLR has not yet fully formed. The gas close to the accretion disk that produces low-ionization emission lines is formed earlier than the gas away from the disk that produces high-ionization emission lines.
This will result in a low covering factor of the BLR, which is consistent with the conclusion of Niko{\l}ajuk \& Walter (2012). In the evolution scenario, the high-ionization emission lines are expected to be weak while the low-ionization lines should be normal, which is confirmed by Plotkin et al. (2015) as well. Hryniewicz et al. (2010) also estimated the duty cycle of WLAGNs and found that it is consistent with the fraction of WLAGNs among SDSS quasars, though the sample is not complete. If WLAGNs are indeed in a special evolutionary stage of AGNs, their fraction should be a function of redshift. A future large sample of WLAGNs with a well-characterized incompleteness could examine this evolution scenario more quantitatively. A larger sample of WLAGNs will also be helpful in clarifying the correlations of CF$-$\emph{L}$_{\rm Bol}$ and CF$-$M$_{\rm BH}$, if M$_{\rm BH}$ can be determined, e.g., from the H$\beta$ line.
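For reference, the composite-SED decomposition quoted earlier in this section (a power law plus a single-temperature blackbody) can be reproduced with a standard non-linear least-squares fit. The sketch below uses synthetic data generated from that same functional form; the normalisations, wavelength grid and noise level are arbitrary assumptions rather than the observed composite SED.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16        # cgs constants

def planck_lambda(lam_um, T):
    lam = lam_um * 1.0e-4                        # micron -> cm
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def sed_model(lam_um, A, alpha, B, T):
    """Power law plus single-temperature blackbody (arbitrary normalisation)."""
    return A * lam_um ** (-alpha) + B * planck_lambda(lam_um, T)

rng = np.random.default_rng(2)
lam = np.geomspace(0.1, 10.0, 40)                # rest-frame wavelengths, micron
flux = sed_model(lam, 1.0, 0.76, 1.0e-11, 960.0)
flux *= 10.0 ** rng.normal(0.0, 0.02, lam.size)  # add 2 per cent log scatter
popt, _ = curve_fit(sed_model, lam, flux,
                    p0=[1.0, 0.8, 1.0e-11, 1000.0], maxfev=20000)
print("alpha = %.2f, T = %.0f K" % (popt[1], popt[3]))
\end{verbatim}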
16
9
1609.05691
1609
1609.07141_arXiv.txt
Roughly half of all stars reside in galaxies without significant ongoing star formation. However, galaxy formation models indicate that it is energetically challenging to suppress the cooling of gas and the formation of stars in galaxies that lie at the centers of their dark matter halos. In this Letter, we show that the dependence of quiescence on black hole and stellar mass is a powerful discriminant between differing models for the mechanisms that suppress star formation. Using observations of 91 star-forming and quiescent central galaxies with directly-measured black hole masses, we find that quiescent galaxies host more massive black holes than star-forming galaxies with similar stellar masses. This observational result is in qualitative agreement with models that assume that effective, more-or-less continuous AGN feedback suppresses star formation, strongly suggesting the importance of the black hole in producing quiescence in central galaxies.
\label{sec:Intro} Galaxy surveys have revealed the dramatic growth of the quiescent, non-star-forming galaxy population with cosmic time \citep[e.g.,][]{mms2013}. Despite the high present abundance of quiescent galaxies, the relative importance of possible physical drivers of galaxy-wide suppression of star formation remains uncertain. In a cosmological context, gas cooling and accretion into the center of a dark matter halo fuels ongoing star formation. Thus, the onset of quiescence means that gas is somehow removed from the galaxy and that gas cooling is offset by some source of heat. Unlike satellites, galaxies in the center of a halo's potential well -- hereafter referred to as central galaxies -- must eject and heat their gas without relying on interactions with the hot, diffuse medium present in other halos, groups, and clusters \citep{tlb2013}. This implies stringent energetic requirements not easily met by stellar feedback \citep[e.g.][]{bbm2006}. Heating mechanisms proposed for central galaxies include ejected gas from supernovae Ia (SNIa) and stellar winds \citep[e.g.][]{hqm2012}, virial shock heating \citep[e.g.][]{bdn2007}, gravitational heating \citep[e.g.][]{jno2009}, and -- currently the most popular explanation -- feedback from active galactic nuclei \citep[AGN,][]{kh2000,dsh2005, csw2006, cfb2009, f2012}. One powerful approach towards characterizing the importance of different physical drivers of quiescence in central galaxies is to measure the correlation between quiescence and a range of galaxy properties that could affect the balance between heating and cooling. For example, cooling and gas accretion depend strongly on halo mass, and would thus be expected to correlate with stellar mass (with significant scatter; see \citealp{tbh2016}). Heating or gas ejection could correlate with a variety of properties: halo mass due to virial shock heating or gravitational quenching, stellar mass due to SNIa and stellar feedback, or black hole mass due to AGN feedback. With these concerns in mind, many studies have explored how quiescence correlates with a variety of quantities: for example, stellar mass, halo mass, surface density, inferred velocity dispersion, \citet{s1963} index, and bulge mass \citep{khw2003, fvf2008, bvp2012, lws2014, bme2014, wdf2015, mwz2016}. The latter quantities are expected to correlate with the prominence of a supermassive black hole \citep{kh2013}, in support of the idea that AGN feedback is an important driver of quiescence. Yet, correlating quiescence with directly-measured black hole mass would be a clearer and more critical test of AGN feedback. With the number of dynamical black hole mass measurements increasing each year, such an exercise has now become possible. The goal of this Letter is to characterize the physical drivers of quiescence by studying the observed distribution of star-forming and quiescent central galaxies as a function of their central black hole mass and stellar mass (\S\ref{sec:obsdata}) and comparing those findings with the results from four galaxy formation models (\citealp{hwt2015}, \S\ref{sec:henriques}; Illustris -- \citealp{vgs2014}, \S\ref{sec:illustris}; EAGLE -- \citealp{scb2015}, \S\ref{sec:EAGLE}; and GalICS -- \citealp{cdd2006}, \S\ref{sec:GALICS}). We then describe (\S\ref{sec:results}) and discuss (\S\ref{sec:disc}) the apparent agreement between observations and models that use effective, more-or-less continuous AGN feedback to halt star formation. 
We assume the standard cosmology in order to be consistent with our compiled observational distances: $\Omega_{M}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, and H$_{0}$ = 70 km/s/Mpc.
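A minimal snippet showing how distance-dependent quantities can be kept on this cosmology (the specific astropy call is just one convenient way to do it; the example redshift is arbitrary):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0 * u.km / u.s / u.Mpc, Om0=0.3)
print(cosmo.luminosity_distance(0.05))   # roughly 220 Mpc at z = 0.05
\end{verbatim}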
\label{sec:conc} Cosmological models of galaxy formation predict that the relationship between quiescence, $M_{\rm{BH}}$, and $M_{*}$ is a crucial discriminator between models and a sensitive probe of the drivers of quiescence. We compare directly-measured $M_{\rm{BH}}$, $M_{*}$, and other properties of a sample of star-forming and quiescent galaxies, finding that observed quiescent galaxies have higher $M_{\rm{BH}}$ than star-forming galaxies with similar $M_{*}$. These trends are in good qualitative agreement with models in which star formation is suppressed due to quasi-continuous heating from AGN feedback. We assert that models that do not replicate this behavior are missing an essential element in their physical recipes. Our study suggests that the central black hole is critical to the process by which star formation is terminated within central galaxies, giving credence to the AGN quenching paradigm.
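The observational comparison summarised above boils down to measuring the offset in black-hole mass between quiescent and star-forming centrals at fixed stellar mass. The sketch below illustrates that step with mock data only; the bin edges, scatter and the simulated 0.3 dex offset are assumptions, not the measured values of this Letter.
\begin{verbatim}
import numpy as np

def delta_logmbh(logMstar, logMbh, quiescent, bins):
    """Median log M_BH offset (quiescent minus star-forming) per stellar-mass bin."""
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (logMstar >= lo) & (logMstar < hi)
        q, sf = logMbh[in_bin & quiescent], logMbh[in_bin & ~quiescent]
        if len(q) and len(sf):
            rows.append((0.5 * (lo + hi), np.median(q) - np.median(sf), len(q), len(sf)))
    return rows

# mock stand-in for the 91-galaxy compilation (not the real measurements)
rng = np.random.default_rng(3)
logMstar = rng.uniform(10.0, 12.0, 91)
quiescent = rng.random(91) < (logMstar - 10.0) / 2.0
logMbh = 8.3 + 1.2 * (logMstar - 11.0) + 0.3 * quiescent + rng.normal(0.0, 0.3, 91)
for centre, offset, nq, nsf in delta_logmbh(logMstar, logMbh, quiescent,
                                            np.arange(10.0, 12.1, 0.5)):
    print("logM* ~ %.2f : Delta logM_BH = %+.2f (%d Q, %d SF)" % (centre, offset, nq, nsf))
\end{verbatim}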
16
9
1609.07141
1609
1609.07188_arXiv.txt
Early-type galaxies (ETGs) are supposed to follow the virial relation $M = k_e \sigma_*^2 R_e / G$, with $M$ being the mass, $\sigma_*$ being the stellar velocity dispersion, $R_e$ being the effective radius, $G$ being Newton's constant, and $k_e$ being the virial factor, a geometry factor of order unity. Applying this relation to (a) the \atlas\ sample of \citet{cappellari2013a} and (b) the sample of \cite{saglia2016} gives ensemble-averaged factors $\ke=5.15\pm0.09$ and $\ke=4.01\pm0.18$, respectively, with the difference arising from different definitions of effective velocity dispersions. The two datasets reveal a statistically significant tilt of the empirical relation relative to the theoretical virial relation such that $M\propto(\sigma_*^2R_e)^{0.92}$. This tilt disappears when replacing $R_e$ with the semi-major axis of the projected half-light ellipse, $a$. All best-fit scaling relations show zero intrinsic scatter, implying that the mass plane of ETGs is fully determined by the virial relation. Whenever a comparison is possible, my results are consistent with, and confirm, the results by \citet{cappellari2013a}. The difference between the relations using either $a$ or $R_e$ arises from a known lack of highly elliptical high-mass galaxies; this leads to a scaling $(1-\epsilon) \propto M^{0.12}$, with $\epsilon$ being the ellipticity and $R_e = a\sqrt{1-\epsilon}$. Accordingly, $a$, not $R_e$, is the correct proxy for the scale radius of ETGs. By geometry, this implies that early-type galaxies are axisymmetric and oblate in general, in agreement with published results from modeling based on kinematics and light distributions.
} Early-type galaxies (ETGs) are known to follow characteristic scaling relations between several structural parameters including, most prominently, the line-of-sight velocity dispersion $\sigma_*$ of their stars. The first of these relations to be discovered was the Faber--Jackson relation $L\propto\sigma_*^4$ between velocity dispersion and galactic luminosity $L$ \citep{faber1976}. Subsequent work led to the discovery of the ``classic'' fundamental plane relation $L\propto\sigma_*^{\alpha}I^{\beta}$, or equivalently, $R_e\propto\sigma_*^{\gamma}I^{\delta}$ which include the (two-dimensional, projected) effective (half-light) radius $R_e$ and the average surface brightness $I$ within $R_e$ \citep{dressler1987, djorgovski1987}; recent studies \citep{cappellari2013a} find $\gamma\approx1$ and $\delta\approx-0.8$. From the virial relation $M\propto\sigma_*^2R_e$, with $M$ being the galaxy mass, one expects $\gamma=2$ and $\delta=-1$; the tilt of the fundamental plane, i.e. the discrepancy between expected and observed values, can be ascribed to a scaling of mass-to-light ratio with velocity dispersion (\citealt{cappellari2006}; but see also \citealt{cardone2011}). The Faber--Jackson relation can be understood as a projection of the fundamental plane onto the $L$--$\sigma_*$ plane (but see also \citealt{sanders2010}). Whereas scaling relations that involve $L$ are convenient because the luminosity is an observable, the galactic dynamics is controlled by the galaxy mass $M$ for which $L$ is a proxy. For pressure-supported stellar systems, mass and velocity dispersion are connected via the virial relation \begin{equation} \label{eq:virial} M = k_e\,\frac{\sigma_*^2\,R_e}{G} \end{equation} where $G$ is Newton's constant and $k_e$ is a geometry factor of order unity \citep[e.g.,][]{binney2008}. Accordingly, a ``more fundamental plane'' \citep{bolton2007} is given by the ``mass plane'' relation $M\propto\sigma_*^{\kappa}R_e^{\lambda}$; from the virial theorem, one expects $\kappa=2$ and $\lambda=1$ (see also \citealt{cappellari2016} for a recent review). \begin{figure*}[t!] \centering \includegraphics[angle=-90, width=84mm]{fig/Mbulge-virial-massplane-ATLAS3D.eps} \hspace{4mm} \includegraphics[angle=-90, width=84mm]{fig/Mbulge-virial-massplane-Saglia2016.eps} \caption{Galaxy mass $M$ as function of virial term $\sigma_*^2R_e/G$, both in units of solar mass. Please note the somewhat different axis scales. \emph{Left:} for the \atlas\ sample. The grey line corresponds to a linear relation (Equation \ref{eq:virial}) with ensemble-averaged virial factor $\ke = 5.15$. The black line marks the best-fit generalized virial relation (Equation~\ref{eq:massplane}) with $x = 0.924 \pm 0.016$ \emph{Right:} for the data of \citet{saglia2016}. The grey line indicates a linear relation with $\ke = 4.01$. The black line marks the best-fit generalized virial relation with $x = 0.923 \pm 0.018$. \label{fig:virialRe}} \end{figure*} Testing the validity and accuracy of Equation~(\ref{eq:virial}) is important as virial mass estimators of this type are widely applied to pressure-supported stellar systems. Fundamental plane studies usually derive masses $M$ from photometry and equate them with dynamical masses, presuming an equality of the two. A deviation from the theoretical relation would imply the presence of additional ``hidden'' parameters or dependencies between parameters. 
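As a concrete illustration of the estimator in Equation~(\ref{eq:virial}), the sketch below evaluates it for made-up but representative numbers (the values of $\sigma_*$ and $R_e$ are illustrative and not taken from either sample; $k_e = 5.15$ is the ensemble-averaged factor quoted for the \atlas\ sample):
\begin{verbatim}
import astropy.units as u
from astropy.constants import G

def virial_mass(sigma, R_e, k_e=5.15):
    # M = k_e * sigma^2 * R_e / G  (Equation 1)
    return (k_e * sigma**2 * R_e / G).to(u.M_sun)

# Illustrative early-type galaxy: sigma_* = 200 km/s, R_e = 5 kpc
print(virial_mass(200 * u.km / u.s, 5 * u.kpc))  # about 2.4e11 solar masses
\end{verbatim}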
Unlike for relations between luminosity and other parameters, a tilt in the mass plane would be connected directly to the dynamics or structure of galaxies. To date, the virial relation (Equation~\ref{eq:virial}) is commonly assumed to be valid exactly (cf., e.g., \citealt{cappellari2013a}). This is, however, not undisputed. Based on an analysis of about 50\,000 SDSS galaxies, \citet{hyde2009} concluded that $M\propto(\sigma_*^2R_e)^{0.83\pm0.01}$. More recent observations of early-type galaxies in three nearby galaxy clusters, accompanied by more sophisticated dynamical modeling, find $\kappa\approx1.7$ \citep{scott2015}. This raises the question to what extent Equation~(\ref{eq:virial}) is appropriate for describing the dynamics of ETGs, and which alternative formulations might be necessary.
Using public data for the early-type galaxy samples of \citet{cappellari2011, cappellari2013a} and \citet{saglia2016}, I probe the validity and accuracy of the virial relation given by Equation~(\ref{eq:virial}). The key results are: \begin{enumerate} \item Assuming a linear relationship between galaxy mass and virial term, I find ensemble-averaged virial factors of $\ke = 5.15 \pm 0.09$ and $\ke = 4.01 \pm 0.18$ for the \atlas\ and \citet{saglia2016} samples, respectively, in agreement with \citet{cappellari2013a} (for \atlas). The difference between the two samples arguably arises from the \citet{saglia2016} velocity dispersions being systematically higher than the \atlas\ ones by 13\% due to different conventions. \item For both galaxy samples, the empirical virial relation is significantly (by more than $4\sigma$) tilted, such that $M \propto (\sigma_*^2 R_e)^{0.92}$. For the \atlas\ data, this is consistent with the mass plane analysis provided by \citet{cappellari2013a}. \item Replacing the effective radius $R_e$ with the semi-major axis of the projected half-light ellipse $a$ reconciles empirical and theoretical virial relations, with $M \propto \sigma_*^2 a$ (Equations \ref{eq:virial-a} and \ref{eq:massplane-a}). The ensemble-averaged virial factor is $\ka = 3.82 \pm 0.062$, in good agreement with \citet{cappellari2013a}. \item All best-fit virial relations show intrinsic scatter consistent with zero. This implies that the mass plane of ETGs is fully determined by the virial relation, i.e., that masses $M$ do not scale independently with $\sigma_*$ and either $R_e$ or $a$ but only with $\sigma_*^2 R_e$ (or $\sigma_*^2 a$). \item The ``roundness'' $1-\epsilon$, with ellipticity $\epsilon$, mildly scales with galaxy mass such that $(1 - \epsilon) \propto M^{0.12}$. This agrees with the known lack of highly elliptical galaxies for $M \gtrsim 10^{11}\msol$. As $R_e = a\sqrt{1-\epsilon}$, the scaling of mass and roundness explains the tilt in the virial relation that occurs when using $R_e$ instead of $a$ as the galaxy scale radius. \item Given that (i) $a$ turns out to be the correct proxy for the galactic scale radius and (ii) the best-fit virial relation (Equation~\ref{eq:massplane-a}) fits the data with zero intrinsic scatter, one finds that early-type galaxies are axisymmetric and oblate in general. This agrees with results from modeling their intrinsic shapes based on kinematics and light distributions. \end{enumerate}
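A quick consistency check, not spelled out above, shows that items 3 and 5 together reproduce the tilt reported in item 2: with $M \propto \sigma_*^2 a$, $R_e = a\sqrt{1-\epsilon}$, and $(1-\epsilon) \propto M^{0.12}$, one has
\[
\sigma_*^2 R_e = \sigma_*^2\, a\, (1-\epsilon)^{1/2} \propto M \cdot M^{0.06} = M^{1.06}
\quad\Longrightarrow\quad
M \propto \left(\sigma_*^2 R_e\right)^{1/1.06} \simeq \left(\sigma_*^2 R_e\right)^{0.94},
\]
within roughly one standard error of the fitted exponent $x \simeq 0.92$.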
16
9
1609.07188
1609
1609.09706_arXiv.txt
We present a photometric study of the Andromeda\,XVIII dwarf spheroidal galaxy associated with M31, and situated well outside of the virial radius of the M31 halo. The galaxy was resolved into stars with Hubble Space Telescope/Advanced Camera for Surveys revealing the old red giant branch and red clump. With the new observational data we determined the Andromeda\,XVIII distance to be $D = 1.33_{-0.09}^{+0.06}$ Mpc using the tip of red giant branch method. Thus, the dwarf is situated at the distance of 579 kpc from M31. We model the star formation history of Andromeda\,XVIII from the stellar photometry and Padova theoretical stellar isochrones. An ancient burst of star formation occurred 12--14 Gyr ago. There is no sign of recent/ongoing star formation in the last 1.5 Gyr. The mass fractions of the ancient and intermediate age stars are 34 and 66 per cent, respectively, and the total stellar mass is $4.2\times$10$^6\,M_{\odot}$. It is probable that the galaxy has not experienced an interaction with M31 in the past. We also discuss star formation processes of dSphs KKR\,25, KKs\,03, as well as dTr KK\,258. Their star formation histories were uniformly measured by us from HST/ACS observations. All the galaxies are situated well beyond the Local Group and the two dSphs KKR\,25 and KKs\,03 are extremely isolated. Evidently, the evolution of these objects has proceeded without influence of neighbours.
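As a quick check of the quoted numbers (this is only the standard distance-modulus conversion, not the full TRGB calibration, which also involves the adopted extinction and zero point), a distance of $D = 1.33$ Mpc corresponds to
\[
\mu_0 = 5\log_{10}\!\left(\frac{D}{10\,\mathrm{pc}}\right) = 5\log_{10}\!\left(1.33\times10^{5}\right) \simeq 25.62\;\mathrm{mag},
\]
in agreement with the true distance modulus derived in the body of the paper.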
In recent years, the galaxies in the immediate vicinity of the Local Group have been the focus of intense and wide-ranging research \citep{mquinn, collins15, cr}, as these galaxies represent a unique laboratory for studies of the history of star formation in the Universe, the properties of dark matter and tests of the modern cosmological LCDM paradigm. In particular, it is obvious that the Local Group can be especially promising in the search for new dwarf galaxies. This search is especially important, bearing in mind the well known "lost satellites" problem in cosmology, because the LCDM theory predicts more satellites of giant galaxies than are found in reality. Recently we have detected and studied several unusual nearby dwarf galaxies. These spheroidal objects are rather isolated, or at the outskirts of the Local Group \citep{mak12, kar14, kar15}. Although the sample of galaxies is still small, they represent a class of dwarf galaxies that can help understand the problems of the formation and evolution of dwarf galaxies and groups of galaxies within the LCDM paradigm. \begin{figure*} \includegraphics[width=11cm]{fig1.ps} \caption{\textit{HST}/ACS combined distortion-corrected mosaic image of And\,XVIII in the \textit{F606W} filter. The image size is $3.4\times3.4$ arcmin. } \label{fig:ima} \end{figure*} The Local Group dwarf galaxy Andromeda\,XVIII is one more object in the sample of quite isolated dwarf spheroidals. It was discovered by \citet{mccon2008} as a part of their CFHT/MegaPrime photometric survey of M31. The observations were made in {\it g} and {\it i} bands, and the galaxy was seen as a prominent overdensity of stars. The colour-magnitude diagram of And\,XVIII was obtained by \citet{mccon2008}, and red giant branch stars were clearly visible. \citet{tollerud} observed spectra of a number of red giant stars belonging to And\,XVIII. According to their data, And\,XVIII has the heliocentric systemic velocity $v_{sys} = -332.1\pm2.7$ \kms and the velocity dispersion $\sigma_v = 9.7\pm2.3$ \kms. The authors note, that the measured $v_{sys}$ is very close to M31's $v_{sys}$. They pointed out that at a distance of about 600 kpc from M31 And\,XVIII is near its apocenter and therefore at rest with respect to M31. In this work we present new observations of the And\,XVIII dwarf galaxy obtained aboard the Hubble Space Telescope with the Advanced Camera for Surveys (HST/ACS), which allow us to measure an accurate photometric distance and the detailed star formation history of the galaxy based on the photometry of the resolved stellar populations of And\,XVIII. These new data can shed light on the origin and evolution of the dwarf spheroidal galaxy.
We present new observations with HST/ACS of the Andromeda\,XVIII dwarf galaxy situated in the Local Group of galaxies and associated with giant spiral M\,31. We analyse the star formation history and possible evolution of the galaxy and summarize the results of our study as follows: \begin{itemize} \item We obtained a colour-magnitude diagram of And\,XVIII from precise stellar photometry of the resolved stars, made possible by deep F606W (V) and F814W (I) HST/ACS images. The colour-magnitude diagram of And\,XVIII reveals a thin and well-distinguished red giant branch. The most abundant feature in the CMD is the red clump, visible well above the photometric limit. There is no pronounced blue main sequence, a characteristic of dwarf spheroidal galaxies that lack ongoing star formation. \item The high quality stellar photometry allows us to derive an accurate distance to And\,XVIII using the tip of the red giant branch distance indicator. The measured TRGB magnitude is $F814W_{TRGB} = 21.70_{-0.14}^{+0.06}$ in the ACS instrumental system. Using the calibration for the TRGB distance indicator by \citet{rizzietal07} and the Galactic extinction E(B-V) = 0.093 from \citet{schlafly}, we derived the true distance modulus for And\,XVIII of $25.62_{-0.17}^{+0.09}$ ($D = 1.33_{-0.09}^{+0.06}$ Mpc). With this new distance, the And\,XVIII dwarf spheroidal galaxy is situated at a linear distance of 579 kpc from M31, well outside of the virial radius of the M31 halo. \item We performed total and surface photometry of And\,XVIII with the fully processed distortion-corrected HST/ACS F606W and F814W images. The measured total magnitudes are $V = 15.50\pm0.24$ mag and $I = 14.68\pm0.15$ mag. The corresponding absolute magnitudes of And\,XVIII are $M_V = -10.41$ and $M_I = -11.10$, taking into account Galactic extinction \citep{schlafly} and the distance modulus given directly above. The surface brightness profiles of the And\,XVIII galaxy are well fitted by an exponential law (written out explicitly just after this list). The derived central surface brightness is $\mu_0^V = 23.96\pm0.02$ mag\,arcsec$^{-2}$ and $\mu_0^I = 23.14\pm0.02$ mag\,arcsec$^{-2}$. The uncertainties are formal fitting errors. The exponential scale length is $h_V = 26.2$ arcsec (170 pc) and $h_I = 26.8$ arcsec (174 pc). \item The quantitative star formation and metal enrichment history of And\,XVIII was determined from the stellar photometry results using our StarProbe package \citep{mm04}. According to our calculations, an ancient burst of star formation occurred 12--14 Gyr ago. The metallicity of the stars formed at that time is found to be in the interval [Fe/H]=$-2$ to $-1.6$ dex. Our model is consistent with a subsequent quiescent period in the star formation history of And\,XVIII from 8 to 12 Gyr ago. Following that period, our measurements indicate that there was star formation from 8 to 1.5 Gyr ago, most prominently at the earliest and latest of those times. The stars from this period have a higher metallicity of [Fe/H]=$-1.6$ to $-1.0$ dex. There are no signs of recent/ongoing star formation in the last 1.5 Gyr. The mass fractions of the ancient and intermediate age stars are 34 and 66 per cent, respectively. The total stellar mass of And\,XVIII is $4.2\times$10$^6\,M_{\odot}$.
\item Taking into account this detailed star formation history of And\,XVIII, as well as the models of \citet{teyssier} and \citet{watkins}, we compare the possible evolution scenarios of And\,XVIII and five other isolated dwarf galaxies in the vicinity of the Local Group. This comparison suggests that, in isolated galaxies, the gas loss and quenching of star formation that characterize present-day dSph systems were driven by internal processes in the galaxy itself rather than by external influences. It is likely that the early evolution of very isolated dwarf galaxies is controlled mostly by the depth of the potential well. For deeper wells, a larger amount of gas accretes more rapidly towards the centre and cools, so that the bulk of the stars form within a short time. \end{itemize}
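For reference, the exponential law used above for the surface-brightness fits has the standard form (assumed here, since the explicit expression is not written out in the text): an intensity profile $I(r) = I_0\, e^{-r/h}$ corresponds, in magnitude units, to
\[
\mu(r) = \mu_0 + \frac{2.5}{\ln 10}\,\frac{r}{h} \simeq \mu_0 + 1.086\,\frac{r}{h},
\]
with the central surface brightnesses $\mu_0$ and scale lengths $h$ taking the $V$- and $I$-band values quoted in the list above.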
16
9
1609.09706
1609
1609.00941.txt
\vspace{0.2cm} \noindent \textbf{\footnotesize ABSTRACT.} \, We present a fully analytical, time-dependent leptonic one-zone model that describes a simplified radiation process of multiple interacting ultrarelativistic electron populations, accounting for the flaring of GeV blazars. In this model, several mono-energetic, ultrarelativistic electron populations are successively and instantaneously injected into the emission region, i.e., a magnetized plasmoid propagating along the blazar jet, and subjected to linear, time-independent synchrotron radiative losses, which are caused by a constant magnetic field, and nonlinear, time-dependent synchrotron self-Compton radiative losses in the Thomson limit. Considering a general (time-dependent) multiple-injection scenario is, from a physical point of view, more realistic than the usual (time-independent) single-injection scenario invoked in common blazar models, as blazar jets may extend over tens of kiloparsecs and, thus, most likely pick up several particle populations from intermediate clouds. We analytically compute the electron number density by solving a kinetic equation using Laplace transformations and the method of matched asymptotic expansions. Moreover, we explicitly calculate the optically thin synchrotron intensity, the synchrotron self-Compton intensity in the Thomson limit, as well as the associated total fluences. In order to mimic injections of finite duration times and radiative transport, we model flares by sequences of these instantaneous injections, suitably distributed over the entire emission region. Finally, we present a parameter study for the total synchrotron and synchrotron self-Compton fluence spectral energy distributions for a generic three-injection scenario, varying the magnetic field strength, the Doppler factor, and the initial electron energy of the first injection in realistic parameter domains, demonstrating that our model can reproduce the typical broad-band behavior seen in observational data.
%---------------------------------- \noindent Blazars are among the most energetic phenomena in nature, representing the most extreme type in the class of active galactic nuclei \cite{UP}. They feature relativistic jets that extend over tens of kiloparsecs and are directed toward the general direction of Earth. Observations of their radiation emission show very high luminosities, rapid variabilities, and high polarizations. Moreover, apparent superluminal characteristics can be detected along the first few parsecs of the jets. The main components of blazar jets are magnetized plasmoids, which are assumed to arise in the Blandford-Znajek and the Blandford-Payne process \cite{BlZn, BP}, constituting the major radiation zones. These plasmoids pick up -- and interact with -- particles of interstellar and intergalactic clouds along their trajectories \cite{PoSch}, giving rise to the emission of a series of strong flares. A blazar spectral energy distribution (SED) consists of two broad non-thermal radiation components in different domains. The low-energy spectral component, ranging from radio to optical or X-ray energies, is usually attributed to synchrotron radiation of relativistic electrons subjected to ambient magnetic fields. The origin of the high-energy spectral component, covering the X-ray to $\gamma$-ray regime, is still under debate. It can, for instance, be modeled by inverse Compton radiation coming from low-energy photon fields that interact with the relativistic electrons \cite{B07, DSS}. This process can be described either by a synchrotron self-Compton (SSC) model (see, e.g., \cite{SchlRö, RS09a} and references therein), where the electrons scatter their self-generated synchrotron photons, or by so-called external Compton models like \cite{DS, sbr94, bea00}, where the seed photons are generated in the accretion disk, the broad-line region, or the dust torus of the central black hole (some models also consider ambient fields of IR radiation from diffuse, hot dust, see, e.g., \cite{SBMM}). Aside from these leptonic scenarios, the high-energy component can also be modeled via proton-synchrotron radiation or the emission of $\gamma$-rays arising from the decay of neutral pions formed in interactions of protons with ambient matter (see, e.g., \cite{Bea13, ws15, Cer2} and references therein). Mixed models including both leptonic as well as hadronic processes are also considered in the literature (e.g., \cite{Cer1, DaLi}). A major task is to properly understand and to account for the distinct variability patterns of the non-thermal blazar emission at all frequencies with different time scales ranging from years down to a few minutes, where the shortest variability time scales are usually observed for the highest energies of the spectral components, as in PKS 2155-304 \cite{AeaH07, AeaH09} and Mrk 501 \cite{aeaM07} in the TeV range, or Mrk 421 \cite{C04} in the X-ray domain. So far, only multi-zone models, which feature an internal structure of the emission region with various radiation zones caused by collisions of moving and stationary shock waves, have been proposed to explain the extreme short-time variability of blazars (see, e.g., \cite{smm, mar, ggpk, gtbc09, gub09, BG12}). 
In the framework of one-zone models, however, extreme short-time variability can, a priori, not be realized as the duration of the injection into a plasmoid of finite size (with characteristic radius $\mathcal{R}_0$) and the light crossing (escape) time in this region are naturally of the order $\mathcal{O}(2 \, \mathcal{R}_0/(c \, \mathcal{D}))$, $c$ being the speed of light in vacuum and $\mathcal{D}$ the bulk Doppler factor \cite{ESR}. This leads to a minimum time scale for the observed flare duration, which may exceed the short-time variability scale in the minute range by several magnitudes. In particular, using the typical parameters $\mathcal{R}_0 = 10^{15} \, \textnormal{cm}$ and $\mathcal{D} = 10$, we find $2 \, \mathcal{R}_0/(c \, \mathcal{D}) \approx 1.9 \, \textnormal{h}$ in the observer frame. Thus, this type of model can only be used to account for variability on larger time scales, that is, from years down to hours. Both one-zone and multi-zone models have been studied extensively over the last decades in order to explain the variability but also the SEDs of blazars, incorporating leptonic and hadronic interactions. In these studies, analytical as well as numerical approaches were used, containing, among others, various radiative loss and acceleration processes, details of radiative transport, diverse injection patterns, different cross sections for particle interactions, as well as particle decay and pair production/annihilation (see the review article \cite{B10} and the references therein). Thus, the literature on this matter is quite comprehensive. However, models featuring pick-up processes of any kind usually assume only a single injection of particles into the emission region as the cause of the flaring. This may be unrealistic as blazar jets, which may extend over tens of kiloparsecs, most likely intersect with several clouds, leading to multiple injections. In such an intricate situation, it is of particular interest to have a self-consistent description of the particle's radiative cooling, the radiative transport, et cetera. Therefore, we propose a simple, but fully analytical, time-dependent leptonic one-zone model featuring multiple uniform injections of nonlinearly interacting ultrarelativistic electron populations, which undergo combined synchrotron and SSC radiative losses (for previous works, see \cite{RökenSchlickeiser, RS09a}). More precisely, we assume that the blazar radiation emission originates in spherically shaped and fully ionized plasmoids, which feature intrinsic randomly-oriented, but constant, large-scale magnetic fields and propagate ultrarelativistically along the general direction of the jet axis. These plasmoids pass through -- and interact with -- clouds of the interstellar and intergalactic media, successively and instantaneously picking up multiple mono-energetic, spatially isotropically distributed electron populations, which are subjected to linear, time-independent synchrotron radiative losses via interactions with the ambient magnetic fields and to nonlinear, time-dependent SSC radiative losses in the Thomson limit. This is the first time an analytical model that describes combined synchrotron and SSC radiative losses of several subsequently injected, interacting injections is presented (for a small-scale, purely numerical study on multiple injections see \cite{lk00}). 
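Written out explicitly for these fiducial values, the light-crossing estimate is
\[
\frac{2\,\mathcal{R}_0}{c\,\mathcal{D}} = \frac{2\times10^{15}\,\mathrm{cm}}{\left(3\times10^{10}\,\mathrm{cm\,s^{-1}}\right)\times 10} \approx 6.7\times10^{3}\,\mathrm{s} \approx 1.9\,\mathrm{h},
\]
confirming the hour-scale lower bound on the observed flare duration quoted above.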
We point out that because the SSC cooling is a collective effect, that is, the cooling of a single electron depends on the entire ensemble within the emission region, injections of further particle populations into an already cooling system give rise to alterations of the overall cooling behavior \cite{RökenSchlickeiser, zs13}. Moreover, as we do employ Dirac distributions for the time profile of the source function of our kinetic equation and, further, do not consider any details of radiative transport, we mimic injections of finite duration times and radiative transport by partitioning each flare into a sequence of instantaneous injections, which are appropriately distributed over the entire emission region, using the quantities computed here. The article is organized as follows. In Section \ref{SEC2}, an approximate analytical solution of the time-dependent, relativistic kinetic equation of the volume-averaged differential electron number density is derived. Based on this solution, the optically thin synchrotron intensity, the SSC intensity in the Thomson limit, as well as the corresponding total fluences are calculated in Sections \ref{SEC3} and \ref{SEC4}. We explain how to mimic finite injection durations and radiative transport by multiple use of our results in Section \ref{MSTVBL}, and show a parameter study for the total synchrotron and SSC fluence SEDs. Section \ref{S&O} concludes with a summary and an outlook. Supplementary material, which is required for the computations of the electron number density and the synchrotron and SSC intensities, is given in Appendices \ref{app:LM}-\ref{LTOF}. Moreover, in Appendix \ref{numcode}, we briefly describe the plotting algorithm employed for the creation of the fluence SEDs. %---------------------------------------------------------------------
\label{S&O} %-------------------------------------------------- \noindent We introduced a fully analytical, time-dependent leptonic one-zone model for the flaring of blazars that employs combined synchrotron and SSC radiative losses of multiple interacting, ultrarelativistic electron populations. Our model assumes several injections of electrons into the emission region as the cause of the flaring, which differs from common blazar models where only a single injection is considered. This is, from a physical point of view, more realistic since blazar jets may extend over distances of the order of tens of kiloparsecs and, thus, it is most likely that there is a pick up of more than just one particle population from interstellar and intergalactic clouds. At the same time, it further assumes both radiative cooling processes to occur simultaneously, as would be the case in any physical scenario. In more detail, applying Laplace transformations and the method of matched asymptotic expansions, we derived an approximate analytical solution of the relativistic kinetic equation of the volume-averaged differential electron number density for several successively and instantaneously injected, mono-energetic, spatially isotropically distributed, interacting electron populations, which are subjected to linear, time-independent synchrotron radiative losses and nonlinear, time-dependent SSC radiative losses in the Thomson limit. Using this solution, we computed the optically thin synchrotron intensity, the SSC intensity in the Thomson limit, as well as the corresponding total fluences. Moreover, we mimicked finite injection durations and radiative transport by modeling flares in terms of sequences of instantaneous injections. Ultimately, we presented a parameter study for the total synchrotron and SSC fluence SEDs for a generic three-injection scenario with variations of the magnetic field strength, the Doppler factor, and the initial electron energies, showing that our model can reproduce the characteristic broad-band SED shapes seen in observational data. We point out that the SSC radiative loss term considered here is strictly valid only in the Thomson regime, limiting the applicability of the model to at most GeV blazars. Nonetheless, it can be generalized to describe TeV blazars by using the full Klein-Nishina cross section in the SSC energy loss rate. This leads to a model for which similar yet technically more involved methods apply. Further, in order to make our simple analytical model more realistic, terms accounting for spatial diffusion and for electron escape could be added to the kinetic equation. Also, more elaborate source functions, e.g., with a power law energy dependence, a time dependence in form of rectangular functions for finite injection durations, and with a proper spatial dependence may be considered. However, judging from the complexity of our more elementary analysis, this is most likely only possible via direct numerical evaluation of the associated kinetic equation. \vspace{0.2cm}
16
9
1609.00941
1609
1609.02830.txt
We illustrate how it is possible to calculate the quantum gravitational effects on the spectra of primordial scalar/tensor perturbations starting from the canonical, Wheeler-DeWitt, approach to quantum cosmology. The composite matter-gravity system is analysed through a Born-Oppenheimer approach in which gravitation is associated with the heavy degrees of freedom and matter (here represented by a scalar field) with the light ones. Once the independent degrees of freedom are identified, the system is canonically quantised. The differential equation governing the dynamics of the primordial spectra with its quantum-gravitational corrections is then obtained and is applied to diverse inflationary evolutions. Finally, the analytical results are compared to observations through a Monte Carlo Markov Chain technique, an estimate of the free parameters of our approach is presented, and the results obtained are compared with previous ones.
The paradigm of inflation \cite{inflation} has led to a beautiful connection between microscopic and macroscopic scales. This occurs since inflation acts as a ``magnifying glass'' insofar as microscopic quantum fluctuations at the beginning of time, when the universe was very small, evolve into inhomogeneous structures \cite{Stewart:1993bc}. Thus the observed structure of the present-day universe is related to the very early time quantum dynamics. As a consequence the former can be used to test the primordial dynamics and in particular the possible effects of quantum gravity at early times corresponding to a very small universe. The reason for this is that because of the huge value of the Planck mass quantum gravity effects are otherwise suppressed (of course one can also hope to observe quantum gravitational effects in the presence of very strong gravitational fields, for example in the proximity of black holes).\\ Composite systems which involve two mass (or time) scales such as molecules are amenable to treatment by a Born-Oppenheimer approach \cite{BO}. For molecules this is possible because of the different nuclear and electron masses, this allows one to suitably factorise the wave-function of the composite system leading, in a first approximation, to a separate description of the motion of the nuclei and the electrons. In particular it is found that the former are influenced by the mean hamiltonian of the latter and the latter (electrons) follow the former adiabatically (in the quantum mechanical sense). Similarly for the matter gravity system as a consequence of the fact that gravity is characterised by the Planck mass, which is much greater than the usual matter mass, the heavy degrees of freedom are associated with gravitation and the light ones with matter \cite{BO-cosm}. As a consequence, to lowest order, gravitation will be driven by the main matter Hamiltonian and matter will follow gravity adiabatically. As mentioned above we shall quantise the composite system, by this we mean that we shall perform the canonical quantisation of Einstein gravity and matter leading to the Wheeler-DeWitt (WDW) equation \cite{DeWitt}. This is what we mean by quantum gravity and is quite distinct to the introduction of so-called trans-Planckian effects (loosely referred to as quantum gravity) through ad hoc modifications of the dispersion relation \cite{Martin} and/or the initial conditions \cite{inicond}. Further the equations we shall obtain after the BO decomposition will be exact, in the sense that they also include non-adiabatic effects. The above approach has been previously illustrated in a mini-superspace model with the aim of studying the semiclassical emergence of time \cite{BO-cosm}, which is otherwise absent in the quantum system. Conditions were found for the usual (unitary) time evolution of quantum matter (Schwinger-Tomonaga or Schr\"odinger) to emerge, essentially these are that non-adiabatic transitions (fluctuations) be negligible or that the universe be sufficiently far from the Planck scale. In a series of papers \cite{K} we have generalized the approach to non-homogeneous cosmology in order to obtain corrections to the usual power spectrum of cosmological fluctuations produced during inflation. These corrections, which essentially amount to the inclusion of the effect of the non-adiabatic transitions, affect the infrared part of the spectrum and lead to an amplification or a suppression depending on the background evolution. 
More interestingly they depend on the wavenumber $k$ and scale as $k^{-3}$, in both the scalar and the tensor sectors, when background evolution is close to de Sitter. That non-adiabatic effects affect the infrared part of the spectrum, which is associated with large scales, is not surprising, since it is this part of the spectrum which exits the horizon in the early stages of inflation and is exposed to high energy and curvature effects for a longer time. The latest Planck mission results \cite{cmb} provide the most accurate constraints available currently to inflationary dynamics \cite{inflation}. So far the slow roll (SR) mechanism has been confirmed to be a paradigm capable of reproducing the observed spectrum of cosmological fluctuations and the correct tensor to scalar ratio \cite{Stewart:1993bc}. Since the inflationary period is the cosmological era describing the transition from the quantum gravitational scale down to the hot big bang scale, it may, somewhere, exhibit related peculiar features which could be associated with quantum gravity effects. Quite interestingly a loss of power, with respect to the expected flatness for the spectrum of cosmological perturbations, can be extrapolated from the data at large scales \cite{powerloss1}. Since, as mentioned above, it is for such scales that quantum gravity effects due to non-adiabaticity may appear, this has motivated us to estimate such effects. Unfortunately, such a feature (evident already in the WMAP results) exhibits large errors due to cosmic variance. Nonetheless we feel that it is worth comparing our detailed analytical predictions for the quantum gravity effects with Planck data through a Monte Carlo Markov Chain (MCMC) based method.\\ The paper is organized as follows. In section 2 the basic equations are reviewed, the canonical quantization method and the subsequent BO decomposition are illustrated. In the section 3 we calculate the master equation governing the dynamics of the two point function of the quantum fluctuations when the quantum gravitational effects are taken into account and the vacuum prescription for these fluctuations is briefly discussed. In section 4 we review the basic relations for de Sitter, power law and slow-roll (SR) inflation and the quantum corrections to the primordial spectra are explicitly calculated for these three distinct cases. In section 5 we illustrate how our analytical predictions are compared to observations and we comment our results. Finally in section 6 we draw the conclusions. %%%%%%%%%%%%%%%%%%%%%
As we mentioned in the introduction, the matter-gravity system is amenable to a Born-Oppenheimer treatment, wherein gravitation is associated with the heavy (slow) degrees of freedom and matter with the light (fast) degrees of freedom. Once the system is canonically quantised and the associated wave function suitably decomposed, one obtains that, on neglecting terms due to fluctuations (non-adiabatic effects), in the semiclassical limit gravitation is driven by the mean matter Hamiltonian and matter follows gravitation adiabatically, while evolving according to the usual Schwinger-Tomonaga (or Schr\"odinger) equation. Our aim in this paper has been to study perturbatively the effect of the non-adiabatic contributions for different inflationary backgrounds. In particular, we wished to assess such effects on the observable features of the scalar/tensor fluctuations generated during inflation. In order to do this we obtained a master equation for the two-point function of such fluctuations, which includes the lowest order quantum gravitational corrections. These corrections manifest themselves on the largest scales, since the associated perturbations are more strongly affected by quantum gravitational effects, as they exit the horizon at the early stages of inflation and are exposed to high energy and curvature effects for a longer period of time. Interestingly, the very short wavelength part of the spectrum remains unaffected and one may consistently assume the BD vacuum as an initial condition for the evolution of the quantum fluctuations. Computationally this feature is relevant as it allows one to find the long wavelength part of the spectrum of the fluctuations through a matching procedure (similar to the standard case without quantum gravitational corrections). \\ In particular one finds, for a de Sitter evolution, a power enhancement w.r.t. the standard results for the spectrum at large scales, with corrections behaving as $k^{-3}$. Such a $k^{-3}$ dependence was also found with similar approaches \cite{quantumloss} and may appear to be a peculiarity of such quantum gravity models. However, the case of power-law inflation is different: while a power enhancement also occurs there, the $k$ dependence of the quantum gravitational corrections differs from $k^{-3}$ and is, perhaps not surprisingly, directly related to the $k$ dependence of the unperturbed spectra.\\ Finally, the slow-roll case is the most realistic and of greatest interest. The quantum gravitational corrections for the SR case have peculiar features and are very different from the de Sitter case. In particular, for the case of the scalar fluctuations, their form is not simply a deformation of the de Sitter result proportional to the SR parameters. New contributions arise due to SR and their effect is comparable with the de Sitter-like contributions for very large wavelengths. The new contributions are proportional to $\ep{SR}-\eta_{SR}$ and are zero for the de Sitter and power-law cases. They can lead to a power-loss term for low $k$ in the spectrum of the scalar curvature perturbations at the end of inflation, provided the difference $\ep{SR}-\eta_{SR}>0$. The evolution of the primordial gravitational waves has also been addressed. The quantum gravitational corrections also affect the dynamics of tensor perturbations and produce a deviation from the standard results in the low multipole region, which always leads to a power enhancement.
In performing the analysis, for simplicity, we restricted ourselves to the particular case of negligible quantum gravitational contributions to the spectrum of primordial gravitational waves. Further, since our corrections are perturbative, in order to keep them so for all values of $k$, we have suitably extrapolated our predictions for the scalar sector beyond the leading order, describing this in terms of two parameters, and examined them down to $k\rightarrow 0$. Other parametrizations have also been considered; however, the one we presented is the simplest and leads to the best results.\\ %Further we assumed that the LO spectra are generated by the conventional SR mechanism and single field inflation. As a consequence the consistency condition relating scalar and tensor spectral indices and the tensor to scalar ration are valid to such a LO. Different potentials lead to differing expressions for these quantities. It is found that, given the form obtained for the quantum gravitational corrections, our model imposes severe constraints on the shape of the inflationary potential, as a loss in power at large scales is compatible with observations, whereas a power enhancement must be zero or extremely small to fit the data. Only a small subset of the inflationary models satisfying the observed values for $n_s$ and $r$ leads to a loss of power at large scales. The remaining models give a power increase which may be a distinguishable feature, unless $\bar k$ is too small to be observed in the CMB.\\ Finally, the analysis performed was based on the Planck datasets released in 2015, including the Planck TT data with polarization at low $l$ (PL) and the data of the BICEP2/{\it Keck Array}-Planck joint analysis (BK) \cite{bicep}. In our preceding paper \cite{K} our model predictions were tested against the earlier Planck 2013 and BICEP2 data, and the results were different. The MCMC results (see Tables \ref{tab4} and \ref{tab4b}) show that the quantum gravitational modification of the standard power law form for the primordial scalar spectrum improves the fit to the data. Such improvements are much more significant than those obtained with the standard modification of the primordial spectra based on a running spectral index. Let us note that the 2015 Planck data give constraints on the running which are quite different from those coming from the 2013 data. In particular, the fit to the 2015 data does not improve much if one considers a running spectral index in the scalar sector. On including the BK data in our analysis, we find that the results take very similar values for the best fit. Furthermore, comparison with the data predicts, for our model, a loss in power of about $20-25\%$ w.r.t. the standard power law as $k$ approaches zero, and fixes the scale $\bar k$, which necessarily appears in the theoretical model. One finds values for $\bar k$ which are very large compared to the wave number associated with the largest observable scale in the CMB (namely $k_{\rm min}\simeq 1.4\cdot 10^{-4}\;{\rm Mpc}^{-1}$). Let us note that the existence of such a small fundamental length may have relevant consequences for astrophysical observations. Indeed, it is associated with distances which are comparable with the diameter of a large galaxy or a galaxy cluster. We further observe that a variation of the value of $\bar k$ by three orders of magnitude can be obtained on ``re-tuning'' the parameters used for its estimate.
Further, we observe that the value of $\bar k$, although illustrated for a specific inflationary model, is quite general and is found for diverse models compatible with a loss of power. This is rather surprising and, of course, assuming our proposed mechanism is correct, indicates the possible presence of new physics at such scales. Such a result is not entirely new: indications for it have been seen both in a study of the stability of clusters of galaxies and in connection with the running of Newton's constant (\cite{scale}).\\ %%%%%%%%%%%%
16
9
1609.02830
1609
1609.08120_arXiv.txt
We study the temporal evolution of the Na {\sc{i}} D$_{1}$ line profiles in the M3.9 flare SOL2014-06-11T21:03~UT, using high spectral resolution observations obtained with the IBIS instrument on the Dunn Solar Telescope combined with radiative hydrodynamic simulations. Our results show a significant increase in line core and wing intensities during the flare. The analysis of the line profiles from the flare ribbons reveals that the Na {\sc{i}} D$_{1}$ line has a central reversal with excess emission in the blue wing (blue asymmetry). We combine RADYN and RH simulations to synthesise Na {\sc{i}} D$_{1}$ line profiles of the flaring atmosphere and find good agreement with the observations. Heating with a beam of electrons modifies the radiation field in the flaring atmosphere and excites electrons from the ground state $\mathrm{3s~^2S}$ to the first excited state $\mathrm{3p~^2P}$, which in turn modifies the relative populations of the two states. The changes in temperature and in the population densities of the energy states turn the sodium line profile from absorption into emission. Analysis of the simulated spectra also reveals that the Na {\sc{i}} D$_{1}$ flare profile asymmetries are produced by the velocity gradients generated in the lower solar atmosphere.
The lower solar atmosphere is key to our understanding of solar flares, as the vast majority of the flare radiative energy originates in the chromosphere and photosphere. Chromospheric radiation is dominated by the optically thick lines of hydrogen, calcium and magnesium which provide diagnostics on flare dynamics. One of the main characteristics of the flaring chromosphere is the centrally-reversed H$\alpha$ emission profile with asymmetric red and blue wings. The asymmetries are attributed to the downflow and upflow of plasma, triggered by the deposition of non-thermal energy, which are also known as chromospheric condensation and evaporation \citep{1984SoPh...93..105I,1989ApJ...341.1088W,1992ApJ...401..761D}. These processes can be very effective tracers for the velocity field of the flaring atmosphere. \cite{1999ApJ...521..906A} computed flare time-dependent H$\alpha$ and Ca {\sc{ii}} K line profiles with the radiative-hydrodynamic code \citep[RADYN;][]{1997ApJ...481..500C}, and showed that the asymmetries could be produced by the strong velocity gradients generated during the flare. These gradients create differences in the opacity between the red and blue wings of H$\alpha$ and their sign determines whether the asymmetric emission appears to the blue or red side of the line profile. Recently, \cite{2015ApJ...813..125K} made a direct comparison of the observation and RADYN simulation for the evolution of the H$\alpha$ profile. They showed that the steep velocity gradients in the flaring chromosphere modify the wavelength of the central reversal in H$\alpha$. The shift in the wavelength of maximum opacity to shorter and longer wavelengths generates the red and blue asymmetries, respectively. The exact formation height of the Na {\sc{i}} D$_1$ line core is difficult to estimate from observations, and with simulations indicating it is formed below the formation height of the H$\alpha$, Ca {\sc{ii}} H \& K, and the Ca {\sc{ii}} infrared (IR) triplet line cores. \cite{2008AN....329..494S} explored the formation height of the Na {\sc{i}} D lines by calculating the response functions of their profiles to p-mode power variations. They concluded that the line has its origin in a wider region, from the photosphere up to the lower chromosphere at height of around 800 km. Observations and simulations performed for the quiet solar atmosphere have revealed that the Na {\sc{i}} D$_1$ core brightness samples the magnetic bright points in the solar photosphere and is strongly affected by 3D resonance scattering \citep{2002ESASP.477..147M,2010ApJ...709.1362L,2010ApJ...719L.134J}. These studies suggest that the line core emission in the quiet Sun originates in the upper photosphere and lower chromosphere. \begin{figure*}[t] \begin{center} \includegraphics[width=17.4cm]{fig1} \end{center} \caption{(Left) The IBIS Na {\sc{i}} D$_1$ line profile of a quiet solar region. Asterisks show the spectral positions selected for the IBIS line scan. The full black line shows the mean spectrum averaged over the quiet Sun FOV. (Right) Na {\sc{i}} D$_1$ wing and core images at selected wavelength positions. The blue contours show the upper and lower flare ribbons analysed in this paper. Contours indicate the 50\% level of the intensity maximum.} \label{fig1} \end{figure*} There is a lack of flare observations and modelling in the Na {\sc{i}} D lines, with the exception of some early work \citep{1967SoPh....1..389G,1990A&AS...84..601F,1996A&A...310L..29M}. 
The most recent observations of solar flares in Na {\sc{i}} D$_1$ have been reported by \cite{2010SoPh..263..153C}, who analysed the D$_1$ and D$_2$ line intensities observed with the GOLF spectrophotometer onboard SOHO. They found that the intensities of these lines integrated over the solar disk are increased during flares. However, the GOLF instrument can only record intensities in the single wavelength positions of $\pm$0.108~{\AA} from the line core and does not allow for full spectral line profiles. In this paper we present high temporal, spatial and spectral resolution observations of an M3.9 solar flare in Na {\sc{i}} D$_1$. Multi-wavelength observations of this M3.9 flare are also presented in \cite{christian2016}. We study the evolution of the line profiles of the flare ribbons, and compare our findings with synthesised profiles obtained with a radiative hydrodynamic simulation. Motivated by the close match between simulations and observations, we investigate the formation of centrally-reversed, asymmetric Na {\sc{i}} D$_1$ line profiles using synthetic spectra. The line contribution functions and the velocity field in the simulated atmosphere allow us to investigate the nature of the observed line asymmetries.
We have presented spectroscopic observations of the Na {\sc{i}} D$_1$ line in an M3.9 flare and compared our findings with radiative hydrodynamic simulations. Our high spectral resolution observations show that during the flare the Na {\sc{i}} D$_1$ line goes into emission and a central reversal is formed (Figures~\ref{fig3} and 4). The analysis of synthetic line profiles indicates that the change from absorption to emission is a result of the heating of the lower solar atmosphere by the non-thermal electron beam. Although the formation of the line occurs deep below the primary non-thermal energy dissipation site, it is still strongly affected by the heating as the temperature of the region responds immediately to the electron beam (left panel of Figure~\ref{fig55}). The heating rapidly changes the balance between the population densities of the energy states in the sodium atom (Figure~\ref{fig55}), with increased collision rates exciting electrons from the ground $\mathrm{^{~2}S}$ to the first excited state $\mathrm{{~^2}P}$, which allows Na {\sc{i}} D$_1$ photons to escape freely and results in an increase in the line intensity. Furthermore, the line source function shows a local minimum near 300 km and increases toward the core formation height (Figure~\ref{fig555}). As a result, during the heating phase the line profile develops fully into emission. However, when the beam heating is stopped, the temperature starts to drop and the ratio of the population densities changes back again (Figure~\ref{fig55}). The line source function has developed a local maximum near 900 km, in the region where the source function is already decoupled from the Planck function. As a result, the line profiles develop a small absorption near the line core (central reversal), as shown in Figure~\ref{fig5}. The observed line profiles have similar centrally-reversed shapes during the flare (Figures~\ref{fig3} and \ref{fig4}), with the absorption dip smaller for the lower ribbon profiles, indicating that the heating was more intense in the lower ribbon (Figures~\ref{fig3} and \ref{fig4}). Indeed, the RH simulation confirms that, in the flaring atmosphere produced with the lower electron flux (e.g. the F9 model), the Na {\sc{i}} D$_1$ profiles have a deeper absorption dip in the relaxation phase. It must be noted that the central reversal in the synthetic Na {\sc{i}} D$_1$ line profile appears only after heating ends, i.e., in the relaxation phase of the flare simulation. During the beam heating the line profile develops fully into emission without reversal. Recent high spatial resolution ground-based observations of solar flares in the H$\alpha$ and He {\sc{i}} 10830 {\AA} lines indicate that the ribbon kernels of M-class solar flares could have a very narrow width, $<$500~km \citep{2016ApJ...819...89X, 2016NatSR...624319J}. This suggests that the actual footpoint size of the flaring loops could be smaller and, hence, the energy flux could be higher than the estimated 10$^{11}$ erg s$^{-1}$ cm$^{-2}$. To assess the effects of a higher energy input on the synthesised Na {\sc{i}} D$_1$ line profile we performed RADYN simulations for a stronger ($\mathrm{5\times~F11}$) energy flux and used the resulting atmospheric snapshots in RH to synthesise Na {\sc{i}} D$_1$ line profiles.
The obtained line profiles show similar evolution patterns: when beam heating is on, Na {\sc{i}} D$_1$ is in full emission (without central reversal), and when beam heating is off, the line profiles are centrally reversed with asymmetric wing emission depending on the velocity field. Similar behaviour has been shown by the synthesised Na {\sc{i}} D$_1$ spectra simulated with RH for a weaker (F9) RADYN atmosphere. Therefore, the response of the atmosphere at the Na {\sc{i}} D$_1$ formation height to beam heating with different energy fluxes is qualitatively similar (however, the stronger $\mathrm{F12}$ case has to be examined as well). This shows that the formation of the central reversal in the spectra could be used as a diagnostic of the non-thermal heating processes in solar flares. The temporal evolution of the line profile shows excess emission in the blue wing with an almost unshifted line core (Figures~\ref{fig3} and 4). In an atmosphere without velocity fields, the centrally-reversed chromospheric line profiles are symmetric with respect to the line core \citep{1993A&A...274..917F,2006ApJ...653..733C}. However, dynamic models account for the mass motions of the flaring material and reproduce the asymmetric signatures seen in the observations. Our Figure~\ref{fig6} shows that in a zero velocity field the line profile is indeed symmetric. The velocity field in the lower solar atmosphere is not disturbed during the first 10 s of the active beam heating that produces the explosive evaporation above 1000 km, which is higher than the line formation height. However, after 20 s the lower atmosphere has developed upflows of around $\mathrm{2-3~\ks}$, which are generally attributed to gentle evaporation. As a result, the symmetry in the line profiles is now broken. The negative velocity gradient at around $\mathrm{400-800~km}$ modifies the optical depth of the atmosphere in such a way that higher-lying (core) atoms absorb photons with longer wavelengths (red wing photons) and the blue asymmetry is formed (Figure~\ref{fig6}). The line cores of the observed and simulated Na {\sc{i}} D$_1$ asymmetric line profiles remain unshifted (panels $d,e,f$ of Figure~\ref{fig6}), in contrast to the centrally-reversed H$\alpha$ line profiles, which show a red-shifted line core during the blue asymmetry \citep{2015ApJ...813..125K}. This may be due to more complex velocity fields with different condensation/evaporation patterns. Indeed, Figure~\ref{fig6} shows that the velocity gradient changes sign above 800 km, which covers the narrow upper layer of the core formation region. This can produce an effectively unshifted line core. To our knowledge, we have presented the first high spectral resolution imaging spectroscopy of a solar flare in the Na {\sc{i}} D$_1$ line. The simulated line profiles show good agreement with observations, indicating that they can be a very important diagnostic of the properties and dynamics of the lower flaring atmosphere located below the formation height of the H$\alpha$ and Ca {\sc{ii}} line cores. We have shown that, as in H$\alpha$, the asymmetries in centrally-reversed Na {\sc{i}} D$_1$ spectral profiles could be an effective tracer of the velocity field in the flaring atmosphere.
16
9
1609.08120
1609
1609.01976_arXiv.txt
The interaction between nuclei in the inner crust of neutron stars consists of two contributions, the so-called ``direct'' interaction and an ``induced'' one due to density changes in the neutron fluid. For large nuclear separations $r$ the contributions from nuclear forces to each of these terms are shown to be nonzero. In the static limit they are equal in magnitude but have opposite signs and they cancel exactly. We analyze earlier results on effective interactions in the light of this finding. We consider the properties of long-wavelength collective modes and, in particular, calculate the degree of mixing between the lattice phonons and the phonons in the neutron superfluid. Using microscopic theory, we calculate the net non-Coulombic contribution to the nucleus--nucleus interaction and show that, for large $r$, the leading term is due to exchange of two phonons and varies as $1/r^7$: it is an analog of the Casimir--Polder interaction between neutral atoms.
In the inner crust of neutron stars, neutron-rich atomic nuclei coexist with a neutron fluid in addition to the background of electrons which ensures electrical neutrality. Consequently, there are contributions to the effective interaction between nuclei due to the presence of the neutrons outside nuclei, in addition to the screened Coulomb interaction. These interactions determine frequencies of collective modes of the crust \cite{Epstein, ChamelPageReddy,KP1} and it has been suggested that they could alter the equilibrium lattice structure \cite{KP2}. In addition, they are an important ingredient in calculations of dynamical phenomena.

The approach in this article is to use thermodynamic reasoning, as was done in the case of dilute solutions of $^3$He in superfluid liquid $^4$He \cite{BBP1}. An important difference between the present problem and that of liquid helium mixtures is that the translational kinetic energy of the nuclei is relatively unimportant and one may work in terms of an effective potential between nuclei that depends on their positions. For helium mixtures, the fact that $^3$He atoms have a mass comparable to that of the $^4$He atoms means that it is more natural to work with quantum-mechanical eigenstates in which the $^3$He atoms are not localized in space but are spread out over the whole of the system.

In many-body physics it is common to express the effective interaction between particles as the sum of two terms: a `direct' interaction, which represents the energy change when the average particle density is held fixed, and an `induced' interaction, which results from changes in the average density \cite{BabuBrown}. In the thermodynamic approach to effective interactions we shall demonstrate that at large nuclear separations these contributions are nonzero, equal in magnitude but opposite in sign: thus they cancel exactly.

This paper is organized as follows. In Sec.\ II we review the thermodynamic approach to effective interactions in two-component systems and describe its implications for the spatial dependence of the interactions. We then analyze the results obtained from the calculations of Lattimer and Swesty \cite{LattimerSwesty} in the light of these findings. Velocities of long-wavelength collective modes are calculated in Sec.\ III. The formalism employed in Refs.\ \cite{ChamelPageReddy,KP1} is based on the traditional formulation of the two-fluid model for superfluidity \cite{LandLHydro, CJPChamelReddy}, in which one works with the velocity of the normal component (a contravariant vector) and the so-called superfluid velocity (a covariant vector), and we comment on how this is related to a treatment in terms of two covariant vector quantities, as is more natural in the context of a mixture of two superfluids, such as may be anticipated to occur in the outer core of neutron stars. There are two longitudinal modes, which may be thought of as hybrids of lattice phonons and the Bogoliubov--Anderson mode of the neutron superfluid. We give numerical results for the degree of hybridization, which is an important ingredient in calculations of the damping of modes. We show that the hybridization is rather sensitive to the neutron superfluid density, a quantity whose magnitude is rather uncertain. Section IV contains a microscopic calculation of the interaction between nuclei and we show that the long-range part of the interaction is attractive and varies as $1/r^7$ \cite{Schecter}.
Section V contains concluding remarks, and in the Appendix we give an explicit example of the cancellation of direct and induced interactions in a model of matter in the inner crust in which Coulomb and surface effects are neglected.
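To make the direct/induced decomposition introduced above concrete, it may help to record the underlying thermodynamic identity for a schematic model in which the energy density $E$ depends only on the nucleus density $n_N$ and the neutron density $n_n$ (Coulomb, surface and nuclear kinetic terms omitted; this is a back-of-the-envelope sketch in the spirit of the thermodynamic argument, not the detailed treatment of Sec.\ II). Writing $\mu_N=\partial E/\partial n_N$ and $\mu_n=\partial E/\partial n_n$, the long-wavelength, static effective interaction at fixed neutron chemical potential splits as
\begin{equation}
  \left(\frac{\partial \mu_N}{\partial n_N}\right)_{\mu_n}
  = \left(\frac{\partial \mu_N}{\partial n_N}\right)_{n_n}
  - \frac{\left(\partial^2 E/\partial n_N\,\partial n_n\right)^{2}}{\partial^2 E/\partial n_n^{2}} ,
\end{equation}
where the first term on the right-hand side is the direct interaction (neutron density held fixed) and the second, which is attractive whenever the neutron fluid is stable ($\partial\mu_n/\partial n_n>0$), is the induced interaction arising from the readjustment of the neutron density. The cancellation referred to above is then the statement that the nuclear-force contributions to these two terms are equal in magnitude and opposite in sign at large nuclear separations.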
One of the main conclusions of this article is that, when the Coulomb interaction between nuclei is neglected and in the static limit, the induced interaction between two nuclei in the inner crust of a neutron star is exactly cancelled by the direct interaction in any model in which the energy of a nucleus is a function only of the density of the surrounding neutrons. The cancellation is rather general, and in Appendix A we have demonstrated it explicitly for a model in which surface effects are neglected. This cancellation points to the need to investigate in greater detail the spatial dependence of the interaction between nuclei, and as a first step we have shown that at large distances the total interaction between two nuclei is attractive and of the Casimir--Polder form, $\sim 1/r^7$, due to exchange of two phonons. At smaller separations there are contributions to the interaction due to the exchange of higher numbers of phonons. More generally, when the energy exchange between the two nuclei is nonzero, the cancellation of the two contributions will be incomplete because of the nonzero time required for the neutron density to respond to a change in the configuration of the nuclei. Our rough estimates of the size of the Casimir--Polder interaction suggest that it is considerably smaller than the direct Coulomb interaction between nuclei, and therefore it seems unlikely that it can destabilize the lattice, as suggested in Ref.\ \cite{KP2}.

Our calculations of collective modes show that the degree of hybridization of lattice phonons and phonons in the neutron superfluid is very sensitive to the neutron superfluid density. This implies that estimates of Landau damping of the phonons in the neutron superfluid also depend on the neutron superfluid density. A challenge for the future is to calculate the neutron superfluid density taking into account both band-structure effects and pairing.
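For orientation only, it may be useful to recall the familiar electromagnetic Casimir--Polder potential between two neutral atoms with static polarizabilities $\alpha_1$ and $\alpha_2$ (Gaussian units); this is the well-known analogue referred to above, quoted as a reminder of the characteristic form rather than as the coefficient derived in Sec.\ IV, where, roughly speaking, the speed of light is replaced by the phonon velocity and the polarizabilities by the couplings of the nuclei to long-wavelength neutron-density fluctuations:
\begin{equation}
  U_{\mathrm{CP}}(r) = -\,\frac{23\,\hbar c\,\alpha_1\alpha_2}{4\pi\,r^{7}} ,
\end{equation}
which displays the same attractive $1/r^{7}$ fall-off as the two-phonon-exchange interaction between nuclei discussed above.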
16
9
1609.01976