Columns: id (int64, values 0 to 203k), input (string, lengths 66 to 4.29k), output (string, lengths 0 to 3.83k)
3,700
Suppose that you have an abstract for a scientific paper: we consider the nature of the fluid - solid phase transition in a polydisperse mixture of hard spheres . for a sufficiently polydisperse mixture ( @xmath0 ) crystallisation occurs with simultaneous fractionation . at the fluid - solid boundary , a broad fluid diameter distribution is split into a number of narrower fractions , each of which then crystallise . the number of crystalline phases increases with the overall level of polydispersity . at high densities , freezing is followed by a sequence of demixing transitions in the polydisperse crystal . . And you have already written the first three sentences of the full article: equal - sized hard spheres constitute probably the simplest example of a purely entropic material . in a hard - sphere system there is no contribution to the internal energy @xmath1 from interparticle forces so that @xmath1 is a constant , at a fixed temperature . minimising the free energy , @xmath2 , is thus simply equivalent to maximising the entropy @xmath3 .. Please generate the next two sentences of the article
consequently , the structure and phase behaviour of hard spheres is determined solely by entropy . although the hard - sphere model was originally introduced as a mathematically simple model of atomic liquids@xcite , recent work has demonstrated its usefulness as a basic model for complex fluids@xcite .
3,701
Suppose that you have an abstract for a scientific paper: resumen + + abstract + . + . And you have already written the first three sentences of the full article: polyelectrolyte ( pe ) solutions are systems widely studied since they show properties that are of fundamental interest for applications in health science , food industry , water treatment , surface coatings , oil industry , among other fields . in fact , one of the problems found in genetic engineering is the appearance of conformational changes of the dna molecule , which is a charged polyelectrolyte.@xcite . + here we study an infinite dilution polyelectrolyte solution , so that the interactions among polyelectrolyte macromolecules are negligible .. Please generate the next two sentences of the article
we model the polyelectrolyte as having dissociable functional groups that give rise to charged sites and counter - ions in aqueous solution . the long range interactions arising from these multiple charges are responsible for their macroscopic complex properties , which can not be explained by regular polymer theories .
3,702
Suppose that you have an abstract for a scientific paper: we present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric . this mean , known as the wasserstein barycenter , is the measure that minimizes the sum of its wasserstein distances to each element in that set . we propose two original algorithms to compute wasserstein barycenters that build upon the subgradient method . a direct implementation of these algorithms is , however , too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients . extending the work of @xcite , we propose to smooth the wasserstein distance used in the definition of wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed for a considerably cheaper computational cost using matrix scaling algorithms . we use these algorithms to visualize a large family of images and to solve a constrained clustering problem . . And you have already written the first three sentences of the full article: ) ( e ) 2-wasserstein distance.,width=291 ] -.5 cm comparing , summarizing and reducing the dimensionality of empirical probability measures defined on a space @xmath0 are fundamental tasks in statistics and machine learning . such tasks are usually carried out using pairwise comparisons of measures . classic information divergences @xcite are widely used to carry out such comparisons . unless @xmath0 is finite , these divergences can not be directly applied to empirical measures , because they are ill - defined for measures that do not have continuous densities . they also fail to incorporate prior knowledge on the geometry of @xmath0 , which might be available. Please generate the next two sentences of the article
if , for instance , @xmath0 is also a hilbert space . both of these issues are usually solved using @xcite s approach @xcite to smooth empirical measures with smoothing kernels before computing divergences : the euclidean @xcite and @xmath1 distances @xcite , the kullback - leibler and pearson divergences @xcite can all be computed fairly efficiently by considering matrices of kernel evaluations .
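The entropic smoothing summarized in the row above rests on matrix-scaling (Sinkhorn) iterations. As an illustrative sketch only — the histograms, grid, cost matrix, and regularization strength below are invented for the example, not taken from the paper — the scaling loop can be written as:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.05, n_iter=1000):
    """Entropy-regularized optimal transport between histograms a and b
    with ground cost matrix C. Returns the transport plan P and the
    smoothed transport cost <P, C>."""
    K = np.exp(-C / reg)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):            # alternating matrix scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, float((P * C).sum())

# two illustrative histograms on a 1-D grid, squared-distance cost
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
P, cost = sinkhorn(a, b, C)
```

Because the final scaling update is applied to `u`, the row marginals of the returned plan match `a` exactly; the column marginals approach `b` as the iteration converges.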
3,703
Suppose that you have an abstract for a scientific paper: we present and prove the correctness of the program ` boundary ` , whose sources are available at http://people.sissa.it/~maggiolo/boundary/ . given two natural numbers @xmath0 and @xmath1 satisfying @xmath2 , the program generates all genus @xmath0 stable graphs with @xmath1 unordered marked points . each such graph determines the topological type of a nodal stable curve of arithmetic genus @xmath0 with @xmath1 unordered marked points . our motivation comes from the fact that the boundary of the moduli space of stable genus @xmath0 , @xmath1-pointed curves can be stratified by taking loci of curves of a fixed topological type . . And you have already written the first three sentences of the full article: moduli spaces of smooth algebraic curves have been defined and then compactified in algebraic geometry by deligne and mumford in their seminal paper @xcite . a conceptually important extension of this notion in the case of pointed curves was introduced by knudsen @xcite . the points in the boundary of the moduli spaces correspond to pointed , nodal curves with finite automorphism group .. Please generate the next two sentences of the article
these curves are called _ stable curves _ ( or pointed stable curves ) . the topology of one such curve is encoded in a combinatorial object , called
3,704
Suppose that you have an abstract for a scientific paper: this paper concerns the frequency domain problem of diffraction of a plane wave incident on an infinite right - angled wedge on which impedance ( absorbing ) boundary conditions are imposed . it is demonstrated that the exact sommerfeld - malyuzhinets contour integral solution for the diffracted field can be transformed to a line integral over a physical variable along the diffracting edge . this integral can be interpreted as a superposition of secondary point sources ( with directivity ) positioned along the edge , in the spirit of the edge source formulations for rigid ( sound - hard ) wedges derived in [ u. p. svensson , p. t. calamia and s. nakanishi , acta acustica / acustica 95 , 2009 , pp . 568 - 572 ] . however , when surface waves are present the physical interpretation of the edge source integral must be altered : it no longer represents solely the diffracted field , but rather includes surface wave contributions . . And you have already written the first three sentences of the full article: diffraction by an infinite wedge is a fundamental canonical problem in acoustic scattering . exact closed - form frequency - domain solutions for point source , line source or plane wave excitation with homogeneous dirichlet ( sound soft ) or neumann ( sound hard , or rigid ) boundary conditions are available in many different forms@xcite . for example , series expansions in terms of eigenfunctions are available for near field calculations ( e.g. for analysing edge singularities ) .. Please generate the next two sentences of the article
contour integral representations over so - called sommerfeld - malyuzhinets contours are better suited to far field computations ( e.g. for deriving diffraction coefficients in computational methods such as the geometrical theory of diffraction @xcite ) . more recently it has been discovered that the ` diffracted ' component of these solutions ( precisely , that which remains after subtracting from the total field the geometrical acoustics terms ) can be expressed in a more physically intuitive form , namely as a line integral superposition of directional secondary sources located along the diffracting edge @xcite .
3,705
Suppose that you have an abstract for a scientific paper: the empirical pairing gaps derived from four different odd - even mass staggering formulas are compared . by performing single-@xmath0 shell and multi - shell seniority model calculations as well as by using the standard hfb approach with skyrme force we show that the simplest three - point formula @xmath1 $ ] can provide a good measure of the neutron pairing gap in even-@xmath2 nuclei . it removes to a large extent the contribution from the nuclear mean field as well as contributions from shell structure details . it is also less contaminated by the wigner effect for nuclei around @xmath3 . we also show that the strength of @xmath4 can serve as a good indication of the two - particle spatial correlation in the nucleus of concern and that the weakening of @xmath4 in some neutron - rich nuclei indicates that the di - neutron correlation itself is weak in these nuclei . the occurrence of a systematic odd - even staggering ( oes ) of the nuclear binding energy has long been identified in nuclear physics , which is associated with the pairing correlation @xcite . it plays an important role in many nuclear phenomena and is the dominant many - body correlation beyond the nuclear mean field . yet , in spite of the many efforts performed in the study of pairing correlations , there are still features which may be induced by the pairing interaction that are not well understood @xcite . in particular , this is the case in neutron - rich nuclei , where the study of effects induced by pairing may shed light on the understanding of various exotic phenomena ( see , e.g. , refs . @xcite ) . 
the simplest expression one can use to extract the empirical pairing gap from the oes of the binding energy is the three - point formula @xcite , which for systems with even neutrons acquires the form @xcite @xmath5 = -\frac{1}{2}[s_n(n+1,z)-s_n(n,z)] , where @xmath6 is the ( positive ) binding energy and @xmath7 is the one - neutron separation energy . the proton pairing gap can be defined in.... And you have already written the first three sentences of the full article: we thank r. liotta for stimulating discussions and his reading of the manuscript . this work was supported by the swedish research council ( vr ) under grant nos . 621 - 2012 - 3805 , and 621 - 2013 - 4323 .. Please generate the next two sentences of the article
the calculations were performed on resources provided by the swedish national infrastructure for computing ( snic ) at nsc in linköping and pdc at kth , stockholm . b. bally , b. avez , m. bender , p .- h .
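The three-point odd-even-staggering formula quoted in this row's abstract is simple enough to spell out numerically. A minimal sketch, with invented placeholder binding energies rather than values from any mass table:

```python
def pairing_gap_3pt(B_minus, B0, B_plus):
    """Three-point pairing gap for even N (energies in MeV):
    Delta(N) = -0.5 * [B(N+1,Z) + B(N-1,Z) - 2*B(N,Z)]
             = -0.5 * [S_n(N+1,Z) - S_n(N,Z)],
    where S_n(N,Z) = B(N,Z) - B(N-1,Z) is the one-neutron
    separation energy, as in the formula quoted above."""
    return -0.5 * (B_plus + B_minus - 2.0 * B0)

# illustrative (made-up) binding energies at N-1, N, N+1 for fixed Z
gap = pairing_gap_3pt(100.0, 108.5, 115.0)   # -> 1.0 MeV
```

The second difference of the binding energy isolates the staggering: the smooth mean-field contribution to B largely cancels, which is the point made in the abstract.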
3,706
Suppose that you have an abstract for a scientific paper: type ia supernovae ( snia ) remain mysterious despite their central importance in cosmology and their rapidly increasing discovery rate . the progenitors of snia can be probed by the delay time between progenitor birth and explosion as snia . the explosions and progenitors of snia can be probed by mev nuclear gamma rays emitted in the decays of radioactive nickel and cobalt into iron . we compare the cosmic star formation and snia rates , finding that their different redshift evolution requires a large fraction of snia to have large delay times . a delay time distribution of the form @xmath0 with @xmath1 provides a good fit , implying @xmath2 of snia explode more than @xmath3 gyr after progenitor birth . the extrapolation of the cosmic snia rate to @xmath4 agrees with the rate we deduce from catalogs of local snia . we investigate prospects for gamma - ray telescopes to exploit the facts that escaping gamma rays directly reveal the power source of snia and uniquely provide tomography of the expanding ejecta . we find large improvements relative to earlier studies by gehrels et al . in 1987 and timmes & woosley in 1997 due to larger and more certain snia rates and advances in gamma - ray detectors . the proposed advanced compton telescope , with a narrow - line sensitivity @xmath5 times better than that of current satellites , would , on an annual basis , detect up to @xmath6 snia ( @xmath7 ) and provide revolutionary model discrimination for snia within 20 mpc , with gamma - ray light curves measured with @xmath8 significance daily for @xmath6 days . even more modest improvements in detector sensitivity would open a new and invaluable astronomy with frequent snia gamma - ray detections . . And you have already written the first three sentences of the full article: type ia supernovae ( snia ) are deeply connected with many important frontiers of astrophysics and cosmology . 
they occur in all galaxy types and are major contributors to galactic chemical evolution , in particular of iron . they are very bright and , as high redshift distance indicators , play a critical role in establishing the modern cosmology paradigm @xcite . however , there are major uncertainties regarding the nature of the snia progenitors and explosions . while it is established that most snia result from the thermonuclear explosion of carbon - oxygen white dwarfs ( wd ) near the chandrasekhar mass , the mechanism of mass gain remains debated . in the single - degenerate ( sd ) scenario. Please generate the next two sentences of the article
the wd accretes mass from a companion star @xcite , while in the double - degenerate ( dd ) scenario the wd merges with another wd @xcite . in addition , although the main products are known , the basic mechanism of nuclear burning remains under debate .
3,707
Suppose that you have an abstract for a scientific paper: we propose a formal expansion of the transfer entropy to put in evidence irreducible sets of variables which provide information for the future state of each assigned target . multiplets characterized by a large contribution to the expansion are associated to informational circuits present in the system , with an informational character which can be associated to the sign of the contribution . for the sake of computational complexity , we adopt the assumption of gaussianity and use the corresponding exact formula for the conditional mutual information . we report the application of the proposed methodology on two eeg data sets . . And you have already written the first three sentences of the full article: the inference of couplings between dynamical subsystems , from data , is a topic of general interest . transfer entropy @xcite , which is related to the concept of granger causality @xcite , has been proposed to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems . by appropriate conditioning of transition probabilities this quantity has been shown to be superior to the standard time delayed mutual information , which fails to distinguish information that is actually exchanged from shared information due to common history and input signals @xcite . on the other hand , granger formalized the notion that , if the prediction of one time series could be improved by incorporating the knowledge of past values of a second one , then the latter is said to have a _ causal _ influence on the former . initially developed for econometric applications ,. Please generate the next two sentences of the article
granger causality has gained popularity also in neuroscience ( see , e.g. , @xcite ) . a discussion about the practical estimation of information theoretic indexes for signals of limited length can be found in @xcite .
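Under the gaussianity assumption adopted in this row's abstract, transfer entropy and Granger causality coincide up to a constant factor, and both reduce to a log-ratio of prediction-residual variances. The sketch below uses a synthetic lag-1 system whose coupling coefficients are chosen purely for illustration: the simulated x → y drive is detected while the reverse direction stays near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):                      # x drives y with lag 1
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def granger(source, target, lag=1):
    """ln(var_restricted / var_full); > 0 means `source` improves
    prediction of `target` beyond target's own past."""
    tgt = target[lag:]
    own = target[:-lag]
    src = source[:-lag]
    # restricted model: target's own past only
    A_r = np.column_stack([own, np.ones_like(own)])
    r_r = tgt - A_r @ np.linalg.lstsq(A_r, tgt, rcond=None)[0]
    # full model: own past plus the source's past
    A_f = np.column_stack([own, src, np.ones_like(own)])
    r_f = tgt - A_f @ np.linalg.lstsq(A_f, tgt, rcond=None)[0]
    return float(np.log(r_r.var() / r_f.var()))

gc_xy = granger(x, y)   # substantial: x Granger-causes y
gc_yx = granger(y, x)   # near zero: no feedback in the simulated system
```

This is the bivariate, single-lag special case; the multiplet expansion of the abstract conditions on larger sets of variables in the same spirit.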
3,708
Suppose that you have an abstract for a scientific paper: we extend the 3-point intrinsic alignment self - calibration technique to the gravitational shear - intrinsic ellipticity - intrinsic ellipticity ( GII ) bispectrum . the proposed technique will allow the measurement and removal of the GII intrinsic alignment contamination from the cross - correlation weak lensing signal . while significantly decreased from using cross - correlations instead of auto - correlation in a single photo - z bin , the GII contamination persists in adjacent photo - z bins and must be accounted for and removed from the lensing signal . we relate the GII and galaxy density - intrinsic ellipticity - intrinsic ellipticity ( gII ) bispectra through use of the galaxy bias , and develop the estimator necessary to isolate the gII bispectrum from observations . we find that the GII self - calibration technique performs at a level comparable to that of the gravitational shear - gravitational shear - intrinsic ellipticity correlation ( GGI ) self - calibration technique , with measurement error introduced through the gII estimator generally negligible when compared to minimum survey error . the accuracy of the relationship between the GII and gII bispectra typically allows the GII self - calibration to reduce the GII contamination by a factor of 10 or more for all adjacent photo - z bin combinations at @xmath0 . for larger scales , we find that the GII contamination can be reduced by a factor of 3 - 5 or more . the GII self - calibration technique is complementary to the existing GGI self - calibration technique , which together will allow the total intrinsic alignment cross - correlation signal in 3-point weak lensing to be measured and removed . [ firstpage ] gravitational lensing cosmology . 
And you have already written the first three sentences of the full article: weak gravitational lensing due to large scale structure ( cosmic shear ) has become a promising source of cosmological information . a new generation of ground- and space - based surveys suited for precision weak lensing measurements have been developed with the importance of this new probe in mind . these ongoing , future , and proposed surveys ( e.g. cfhtls , des , euclid , hsc , hst , jwst , lsst , pan - starrs , and wfirst ) promise to provide greatly improved measurements of cosmic shear using the shapes of up to billions of galaxies . there has been much work done to explore the potential of these cosmic shear measurements , which we review in @xcite , for both the 2- and 3-point cosmic shear correlations . beyond the constraints obtained on cosmological parameters from the 2-point cosmic shear correlation and the corresponding shear power spectrum , the 3-point cosmic shear correlation and shear bispectrum are able to break degeneracies between the cosmological parameters that the power spectrum alone does not @xcite . the results of @xcite , for example , showed that the constraints on the dark energy parameters and the matter fluctuation amplitude should be able to be improved by a further factor of 2 - 3 using the bispectrum measured in a deep lensing survey .. Please generate the next two sentences of the article
most recently , parameter constraints were derived by @xcite using weak lensing data from the hst cosmos survey , measuring the third order moment of the aperture mass measure . their independent results were consistent with wmap7 best - fit cosmology and provided an improved constraint when combined with the 2-point correlation .
3,709
Suppose that you have an abstract for a scientific paper: stimulated raman adiabatic passage ( stirap ) , driven with pulses of optimum shape and delay has the potential of reaching fidelities high enough to make it suitable for fault - tolerant quantum information processing . the optimum pulse shapes are obtained upon reduction of stirap to effective two - state systems . we use the dykhne - davis - pechukas ( ddp ) method to minimize nonadiabatic transitions and to maximize the fidelity of stirap . this results in a particular relation between the pulse shapes of the two fields driving the raman process . the ddp - optimized version of stirap maintains its robustness against variations in the pulse intensities and durations , the single - photon detuning and possible losses from the intermediate state . . And you have already written the first three sentences of the full article: stimulated raman adiabatic passage ( stirap ) is a well established and widely used technique for coherent population transfer in atoms and molecules @xcite . stirap uses two delayed but partially overlapping laser pulses , pump and stokes , which drive a three - state @xmath0-system @xmath1 . the stirap technique transfers the population adiabatically from the initially populated state @xmath2 to the target state @xmath3 .. Please generate the next two sentences of the article
if the pulses are ordered counterintuitively , i.e. the stokes pulse precedes the pump pulse , two - photon resonance is maintained , and adiabatic evolution is enforced , then complete population transfer from @xmath2 to @xmath3 occurs . throughout this process , no population is placed in the ( possibly lossy ) intermediate state @xmath4 .
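The counterintuitive pulse ordering described above can be checked numerically. The sketch below integrates the resonant three-state Schrödinger equation with a plain RK4 stepper; the gaussian pulse shapes and all parameter values (peak Rabi frequency, width, delay) are arbitrary illustrative choices, not the DDP-optimized pulses of the paper.

```python
import numpy as np

def stirap_final_populations(omega0=20.0, width=1.0, delay=1.0,
                             t0=-5.0, t1=5.0, dt=0.001):
    """RK4 integration of the resonant three-state Lambda system.
    Counterintuitive ordering: the Stokes pulse (centered at -delay/2)
    precedes the pump pulse (centered at +delay/2)."""
    def H(t):
        op = omega0 * np.exp(-((t - delay / 2) / width) ** 2)  # pump
        os = omega0 * np.exp(-((t + delay / 2) / width) ** 2)  # Stokes
        return 0.5 * np.array([[0, op, 0],
                               [op, 0, os],
                               [0, os, 0]], dtype=complex)

    def f(t, psi):
        return -1j * (H(t) @ psi)          # Schrodinger equation, hbar = 1

    psi = np.array([1, 0, 0], dtype=complex)   # start in state |1>
    t = t0
    while t < t1:
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return np.abs(psi) ** 2

p = stirap_final_populations()
```

For these (adiabatic) parameters nearly all population ends in the target state, and the transient population of the lossy intermediate state stays small, which is the mechanism the passage describes.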
3,710
Suppose that you have an abstract for a scientific paper: we consider the problem of recovering an @xmath0-dimensional sparse vector @xmath1 from its linear transformation @xmath2 of @xmath3 dimension . minimizing the @xmath4-norm of @xmath1 under the constraint @xmath5 is a standard approach for the recovery problem , and earlier studies report that the critical condition for typically successful @xmath6-recovery is universal over a variety of randomly constructed matrices @xmath7 . for examining the extent of the universality , we focus on the case in which @xmath7 is provided by concatenating @xmath8 matrices @xmath9 drawn uniformly according to the haar measure on the @xmath10 orthogonal matrices . by using the replica method in conjunction with the development of an integral formula for handling the random orthogonal matrices , we show that the concatenated matrices can result in better recovery performance than what the universality predicts when the density of non - zero signals is not uniform among the @xmath11 matrix modules . the universal condition is reproduced for the special case of uniform non - zero signal densities . extensive numerical experiments support the theoretical predictions . . And you have already written the first three sentences of the full article: the recovery problem of sparse vectors from a linear underdetermined set of equations has recently attracted attention in various fields of science and technology due to its many applications , for example , in linear regression @xcite , communication @xcite , @xcite , @xcite , multimedia @xcite , @xcite , @xcite , and compressive sampling ( cs ) @xcite , @xcite . 
in such a sparse representation problem , we have the following underdetermined set of linear equations @xmath12 where @xmath13 is the measurement vector , @xmath14 is the dictionary , @xmath15 is the sparse vector , and @xmath16.[multiblock footnote omitted] another way of putting this is that a large dimensional sparse vector @xmath1 is coded / compressed into a small dimensional vector @xmath17 and the task is to find @xmath1 from @xmath17 with the full knowledge of @xmath7 . for this problem , the optimum solution is the sparsest vector satisfying . finding the sparsest vector is however np - hard ; thus , a variety of practical algorithms have been developed . among the most prominent is the convex relaxation approach in which the objective is to find the minimum @xmath4-norm solution to . for the @xmath4-norm minimization , if @xmath1 is @xmath18-sparse , which indicates that the number of non - zero entries of @xmath1 is at most @xmath18 , the minimum @xmath18 that satisfies gives the limit up to which the signal can be compressed for a given dictionary @xmath7 .. Please generate the next two sentences of the article
an interesting question then arises : how does the choice of the dictionary @xmath7 affect the typical compression ratio that can be achieved using the @xmath4-recovery ? recent results in the parallel problem of cs , where @xmath7 acts as a sensing matrix , reveal that the typical conditions for perfect @xmath4-recovery are universal for all random sensing matrices that belong to the rotationally invariant matrix ensembles @xcite .
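The @xmath4-norm recovery discussed in this row can be posed as a linear program by splitting the unknown into positive and negative parts. The problem sizes and random ensemble below are illustrative only: a plain gaussian matrix is used rather than the concatenated orthogonal modules studied in the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 40, 20, 3                       # illustrative dimensions
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
y = A @ x0

# basis pursuit:  min ||x||_1  s.t.  A x = y,
# as an LP with x = u - v and u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
```

Since the true signal is itself feasible, the minimizer's l1 norm can never exceed that of `x0`; for sparsity this far below the l1 phase transition the solver typically returns `x0` itself.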
3,711
Suppose that you have an abstract for a scientific paper: photon - photon reactions provide an excellent opportunity to isolate the @xmath0 vertex . for this purpose , we have examined the potential of the @xmath1 ( @xmath2 is the weizsacker - williams photon and @xmath3 ) process to investigate the anomalous @xmath0 couplings in @xmath4 collisions at the clic . we have obtained @xmath5 confidence level limits on the anomalous couplings for various values of the center - of - mass energy and integrated luminosity . we have shown that the limit on anomalous @xmath6 coupling is more restricted with respect to current experimental limits . . And you have already written the first three sentences of the full article: the top quark is the heaviest available fundamental particle in the standard model ( sm ) . because of the large mass of the top quark , its interactions are an excellent probe of the electroweak symmetry - breaking mechanism , and they should therefore play an important role in the search for physics beyond the sm . for this purpose , particularly , the anomalous interactions of the top quark can be examined by flavor changing neutral currents ( fcnc ) . in the sm , fcnc decays @xmath7 ( @xmath8 ) can not be observed at tree level , but these decays can only make loop contributions . as a result , such processes are anticipated to be enormously rare within the sm with branching ratios of an order of @xmath9 @xcite . however , various models beyond the sm such as the minimal supersymmetric model @xcite , two - higgs doublet model @xcite , the quark - singlet model @xcite , extra dimension models @xcite , the littlest higgs model @xcite , the topcolor - assisted technicolor model @xcite or supersymmetry @xcite could lead to a very large increase of fcnc processes involving the top quark .. Please generate the next two sentences of the article
present experimental constraints at @xmath5 confidence level ( c. l. ) on the anomalous @xmath0 couplings are obtained from two limits : @xmath10 supplied by zeus collaboration @xcite and @xmath11 presented by cdf collaboration @xcite . the fcnc anomalous interactions among the top quark , two quarks @xmath12 , @xmath13 and the photon can be written in a model independent way with dimension five effective lagrangian as follows @xcite @xmath14 where @xmath15 is the electromagnetic coupling constant , @xmath16 is the top quark electric charge , @xmath6 denotes the strength of the anomalous couplings of top quark with photon , @xmath17 is an effective cut - off scale which is conventionally set to the mass of the top quark @xcite , @xmath18 with @xmath19 which stands for the dirac matrix , and @xmath20 is the momentum of photon . also , using the interaction lagrangian in eq.(@xmath21 ) , the anomalous decay width of the top quark can be easily obtained as follows @xmath22 where the masses of @xmath12 and @xmath13 quarks are omitted in the above equation . since the dominant decay mode of the top quark is @xmath23 , the branching ratio of anomalous @xmath7 decay generally is given by the following formula : @xmath24 therefore , using the equations ( @xmath25 ) and ( @xmath26 ) , we can obtain the magnitude of the upper limits of anomalous coupling provided by cdf collaboration as follows @xmath27 in the literature , the interactions of the top quark via fcnc have been experimentally and theoretically examined @xcite .
3,712
Suppose that you have an abstract for a scientific paper: applying the theory of compressive sensing in practice always takes different kinds of perturbations into consideration . in this paper , the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations . specifically , greedy pursuits with replacement include three algorithms , compressive sampling matching pursuit ( cosamp ) , subspace pursuit ( sp ) , and iterative hard thresholding ( iht ) , where the support estimation is evaluated and updated in each iteration . based on restricted isometry property , a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals . the results reveal that the recovery performance is stable against both perturbations . in addition , these bounds are compared with that of oracle recovery least squares solution with the locations of some largest entries in magnitude known a priori . the comparison shows that the error bounds of these algorithms only differ in coefficients from the lower bound of oracle recovery for some certain signal and perturbations , as reveals that oracle - order recovery performance of greedy pursuits with replacement is guaranteed . numerical simulations are performed to verify the conclusions . * keywords : * compressive sensing , sparse recovery , general perturbations , performance analysis , restricted isometry property , greedy pursuits , compressive sampling matching pursuit , subspace pursuit , iterative hard thresholding , oracle recovery . . And you have already written the first three sentences of the full article: compressive sensing , or compressive sampling ( cs ) @xcite , is a novel signal processing technique proposed to effectively sample and compress sparse signals , i.e. 
, signals that can be represented by few significant coefficients in some basis . assume that the signal of interest @xmath0 can be represented by @xmath1 , where @xmath2 is the basis matrix and @xmath3 is @xmath4-sparse , which means only @xmath4 out of its @xmath5 entries are nonzero . one of the essential issues of cs theory lies in recovering @xmath6 ( or equivalently , @xmath7 ) from its linear observations , @xmath8 where @xmath9 is a sensing matrix with more columns than rows and @xmath10 is the measurement vector .. Please generate the next two sentences of the article
unfortunately , directly finding the sparsest solution to ( [ y = ax ] ) is np - hard , which is not practical for sparse recovery . this leads to one of the major aspects of cs theory designing effective recovery algorithms with low computational complexity and fine recovery performance .
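Of the three greedy pursuits with replacement named in this row's abstract, iterative hard thresholding has the shortest description: a gradient step on the residual followed by projection onto the set of @xmath4-sparse vectors. The sketch below is an unperturbed toy instance with invented dimensions; unit step size is used after rescaling the sensing matrix to unit spectral norm.

```python
import numpy as np

def iht(y, A, k, n_iter=1000):
    """Iterative hard thresholding: x <- H_k(x + A^T (y - A x)).
    Assumes the spectral norm of A has been scaled to (at most) 1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x)              # gradient step
        keep = np.argsort(np.abs(x))[-k:]      # k largest magnitudes
        mask = np.zeros_like(x); mask[keep] = 1.0
        x = x * mask                           # hard thresholding H_k
    return x

rng = np.random.default_rng(2)
m, n, k = 100, 200, 5                          # illustrative dimensions
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)                      # enforce ||A||_2 <= 1
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
y = A @ x0
x_hat = iht(y, A, k)
```

The "with replacement" property is visible in the thresholding line: the support estimate is re-evaluated from scratch at every iteration, so wrongly selected atoms can be swapped out later.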
3,713
Suppose that you have an abstract for a scientific paper: we use a new method , the cross - power spectrum between the linear density field and the halo number density field , to measure the lagrangian bias for dark matter halos . the method has several important advantages over the conventional correlation function analysis . by applying this method to a set of high - resolution simulations of @xmath0 particles , we have accurately determined the lagrangian bias , over 4 magnitudes in halo mass , for four scale - free models with the index @xmath1 , @xmath2 , @xmath3 and @xmath4 and three typical cdm models . our result for massive halos with @xmath5 ( @xmath6 is a characteristic non - linear mass ) is in very good agreement with the analytical formula of mo & white for the lagrangian bias , but the analytical formula significantly underestimates the lagrangian clustering for the less massive halos @xmath7 . our simulation result however can be satisfactorily described , with an accuracy better than 15% , by the fitting formula of jing for eulerian bias under the assumption that the lagrangian clustering and the eulerian clustering are related with a linear mapping . it implies that it is the failure of the press - schechter theories for describing the formation of small halos that leads to the inaccuracy of the mo & white formula for the eulerian bias . the non - linear effect in the mapping between the lagrangian clustering and the eulerian clustering , which was speculated as another possible cause for the inaccuracy of the mo & white formula , must be negligible compared to the linear mapping . our result indicates that the halo formation model adopted by the press - schechter theories must be improved .
And you have already written the first three sentences of the full article: galaxies and clusters of galaxies are believed to form within the potential wells of virialized dark matter ( dm ) halos . understanding the clustering of dm halos can provide important clues to understanding the large scale structures in the universe . a number of studies have therefore been carried out to obtain the two - point correlation function @xmath8 of dm halos .. Please generate the next two sentences of the article
two distinctive approaches are widely adopted . one is analytical and is based on the press - schechter ( ps ) theories ( e.g. kashlinsky @xcite , @xcite ; cole & kaiser @xcite ; mann , heavens , & peacock @xcite ; mo & white @xcite , hereafter mw96 ; catelan et al .
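The cross-power-spectrum idea behind this abstract can be sketched in one dimension. Everything below (the field size, the purely deterministic linear bias b = 2) is a toy assumption of mine; the actual measurement is done on three-dimensional simulation grids with halo catalogues.

```python
import numpy as np

def cross_power(f, g):
    """Cross-power spectrum of two periodic 1-d fields (toy estimator).

    P_fg(k) = Re[ F(k) conj(G(k)) ] / N, one value per Fourier mode.
    """
    F = np.fft.rfft(f)
    G = np.fft.rfft(g)
    return (F * np.conj(G)).real / len(f)

# a toy "halo" field that is a linearly biased copy of the density field
rng = np.random.default_rng(1)
delta = rng.standard_normal(4096)
b = 2.0
halo = b * delta                   # linear bias, no stochasticity

P_dd = cross_power(delta, delta)   # auto spectrum of the density
P_hd = cross_power(halo, delta)    # cross spectrum halo x density
# the bias estimate b(k) = P_hd / P_dd recovers b in this noiseless toy
b_est = P_hd[1:] / P_dd[1:]        # skip the k = 0 mode
```

The advantage hinted at in the abstract is visible even here: dividing the cross spectrum by the auto spectrum isolates the bias mode by mode, without constructing correlation functions.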
3,714
Suppose that you have an abstract for a scientific paper: by correcting an example by polyanskii , we show that there exist reduced polytopes in three - dimensional euclidean space . this partially answers the question posed by lassak @xcite on the existence of reduced polytopes in @xmath0-dimensional euclidean space for @xmath1 . * keywords : * polytope , reducedness * msc(2010 ) : * http://www.ams.org/mathscinet/msc/msc2010.html?t=52b10[52b10 ] . And you have already written the first three sentences of the full article: constant width bodies , i.e. , convex bodies for which parallel supporting hyperplanes have constant distance , have a long and rich history in mathematics @xcite . due to meissner @xcite , constant width bodies in euclidean space can be characterized by _ diametrical completeness _. Please generate the next two sentences of the article
, that is , the property of not being properly contained in a set of the same diameter . constant width bodies also belong to a related class of _ reduced _ convex bodies introduced by heil @xcite .
3,715
Suppose that you have an abstract for a scientific paper: the asteroid belt is characterized by the radial mixing of bodies with different physical properties , a very low mass compared to minimum mass solar nebula expectations and has an excited orbital distribution , with eccentricities and inclinations covering the entire range of values allowed by the constraints of dynamical stability . models of the evolution of the asteroid belt show that the origin of its structure is strongly linked to the process of terrestrial planet formation . the grand tack model presents a possible solution to the conundrum of reconciling the small mass of mars with the properties of the asteroid belt , including the mass depletion , radial mixing and orbital excitation . however , while the inclination distribution produced in the grand tack model is in good agreement with the one observed , the eccentricity distribution is skewed towards values larger than those found today . here , we evaluate the evolution of the orbital properties of the asteroid belt from the end of the grand tack model ( at the end of the gas nebula phase when planets emerge from the dispersing gas disk ) , throughout the subsequent evolution of the solar system including an instability of the giant planets approximately 400 my later . before the instability , the terrestrial planets were modeled on dynamically cold orbits with jupiter and saturn locked in a 3:2 mean motion resonance . the model continues for an additional 4.1 gy after the giant planet instability . our results show that the eccentricity distribution obtained in the grand tack model evolves towards one very similar to that currently observed , and the semimajor axis distribution does the same . the inclination distribution remains nearly unchanged with a slight preference for depletion at low inclination ; this leads to the conclusion that the inclination distribution at the end of the grand tack is a bit over - excited . 
also , we constrain the primordial eccentricities of jupiter and saturn , which have a major influence on.... And you have already written the first three sentences of the full article: the asteroid belt is challenging to understand but is critical for studies of the formation and early evolution of the solar system . the orbital configuration of the asteroid belt is believed to have been established in two phases . the first phase dates back to the first few million years of solar system s formation and should be studied in conjunction with the formation of the inner and outer planets , especially jupiter and saturn .. Please generate the next two sentences of the article
the second phase occurred when the asteroid belt witnessed a giant planet instability , long after the damping effects of the gaseous solar nebula had dissipated . in general , simulations of the dynamical re - shaping of the asteroid belt are made in conjunction with the formation of the inner planets . the first simulations of terrestrial planet formation @xcite included a set of planetary embryos uniformly distributed in the inner region of the solar system with orbits initially dynamically cold ( low eccentricity and inclination ) . through numerical integrations of the equations of motion of these embryos , adding a model of accretion by collisions , the system evolves to form planets in the inner region of the solar system on stable orbits . while early results about the formation of terrestrial planets were promising , one of the problems found in these integrations was related with the final eccentricities of the planets , which were systematically larger than the real ones .
3,716
Suppose that you have an abstract for a scientific paper: we investigate the influence of correlated initial conditions on the temporal evolution of a ( @xmath0 + 1)-dimensional critical directed percolation process . generating initial states with correlations @xmath1 we observe that the density of active sites in monte - carlo simulations evolves as @xmath2 . the exponent @xmath3 depends continuously on @xmath4 and varies in the range @xmath5 . our numerical results are confirmed by an exact field - theoretical renormalization group calculation . . And you have already written the first three sentences of the full article: it is well known that initial conditions influence the temporal evolution of nonequilibrium systems . the systems `` memory '' for the initial state usually depends on the dynamical rules . for example , stochastic processes with a finite temporal correlation length relax to their stationary state in an exponentially short time .. Please generate the next two sentences of the article
an interesting situation emerges when a system undergoes a nonequilibrium phase transition where the temporal correlation length diverges . this raises the question whether it is possible to construct initial states that affect the _ entire _ temporal evolution of such systems . to address this question , we consider the example of directed percolation ( dp ) which is the canonical universality class for nonequilibrium phase transitions from an active phase into an absorbing state @xcite .
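The directed-percolation process named above is straightforward to simulate. Below is a minimal Monte Carlo sketch of (1+1)-dimensional bond DP started from a homogeneous, fully active row; the lattice size, time span, and the subcritical bond probability are my own illustrative choices, and the long-range-correlated initial states studied in the paper are not implemented.

```python
import numpy as np

def dp_density(p, L=2000, T=200, seed=0):
    """Monte-Carlo sketch of (1+1)-d bond directed percolation.

    Starting from a fully occupied row, each site is activated by each of
    its two lower neighbours independently with probability p (periodic
    boundaries).  Returns the density of active sites at every time step.
    """
    rng = np.random.default_rng(seed)
    s = np.ones(L, dtype=bool)           # homogeneous initial state
    rho = [s.mean()]
    for _ in range(T):
        left = np.roll(s, 1) & (rng.random(L) < p)
        right = np.roll(s, -1) & (rng.random(L) < p)
        s = left | right
        rho.append(s.mean())
    return np.array(rho)

rho = dp_density(p=0.5)   # well below p_c of about 0.6447: fast decay
```

At the critical point the density instead decays algebraically, which is where the initial-condition-dependent exponent of the abstract would be measured.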
3,717
Suppose that you have an abstract for a scientific paper: we analyse the chances of detecting charged higgs bosons of the minimal supersymmetric standard model ( mssm ) at the large hadron collider ( lhc ) in the @xmath0 mode , followed by the dominant decay of the lightest higgs scalar , @xmath1 . if the actual value of @xmath2 is already known , this channel offers possibly the optimal final state kinematics for charged higgs discovery , thanks to the narrow resonances appearing around the @xmath3 and @xmath4 masses . besides , within the mssm , the @xmath5 decay rate is significant for not too large @xmath6 values , thus offering the possibility of accessing a region of mssm parameter space left uncovered by other search channels . we consider both strong ( qcd ) and electroweak ( ew ) ` irreducible ' backgrounds in the @xmath7-tagged channel to the @xmath8 production process that had not been taken into account in previous analyses . after a series of kinematic cuts , the largest of these processes is @xmath9 production in the continuum . however , for optimum @xmath6 , i.e. , between 2 and 3 , the charged higgs boson signal overcomes this background and a narrow discovery region survives around @xmath10 gev . ral - tr-2000 - 005 + march 2000 + * the @xmath0 decay channel as a probe of charged higgs boson production at the large hadron collider * + stefano moretti....
And you have already written the first three sentences of the full article: the discovery of charged higgs bosons @xcite will provide a concrete evidence of the multi - doublet structure of the higgs sector . recent efforts have focused on their relevance to supersymmetry ( susy ) , in particular in the mssm , which incorporates exactly two higgs doublets , yielding after spontaneous ew symmetry breaking five physical higgs states : the neutral pseudoscalar ( @xmath11 ) , the lightest ( @xmath4 ) and heaviest ( @xmath12 ) neutral scalars and two charged ones ( @xmath13 ) . in much of the parameter space preferred by susy , namely @xmath14 and @xmath15 @xcite , the lhc will provide the greatest opportunity for the discovery of @xmath13 particles . in fact , over the above @xmath6 region , the tevatron ( run 2 ) discovery potential is limited to charged higgs masses smaller than @xmath16 @xcite .. Please generate the next two sentences of the article
however , at the lhc , whereas the detection of light charged higgs bosons ( with @xmath17 ) is rather straightforward in the decay channel @xmath18 for most @xmath6 values , thanks to the huge top - antitop production rate , the search is notoriously difficult for heavy masses ( when @xmath19 ) , because of the large reducible and irreducible backgrounds associated with the main decay mode @xmath20 , following the dominant production channel @xmath21 @xcite . ( notice that the rate of the latter exceeds by far other possible production modes @xcite@xcite , this rendering it the only viable channel at the cern machine in the heavy mass region . ) the analysis of the @xmath20 signature has been the subject of many debates @xcite@xcite , whose conclusion is that the lhc discovery potential is satisfactory , but only provided that @xmath6 is small ( @xmath22 ) or large ( @xmath23 ) enough and the charged higgs boson mass is below 600 gev or so .
3,718
Suppose that you have an abstract for a scientific paper: we describe a general radiative equilibrium and temperature correction procedure for use in monte carlo radiation transfer codes with sources of temperature - independent opacity , such as astrophysical dust . the technique utilizes the fact that monte carlo simulations track individual photon packets , so we may easily determine where their energy is absorbed . when a packet is absorbed , it heats a particular cell within the envelope , raising its temperature . to enforce radiative equilibrium , the absorbed packet is immediately re - emitted . to correct the cell temperature , the frequency of the re - emitted packet is chosen so that it corrects the temperature of the spectrum previously emitted by the cell . the re - emitted packet then continues being scattered , absorbed , and re - emitted until it finally escapes from the envelope . as the simulation runs , the envelope heats up , and the emergent spectral energy distribution ( sed ) relaxes to its equilibrium value , _ without iteration_. this implies that the equilibrium temperature calculation requires no more computation time than the sed calculation of an equivalent pure scattering model with fixed temperature . in addition to avoiding iteration , our method conserves energy exactly , because all injected photon packets eventually escape . furthermore , individual packets transport energy across the entire system because they are never destroyed . this long - range communication , coupled with the lack of iteration , implies that our method does not suffer the convergence problems commonly associated with @xmath0-iteration . to verify our temperature correction procedure , we compare our results to standard benchmark tests , and finally we present the results of simulations for two - dimensional axisymmetric density structures . . 
And you have already written the first three sentences of the full article: there is an ever increasing wealth of observational evidence indicating the non - sphericity of almost every type of astronomical object ( e.g. , extended circumstellar environments , novae shells , planetary nebulae , galaxies , and agns ) . to accurately interpret this data , detailed two- and three - dimensional radiation transfer techniques are required . with the availability of fast workstations , many researchers are turning to monte carlo techniques to produce model images and spectra for the asymmetric objects they are investigating . in monte carlo radiation transfer simulations , packets of energy or `` photons '' are followed as they are scattered and absorbed within a prescribed medium . one of the features of this technique is that the locations of the packets are known when they are absorbed , so we can determine where their energy is deposited .. Please generate the next two sentences of the article
this energy heats the medium , and to conserve radiative equilibrium , the absorbed energy must be reradiated at other wavelengths , depending on the opacity sources present . tracking these photon packets , while enforcing radiative equilibrium , permits the calculation of both the temperature structure and emergent spectral energy distribution ( sed ) of the envelope . the ability of monte carlo techniques to easily follow the transfer of radiation through complex geometries makes them very attractive methods for determining the temperature structure within non - spherical environments a task which is very difficult with traditional ray tracing techniques .
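The immediate-reemission bookkeeping described above can be caricatured in a few lines. The sketch below is a drastically simplified one-dimensional slab with a constant absorption probability per step and no frequency sampling, intended only to show why every injected packet eventually escapes, so that energy is conserved exactly; it is not the authors' scheme.

```python
import numpy as np

def run_packets(n_packets=5000, n_cells=10, p_abs=0.3, seed=2):
    """Toy sketch of the immediate-reemission scheme.

    Each absorbed packet increments a per-cell counter (the 'heating') and
    is instantly re-emitted in a random direction, so no packet is ever
    destroyed and every one eventually leaves the slab.
    """
    rng = np.random.default_rng(seed)
    absorbed = np.zeros(n_cells)      # per-cell absorption counter
    escaped = 0
    for _ in range(n_packets):
        cell = 0
        direction = 1
        while True:
            if rng.random() < p_abs:
                absorbed[cell] += 1               # deposit energy here
                direction = rng.choice([-1, 1])   # re-emit isotropically
            cell += direction
            if cell < 0 or cell >= n_cells:
                escaped += 1                      # packet leaves the slab
                break
    return escaped, absorbed

escaped, absorbed = run_packets()
```

In the real method the re-emission frequency is drawn from the difference spectrum that corrects the cell temperature; here only the exact energy balance (all packets escape) is demonstrated.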
3,719
Suppose that you have an abstract for a scientific paper: we report on two surveys of radio - weak agn to look for radio variability . we find significant variability with an rms of 10 - 20% on a timescale of months in radio - quiet and radio - intermediate quasars . this exceeds the variability of radio cores in radio - loud quasars ( excluding blazars ) , which vary only on a few percent level . the variability in radio - quiet quasars confirms that the radio emission in these sources is indeed related to the agn . the most extremely variable source is the radio - intermediate quasar iii zw 2 which was recently found to contain a relativistic jet . in addition we find large amplitude variabilities ( up to 300% peak - to - peak ) in a sample of nearby low - luminosity agn , liners and dwarf - seyferts , on a timescale of 1.5 years . the variability could be related to the activity of nuclear jets responding to changing accretion rates . simultaneous radio / optical / x - ray monitoring also for radio - weak agn , and not just for blazars , is therefore a potentially powerful tool to study the link between jets and accretion flows . # 1_#1 _ # 1_#1 _ = # 1 1.25 in .125 in .25 in . And you have already written the first three sentences of the full article: in the past a lot of emphasis has been put on studying the radio variability of radio - loud agn and specifically those of blazars @xcite . there the radio emission is most certainly due to a relativistically beamed jet and one goal of multi - wavelength monitoring , including radio , is to understand particle acceleration processes in the jet plasma as well as the relativistic effects associated with the changing geometry and structure of jets . on the other hand , for radio - weak agn here meant to include everything but radio - loud quasars the situation is somewhat different and the database is much sparser .. Please generate the next two sentences of the article
in fact , very few surveys exist that address the issue of radio variability in either radio - quiet quasars or low - luminosity agn such as liners and dwarf - seyferts ( e.g. , ) . in many of these cases we are not even entirely sure that the radio emission is indeed related to the agn itself . it has been proposed that radio jets are a natural product of agn , even that accretion flow and jet form a symbiotic system @xcite , and this view seems to catch on ( e.g. , ) .
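The variability statistic quoted in the abstract (an rms of 10-20 per cent) is, at its simplest, the fractional rms of a light curve: standard deviation over mean. The sketch below applies it to synthetic monthly flux densities; real analyses also subtract the measurement-noise contribution, which this toy omits.

```python
import numpy as np

def fractional_rms(flux):
    """Fractional rms variability of a light curve: sample std / mean.

    Minimal sketch only; no correction for measurement errors.
    """
    flux = np.asarray(flux, dtype=float)
    return flux.std(ddof=1) / flux.mean()

# synthetic monthly flux densities (mJy) with ~15% scatter about the mean
rng = np.random.default_rng(3)
flux = 10.0 * (1.0 + 0.15 * rng.standard_normal(24))
rms = fractional_rms(flux)
```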
3,720
Suppose that you have an abstract for a scientific paper: we establish a connection between recent developments in the study of vortices in the abelian higgs models , and in the theory of structure - preserving discrete conformal maps . we explain how both are related via conformal mapping problems involving prescribed linear combinations of the curvature and volume form , and show how the discrete conformal theory can be used to construct discrete vortex solutions . . And you have already written the first three sentences of the full article: an important class of problems in mathematics takes the following prototypical form : given a riemannian manifold , find a conformally equivalent metric satisfying certain prescribed properties . _ conformal mapping problems _ arise ubiquitously , and their solutions are invaluable to many areas of applied mathematics , physics , and engineering . inspired largely by these applications , considerable work has been done in developing notions of _ discrete conformal maps_. of particular interest have been discretizations that preserve certain structural properties and characteristics of their continuum counterparts .. Please generate the next two sentences of the article
such discretizations have been shown to contain a surprisingly profound and rich theory of their own , and their study is an exciting and flourishing area of mathematics today . the main purpose of this paper is to note a correspondence between continuum conformal mapping problems that arise in the study of vortices in the abelian higgs field theory ( see @xcite ) and certain discrete conformal mapping problems studied in @xcite . the abelian higgs field theory first arose as a phenomenological model of superconductivity .
3,721
Suppose that you have an abstract for a scientific paper: the final stage of terrestrial planet formation is known as the giant impact stage where protoplanets collide with one another to form planets . so far this stage has been mainly investigated by @xmath0-body simulations with an assumption of perfect accretion in which all collisions lead to accretion . however , this assumption breaks for collisions with high velocity and/or a large impact parameter . we derive an accretion condition for protoplanet collisions in terms of impact velocity and angle and masses of colliding bodies , from the results of numerical collision experiments . for the first time , we adopt this realistic accretion condition in @xmath0-body simulations of terrestrial planet formation from protoplanets . we compare the results with those with perfect accretion and show how the accretion condition affects terrestrial planet formation . we find that in the realistic accretion model , about half of collisions do not lead to accretion . however , the final number , mass , orbital elements , and even growth timescale of planets are barely affected by the accretion condition . for the standard protoplanetary disk model , typically two earth - sized planets form in the terrestrial planet region over about @xmath1 years in both realistic and perfect accretion models . we also find that for the realistic accretion model , the spin angular velocity is about 30% smaller than that for the perfect accretion model that is as large as the critical spin angular velocity for rotational instability . the spin angular velocity and obliquity obey gaussian and isotropic distributions , respectively , independently of the accretion condition . . 
And you have already written the first three sentences of the full article: it is generally accepted that the final stage of terrestrial planet formation is the giant impact stage where protoplanets or planetary embryos formed by oligarchic growth collide with one another to form planets ( e.g. , * ? ? ? * ; * ? ? ? this stage has been mainly studied by @xmath0-body simulations .. Please generate the next two sentences of the article
so far all @xmath0-body simulations have assumed perfect accretion in which all collisions lead to accretion ( e.g. , * ? ? ? * ; * ? ? ?
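As a purely hypothetical illustration of what an accretion condition "in terms of impact velocity and angle and masses" can look like, the sketch below merges two bodies only when the normal component of the impact velocity falls below their mutual escape velocity. The paper derives its actual condition from numerical collision experiments; this criterion is just a stand-in showing the dependence involved.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(m1, m2, r1, r2):
    """Mutual two-body escape velocity at contact."""
    return np.sqrt(2.0 * G * (m1 + m2) / (r1 + r2))

def leads_to_accretion(v_imp, theta, m1, m2, r1, r2):
    """Hypothetical toy accretion criterion for a protoplanet collision.

    Treat the collision as accretionary when the normal component of the
    impact velocity is below the mutual escape velocity; fast or grazing
    impacts then fail to merge.  NOT the condition derived in the paper.
    """
    v_esc = escape_velocity(m1, m2, r1, r2)
    return v_imp * np.cos(theta) < v_esc

# two Earth-mass, Earth-radius bodies
m, r = 5.97e24, 6.37e6
v_esc = escape_velocity(m, m, r, r)
head_on_slow = leads_to_accretion(0.9 * v_esc, 0.0, m, m, r, r)
grazing_fast = leads_to_accretion(2.0 * v_esc, np.radians(60), m, m, r, r)
```

Even this caricature reproduces the qualitative finding that a substantial fraction of high-velocity or high-impact-parameter collisions do not lead to accretion.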
3,722
Suppose that you have an abstract for a scientific paper: in this work , we investigate the nature of the host galaxies of long gamma - ray bursts ( lgrbs ) using a galaxy catalogue constructed from the _ millennium simulation_. we developed a lgrb synthetic model based on the hypothesis that these events originate at the end of the life of massive stars following the collapsar model , with the possibility of including a constraint on the metallicity of the progenitor star . a complete observability pipeline was designed to calculate a probability estimation for a galaxy to be observationally identified as a host for lgrbs detected by present observational facililties . this new tool allows us to build an observable host galaxy catalogue which is required to reproduce the current stellar mass distribution of observed hosts . this observability pipeline predicts that the minimum mass for the progenitor stars should be @xmath0 m@xmath1 in order to be able to reproduce batse observations . systems in our observable catalogue are able to reproduce the observed properties of host galaxies , namely stellar masses , colours , luminosity , star formation activity and metallicities as a function of redshift . at @xmath2 , our model predicts that the observable host galaxies would be very similar to the global galaxy population . we found that @xmath3 per cent of the observable host galaxies with mean gas metallicity lower than @xmath4 have stellar masses in the range @xmath5@xmath6m@xmath1 in excellent agreement with observations . interestingly , in our model observable host galaxies remain mainly within this mass range regardless of redshift , since lower stellar mass systems would have a low probability of being observed while more massive ones would be too metal - rich . observable host galaxies are predicted to preferentially inhabit dark matter haloes in the range @xmath7@xmath8m@xmath1 , with a weak dependence on redshift . 
they are also found to preferentially map different density environments at different stages of evolution of the universe . at.... And you have already written the first three sentences of the full article: gamma - ray bursts ( grbs ) are brief pulses of @xmath9-ray radiation observed on average once a day at random directions in the sky . they are the brightest sources in this region of the electromagnetic spectrum , and they have been systematically studied in the past two decades ( e.g. * ? ? ? * and references therein ) .. Please generate the next two sentences of the article
the origin of grbs is cosmological , as determined from the measurement of their redshifts ( e.g. * ? ? ? * ; * ? ? ?
3,723
Suppose that you have an abstract for a scientific paper: the core accretion theory of planet formation has at least two fundamental problems explaining the origins of uranus and neptune : ( 1 ) dynamical times in the trans - saturnian solar nebula are so long that core growth can take @xmath0 myr , and ( 2 ) the onset of runaway gas accretion that begins when cores reach @xmath1 necessitates a sudden gas accretion cutoff just as uranus and neptune s cores reach critical mass . both problems may be resolved by allowing the ice giants to migrate outward after their formation in solid - rich feeding zones with planetesimal surface densities well above the minimum - mass solar nebula . we present new simulations of the formation of uranus and neptune in the solid - rich disk of @xcite using the initial semimajor axis distribution of the nice model ( gomes et al . 2005 ; morbidelli et al . 2005 ; tsiganis et al . 2005 ) , with one ice giant forming at 12 au and the other at 15 au . the innermost ice giant reaches its present mass after 3.8 - 4.0 myr and the outermost after 5.3 - 6 myr , a considerable time decrease from previous one - dimensional simulations ( e.g. pollack et al . 1996 ) . the core masses stay subcritical , eliminating the need for a sudden gas accretion cutoff . our calculated carbon mass fractions of @xmath2 are in excellent agreement with the ice giant interior models of @xcite and @xcite . based on the requirement that the ice giant - forming planetesimals contain @xmath3 mass fractions of methane ice , we can reject any solar system formation model that initially places uranus and neptune inside of saturn s orbit . we also demonstrate that a large population of planetesimals must be present in both ice giant feeding zones throughout the lifetime of the gaseous nebula . this research marks a substantial step forward in connecting both the dynamical and chemical aspects of planet formation . 
although we can not say that the solid - rich solar nebula model of dodson - robinson et al . ( 2009 ) gives _ exactly _.... And you have already written the first three sentences of the full article: the canonical core accretion theory of planet formation , in which planetesimals collide to form solid cores which then destabilize the surrounding gas to accrete an atmosphere ( safronov 1969 ; pollack et al.1996 ) , has at least two fundamental problems explaining the origins of uranus and neptune . first , dynamical times in the trans - saturnian solar nebula are so long and solid surface densities @xmath4 are so low ( @xmath5 g @xmath6 ) according to the assumed @xmath7 mass distribution ( pollack et al . 1996 ) that planet growth takes @xmath0 myr , far longer than both observed and theoretical protostellar disk lifetimes ( haisch et al . 2001. Please generate the next two sentences of the article
; alexander et al . second , runaway gas accretion begins when solid cores reach 10 to 15 @xmath8 , requiring a sudden and complete gas accretion cutoff just as uranus and neptune reach their current masses .
3,724
Suppose that you have an abstract for a scientific paper: some chemical reactions are described by electron transfer ( et ) processes . the underlying mechanism could be modeled as a polaron motion in the molecular crystalthe holstein model . by taking spin degrees of freedom into consideration , we generalize the holstein model ( molecular crystal model ) to microscopically describe an et chemical reaction . in our model , the electron spins in the radical pair simultaneously interact with a magnetic field and their nuclear - spin environments . by virtue of the perturbation approach , we obtain the chemical reaction rates for different initial states . it is discovered that the chemical reaction rate of the triplet state demonstrates its dependence on the direction of the magnetic field while the counterpart of the singlet state does not . this difference is attributed to the explicit dependence of the triplet state on the direction when the axis is rotated . our model may provide a possible candidate for the microscopic origin of avian compass . . And you have already written the first three sentences of the full article: nowadays , it has been prevailing in both experimental and theoretical explorations that quantum coherence effect due to the role of phase in quantum superposition may exist in living processes . this essentially implies that there may exist quantum coherence effect in chemical reactions in some living processes , such as charge and energy transfer in photosynthesis @xcite and singlet - and - triplet transition in avian compass @xcite . it has long been questioned how migratory birds can navigate to their destination over hundreds of miles .. Please generate the next two sentences of the article
one of the possible answers is given by the radical pair mechanism @xcite . two unpaired electron spins in the radical pair are initially prepared in the singlet state . due to their interactions with the geomagnetic field and their environmental nuclear spins
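The direction dependence mentioned in the abstract (triplet rates depend on the magnetic-field direction, singlet rates do not) can already be seen at the level of the two-electron Zeeman term, because the total spin annihilates the singlet state. A small numpy sketch follows; hyperfine couplings are omitted and all parameter values are illustrative.

```python
import numpy as np

# single spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def zeeman(theta, omega=1.0):
    """Zeeman Hamiltonian of the two radical-pair electrons in a field of
    strength omega tilted by theta from the z axis (toy model: identical
    g-factors, no hyperfine terms)."""
    Sx = np.kron(sx, I2) + np.kron(I2, sx)   # total-spin components
    Sz = np.kron(sz, I2) + np.kron(I2, sz)
    return omega * (np.cos(theta) * Sz + np.sin(theta) * Sx)

# two-electron basis: |uu>, |ud>, |du>, |dd>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
triplet0 = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

# the singlet is annihilated by every total-spin component, hence blind
# to the field direction; the triplet component is not
s_response = [np.linalg.norm(zeeman(t) @ singlet) for t in (0.0, np.pi / 3)]
t_response = [np.linalg.norm(zeeman(t) @ triplet0) for t in (0.0, np.pi / 3)]
```

Here `s_response` vanishes for every tilt angle, while `t_response` grows with the tilt as the field mixes the triplet sublevels, which is the rotational asymmetry the compass mechanism exploits.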
3,725
Suppose that you have an abstract for a scientific paper: we conducted radio interferometric observations of six pulsars at 610 mhz using the giant metrewave radio telescope ( gmrt ) . all these objects were claimed or suspected to be the gigahertz - peaked spectra ( gps ) pulsars . for a half of the sources in our sample the interferometric imaging provides the only means to estimate their flux at 610 mhz due to a strong pulse scatter - broadening . in our case , these pulsars have very high dispersion measure values and we present their spectra containing for the first time low - frequency measurements . the remaining three pulsars were observed at low frequencies using the conventional pulsar flux measurement method . the interferometric imaging technique allowed us to re - examine their fluxes at 610 mhz . we were able to confirm the gps feature in the psr b1823@xmath013 spectrum and select a gps candidate pulsar . these results clearly demonstrate that the interferometric imaging technique can be successfully applied to estimate flux density of pulsars even in the presence of strong scattering . [ firstpage ] pulsars : general - pulsars : individual : b1750@xmath024 , b1800@xmath021 , b1815@xmath014 , b1822@xmath014 , b1823@xmath013 , b1849 + 00 . And you have already written the first three sentences of the full article: in the case of most pulsars , their observed radio spectra can be described using a power law with a negative spectral index of @xmath01.8 or ( for a small fraction of sources ) two power laws with spectral indices of @xmath00.9 and @xmath02.2 with a break frequency @xmath1 on average of 1.5 ghz @xcite . some pulsars also exhibit a low - frequency turnover in their spectra @xcite . a spectrum of that kind is characterized by a positive spectral index below a peak frequency @xmath2 of about 100 mhz ( with a few exceptions when the spectrum peaks at frequencies up to several hundred mhz ) .. Please generate the next two sentences of the article
however , @xcite pointed out a small sample of pulsars that peak around 1 ghz and above . such an object , called the gigahertz - peaked spectrum ( gps ) pulsar , is described as a relatively young source that has a high dispersion measure ( dm ) and usually adjoins a dense , sometimes extreme vicinity .
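To make "gigahertz-peaked spectrum" concrete, the sketch below evaluates a toy log-parabolic spectral model that peaks near 1 GHz alongside an ordinary negative power law; the functional form and every parameter value are hypothetical, not fits from the paper.

```python
import numpy as np

def log_parabola(nu, s_peak, nu_peak, beta):
    """Toy spectrum with a turnover: a parabola in log S versus log nu."""
    x = np.log10(nu / nu_peak)
    return s_peak * 10.0 ** (-beta * x ** 2)

nu = np.logspace(-1, 1, 200)        # 0.1 - 10 GHz frequency grid
s_gps = log_parabola(nu, s_peak=5.0, nu_peak=1.0, beta=1.0)  # GPS-like
s_plain = 5.0 * nu ** -1.8          # typical steep pulsar power law

nu_max_gps = nu[np.argmax(s_gps)]   # location of the spectral peak
```

An ordinary pulsar spectrum is brightest at the lowest frequency observed, whereas the GPS curve turns over, which is why flux measurements well below 1 GHz (such as the 610 MHz imaging here) are decisive.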
3,726
Suppose that you have an abstract for a scientific paper: we present new @xmath0 mm continuum observations of orion bn / kl with the very large array ( vla ) . we resolve the emission from the young stellar objects ( yso ) radio source i and bn at several epochs . radio source i is highly elongated northwest - southeast , and remarkably stable in flux density , position angle , and overall morphology over nearly a decade . this favors the extended emission component arising from an ionized edge - on disk rather than an outwardly propagating jet . we have measured the proper motions of source i and bn for the first time at 43 ghz . we confirm that both sources are moving at high speed ( 12 and 26 km s@xmath1 , respectively ) approximately in opposite directions , as previously inferred from measurements at lower frequencies . we discuss dynamical scenarios that can explain the large motions of both bn and source i and the presence of disks around both . our new measurements support the hypothesis that a close ( @xmath2 au ) dynamical interaction occurred around 500 years ago between source i and bn as proposed by gomez et al . from the dynamics of encounter we argue that source i today is likely to be a binary with a total mass on the order of 20 m@xmath3 , and that it probably existed as a softer binary before the close encounter . this enables preservation of the original accretion disk , though truncated to its present radius of @xmath2 au . n - body numerical simulations show that the dynamical interaction between a binary of 20 m@xmath3 total mass ( source i ) and a single star of 10 m@xmath3 mass ( bn ) may lead to the ejection of both and binary hardening . the gravitational energy released in the process would be large enough to power the wide - angle , high - velocity flow traced by h@xmath4 and co emission in the bn / kl nebula . 
assuming the proposed dynamical history is correct , the smaller mass for source i recently estimated from sio maser dynamics ( @xmath5 m@xmath3 ) by matthews et al . , suggests.... And you have already written the first three sentences of the full article: the orion bn / kl complex , at a distance of @xmath6 pc @xcite , contains the nearest region of ongoing high - mass star formation . a dense protostellar cluster lies within the region containing three radio sources that are believed to be massive young stellar objects ( ysos ) : the highly embedded radio source i , @xcite ; the bn object , which is the brightest source in the region in the mid - infrared ( ir ) at 12.4 @xmath7 m @xcite ; and source _ n _ , a relatively evolved yso with a disk observed in the mir @xcite and a jet observed in the radio at 8.4 ghz @xcite . despite intensive investigations at radio and ir wavelengths , the primary heating source(s ) for the orion kl region ( @xmath8 l@xmath3 ) is ( are ) still not known .. Please generate the next two sentences of the article
another long - standing puzzle is the geometry of outflow and the identification of driving sources . there are two large - scale outflows in the region . a powerful ( @xmath9 ergs ) , high - velocity ( 30@xmath10200 km s@xmath1 ) , wide - angle ( @xmath11 rad ) outflow extends northwest - southeast ( nw - se ) over @xmath12 pc .
3,727
Suppose that you have an abstract for a scientific paper: we perform a detailed study of the process @xmath0 and its sensitivity to anomalous gauge boson couplings of the @xmath1 vertex . we concentrate on lep ii energies , @xmath2 gev , and energies appropriate to the proposed next linear collider ( nlc ) high energy @xmath3 collider with center of mass energies @xmath4 and 1 tev . at 200 gev , the process offers , at best , a consistency check of other processes being considered at lep200 . at 500 gev , the parameters @xmath5 and @xmath6 can be measured to about @xmath7 and @xmath8 respectively at 95% c.l . while at 1 tev , they can be measured to about @xmath9 . at the high luminosities anticipated at high energy linear colliders precision measurements are likely to be limited by systematic rather than statistical errors . measurement of the @xmath10 vertex through + single photon production at @xmath3 colliders département de physique , université du québec à montréal + c.p . 8888 , succ . centre - ville , montréal , québec , canada , h3c 3p8 ottawa - carleton institute for physics + department of physics , carleton university , ottawa canada , k1s 5b6 . And you have already written the first three sentences of the full article: the major preoccupation of particle physics is the search for physics beyond the standard model or equivalently , for deviations from standard model predictions . to this end , measurements at the cern lep-100 @xmath3 collider and the slac slc @xmath3 collider@xcite have provided stringent tests @xcite of the standard model of the electroweak interactions @xcite . however , it is mainly the fermion - gauge boson couplings that have been tested and the gauge sector of the standard model remains poorly constrained . a stringent test of the gauge structure of the standard model is provided by the tri - linear gauge vertices ( tgv s ) ; the @xmath1 and @xmath11 vertices . within the standard model ,.
Please generate the next two sentences of the article
these couplings are uniquely determined by @xmath12 gauge symmetry so that a precise measurement of the vertex poses a severe test of the gauge structure of the theory . if these couplings were observed to have different values than their standard model values , it would indicate the need for physics beyond the standard model .
3,728
Suppose that you have an abstract for a scientific paper: we present a model of @xmath0-ray emission through neutral pion production and decay in two - temperature accretion flows around supermassive black holes . we refine previous studies of such a hadronic @xmath0-ray emission by taking into account ( 1 ) relativistic effects in the photon transfer and ( 2 ) absorption of @xmath0-ray photons in the radiation field of the flow . we use a fully general relativistic description of both the radiative and hydrodynamic processes , which allows us to study the dependence on the black hole spin . the spin value strongly affects the @xmath0-ray emissivity within @xmath1 gravitational radii . the central regions of flows with the total luminosities @xmath2 of the eddington luminosity ( @xmath3 ) are mostly transparent to photons with energies below 10 gev , permitting investigation of the effects of space - time metric . for such @xmath4 , an observational upper limit on the @xmath0-ray ( 0.1 10 gev ) to x - ray ( 2 10 kev ) luminosity ratio of @xmath5 can rule out rapid rotation of the black hole ; on the other hand , a measurement of @xmath6 can not be regarded as the evidence of rapid rotation , as such a ratio can also result from a flat radial profile of @xmath0-ray emissivity ( which would occur for nonthermal acceleration of protons in the whole body of the flow ) . at @xmath7 , the @xmath0-ray emission from the innermost region is strongly absorbed and the observed @xmath0-rays do not carry information on the value of @xmath8 . we note that if the x - ray emission observed in centaurus a comes from an accretion flow , the hadronic @xmath0-ray emission from the flow should contribute significantly to the mev / gev emission observed from the core of this object , unless it contains a slowly rotating black hole and protons in the flow are thermal . [ firstpage ] accretion , accretion discs black hole physics gamma - rays : theory . 
And you have already written the first three sentences of the full article: early investigations of black hole accretion flows indicated that tenuous flows can develop a two - temperature structure , with proton temperature sufficient to produce a significant @xmath0-ray luminosity above 10 mev through @xmath9 production ( e.g. dahlbacka , chapline & weaver 1974 ) . the two - temperature structure is an essential feature of the optically - thin , advection dominated accretion flow ( adaf ) model , which has been extensively studied and successfully applied to a variety of black hole systems ( see , e.g. , reviews in yuan 2007 , narayan & mcclintock 2008 , yuan & narayan 2013 ) over the past two decades , following the work of narayan & yi ( 1994 ) . mahadevan , narayan & krolik ( 1997 ; hereafter m97 ) pointed out that @xmath0-ray emission resulting from proton - proton collisions in adafs may be a signature allowing to test their fundamental nature .. Please generate the next two sentences of the article
the model of m97 relied on a non - relativistic adaf model and their computations were improved by oka & manmoto ( 2003 ; hereafter om03 ) who used a fully general relativistic ( gr ) model of the flow . however , both m97 and om03 neglected the doppler and gravitational shifts of energy as well as gravitational focusing and capturing by the black hole , which is a major deficiency because the @xmath0-ray emission is produced very close to the black hole s horizon . furthermore , both works neglected the internal absorption of @xmath0-ray photons due to pair creation , an effect which should be important in more luminous systems .
3,729
Suppose that you have an abstract for a scientific paper: recent observations of high redshift quasar spectra reveal long gaps with little flux . a small or no detectable flux does not by itself imply the intergalactic medium ( igm ) is neutral . inferring the average neutral fraction from the observed absorption requires assumptions about clustering of the igm , which the gravitational instability model supplies . our most stringent constraint on the neutral fraction at @xmath0 is derived from the mean lyman - beta transmission measured from the @xmath1 sdss quasar of becker et al . the neutral hydrogen fraction at mean density has to be larger than @xmath2 . this is substantially higher than the neutral fraction of @xmath3 at @xmath4 , suggesting that dramatic changes take place around or just before @xmath0 , even though current constraints are still consistent with a fairly ionized igm at @xmath0 . these constraints translate also into constraints on the ionizing background , subject to uncertainties in the igm temperature . an interesting alternative method to constrain the neutral fraction is to consider the probability of having many consecutive pixels with little flux , which is small unless the neutral fraction is high . it turns out that this constraint is slightly weaker than the one obtained from the mean transmission . we show that while the derived neutral fraction at a given redshift is sensitive to the power spectrum normalization , the size of the jump around @xmath0 is not . we caution that the main systematic uncertainties include spatial fluctuations in the ionizing background , and the continuum placement . tests are proposed . in particular , the sightline to sightline dispersion in mean transmission might provide a useful diagnostic . we express the dispersion in terms of the transmission power spectrum , and develop a method to calculate the dispersion for spectra that are longer than the typical simulation box . 
And you have already written the first three sentences of the full article: recent spectroscopic observations of @xmath5 quasars discovered by the sloan digital sky survey ( sdss ) have opened up new windows into the study of the high redshift intergalactic medium ( igm ) ( fan et al . 2000 , zheng et al .. Please generate the next two sentences of the article
2000 , schneider et al . 2001 , anderson et al .
3,730
Suppose that you have an abstract for a scientific paper: the discovery of rapid synchrotron gamma - ray flares above @xmath0mev from the crab nebula has attracted new interest in alternative particle acceleration mechanisms in pulsar wind nebulae . diffuse shock - acceleration fails to explain the flares because particle acceleration and emission occur during a single or even sub - larmor timescale . in this regime , the synchrotron energy losses induce a drag force on the particle motion that balances the electric acceleration and prevents the emission of synchrotron radiation above @xmath1mev . previous analytical studies and 2d particle - in - cell ( pic ) simulations indicate that relativistic reconnection is a viable mechanism to circumvent the above difficulties . the reconnection electric field localized at x - points linearly accelerates particles with little radiative energy losses . in this paper , we check whether this mechanism survives in 3d , using a set of large pic simulations with radiation reaction force and with a guide field . in agreement with earlier works , we find that the relativistic drift kink instability deforms and then disrupts the layer , resulting in significant plasma heating but few non - thermal particles . a moderate guide field stabilizes the layer and enables particle acceleration . we report that 3d magnetic reconnection can accelerate particles above the standard radiation reaction limit , although the effect is less pronounced than in 2d with no guide field . we confirm that the highest energy particles form compact bunches within magnetic flux ropes , and a beam tightly confined within the reconnection layer , which could result in the observed crab flares when , by chance , the beam crosses our line of sight . . 
And you have already written the first three sentences of the full article: the non - thermal radiation emitted in pulsar wind nebulae is commonly associated with ultra - relativistic electron - positron pairs injected by the pulsar and accelerated at the termination shock . in the crab nebula , the particle spectrum above @xmath2tev responsible for the x - ray to gamma - ray synchrotron emission is well modeled by a single power - law distribution of index @xmath3 , which is usually associated with first - order fermi acceleration at the shock front ( see e.g. , @xcite ) . since the detections of the first flares of high - energy gamma rays in 2010 and the following ones detected since then @xcite , we know that the crab nebula occasionally accelerates particles up to a few @xmath4 ev ( see reviews by @xcite and @xcite ) . this discovery is very puzzling because the particles are accelerated to such energies within a few days , which corresponds to their larmor gyration time in the nebula .. Please generate the next two sentences of the article
this is far too fast for fermi - type acceleration mechanisms which operate over multiple crossings of the particles through the shock ( e.g. , @xcite ) . in addition , the observed particle spectrum is very hard , which is not compatible with the steep power - law @xmath5 expected with diffuse shock - acceleration @xcite .
3,731
Suppose that you have an abstract for a scientific paper: we present [ n ii ] and h@xmath0 images and high resolution long slit spectra of the planetary nebula ic4846 , which reveal , for the first time , its complex structure and the existence of collimated outflows . the object consists of a moderately elongated shell , two ( and probably three ) pairs of collimated bipolar outflows at different orientations , and an attached circular shell . one of the collimated pairs is constituted by two curved , extended filaments whose properties indicate a high velocity , bipolar precessing jet . a difference of @xmath1 10 kms@xmath2 is found between the systemic velocity of the precessing jets and the centroid velocity of the nebula , as recently reported for hu2 - 1 . we propose that this difference is due to orbital motion of the ejection source in a binary central star . the orbital separation of @xmath3 30 au and period @xmath3 100 yr estimated for the binary are similar to those in hu2 - 1 , linking the central stars of both planetary nebulae to interacting binaries . extraordinary similarities also exist between ic4846 and the bewildering planetary nebula ngc6543 , suggesting a similar formation history for both objects . . And you have already written the first three sentences of the full article: ic4846 ( [email protected] ) is a compact planetary nebula ( pn ) whose morphology has not been studied in detail yet . the only available information on its structure is provided by the vla 6 cm continuum observations by kwok ( 1985 , see also aaquist & kwok 1990 ) , showing several knots embedded in a faint elongated structure of @xmath1 3@xmath52 arcsec@xmath6 in size . the h@xmath7 surface brightness ( @xmath8 , acker et al . 1992 ) suggests that ic4846 has a high electron density .. Please generate the next two sentences of the article
this is corroborated by the small [ s ii]@xmath96717,@xmath96731 doublet ratio ( barker 1978 ; acker et al . 1992 ) which reaches the limiting ratio for high electron density ( @xmath10 @xmath11 ) .
3,732
Suppose that you have an abstract for a scientific paper: the statistics of earthquakes in a heterogeneous fault zone is studied analytically and numerically in a mean field version of a model for a segmented fault system in a three - dimensional elastic solid@xcite . the studies focus on the interplay between the roles of disorder , dynamical effects , and driving mechanisms . a two - parameter phase diagram is found , spanned by the amplitude of dynamical weakening ( or `` overshoot '' ) effects @xmath0 and the normal distance @xmath1 of the driving forces from the fault . in general , small @xmath0 and small @xmath1 are found to produce gutenberg - richter type power law statistics with an exponential cutoff , while large @xmath0 and large @xmath1 lead to a distribution of small events combined with characteristic system - size events . in a certain parameter regime the behavior is bistable , with transitions back and forth from one phase to the other on time scales determined by the fault size and other model parameters . the implications for realistic earthquake statistics are discussed . . And you have already written the first three sentences of the full article: the statistics of earthquakes has been a subject of research for a long time . one spectacular feature is the wide range of observed earthquake sizes , spanning over ten decades in earthquake moment magnitude ( which is defined to scale as the logarithm of the integral of slip along the fault during the earthquake@xcite ) . gutenberg and richter@xcite found in the 50 s that the size distribution of regional earthquakes follows a power law over the entire range of observed events .. Please generate the next two sentences of the article
the exponent @xmath2 of the power law distribution appears to be universal , _ i.e. _ it is approximately the same ( within statistical errors and possible secondary dependency on the tectonic domain ) for all studied regions .
3,733
Suppose that you have an abstract for a scientific paper: we study the complete evolution of a flat and homogeneous universe dominated by tachyonic matter . we demonstrate the attractor behaviour of the tachyonic inflation using the hamilton - jacobi formalism . we also obtain analytical approximations to the trajectories of the tachyon field in different regions . the numerical calculation shows that an initial non - vanishing momentum does not prevent the onset of inflation . the slow - rolling solution is an attractor . . And you have already written the first three sentences of the full article: the study of non - bps objects such as non - bps branes , brane - antibrane configurations and spacelike branes has recently attracted great attention given its implications for string / m - theory and cosmology . the tachyon field associated with unstable d - branes , might be responsible for cosmological inflation at early epochs due to tachyon condensation near the top of the effective potential @xcite , and could contribute to some new form of cosmological dark matter at late times @xcite . several authors have investigated the process of rolling of the tachyon in the cosmological background @xcite . in the slow roll limit in frw cosmology , the exact solution of tachyonic inflation with exponential potential is found @xcite .. Please generate the next two sentences of the article
a question which has not yet been addressed in the literature on tachyonic inflation is the issue of constraints on the phase space of initial conditions for inflation which arise when one takes into account the fact that in the context of cosmology the momenta of the tachyon field can not be neglected in the early universe . for models of the type of chaotic inflation , the work of @xcite shows that most of the energetically accessible field value space gives rise to a sufficiently long period of slow roll inflation .
3,734
Suppose that you have an abstract for a scientific paper: shor s algorithms for factorization and discrete logarithms on a quantum computer employ fourier transforms preceding a final measurement . it is shown that such a fourier transform can be carried out in a semi - classical way in which a `` classical '' ( macroscopic ) signal resulting from the measurement of one bit ( embodied in a two - state quantum system ) is employed to determine the type of measurement carried out on the next bit , and so forth . in this way the two - bit gates in the fourier transform can all be replaced by a smaller number of one - bit gates controlled by classical signals . success in simplifying the fourier transform suggests that it may be worthwhile looking for other ways of using semi - classical methods in quantum computing . recently shor @xcite has shown that a quantum computer @xcite , if it could be built , would be capable of solving certain problems , such as factoring long numbers , much more rapidly than is possible using currently available algorithms on a conventional computer . this has stimulated a lot of interest in the subject @xcite , and various proposals have been made for actually constructing such a computer @xcite . the basic idea is that bits representing numbers can be embodied in two - state quantum systems , for example , in the spin degree of freedom of a spin half particle , and the computation proceeds by manipulating these bits using appropriate gates . it turns out that quantum computations can be carried out using circuits employing one - bit gates , which produce a unitary transformation on the two - dimensional hilbert space representing a single bit , together with two - bit gates producing appropriate unitary transformations on a four - dimensional hilbert space @xcite .
one - bit gates should be much easier to construct than two - bit gates , since , for example , an arbitrary unitary transformation on the spin degree of freedom of a spin half particle can be produced by subjecting it.... And you have already written the first three sentences of the full article: we are grateful to dr . d. divincenzo for some helpful comments on the manuscript . financial support for this research has been provided by the national science foundation through grant phy-9220726 .. Please generate the next two sentences of the article
11 p. w. shor , in _ proceedings of the 35th annual symposium on foundations of computer science , santa fe , 1994 _ , edited by s. goldwasser ( ieee computer society press , los alamitos , california , 1994 ) , p. 124 . p. w. shor , preprint ( quant - ph/9508027 ) , submitted to siam j. computing .
3,735
Suppose that you have an abstract for a scientific paper: we propose and analyze the concept of the vertical hot - electron terahertz ( thz ) graphene - layer detectors ( glds ) based on the double - gl and multiple - gl structures with the barrier layers made of materials with a moderate conduction band off - set ( such as tungsten disulfide and related materials ) . the operation of these detectors is enabled by the thermionic emissions from the gls enhanced by the electrons heated by incoming thz radiation . hence , these detectors are the hot - electron bolometric detectors . the electron heating is primarily associated with the intraband absorption ( the drude absorption ) . in the frame of the developed model , we calculate the responsivity and detectivity as functions of the photon energy , gl doping , and the applied voltage for the gl detectors ( glds ) with different numbers of gls . the detectors based on the cascade multiple - gl structures can exhibit a substantial photoelectric gain resulting in the elevated responsivity and detectivity . the advantages of the thz detectors under consideration are associated with their high sensitivity to the normal incident radiation and efficient operation at room temperature at the low end of the thz frequency range . such glds with a metal grating , supporting the excitation of plasma oscillations in the gl - structures by the incident thz radiation , can exhibit a strong resonant response at the frequencies of several thz ( in the range , where the operation of the conventional detectors based on a@xmath0b@xmath1 materials , in particular thz quantum - well detectors , is hindered due to a strong optical phonon radiation absorption in such materials ) . we also evaluate the characteristics of glds in the mid- and far - infrared ranges where the electron heating is due to the interband absorption in gls . .
And you have already written the first three sentences of the full article: the gapless energy spectrum of graphene @xcite enables using single- or multiple graphene - layer ( gl ) structures for different terahertz ( thz ) and infrared ( ir ) photodetectors based on the interband transitions @xcite ( see also refs @xcite , where different thz and ir photodetectors based on gls were explored ) . the interband photodetectors use either the gls serving as photoconductors or the lateral p - i - n junctions . in the latter case , the electrons and holes are generated in the depleted i - region and move to the opposite gl contacts driven by the electric field in the depletion region @xcite . the multiple - gl structures with the lateral p - i - n junctions can consist of either several non - bernal stacked ( twisted ) gls as in ref .. Please generate the next two sentences of the article
@xcite or gls separated by the barrier layers such as thin layers of boron nitride ( hbn ) , tungsten disulfide ( ws@xmath2 ) , or similar materials . such heterostructures have recently attracted a considerable interest and enabled several novel devices being proposed and realized @xcite .
3,736
Suppose that you have an abstract for a scientific paper: we present initial results from observations and numerical analyses aimed at characterizing main - belt comet p/2012 t1 ( panstarrs ) . optical monitoring observations were made between october 2012 and february 2013 using the university of hawaii 2.2 m telescope , the keck i telescope , the baade and clay magellan telescopes , faulkes telescope south , the perkins telescope at lowell observatory , and the southern astrophysical research ( soar ) telescope . the object s intrinsic brightness approximately doubles from the time of its discovery in early october until mid - november and then decreases by @xmath060% between late december and early february , similar to photometric behavior exhibited by several other main - belt comets and unlike that exhibited by disrupted asteroid ( 596 ) scheila . we also used keck to conduct spectroscopic searches for cn emission as well as absorption at 0.7 @xmath1 m that could indicate the presence of hydrated minerals , finding an upper limit cn production rate of @xmath2 mol s@xmath3 , from which we infer a water production rate of @xmath4 mol s@xmath3 , and no evidence of the presence of hydrated minerals . numerical simulations indicate that p/2012 t1 is largely dynamically stable for @xmath5 myr and is unlikely to be a recently implanted interloper from the outer solar system , while a search for potential asteroid family associations reveal that it is dynamically linked to the @xmath0155 myr - old lixiaohua asteroid family . . And you have already written the first three sentences of the full article: main - belt comets ( mbcs ; * ? ? ? * ) exhibit cometary activity indicative of sublimating ice , yet orbit entirely within the main asteroid belt ( figure [ fig_aeimbcs ] ) . seven mbcs 133p / elst - pizarro , 176p / linear , 238p / read , 259p / garradd , p/2010 r2 ( la sagra ) , p/2006 vw@xmath6 , and p/2012 t1 ( panstarrs ) are currently known .. 
Please generate the next two sentences of the article
in addition , three other objects p/2010 a2 ( linear ) , ( 596 ) scheila , and p/2012 f5 ( gibbs ) have been observed to exhibit comet - like dust emission , though their active episodes have been attributed to impact events and are not believed to be sublimation - driven @xcite . as such , we do not consider these objects to be ice - bearing main - belt objects , and refer to them as disrupted asteroids ( figure [ fig_aeimbcs ] ) .
3,737
Suppose that you have an abstract for a scientific paper: this course reviews the rotational properties of non - degenerate stars as observed from the protostellar stage to the end of the main sequence . it includes an introduction to the various observational techniques used to measure stellar rotation . angular momentum evolution models developed over the mass range from the substellar domain to high - mass stars are briefly discussed . . And you have already written the first three sentences of the full article: the angular momentum content of a star at birth impacts on most of its subsequent evolution ( e.g. ekstrm et al . 2012 ) . the star s instantaneous spin rate and/or on its rotational history plays a central role in various processes , such as dynamo - driven magnetic activity , mass outflows and galactic yields , surface chemical abundances , internal flows and overall structure , and it may as well influences the planetary formation and migration processes . it is therefore of prime importance to understand the origin and evolution of stellar angular momentum , indeed one of the most challenging issues of modern stellar physics .. Please generate the next two sentences of the article
conversely , the evolution of stellar spin rate is governed by fundamental processes operating in the stellar interior and at the interface between the star and its immediate surroundings . the measurement of stellar rotation at various evolutionary stages and over a wide mass range thus provides a powerful means to probe these processes . in this introductory course , an overview of the rotational properties of stars and of angular momentum evolution models
3,738
Suppose that you have an abstract for a scientific paper: we have carried out an analysis of singularities in kohn variational calculations for low energy @xmath0elastic scattering . provided that a sufficiently accurate trial wavefunction is used , we argue that our implementation of the kohn variational principle necessarily gives rise to singularities which are not spurious . we propose two approaches for optimizing a free parameter of the trial wavefunction in order to avoid anomalous behaviour in scattering phase shift calculations , the first of which is based on the existence of such singularities . the second approach is a more conventional optimization of the generalized kohn method . close agreement is observed between the results of the two optimization schemes ; further , they give results which are seen to be effectively equivalent to those obtained with the complex kohn method . the advantage of the first optimization scheme is that it does not require an explicit solution of the kohn equations to be found . we give examples of anomalies which can not be avoided using either optimization scheme but show that it is possible to avoid these anomalies by considering variations in the nonlinear parameters of the trial function . . And you have already written the first three sentences of the full article: despite the absence of an explicit minimization principle , variational methods have been used successfully in many problems of quantum scattering theory . such calculations typically exploit a stationary principle in order to obtain an accurate description of scattering processes . the kohn variational method @xcite has been applied extensively to problems in electron - atom @xcite and electron - molecule @xcite scattering , as well as to the scattering of positrons , @xmath1 , by atoms @xcite and molecules @xcite .. Please generate the next two sentences of the article
it has been widely documented , however , that matrix equations derived from the kohn variational principle are inherently susceptible to spurious singularities . these singularities were discussed first by schwartz @xcite and have subsequently attracted considerable attention @xcite . in the region of these singularities
3,739
Suppose that you have an abstract for a scientific paper: theoretical analysis and fully atomistic molecular dynamics simulations reveal a brownian ratchet mechanism by which thermal fluctuations drive the net displacement of immiscible liquids confined in channels or pores with micro- or nanoscale dimensions . the thermally - driven displacement is induced by surface nanostructures with directional asymmetry and can occur against the direction of action of wetting or capillary forces . mean displacement rates in molecular dynamics simulations are predicted via analytical solution of a smoluchowski diffusion equation for the position probability density . the proposed physical mechanisms and derived analytical expressions can be applied to engineer surface nanostructures for controlling the dynamics of diverse wetting processes such as capillary filling , wicking , and imbibition in micro- or nanoscale systems . . And you have already written the first three sentences of the full article: advances in nanofabrication and characterization techniques have enabled the engineering of nanostructured surfaces with geometric features as small as a few nanometers @xcite . at nanoscales , the interplay between intermolecular forces , brownian motion , and surface structure can give rise to complex interfacial phenomena that are challenging for the application of conventional , continuum - based and deterministic , models @xcite . for example , nanoscale surface structures can induce energy barriers that lead to wetting processes governed by thermally - activated transitions between metastable states @xcite . these thermally - activated transitions can result in directed transport of fluids and solutes when there is directional asymmetry of the energy barriers induced by the physicochemical structure of the confining surfaces @xcite .. Please generate the next two sentences of the article
analogous mechanisms for rectification of thermal motion into directed transport underlie fundamental biological processes such as selective charge transport in ion channels or translocation of proteins across cellular membranes . physical systems where thermal fluctuations are able to drive net directional motion , while performing work against `` load '' or resistance forces , are known as thermal ratchets or brownian motors and have been extensively studied in the framework of statistical physics @xcite .
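The rectification idea described in this record can be illustrated with a minimal overdamped Langevin simulation of a flashing ratchet: a sawtooth potential is periodically switched on and off, and thermal diffusion during the off phase combined with the potential's directional asymmetry yields a net drift. This is only a schematic sketch of the general Brownian-ratchet mechanism, not the paper's atomistic model; the sawtooth shape and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flashing-ratchet toy model: overdamped Langevin particles (kT = 1, mobility = 1)
# in a sawtooth potential of spatial period 1 that is periodically switched off.
V0, a = 5.0, 0.2            # barrier height (in kT) and asymmetry of the sawtooth
D, dt = 1.0, 5e-4           # diffusion coefficient and time step
t_on, t_off = 0.2, 0.3      # durations of the "potential on" / "potential off" phases

def force(x):
    """Minus the slope of the sawtooth: steep rise on [0, a), gentle fall on [a, 1)."""
    u = np.mod(x, 1.0)
    return np.where(u < a, -V0 / a, V0 / (1.0 - a))

def simulate(n_particles=2000, n_cycles=20):
    x = np.zeros(n_particles)
    for _ in range(n_cycles):
        for phase_time, on in ((t_on, True), (t_off, False)):
            for _ in range(int(phase_time / dt)):
                drift = force(x) * dt if on else 0.0
                x = x + drift + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    return x

final = simulate()
# Diffusion during the off phase plus the asymmetric capture basins rectify
# the thermal motion into a net drift toward the steep face of the sawtooth.
print("mean displacement per particle:", final.mean())
```

With these (assumed) parameters the drift is toward the steep side of the potential; reversing the asymmetry reverses the direction.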
3,740
Suppose that you have an abstract for a scientific paper: we report the discovery of a new dwarf galaxy , andromeda xxviii , using data from the recently - released sdss dr8 . the galaxy is a likely satellite of andromeda , and , at a separation of @xmath0 kpc , would be one of the most distant of andromeda s satellites . its heliocentric distance is @xmath1 kpc , and analysis of its structure and luminosity shows that it has an absolute magnitude of @xmath2 and half - light radius of @xmath3 pc , similar to many other faint local group dwarfs . with presently - available imaging we are unable to determine if there is ongoing or recent star formation , which prevents us from classifying it as a dwarf spheroidal or dwarf irregular . . And you have already written the first three sentences of the full article: in recent years the environment of andromeda has been a prime location for the discovery of dwarf galaxies and tidal structures , much of which has been enabled by large surveys on the isaac newton telescope @xcite and the canada - france - hawaii telescope @xcite . these surveys have obtained deep observations over a significant fraction of the area within 180 kpc of andromeda , and yielded a considerable number of new discoveries . in addition to these dedicated surveys , two satellites of andromeda have been found in the sloan digital sky survey ( sdss ) imaging ( and ix and x , * ? ? ?. Please generate the next two sentences of the article
* ; * ? ? ? * ) , using an early sdss scan targeting andromeda specifically .
3,741
Suppose that you have an abstract for a scientific paper: the possibility to excite surface plasmon polaritons ( spps ) at the interface between two media depends on the optical properties of both media and geometrical aspects . specific conditions allowing the coupling of light with a plasmon - active interface must be satisfied . plasmonic effects are well described in noble metals where the imaginary part of the dielectric permittivity is often neglected ( perfect medium approximation ) . however , some systems exist for which such approximation can not be applied , hence requiring a refinement of the common spp theory . in this context , several properties of spps such as excitation conditions , period of the electromagnetic field modulation and spp lifetime then may strongly deviate from that of the perfect medium approximation . in this paper , calculations taking into account the imaginary part of the dielectric permittivities are presented . the model identifies analytical terms which should not be neglected in the mathematical description of spps on lossy materials . these calculations are applied to numerous material combinations resulting in a prediction of the corresponding spp features . a list of plasmon - active interfaces is provided along with a quantification of the above mentioned spp properties in the regime where the perfect medium approximation is not applicable . may 21st , 2016 _ keywords _ : surface plasmon polaritons , lossy materials , plasmon lifetime . And you have already written the first three sentences of the full article: surface plasmon polaritons ( spps ) are collective oscillations of electrons occurring at the interface of materials .
more than a hundred years after their discovery @xcite , spps have promoted new applications in many fields such as microelectronics @xcite , photovoltaics @xcite , near - field sensing @xcite , laser technology @xcite , photonics @xcite , meta - materials design @xcite , high order harmonics generation @xcite , or charged particles acceleration @xcite . most of these applications are based on expensive noble metals such as gold , silver or platinum , as these materials greatly support the plasmonic phenomena , exhibit very small ( plasmonic ) losses and the experimental results match well with the associated theory @xcite .. Please generate the next two sentences of the article
although there have been numerous studies addressing spps in lossy materials @xcite , some specific aspects remain to be investigated . in this paper , a mathematical condition for spp excitation at flat interfaces is provided . this approach includes the widely accepted theory but reveals a wider ( material dependent ) domain of spp excitation than predicted by the existing literature .
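For a single flat interface, the complex SPP wavenumber follows from the standard dispersion relation k_spp = k0 sqrt(ε1 ε2 / (ε1 + ε2)); keeping the imaginary part of the metal permittivity immediately yields the field-modulation period and the finite propagation length this record discusses. A small sketch — the gold permittivity value near 633 nm is an assumed approximate number, not taken from the paper:

```python
import numpy as np

lam0 = 633e-9                     # vacuum wavelength (m)
k0 = 2.0 * np.pi / lam0           # vacuum wavenumber
eps_metal = -11.6 + 1.2j          # assumed gold permittivity near 633 nm (approximate)
eps_diel = 1.0 + 0.0j             # air

# Single-interface SPP dispersion relation, keeping the full complex permittivity.
k_spp = k0 * np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))

lam_spp = 2.0 * np.pi / k_spp.real   # period of the field modulation along the interface
L_prop = 1.0 / (2.0 * k_spp.imag)    # 1/e intensity propagation length

print(f"SPP wavelength: {lam_spp * 1e9:.1f} nm")
print(f"propagation length: {L_prop * 1e6:.1f} um")
```

For a lossy metal the same two lines give a finite propagation length directly, with no perfect-medium approximation needed.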
3,742
Suppose that you have an abstract for a scientific paper: the optical light - curves of grb afterglows 990123 and 021211 exhibit a steep decay at 100600 seconds after the burst , the decay becoming slower after about 10 minutes . we investigate two scenarios for the fast decaying early optical emission of these grb afterglows . in the _ reverse - forward shock _ scenario , this emission arises in the reverse shock crossing the grb ejecta , the mitigation of the light - curve decay occurring when the forward shock emission overtakes that from the reverse shock . both a homogeneous and wind - like circumburst medium are considered . in the _ wind - bubble _ scenario , the steeply decaying , early optical emission arises from the forward shock interacting with a @xmath0 bubble , with a negligible contribution from the reverse shock , the slower decay starting when the blast wave reaches the bubble termination shock and enters a homogeneous region of the circumburst medium . we determine the shock microphysical parameters , ejecta kinetic energy , and circumburst density which accommodate the radio and optical measurements of the grb afterglows 990123 and 021211 . we find that , for a homogeneous medium , the radio and optical emissions of the afterglow 990123 can be accommodated by the reverse - forward shock scenario if the microphysical parameters behind the two shocks differ substantially . a wind - like circumburst medium also allows the reverse - forward shocks scenario to account for the radio and optical properties of the afterglows 990123 and 021211 , but the required wind densities are at least 10 times smaller than those of galactic wolf - rayet stars . the wind - bubble scenario requires a variation of the microphysical parameters when the afterglow fireball reaches the wind termination shock , which seems a contrived feature . . 
And you have already written the first three sentences of the full article: there are currently two grb afterglows for which a fast falling - off optical emission was detected at early times , only @xmath1 seconds after the burst . the general consensus is that this emission arises from the grb ejecta which is energized by the reverse shock ( * rs * ) crossing the ejecta and caused by the interaction of the ejecta with the circumburst medium ( * cbm * ) . this interaction also drives a forward shock ( * fs * ) energizing the swept - up cbm , to which the later afterglow emission is attributed ( the `` reverse - forward shock '' scenario ) .. Please generate the next two sentences of the article
the rs emission was first calculated by mészáros & rees ( 1997 ) , who considered the cases of a frozen - in and turbulent magnetic field in the ejecta , and showed that , in either case , a bright optical emission ( @xmath2 ) is obtained at the end of the burst . mészáros & rees ( 1999 ) extended their previous calculations of the rs emission to a radiative evolution of the fireball lorentz factor and pointed out the importance of spectral information in constraining the rs dynamics and the magnetic field origin from the observed @xmath3 power - law decay of the very early optical light - curve of the afterglow 990123 ( akerlof 1999 ) . they also pointed out the possibility that optical flashes arise in the same internal shocks which generate the burst emission .
3,743
Suppose that you have an abstract for a scientific paper: we call a diagram @xmath0 absolutely cartesian if @xmath1 is homotopy cartesian for all homotopy functors @xmath2 . this is a sensible notion for diagrams in categories @xmath3 where goodwillie s calculus of functors may be set up for functors with domain @xmath3 . we prove a classification theorem for absolutely cartesian squares of spaces and state a conjecture of the classification for higher dimensional cubes . [ multiblock footnote omitted ] let @xmath4 be a small indexing category with initial object @xmath5 and final object 1 . a diagram @xmath0 in a category @xmath3 is a functor @xmath6 ; we restrict ourselves here to @xmath3 being spaces . this diagram is cartesian when @xmath7 is equivalent to the homotopy limit of @xmath0 over @xmath4 with @xmath5 removed , denoted @xmath8 or @xmath9 when @xmath4 is clear from context . similarly , @xmath0 is cocartesian if @xmath10 is equivalent to the homotopy colimit over @xmath4 with the final object removed , denoted @xmath11 ; as in the cartesian case , the @xmath4 subscript is omitted if clear from context and we write @xmath12 . a functor @xmath2 is a homotopy functor if it is weak - equivalence - preserving . we call a diagram @xmath0 absolutely ( co)cartesian if @xmath1 is homotopy ( co)cartesian for all homotopy functors @xmath2 . note that a diagram is an @xmath13 cube if it is indexed by @xmath14)$ ] , the powerset on @xmath15=\{0,1,\ldots n\}$ ] . . And you have already written the first three sentences of the full article: we prove the following classification theorem for absolutely cartesian squares : [ thm : abscartsq ] a square of spaces is absolutely cartesian if and only if it is a map of two absolutely cartesian 1-cubes .
that is , of the following form ( the other two maps may also be equivalences ) : @xmath16^{\sim}\ar[d ] & b\ar[d]\\ c \ar[r]^{\sim } & d\\ } \ ] ] theorem [ thm : abscartsq ] is the base case of our following conjecture : [ conj1 ] an @xmath17-cube of spaces is absolutely cartesian if and only if it can be written as either a map of two absolutely cartesian @xmath18-cubes or a chain of compositions of @xmath17-cubes of these types . it should be clear that building up an @xmath17-cube inductively as maps of these absolutely cartesian squares and compositions of such cubes will yield an absolutely cartesian @xmath17-cube , which is the @xmath19 direction of the if and only if . to be clear , two cubes @xmath20. Please generate the next two sentences of the article
may be composed if they can be written @xmath21 and @xmath22 ; their composition is then @xmath23 . geometrically , this looks like `` glueing '' the cubes along their shared face .
3,744
Suppose that you have an abstract for a scientific paper: in reinforcement learning ( rl ) , it is common to use optimistic initialization of value functions to encourage exploration . however , such an approach generally depends on the domain , viz . , the scale of the rewards must be known , and the feature representation must have a constant norm . we present a simple approach that performs optimistic initialization with less dependence on the domain . . And you have already written the first three sentences of the full article: one of the challenges in rl is the trade - off between exploration and exploitation . the agent must choose between taking an action known to give positive reward and exploring other possibilities , hoping to receive a greater reward in the future . in this context , a common strategy in unknown environments is to assume that unseen states are more promising than those states already seen . one such approach is optimistic initialization of values ( * ? ? ?. Please generate the next two sentences of the article
* section 2.7 ) . several rl algorithms rely on estimates of expected values of states or expected values of actions in a given state @xcite .
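The effect of optimistic initialization is easy to demonstrate in a tabular setting. In the sketch below (a toy deterministic chain, not a domain from the paper), a purely greedy Q-learning agent initialized at r_max / (1 − γ) explores the whole chain and finds the reward, while the same agent initialized at zero never leaves the start state:

```python
import numpy as np

# Toy deterministic chain MDP: states 0..5, actions {0: left, 1: right},
# reward 1 for reaching the rightmost state. Not a domain from the paper.
n_states, n_actions, gamma, alpha = 6, 2, 0.9, 0.5

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1)

def greedy_q_learning(q_init, episodes=200, horizon=30):
    q = np.full((n_states, n_actions), q_init)
    visited = set()
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = int(np.argmax(q[s]))      # purely greedy: no epsilon-exploration
            s2, r = step(s, a)
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            visited.add(s2)
            s = s2
    return q, visited

# Optimistic start: r_max / (1 - gamma) upper-bounds every true value, so each
# tried action's estimate shrinks and the greedy agent is pushed to untried ones.
_, visited_opt = greedy_q_learning(q_init=1.0 / (1.0 - gamma))
_, visited_zero = greedy_q_learning(q_init=0.0)
print("optimistic init visited:", sorted(visited_opt))
print("zero init visited:", sorted(visited_zero))
```

Note the domain dependence the abstract points at: the optimistic constant requires knowing r_max and γ in advance.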
3,745
Suppose that you have an abstract for a scientific paper: in the affleck - dine mechanism of baryogenesis , non - topological solitons called q - balls can be formed . in this work we propose that such q - balls decay during the bbn era and study the cosmological consequence of such late decays . we find that the late - decaying baryonic q - balls with lifetime of about @xmath0 can provide a new developing mechanism for the bbn through a rolling baryon - to - photon ratio @xmath1 , which can naturally explain the discrepancy of the bbn prediction with the wmap data on @xmath2 abundance . for the late - decaying leptonic q - balls with lifetime of about @xmath3 , we find that their decay product , gravitinos , can serve as a dark matter candidate and give an explanation for the approximate equality of dark and baryon matter densities . . And you have already written the first three sentences of the full article: the nature of the matter content of the universe is one of the mysteries in today s physical science . the wilkinson microwave anisotropy probe ( wmap ) collaboration gives fairly accurate values on the contents of our universe @xcite @xmath4 where @xmath1 denotes the baryon - to - photon ratio , and @xmath5 , @xmath6 and @xmath7 denote the density of total matter , baryonic matter and dark energy , respectively . one sees that , coincidentally , the dark matter density is comparable to the dark energy density as well as to the baryonic matter density .. Please generate the next two sentences of the article
such coincidences need to be understood . while an explanation for the coincidence between dark matter and dark energy can be provided in the quintessence scenario , it is hard to give a natural explanation for the coincidence between dark matter and baryonic matter although some efforts have been devoted @xcite . a natural explanation for such a coincidence requires some unification scenario which correlates the baryogenesis with the dark matter generation .
3,746
Suppose that you have an abstract for a scientific paper: the fields of occultation and microlensing are linked historically . early this century , occultation of the sun by the moon allowed the apparent positions of background stars projected near the limb of the sun to be measured and compared with their positions six months later when the sun no longer influenced the light path to earth . the measured shift in the stellar positions was consistent with lensing by the gravitational field of the sun during the occultation , as predicted by the theory of general relativity . this series of lectures explores the principles , possibilities and challenges associated with using occultation and microlensing to discover and characterize unseen planets orbiting distant stars . the two techniques are complementary in terms of the information that they provide about planetary systems and the range of system parameters to which they are most sensitive . although the challenges are large , both microlensing and occultation may provide avenues for the discovery of extra - solar planets as small as earth . . And you have already written the first three sentences of the full article: indirect methods to search for extra - solar planets do not measure emission from the planet itself , but instead seek to discover and quantify the tell - tale effects that the planet would have on the position ( astrometry ) and motion ( radial velocity ) of its parent star , or on the apparent brightness of its parent star ( occultation ) or random background sources ( gravitational microlensing ) . all of these indirect signals have a characteristic temporal behavior that aids in the discrimination between planetary effects and other astrophysical causes .
the variability can be due to the changing position of the planet with respect to the parent star ( astrometry , radial velocity , occultation ) , or the changing position of the complete planetary system with respect to background stars ( microlensing ) .. Please generate the next two sentences of the article
the time - variable photometric signals that can be measured using occultation and microlensing techniques are the focus of this small series of lectures . an occultation is the temporary dimming of the apparent brightness of a parent star that occurs when a planet transits the stellar disk ; this can occur only when the orbital plane is nearly perpendicular to the plane of the sky . because the planet is considerably cooler than its parent star ,
3,747
Suppose that you have an abstract for a scientific paper: the properties of nuclear matter are discussed with the relativistic mean - field theory ( rmf ) . then , we use two models in studying the in - medium properties of @xmath0 : one is the point - like @xmath1 in the usual rmf and the other is a k@xmath2n structure for the pentaquark . it is found that the in - medium properties of @xmath0 are dramatically modified by its internal structure . the effective mass of @xmath0 in medium is , at normal nuclear density , about 1030 mev in the point - like model , while it is about 1120 mev in the model of k@xmath2n pentaquark . the nuclear potential depth of @xmath0 in the k@xmath2n model is approximately @xmath3 mev , much shallower than @xmath4 mev in the usual point - like rmf model . . And you have already written the first three sentences of the full article: the relativistic mean field theory ( rmf ) is one of the most popular methods in modern nuclear physics . it has been successful in describing the properties of ordinary nuclei / nuclear matter and hyper - nuclei / nuclear matter . appropriate effective meson - baryon interactions are essential to the rmf calculation . to describe the nuclear matter and/or finite nuclei , nonlinear self - interactions for @xmath5 and @xmath6 mesons. Please generate the next two sentences of the article
are introduced @xcite . in recent years , a number of effective interactions for meson - baryon couplings , e.g. , the nl - z @xcite , nl3 @xcite , nl - sh @xcite , tm1 , and tm2 @xcite etc . , have been developed . given that rmf has been a favorite model in describing the properties of ordinary nuclei / nuclear matter and hyper - nuclei / nuclear matter , we will study the in - medium properties of @xmath7 within the framework of the relativistic mean field theory .
3,748
Suppose that you have an abstract for a scientific paper: the general features of the møller scattering and its use as an electron polarimeter are described and studied in view of the planned future high energy @xmath0 linear colliders . in particular the study concentrates on the tesla collider which is envisaged to operate with longitudinal polarised beams at a centre of mass energy of the order of 0.5 tev with a luminosity of about @xmath1 = @xmath2 . * desy 00 - 118 * + * møller scattering polarimetry + * for + high energy @xmath3 linear colliders + gideon alexander@xmath4 + institut für physik + humboldt - universität zu berlin , germany + 11015 berlin , germany + and + iuliana cohen school of physics and astronomy + raymond and beverly sackler faculty of exact sciences + tel - aviv university , tel - aviv 69978 , israel + . And you have already written the first three sentences of the full article: for some time now the high energy physics community has been of the opinion that in the near future there will be a need for the facility of a high energy linear @xmath0 collider with a nominal energy around 0.5 tev in the centre of mass ( cm ) system . a conceptual design of such a collider , known under the name tesla , and its physics program is described in some detail in ref . it has further been pointed out that the option of longitudinal polarized electron beams in such high energy colliders , like tesla , will enrich significantly the physics capabilities of the device @xcite . the use of polarised beams requires however a continuous monitoring and sufficiently accurate measurement of the beam polarisation during the entire collider operation .. Please generate the next two sentences of the article
+ in addition to the widely used compton scattering polarimeter , the @xmath5 møller scattering process has also been utilised to evaluate the polarisation level of the electron beams . unlike the compton polarimeter the operation of a møller polarimeter may need dedicated accelerator runs but its relatively simple construction and operation and the large counting rates make it nevertheless a rather attractive device . here
3,749
Suppose that you have an abstract for a scientific paper: we use a lattice boltzmann method to study pattern formation in chemically reactive binary fluids in the regime where hydrodynamic effects are important . the coupled equations solved by the method are a cahn - hilliard equation , modified by the inclusion of a reactive source term , and the navier - stokes equations for conservation of mass and momentum . the coupling is two - fold , resulting from the advection of the order - parameter by the velocity field and the effect of fluid composition on pressure . we study the evolution of the system following a critical quench for a linear and for a quadratic reaction source term . comparison is made between the high and low viscosity regimes to identify the influence of hydrodynamic flows . in both cases hydrodynamics is found to influence the pathways available for domain growth and the eventual steady - states . . And you have already written the first three sentences of the full article: the process of phase separation in chemically reactive mixtures has been considered by several authors . et al _ @xcite and christensen _ et al _ @xcite used a modification of the cahn - hilliard equation to investigate the effects of a linear reaction of the type @xmath0 occurring simultaneously with phase separation following an instantaneous quench .. Please generate the next two sentences of the article
in contrast to phase separation alone , domain coarsening was halted at a length - scale dependent on system parameters resulting in the ` freezing in ' of a spatially heterogeneous pattern . it was recognized that the steady - states resulted from competition between the demixing effects of phase separation and the equivalence of the chemical reaction term to an effective long - range repulsion @xcite .
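The reaction-modified Cahn-Hilliard dynamics this record describes can be sketched in one dimension, leaving out the hydrodynamic coupling that is the paper's actual subject: explicit finite-difference integration of dφ/dt = M ∇²(φ³ − φ − κ∇²φ) − Γφ, where −Γφ is the linear reaction term. The 1D setting and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D modified Cahn-Hilliard equation with a linear reactive source term,
#   dphi/dt = M * lap(phi**3 - phi - kappa * lap(phi)) - Gamma * phi,
# integrated with explicit Euler on a periodic grid (no hydrodynamics here).
N, dx, dt = 128, 1.0, 0.01
M, kappa, Gamma = 1.0, 1.0, 0.02

def lap(f):
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

phi = 0.1 * rng.standard_normal(N)   # small fluctuations around a critical quench
for _ in range(20000):
    mu = phi**3 - phi - kappa * lap(phi)
    phi = phi + dt * (M * lap(mu) - Gamma * phi)

# The -Gamma*phi term arrests coarsening at a finite length scale, freezing in
# a heterogeneous pattern instead of complete phase separation.
print("order parameter range:", phi.min(), phi.max())
```

Linearizing around φ = 0 gives the growth rate σ(k) = M k²(1 − κk²) − Γ, so with these values a band of modes around k² ≈ 1/(2κ) grows and the pattern wavelength is set by Γ rather than by coarsening.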
3,750
Suppose that you have an abstract for a scientific paper: the cortex is a very large network characterized by a complex connectivity including at least two scales : a microscopic scale at which the interconnections are non - specific and very dense , while macroscopic connectivity patterns connecting different regions of the brain at larger scale are extremely sparse . this motivates analyzing the behavior of networks with multiscale coupling , in which a neuron is connected to its @xmath0 nearest - neighbors where @xmath1 , and in which the probability of macroscopic connection between two neurons vanishes . these are called singular multi - scale connectivity patterns . we introduce a class of such networks and derive their continuum limit . we show convergence in law and propagation of chaos in the thermodynamic limit . the limit equation obtained is an intricate non - local mckean - vlasov equation with delays which is universal with respect to the type of micro - circuits and macro - circuits involved . the purpose of this paper is to provide a general convergence and propagation of chaos result for large , spatially extended networks of coupled diffusions with multi - scale disordered connectivity . such networks arise in the analysis of neuronal networks of the brain . indeed , the brain cortical tissue is a large , spatially extended network whose dynamics is the result of a complex interplay of different cells , in particular neurons , electrical cells with stochastic behaviors . in the cortex , neurons interact depending on their anatomical locations and on the feature they code for . the neuronal tissue of the brain constitutes spatially - extended structures presenting complex structures with local , dense and non - specific interactions ( microcircuits ) and long - distance lateral connectivity that are function - specific .
in other words , a given cell in the cortex sends its projections at ( i ) a local scale : the neurons connect extensively to anatomically close cells ( the _ microcircuits _ ) , forming.... And you have already written the first three sentences of the full article: we consider a piece of cortex @xmath5 ( the _ neural field _ ) , which is a regular compact subset when representing locations on the cortex , or periodic domains such as the torus of dimension 1 @xmath6 in the case of the representation of the visual field , in which neurons code for a specific orientation in the visual stimulus : in that model , @xmath5 is considered to be the feature space @xcite . ] of @xmath7 for some @xmath8 , and the density of neurons on @xmath5 is given by a probability measure @xmath9 assumed to be absolutely continuous with respect to lebesgue s measure @xmath10 on @xmath5 , with strictly positive and bounded density @xmath11 $ ] . on @xmath5 , we consider a spatially extended network composed of @xmath12 neurons at random locations @xmath13 drawn independently with law @xmath14 in a probability space @xmath15 , and we will denote by @xmath16 the expectation with respect to this probability space . a given neuron @xmath17 projects local connections in its neighborhood @xmath18 , and long - range connections over the whole neural field . we will consider here that the local microcircuit connectivity consists of a fully connected graph with @xmath1 nearest - neighbors .. Please generate the next two sentences of the article
the synaptic weights corresponding to these connections are assumed equal to @xmath19 where @xmath20 ( it is generally positive since local interactions in the cortex tend to be excitatory ) . a central example is the case @xmath21 with @xmath22 . with zero probability
3,751
Suppose that you have an abstract for a scientific paper: the pamela apparatus has been assembled and it is ready to be launched in a satellite mission to study mainly the antiparticle component of cosmic rays . in this paper the performances obtained for the silicon microstrip detectors used in the magnetic spectrometer are presented . this subdetector reconstructs the curvature of a charged particle in the magnetic field produced by a permanent magnet and consequently determines momentum and charge sign , thanks to a very good accuracy in the position measurements ( better than @xmath0 m in the bending coordinate ) . a complete simulation of the silicon microstrip detectors has been developed in order to investigate in great detail the sensor s characteristics . simulated events have then been compared with data gathered from minimum ionizing particle ( mip ) beams during the last years in order to tune free parameters of the simulation . finally some either widely used or original position finding algorithms , designed for such kind of detectors , have been applied to events with different incidence angles . as a result of the analysis , a method of impact point reconstruction can be chosen , depending on both the particle s incidence angle and the cluster multiplicity , so as to maximize the capability of the spectrometer in antiparticle tagging . silicon microstrip detectors , spatial resolution , position finding algorithms 29.40.gx , 29.40.wk , 07.05.tp . And you have already written the first three sentences of the full article: the pamela telescope @xcite will be put in orbit within 2005 on board the resurs dk1 russian satellite for a three year long mission on an orbit ( @xmath1 deg . inclination , @xmath2 to @xmath3 km height ) to study the cosmic ray flux , with a special interest in the antimatter component . the detector is composed of several subsystems , schematically shown in fig .. Please generate the next two sentences of the article
[ fig : pamela ] : a time of flight ( tof ) apparatus , which also provides the trigger signal , a solid state magnetic spectrometer @xcite , surrounded by an anticoincidence shield , and an electromagnetic calorimeter @xcite in which single sided silicon detector planes are interleaved with tungsten absorber up to a total thickness of about @xmath4 radiation lengths . anticoincidence scintillators define the external geometry of the detector and their signals will be exploited in the off line rejection of spurious tracks ; below the calorimeter another scintillator plane ( s4 ) and a neutron detector can provide additional information when showers are not fully contained in the calorimeter .
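Two common impact-point estimators for silicon-microstrip clusters of the kind compared in this record are the center-of-gravity (COG) of the strip signals and the two-strip η (eta) variable. The sketch below uses a made-up cluster and an assumed readout pitch; it shows only the generic algorithms, not the tuned PAMELA reconstruction.

```python
import numpy as np

# One silicon-microstrip cluster: strip indices and their signals (ADC counts).
# The cluster values and the 51 um readout pitch are made-up illustration numbers.
pitch = 51.0e-6
strips = np.array([3, 4, 5, 6])
signals = np.array([12.0, 85.0, 60.0, 8.0])

# Center-of-gravity (COG): signal-weighted mean strip position.
x_cog = pitch * np.sum(strips * signals) / np.sum(signals)

# Two-strip eta variable: charge sharing between the seed strip and its
# larger neighbor, often used to correct the COG's systematic bias.
i = int(np.argmax(signals))
j = i + 1 if (i + 1 < len(signals) and (i == 0 or signals[i + 1] >= signals[i - 1])) else i - 1
left, right = sorted((i, j))
eta = signals[right] / (signals[left] + signals[right])
x_eta = pitch * (strips[left] + eta)

print(f"COG position: {x_cog * 1e6:.2f} um, eta position: {x_eta * 1e6:.2f} um, eta = {eta:.3f}")
```

In practice the choice between such estimators depends on incidence angle and cluster multiplicity, which is exactly the optimization the abstract describes.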
3,752
Suppose that you have an abstract for a scientific paper: we study the interaction between gas and dust particles in a protoplanetary disk , comparing analytical and numerical results . we first calculate analytically the trajectories of individual particles undergoing gas drag in the disk , in the asymptotic cases of very small particles ( epstein regime ) and very large particles ( stokes regime ) . using a boltzmann averaging method , we then infer their collective behavior . we compare the results of this analytical formulation against numerical computations of a large number of particles . using successive moments of the boltzmann equation , we derive the equivalent fluid equations for the average motion of the particles ; these are intrinsically different in the epstein and stokes regimes . we are also able to study analytically the temporal evolution of a collection of particles with a given initial size - distribution provided collisions are ignored . . And you have already written the first three sentences of the full article: in an attempt to account for the coplanar nature of the orbits of all known solar - system planets , laplace ( 1796 ) postulated that they were formed in a common disk around the protosun . today , the detection of protostellar disks around most young t - tauri stars ( prosser _ et al . _ 1994 ) is strong evidence that the laplace nebula hypothesis is universally applicable .. Please generate the next two sentences of the article
the recent discovery of planets around at least 10% of nearby solar - type stars ( marcy _ et al . _ 2000 ) suggests that their formation may be a robust process .
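In the Epstein regime referenced by this record's abstract, the drag on an individual particle is linear in the velocity difference with the gas, dv/dt = −(v − v_gas)/τ, so the particle velocity relaxes exponentially toward the gas velocity on the stopping time τ. A minimal numerical check of that behavior (all values are dimensionless illustration numbers, and the gas velocity is held constant):

```python
import numpy as np

# Epstein-regime drag on one particle: dv/dt = -(v - v_gas) / tau,
# i.e. exponential relaxation toward the gas velocity on the stopping time tau.
# All quantities are dimensionless illustration values.
tau, v_gas, v0 = 2.0, 1.0, 0.0
dt, n = 1e-3, 5000

v = v0
for _ in range(n):
    v += dt * (-(v - v_gas) / tau)   # forward Euler

t = n * dt
v_exact = v_gas + (v0 - v_gas) * np.exp(-t / tau)
print(f"after t = {t}: numeric v = {v:.5f}, exact v = {v_exact:.5f}")
```

In the Stokes regime the drag instead scales with the square of the relative velocity, so the relaxation is no longer a simple exponential; that difference is what makes the averaged fluid equations of the two regimes intrinsically different.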
3,753
Suppose that you have an abstract for a scientific paper: we propose a geometric phase gate in a decoherence - free subspace with trapped ions . the quantum information is encoded in the zeeman sublevels of the ground state and two physical qubits to make up one logical qubit with ultra long coherence time . single- and two - qubit operations together with the transport and splitting of linear ion crystals allow for a robust and decoherence - free scalable quantum processor . for the ease of the phase gate realization we employ one raman laser field on four ions simultaneously , i.e. no tight focus for addressing . the decoherence - free subspace is left neither during gate operations nor during the transport of quantum information . . And you have already written the first three sentences of the full article: trapped ions are among the most promising physical systems for implementing quantum information due to their long coherence time as compared with the times required for quantum logic operations @xcite . a robust quantum memory is a crucial part of the realization of an ion trap based quantum computer @xcite . one may distinguish different possibilities for encoding a qubit in a trapped ion : either one uses a long - lived metastable state and drives coherent transitions on the corresponding optical transition @xcite , which sets challenging requirements on the laser source and ultimately limits the coherence time to the lifetime of the metastable state .. Please generate the next two sentences of the article
alternatively , a qubit can be encoded in sublevels of the electronic ground state . this may be either hyperfine ground state levels @xcite or zeeman ground states @xcite which are coherently manipulated by means of stimulated raman transitions . for conventional single - ion qubits encoded in the zeeman sublevels as in @xmath0ca@xmath1
3,754
Suppose that you have an abstract for a scientific paper: markov chain monte carlo sampling methods often suffer from long correlation times . consequently , these methods must be run for many steps to generate an independent sample . in this paper a method is proposed to overcome this difficulty . the method utilizes information from rapidly equilibrating coarse markov chains that sample marginal distributions of the full system . this is accomplished through exchanges between the full chain and the auxiliary coarse chains . results of numerical tests on the bridge sampling and filtering / smoothing problems for a stochastic differential equation are presented . in order to understand the behavior of a physical system it is often necessary to generate samples from complicated high dimensional distributions . the usual tools for sampling from these distributions are markov chain monte carlo methods ( mcmc ) by which one constructs a markov chain whose trajectory averages converge to averages with respect to the distribution of interest . for some simple systems it is possible to construct markov chains with independent values at each step . in general , however , spatial correlations in the system of interest result in long correlation times in the markov chain and hence slow convergence of the chain s trajectory averages . in this paper , a method is proposed to alleviate the difficulties caused by spatial correlations in high dimensional systems . the method , parallel marginalization , is tested on two stochastic differential equation conditional path sampling problems . parallel marginalization takes advantage of the shorter correlation lengths present in marginal distributions of the target density . auxiliary markov chains that sample approximate marginal distributions are evolved simultaneously with the markov chain that samples the distribution of interest .
by swapping their configurations , these auxiliary chains pass information between themselves and with the chain sampling the original distribution . as shown below , these.... And you have already written the first three sentences of the full article: for the purposes of the discussion in this section , we assume that appropriate approximate marginal distributions are available . as discussed in a later section , they may be provided by coarse models of the physical problem as in the examples below , or they may be calculated via the methods in @xcite and @xcite . assume that the @xmath0 dimensional system of interest has a probability density , @xmath1 , where @xmath2 .. Please generate the next two sentences of the article
suppose further that , by the metropolis - hastings or any other method ( see @xcite ) , we can construct a markov chain , @xmath3 , which has @xmath4 as its stationary measure . that is , for two points @xmath5 @xmath6 where @xmath7 is the probability density of a move to @xmath8 given that @xmath9 .
3,755
Suppose that you have an abstract for a scientific paper: the antares telescope is well - suited to detect neutrinos produced in astrophysical transient sources as it can observe a full hemisphere of the sky at all times with a high duty cycle . radio - loud active galactic nuclei with jets pointing almost directly towards the observer , the so - called blazars , are particularly attractive potential neutrino point sources . the all - sky monitor lat on board the fermi satellite probes the variability of any given gamma - ray bright blazar in the sky on time scales of hours to months . assuming hadronic models , a strong correlation between the gamma - ray and the neutrino fluxes is expected . selecting a narrow time window on the assumed neutrino production period can significantly reduce the background . an unbinned method based on the minimization of a likelihood ratio was applied to a subsample of data collected in 2008 ( 61 days live time ) . by searching for neutrinos during the high state periods of the agn light curve , the sensitivity to these sources was improved by about a factor of two with respect to a standard time - integrated point source search . first results on the search for neutrinos associated with ten bright and variable fermi sources are presented . .... And you have already written the first three sentences of the full article: neutrinos are unique messengers to study the high - energy universe as they are neutral and stable , interact weakly and therefore travel directly from their point of creation to the earth without absorption .
neutrinos could play an important role in understanding the mechanisms of cosmic ray acceleration and their detection from a cosmic source would be direct evidence of the presence of hadronic acceleration . the production of high - energy neutrinos has been proposed for several kinds of astrophysical sources , such as active galactic nuclei ( agn ) , gamma - ray bursters ( grb ) , supernova remnants and microquasars , in which the acceleration of hadrons may occur ( see ref .. Please generate the next two sentences of the article
@xcite for a review ) . flat - spectrum radio quasars ( fsrqs ) and bl lacs , classified as agn blazars , exhibit relativistic jets pointing almost directly towards the earth and are some of the most violent variable high energy phenomena in the universe @xcite .
3,756
Suppose that you have an abstract for a scientific paper: understanding the mechanism of electroweak symmetry breaking and the origin of boson and fermion masses is among the most pressing questions raised in contemporary particle physics . if these issues involve one ( several ) higgs boson(s ) , a precise measurement of all its ( their ) properties will be of prime importance . among those , the higgs coupling to matter fermions ( the yukawa coupling ) . at a linear collider , the process @xmath0 will allow in principle a direct measurement of the top - higgs yukawa coupling . we present a realistic feasibility study of the measurement in the context of the tesla collider . four channels are studied and the analysis is repeated for several higgs mass values within the range 120 gev / c@xmath1 - 200 gev / c@xmath1 . And you have already written the first three sentences of the full article: the gauge sector of electroweak interactions has been checked to coincide with the standard model ( sm ) prediction to the per - mil level , at lep and slc . on the contrary , there is no direct experimental evidence for the higgs mechanism , supposed to be responsible for electroweak symmetry breaking and the generation of masses . direct search of the higgs boson at lep yields the lower limit @xcite : @xmath2 gev / c@xmath1 at @xmath3 cl . precision measurements on the other hand give @xcite : @xmath4 gev / c@xmath1 at @xmath5 cl . once a higgs particle is found , if ever , all its properties should be measured precisely to completely characterise the higgs mechanism . among those ,. Please generate the next two sentences of the article
the coupling of the higgs boson to fermions ( the yukawa coupling ) , which is supposed to scale with the fermion mass : @xmath6 where @xmath7 is the yukawa coupling of a fermion f of mass @xmath8 and @xmath9 is the vacuum expectation value of the higgs field , @xmath10 gev . the top quark is the heaviest fermion , thus the top - higgs yukawa coupling should be the easiest to measure . if @xmath11 , this parameter can be measured through the branching ratio of the higgs boson decay into a pair of top quarks .
3,757
Suppose that you have an abstract for a scientific paper: the stochastic mutual repressor model is analysed using perturbation methods . this simple model of a gene circuit consists of two genes and three promotor states . either of the two protein products can dimerize , forming a repressor molecule that binds to the promotor of the other gene . when the repressor is bound to a promotor , the corresponding gene is not transcribed and no protein is produced . either one of the promotors can be repressed at any given time or both can be unrepressed , leaving three possible promotor states . this model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle , and the case of small noise is considered . on small time scales , the stochastic process fluctuates near one of the stable fixed points , and on large time scales , a metastable transition can occur , where fluctuations drive the system past the unstable saddle to the other stable fixed point . to explore how different intrinsic noise sources affect these transitions , fluctuations in protein production and degradation are eliminated , leaving fluctuations in the promotor state as the only source of noise in the system . perturbation methods are then used to compute the stability landscape and the distribution of transition times , or first exit time density . to understand how protein noise affects the system , small magnitude fluctuations are added back into the process , and the stability landscape is compared to that of the process without protein noise . it is found that significant differences in the random process emerge in the presence of protein noise . . And you have already written the first three sentences of the full article: random molecular interactions can have profound effects on gene expression . 
because the expression of a gene can be regulated by a single promotor , and because the number of mrna copies and protein molecules is often small , deterministic models of gene expression can miss important behaviors . a deterministic model might show multiple possible stable behaviors , any of which can be realized depending on the initial conditions of the system .. Please generate the next two sentences of the article
different stable behaviors that depend on initial conditions allow for variability in response and adaptation to environmental conditions @xcite . although in some cases , noise from multiple sources can push the behavior far from the deterministic model , here we focus on the situation where the system fluctuates close to the deterministic trajectory ( i.e. , weak noise ) . of particular interest
3,758
Suppose that you have an abstract for a scientific paper: we compute the light hadron mass spectrum at @xmath0 using the @xmath1-improved sheikholeslami - wohlert ( sw ) fermion action with two choices of the clover coefficient : the classical value , @xmath2 , and a mean - field or tadpole - improved estimate @xmath3 . we compare our results with those of the gf11 collaboration who use the wilson fermion action ( @xmath4 ) . we find that changing @xmath5 from zero to 1 and 1.57 leads to significant differences in the masses of the chirally extrapolated and strange pseudoscalar and vector mesons , the nucleon , the @xmath6 , and also in the edinburgh plot . a number of other quantities , for example @xmath7 , @xmath8 , @xmath9 and @xmath10 do not appear to change significantly . we also investigate the effect of changing the lattice volume from approximately @xmath11 to @xmath12 . we find that the meson masses are consistent to within one standard deviation and baryon masses are consistent to within two standard deviations . . And you have already written the first three sentences of the full article: the _ ab initio _ calculation of the light hadron spectrum is a major goal of lattice qcd . a calculation of the light - hadron spectrum giving results in good agreement with experiment would be a demonstration that qcd describes long - distance strong - interaction physics . furthermore , the calculation is an essential precursor to the calculation of other non - perturbative observables in qcd , such as @xmath13 , @xmath14 , leptonic and semi - leptonic decay matrix elements and the moments of the nucleon structure function .. Please generate the next two sentences of the article
lattice calculations are however subject to systematic errors from the non - zero lattice spacing , the finite volume of the lattice , the extrapolation in the valence quark mass to the chiral limit , and the quenched approximation . in this paper , the effects of the first two sources of error will be examined .
3,759
Suppose that you have an abstract for a scientific paper: kinetics of phase separation transition in boson - fermion cold atom mixtures is investigated . we identify the parameters at which the transition is governed by quantum nucleation mechanism , responsible for the formation of critical nuclei of a stable phase . we demonstrate that for low fermion - boson mass ratio the density dependence of quantum nucleation transition rate is experimentally observable . the crossover to macroscopic quantum tunneling regime is analyzed . based on a microscopic description of interacting cold atom boson - fermion mixtures we derive an effective action for the critical droplet and obtain an asymptotic expression for the nucleation rate in the vicinity of the phase transition and near the spinodal instability of the mixed phase . we show that dissipation due to excitations in the fermion subsystem plays a dominant role close to the transition point . . And you have already written the first three sentences of the full article: macroscopic metastable states of trapped cold atom systems have been a subject of active experimental and theoretical study for more than a decade @xcite . unlike a homogeneous system of bosons , where infinitesimally small attractive interaction between atoms leads to a collapse , trapped bosons are known to form long lived bose - einstein condensates @xcite ( bec ) due to zero - point energy which , for sufficiently low densities , can compensate the negative interaction energy thus maintaining the system in equilibrium . upon increasing the bec density , interaction energy grows , and , at some instability point ( i.e. , at a certain number of particles in the trap @xmath0 , with @xmath1 for a typical trap ) , zero - point energy can no longer sustain the negative pressure due to the interactions and the system collapses .. Please generate the next two sentences of the article
it has been argued in the literature @xcite that near the instability point ( for bec densities slightly lower than the instability density ) , the effective energy barrier that prevents bec from collapsing becomes so low that the system can quantum mechanically tunnel into the dense ( collapsed ) state . such phenomenon of macroscopic quantum tunneling ( mqt ) , however , has never been observed experimentally due to a strong dependence of the barrier height on the total number of particles in the trap ( @xmath2 ) .
3,760
Suppose that you have an abstract for a scientific paper: gauged @xmath0 model has been advocated for a long time in light of muon @xmath1 anomaly , which is a more than @xmath2 discrepancy between the experimental measurement and the standard model prediction . we augment this model with three right - handed neutrinos @xmath3 and a vector - like singlet fermion @xmath4 to explain simultaneously the non - zero neutrino mass and dark matter content of the universe , while satisfying anomalous muon @xmath1 constraints . it is shown that in a large parameter space of this model we can explain positron excess , observed at pamela , fermi - lat and ams-02 , through dark matter annihilation , while satisfying the relic density and direct detection constraints . . And you have already written the first three sentences of the full article: the standard model ( sm ) of elementary particle physics , which is based on the gauge group @xmath5 is very successful in explaining the fundamental interactions of nature . with the recent discovery of higgs at lhc , the sm seems to be complete . however , it has certain limitations .. Please generate the next two sentences of the article
for example , the muon @xmath1 anomaly , which is a discrepancy between the observed value and the sm prediction at more than @xmath6 confidence level @xcite . similarly , it does not explain sub - ev masses of active neutrinos as confirmed by long baseline oscillation experiments @xcite .
3,761
Suppose that you have an abstract for a scientific paper: we report a detailed spectroscopic investigation of temperature - induced valence and structural instability of the mixed - stack organic charge - transfer ( ct ) crystal 4,4-dimethyltetrathiafulvalene - chloranil ( dmttf - ca ) . dmttf - ca is a derivative of tetrathiafulvalene - chloranil ( ttf - ca ) , the first ct crystal exhibiting the neutral - ionic transition by lowering temperature . we confirm that dmttf - ca undergoes a continuous variation of the ionicity on going from room temperature down to @xmath0 20 k , but remains on the neutral side throughout . the stack dimerization and cell doubling , occurring at 65 k , appear to be the driving forces of the transition and of the valence instability . in a small temperature interval just below the phase transition we detect the coexistence of molecular species with slightly different ionicities . the peierls mode(s ) precursors of the stack dimerization are identified . . And you have already written the first three sentences of the full article: organic charge - transfer ( ct ) crystals made up by @xmath1 electron - donor ( d ) and electron acceptor ( a ) molecules often exhibit a typical stack structure , with d and a molecules alternating along one direction.@xcite the quasi - one - dimensional electronic structure is stabilized by the ct interaction between d and a , so that the ground state average charge on the molecular sites , or degree of ionicity , @xmath2 , assumes values between 0 and 1 . crystals characterized by @xmath3 0.5 are _ conventionally _ classified as quasi - neutral ( n ) , as opposed to the quasi - ionic ( i ) ones , with @xmath4 0.5 . 
as discussed for the prototypical system of tetrathiafulvalene - chloranil ( ttf - ca),@xcite a few ct salts have n - i and peierls transitions , in which @xmath2 changes rapidly and the regular stack dimerizes , yielding a potentially ferroelectric ground state.@xcite n - i transitions are valence instabilities implying a _ collective _ ct between d and a sites , and as such are accompanied by many intriguing phenomena , such as dielectric constant anomalies , current - induced resistance switching , relaxor ferroelectricity , and so on.@xcite the isostructural series formed by 4,4-dimethyltetrathiafulvalene ( dmttf ) with substituted cas , in which one or more chlorine atoms are replaced by bromine atoms , is particularly interesting . in this case , in fact , the transition temperature and related anomalies can be lowered towards zero by chemical or physical pressure , attaining the conditions of a quantum phase transition.@xcite although several aspects of the n - i transition in the br - substituted dmttf - ca family are worth further study , the motivation of the present work is far more limited , as we want first of all to clarify the mechanism of the transition in the pristine compound , dmttf - ca . despite intensive studies,@xcite the transition still presents controversial aspects . through visible reflectance spectra of single crystals and absorption spectra of the powders , aoki@xcite.... Please generate the next two sentences of the article
work:@xcite at 65 k the unit cell doubles along the _ c _ axis ( _ a _ is the stack axis ) . the order parameter of the transition , which is second - order , is the cell doubling coupled with the dimerization.@xcite so above 65 k the cell contains one stack , and at 40 k contains two stacks , both dimerized , and inequivalent ( space group @xmath7 ) . from the bond distances , @xmath2 is estimated at 0.3 and 0.7 - 0.8 for the two stacks , respectively.@xcite in this view , and considering that the two stacks are dimerized in anti - phase , at low temperature dmttf - ca has a _ ferrielectric _ ground state .
3,762
Suppose that you have an abstract for a scientific paper: the decay rate of late time tails in the kerr spacetime has been the cause of numerous conflicting results , both analytical and numerical . in particular , there is much disagreement on whether the decay rate of an initially pure multipole moment @xmath0 is according to @xmath1 , where @xmath2 is the least multipole moment whose excitation is not disallowed , or whether the decay rate is according to @xmath3 , where @xmath4 . we do careful 2 + 1d numerical simulations , and explain the various results . in particular , we show that pure multipole outgoing initial data in either boyer lindquist or ingoing kerr coordinates on the corresponding slices lead to the same late time tail behavior . we also show that similar initial data specified in terms of the poisson spherical coordinates lead to the simpler @xmath1 late time tail . we generalize the rule @xmath4 to subdominant modes , and also study the behavior of non axisymmetric initial data . we discuss some of the causes for possible errors in 2 + 1d simulations , demonstrate that our simulations are free of those errors , and argue that some conflicting past results may be attributed to them . . And you have already written the first three sentences of the full article: the late - time tails of black holes have been studied in much detail since price s seminal work @xcite . the formulation of the problem is a straightforward one : place an observer in a circular orbit around a black hole , and have her measure at late times a generic perturbation field , that had compact support at some initial time . it is generally accepted that the observer measures the late - time perturbation field to drop off as an inverse power law of time , specifically as @xmath3 .. Please generate the next two sentences of the article
it is the value of @xmath5 that has been controversial in the literature , with some conflicting results reported . in the case of a schwarzschild black hole , @xmath6 , where @xmath7 is the multipole moment of the initial perturbation field . namely , if the initial ( compactly supported ) perturbation field has the angular dependence of @xmath8 , the angular dependence remains unchanged ( spherical harmonics are " eigenvectors " of the laplacian operator ) , and the decay rate of the field is governed by the @xmath7 value of the initial perturbation .
3,763
Suppose that you have an abstract for a scientific paper: this is the second paper of a series in which we present new measurements of the observed rates of supernovae ( sne ) in the local universe , determined from the lick observatory supernova search ( loss ) . in this paper , a complete sn sample is constructed , and the observed ( uncorrected for host - galaxy extinction ) luminosity functions ( lfs ) of sne are derived . these lfs solve two issues that have plagued previous rate calculations for nearby sne : the luminosity distribution of sne and the host - galaxy extinction . we select a volume - limited sample of 175 sne , collect photometry for every object , and fit a family of light curves to constrain the peak magnitudes and light - curve shapes . the volume - limited lfs show that they are not well represented by a gaussian distribution . there are notable differences in the lfs for galaxies of different hubble types ( especially for sne ia ) . we derive the observed fractions for the different subclasses in a complete sn sample , and find significant fractions of sne ii - l ( 10% ) , iib ( 12% ) , and iin ( 9% ) in the sn ii sample . furthermore , we derive the lfs and the observed fractions of different sn subclasses in a magnitude - limited survey with different observation intervals , and find that the lfs are enhanced at the high - luminosity end and appear more " standard " with smaller scatter , and that the lfs and fractions of sne do not change significantly when the observation interval is shorter than 10 d. we also discuss the lfs in different galaxy sizes and inclinations , and for different sn subclasses . some notable results are that there is not a strong correlation between the sn lfs and the host - galaxy size , but there might be a preference for sne iin to occur in small , late - type spiral galaxies .
the lfs in different inclination bins do not provide strong evidence for extreme extinction in highly inclined galaxies , though the sample is still small . the lfs of different sn subclasses show significant.... And you have already written the first three sentences of the full article: the luminosity function ( lf ) is used to describe the distribution of intrinsic brightness for a particular type of celestial object , and it is always intimately connected to the physical processes leading to the formation of the object of interest . specifically , the lf of supernovae ( sne ) , among the most luminous and exciting transients , will provide important information on their progenitor systems and their evolutionary paths . the intrinsic lf of core - collapse sne ( cc sne , hereafter ) can constrain the distribution of ways that massive stars die at different initial masses ( smith et al .. Please generate the next two sentences of the article
2011a ) , and that of sne ia can illuminate how accreting white dwarfs in the various binary systems result in a thermonuclear explosion . the observed lf of sne will provide information on the extinction they experienced in their host galaxies and their immediate environments , thus giving further clues to their physical origins . from an observational point of view
3,764
Suppose that you have an abstract for a scientific paper: semiclassical theories like the thomas - fermi and wigner - kirkwood methods give a good description of the smooth average part of the total energy of a fermi gas in some external potential when the chemical potential is varied . however , in systems with a fixed number of particles @xmath0 , these methods overbind the actual average of the quantum energy as @xmath0 is varied . we describe a theory that accounts for this effect . numerical illustrations are discussed for fermions trapped in a harmonic oscillator potential and in a hard wall cavity , and for self - consistent calculations of atomic nuclei . in the latter case , the influence of deformations on the average behavior of the energy is also considered . . And you have already written the first three sentences of the full article: a basic problem in the physics of finite fermion systems such as , e.g. , atoms , nuclei , helium clusters , metal clusters , or semiconductor quantum dots , is the determination of the ground - state energy @xmath1 . a standard decomposition , deeply rooted in the connection of classical and quantum physics , is to write @xmath1 as the sum of an average energy @xmath2 and a fluctuating part @xmath3 @xcite : @xmath4 the largest contribution , @xmath2 , is a smooth function of the number @xmath0 of fermions . the shell correction @xmath3 has a pure quantal origin and displays , instead , an oscillatory behavior as a function of @xmath0 . equation ( [ eq1 ] ) underlies the usefulness of the so - called mass formulae , like the liquid drop model for nuclei or for metal clusters , of which the oldest example is the well - known bethe - von weizscker mass formula for the binding energy of nuclei .. Please generate the next two sentences of the article
the decomposition ( [ eq1 ] ) is also at the basis of semiclassical and statistical techniques that are used to investigate how the properties of global character of fermion systems vary with the particle number @xmath0 . such is the case for instance of the celebrated thomas - fermi and wigner - kirkwood theories @xcite .
3,765
Suppose that you have an abstract for a scientific paper: heavy fermion ( hf ) materials exhibit a rich array of phenomena due to the strong kondo coupling between their localized moments and itinerant electrons . a central question in their study is to understand the interplay between magnetic order and charge transport , and its role in stabilizing new quantum phases of matter . particularly promising in this regard is a family of tetragonal intermetallic compounds ce@xmath0@xmath1 ( @xmath2 transition metal , @xmath3 pnictogen ) , that includes a variety of hf compounds showing @xmath4-linear electronic specific heat @xmath5 , with @xmath6 20 - 500 mj@xmath7mol@xmath8 k@xmath9 , reflecting an effective mass enhancement ranging from small to modest . here , we study the low - temperature field - tuned phase diagram of high - quality ceagbi@xmath1 using magnetometry and transport measurements . we find an antiferromagnetic transition at @xmath10 k with weak magnetic anisotropy and the easy axis along the @xmath11-axis , similar to previous reports ( @xmath12 k ) . this scenario , along with the presence of two anisotropic ruderman - kittel - kasuya - yosida ( rkky ) interactions , leads to a rich field - tuned magnetic phase diagram , consisting of five metamagnetic transitions of both first and second order . in addition , we unveil an anomalous hall contribution for fields @xmath13 koe which is drastically altered when @xmath14 is tuned through a trio of transitions at 57 , 78 , and 84 koe , suggesting that the fermi surface is reconstructed in a subset of the metamagnetic transitions . in heavy fermion ( hf ) materials , the kondo coupling between local moments and itinerant electrons plays a central role in determining magnetic and transport properties , particularly at low temperatures . 
classic examples of the unusual behavior include quantum criticality in ybrh@xmath15si@xmath15 @xcite , unconventional superconductivity in cecoin@xmath16 @xcite , and metamagnetism in ceru@xmath15si@xmath15 @xcite . ce - based hf materials.... And you have already written the first three sentences of the full article: [ fig : randchi]a shows the temperature dependence of in - plane resistivity , @xmath30 , down to 0.5 k at zero magnetic field . at high temperatures ( @xmath46 k ) , @xmath30 shows metallic behavior , decreasing linearly with decreasing temperature . however , further decrease in temperature reveals a resistivity minimum , followed by a logarithmic increase due to incoherent kondo scattering . below @xmath47. Please generate the next two sentences of the article
k , @xmath30 drops abruptly , suggesting that this is the energy scale of either cef depopulation or kondo coherence . we will discuss these possibilities below .
3,766
Suppose that you have an abstract for a scientific paper: we introduce a natural generalization of the forward - starting options , first discussed by m. rubinstein ( @xcite ) . the main feature of the contract presented here is that the strike - determination time is not fixed ex - ante , but allowed to be random , usually related to the occurrence of some event , either of financial nature or not . we will call these options * random time forward starting ( rtfs)*. we show that , under an appropriate " martingale preserving " hypothesis , we can exhibit arbitrage free prices , which can be explicitly computed in many classical market models , at least under independence between the random time and the assets prices . practical implementations of the pricing methodologies are also provided . finally a credit value adjustment formula for these otc options is computed for the unilateral counterparty credit risk . * keywords * : random times , forward - starting options , cva . * jel classification * : g13 . And you have already written the first three sentences of the full article: forward - starting options are path dependent put / call financial contracts characterized by having a strike price expressed in terms of a pre - specified percentage @xmath0 of the asset price taken at a contractually fixed intermediate date @xmath1 $ ] , @xmath2 being the option maturity . the time @xmath3 is known as _ strike - determination time_. the payoff of a forward starting call is therefore @xmath4 these products represent the fundamental component of the so - called cliquets ( see @xcite ) , which are indeed equivalent to a series of forward starting at - the - money options , activated along a sequence of intermediate dates , upon payment of an initial premium . cliquets are often employed to buy protection against downside risk , though preserving an upside potential , for instance in pension plans in order to hedge the guarantees attached to embedded equity linked products .
Please generate the next two sentences of the article
wilmott in @xcite showed that these products are particularly sensitive to the model that one chooses for the dynamics of the underlying's price . in this paper we study a generalization of forward starting options allowing for random strike - determination times .
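As background to this generalization: for the classical fixed - time forward starting call of rubinstein , with payoff ( s_T - k s_t )^+ , homogeneity of the black - scholes formula reduces the time-0 price to the spot times a call on a unit underlying with strike k and maturity T - t . a minimal sketch under black - scholes assumptions ( constant rate and volatility , no dividends ; the numerical parameters below are illustrative , not taken from the paper ) :

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, r: float, sigma: float, tau: float) -> float:
    """Black-Scholes price of a European call, no dividends."""
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return spot * norm_cdf(d1) - strike * exp(-r * tau) * norm_cdf(d2)

def forward_start_call(s0: float, k: float, r: float, sigma: float,
                       t: float, maturity: float) -> float:
    """Time-0 price of the payoff (S_T - k*S_t)^+ with a fixed
    strike-determination time t.  By homogeneity the time-t value is
    S_t * C(1, k, T - t), and e^{-r t} S_t is a martingale, so the
    discounted expectation replaces S_t by s0."""
    return s0 * bs_call(1.0, k, r, sigma, maturity - t)

# Illustrative parameters: strike set at 100% of the time-t asset price.
price = forward_start_call(s0=100.0, k=1.0, r=0.02, sigma=0.25, t=0.5, maturity=1.5)
```

the random - time contracts of the paper replace the fixed t by a random time , so this closed form covers only the classical special case that the paper generalizes .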
3,767
Suppose that you have an abstract for a scientific paper: ( submitted to physical review d , rapid communication ) . And you have already written the first three sentences of the full article: in the experiment in which they try to detect the neutrino oscillation , by using the size of the earth and measuring the zenith angle distribution of the atmospheric neutrino events , such as the superkamiokande experiment[1 ] , hereafter simply sk , it is demanded that the measurements of the direction of the incident neutrino are carried out as reliably as possible . among the experiments concerned with neutrino oscillation , the analysis of fully contained events in sk is regarded as a mostly ambiguity - free one , because the essential information to extract a clear conclusion is stored inside the detector . in sk , they assume that the direction of the neutrino concerned is the same as that of the produced charged lepton ( hereafter , simply the sk assumption)[2,3 ] . however , the sk assumption does not hold in just the energy region concerned for neutrino events produced inside the detector , which is shown later .. Please generate the next two sentences of the article
+ in the energy region where fully contained events and partially contained events ( single ring events ) are analysed , quasi elastic scattering of the neutrino interaction ( qel ) is the dominant source for the atmospheric neutrino events concerned[4 ] . the differential cross section for qel is given as follows [ 5 ] .
3,768
Suppose that you have an abstract for a scientific paper: in mean field approximation , the grand canonical potential of su(3 ) polyakov linear-@xmath0 model ( plsm ) is analysed for chiral phase - transition , @xmath1 and @xmath2 and for deconfinement order - parameters , @xmath3 and @xmath4 of light- and strange - quarks , respectively . various plsm parameters are determined from the assumption of global minimization of the real part of the potential . then , we have calculated the subtracted condensates ( @xmath5 ) . all these results are compared with recent lattice qcd simulations . accordingly , essential plsm parameters are determined . the modelling of the relaxation time is utilized in estimating the conductivity properties of the qcd matter in thermal medium , namely electric [ @xmath6 and heat [ @xmath7 conductivities . we found that the plsm results on the electric conductivity and on the specific heat agree well with the available lattice qcd calculations . also , we have calculated bulk and shear viscosities normalized to the thermal entropy , @xmath8 and @xmath9 , respectively , and compared them with recent lattice qcd . predictions for @xmath10 and @xmath11 are introduced . we conclude that our results on various transport properties show some essential ingredients , that these properties likely come up with , in studying qcd matter in thermal and dense medium . . And you have already written the first three sentences of the full article: the characterization of the electro - magnetic properties of hadron and parton matter , which in turn can be described by quantum chromodynamics ( qcd ) and quantum electrodynamics ( qed ) , gains increasing popularity among particle physicists . 
one of the main goals of the relativistic heavy - ion facilities such as the relativistic heavy - ion collider ( rhic ) at bnl , upton - usa and the large hadron collider ( lhc ) at cern , near geneva - switzerland and the future nuclotron - based ion collider facility ( nica ) at jinr , dubna - russia , is the precise determination of the hadron - parton phase - diagram , which can also be studied in lattice qcd numerical simulations @xcite and various qcd - like approaches . the polyakov nambu - jona lasinio ( pnjl ) model @xcite , the polyakov linear-@xmath0 model ( plsm ) or the polyakov quark meson model ( pqm ) @xcite , and the dynamical quasi - particle model ( dqpm ) @xcite are examples of qcd - like models aiming to characterize the strongly interacting matter in dense and thermal medium and also in finite electro - magnetic field .. Please generate the next two sentences of the article
it is conjectured that , the [ electrical and thermal ( heat ) ] conductivity and ( bulk and shear ) viscous properties of the qcd matter come up with significant modifications in the chiral phase - transition @xcite . the influence of finite magnetic field on qcd phase - diagram , which describes the variation of the confinement - deconfinement phase - transition at various baryon chemical potentials @xcite , has been studied in lattice qcd @xcite . in relativistic heavy - ion collisions
3,769
Suppose that you have an abstract for a scientific paper: a tetrad - based procedure is presented for solving einstein s field equations for spherically - symmetric systems ; this approach was first discussed by lasenby , doran & gull in the language of geometric algebra . the method is used to derive metrics describing a point mass in a spatially - flat , open and closed expanding universe respectively . in the spatially - flat case , a simple coordinate transformation relates the metric to the corresponding one derived by mcvittie . nonetheless , our use of non - comoving ( ` physical ' ) coordinates greatly facilitates physical interpretation . for the open and closed universes , our metrics describe different spacetimes to the corresponding mcvittie metrics and we believe the latter to be incorrect . in the closed case , our metric possesses an image mass at the antipodal point of the universe . we calculate the geodesic equations for the spatially - flat metric and interpret them . for radial motion in the newtonian limit , the force acting on a test particle consists of the usual @xmath0 inwards component due to the central mass and a cosmological component proportional to @xmath1 that is directed outwards ( inwards ) when the expansion of the universe is accelerating ( decelerating ) . for the standard @xmath2cdm concordance cosmology , the cosmological force reverses direction at about @xmath3 . we also derive an invariant fully general - relativistic expression , valid for arbitrary spherically - symmetric systems , for the force required to hold a test particle at rest relative to the central point mass . [ firstpage ] gravitation cosmology : theory black hole physics . And you have already written the first three sentences of the full article: among the known exact solutions of einstein s field equations in general relativity there are two commonly studied metrics that describe spacetime in very different regimes . 
first , the friedmann robertson walker ( frw ) metric describes the expansion of a homogeneous and isotropic universe in terms of the scale factor @xmath4 .. Please generate the next two sentences of the article
the frw metric makes no reference to any particular mass points in the universe but , rather , describes a continuous , homogeneous and isotropic fluid on cosmological scales . instead of using a ` physical ' ( non - comoving ) radial coordinate @xmath1 , it is usually written in terms of a comoving radial coordinate @xmath5 , where @xmath6 , such that the spatial coordinates of points moving with the hubble flow do not depend on the cosmic time @xmath7 .
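for reference , the comoving form of the frw line element being described is the standard one ( quoted as background , not from the paper 's tetrad - based derivation ) :

```latex
\mathrm{d}s^{2} \;=\; -c^{2}\,\mathrm{d}t^{2}
+ a^{2}(t)\left[\frac{\mathrm{d}r^{2}}{1-kr^{2}}
+ r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\!\theta\,\mathrm{d}\phi^{2}\right)\right],
```

with k = -1 , 0 , +1 for the open , spatially flat and closed cases respectively , and physical radius a(t) r .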
3,770
Suppose that you have an abstract for a scientific paper: we present new mathematical alternatives for explaining rotation curves of spiral galaxies in the mond context . for given total masses , it is shown that various mathematical alternatives to mond , while predicting flat rotation curves for large radii ( @xmath0 , where @xmath1 is the characteristic radius of the galactic disc ) , predict curves with different peculiar features for smaller radii ( @xmath2 ) . they are thus testable against observational data . , gravitation : phenomenology , galaxies : internal motions 04.90.+e , 95.30.sf , 98.62.dm . And you have already written the first three sentences of the full article: the first mathematical descriptions of the effects of gravity , made by galileo in his study of the free fall of bodies and by kepler in his study of planetary motions , were purely empirical . though newton offered a coherent explanation of what was behind the laws governing gravitational effects , it was only with einstein s general relativity that we had an apparently complete theory of gravity . however , at the end of the 20@xmath3 century , a new enigma concerning the motion of ` celestial bodies ' emerged , in particular , in studying rotation curves of spiral galaxies . while newton s law of gravity predicts that the velocity of rotation in the interior of a galaxy should fall with increasing distance from the galactic center if the observed light traces mass , what is observed is the maintenance of a constant velocity with increasing radius , generating flat rotation curves @xcite . two simple ways of dealing with this problem have been suggested : 1 . assuming that there is more mass ( _ i.e. _ , dark matter ) in galaxies than is observed ; 2 . modifying the law of gravity .. Please generate the next two sentences of the article
while much work has been done in the search for possible particle candidates for dark matter @xcite , very little has been done to explore the possibilities of modified gravity laws . until now , the most popular suggestion for a modified gravitational law has been modified newtonian dynamics , or , mond @xcite . in mond the acceleration @xmath4 of a body in an external gravitational field
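in the deep - mond regime ( accelerations well below the scale a_0 ) , the standard modification mu(a/a_0) a = g_newton with mu(x) -> x for small x gives a^2/a_0 = g m/r^2 , hence a radius - independent circular velocity v^4 = g m a_0 : the flat rotation curves at large radii referred to in the abstract . a minimal numerical sketch ( the galaxy mass is a hypothetical round number ; a_0 is the commonly quoted value ) :

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10    # MOND acceleration scale a_0, m s^-2 (commonly quoted value)

def flat_rotation_velocity(mass_kg: float) -> float:
    """Asymptotic circular velocity in deep MOND: v**4 = G * M * a0."""
    return (G * mass_kg * A0) ** 0.25

def newtonian_velocity(mass_kg: float, r_m: float) -> float:
    """Keplerian velocity v**2 = G*M/r, which falls off as r**-0.5."""
    return (G * mass_kg / r_m) ** 0.5

# a galaxy of ~1e11 solar masses (hypothetical round number):
m = 1e11 * 1.989e30
v_flat = flat_rotation_velocity(m)  # of order 2e5 m/s, i.e. ~200 km/s
```

the mathematical alternatives considered in the paper share this large - radius limit and differ only in their behaviour at smaller radii , which is what makes them testable against data .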
3,771
Suppose that you have an abstract for a scientific paper: we investigate the effect of including a significant `` binary twin '' population ( binaries with almost equal mass stars , @xmath0 ) for the production of double compact objects and some resulting consequences , including ligo inspiral rate and some properties of short - hard gamma - ray bursts . we employ very optimistic assumptions on the twin fraction ( @xmath1 ) among all binaries , and therefore our calculations place upper limits on the influence of twins on double compact object populations . we show that for ligo the effect of including twins is relatively minor : although the merger rate does indeed increase when twins are considered , the rate increase is fairly small ( @xmath2 ) . also , chirp mass distributions for double compact objects formed with or without twins are almost indistinguishable . if double compact objects are short - hard grb progenitors , including twins in population synthesis calculations does not significantly alter the earlier predictions for the event rate . however , for one channel of binary evolution , introducing twins more than doubles the rate of `` very prompt '' ns - ns mergers ( time to merger less than @xmath3 years ) compared to models with the `` flat '' @xmath4 distribution . in that case , @xmath5 of all ns - ns binaries merge within @xmath6 years after their formation , indicating a possibility of a very significant population of `` prompt '' short - hard gamma - ray bursts , associated with star forming galaxies . we also point out that , independent of assumptions , the fraction of such prompt neutron star mergers is always high , @xmath7 . we note that recent observations ( e.g. , berger et al . ) indicate that the fraction of short - hard grbs found in young hosts is at least @xmath8 and possibly even @xmath9 . .
And you have already written the first three sentences of the full article: a majority of stars are in binaries , and a substantial fraction of binaries have short enough orbital periods that they are likely to interact during either their main sequence or post - main sequence evolution . many of the most interesting phenomena in astronomy can be directly traced to the interaction of close binaries ; an incomplete list would include binary neutron stars and white dwarfs , supernovae ia , cataclysmic variables , and blue stragglers . there is a vast literature on the subject ( e.g. , paczynski 1971 ; wellstein & langer 1999 ; hurley , tout & pols 2002 ; belczynski , kalogera & bulik 2002b ) .. Please generate the next two sentences of the article
although there are many ingredients that must be considered in interacting binaries , an implicit assumption in much theoretical work has been that the lifetimes of the stars are almost always quite different . this assumption arises naturally from two considerations .
3,772
Suppose that you have an abstract for a scientific paper: we propose an effective route to fully control the phase of plane waves reflected from electrically ( optically ) thin sheets . this becomes possible using engineered artificial full - reflection layers ( metamirrors ) as arrays of electrically small resonant bi - anisotropic particles . in this scenario , fully reflecting mirrors do not contain any continuous ground plane , but only arrays of small particles . bi - anisotropic omega coupling is required to get asymmetric response in reflection phase for plane waves incident from the opposite sides of the composite mirror . it is shown that with this concept one can independently tailor the phase of electromagnetic waves reflected from both sides of the mirror array . radi : tailoring reflections from thin composite metamirrors reflectarray , magnetic conductor , high - impedance surface , bi - anisotropic particle , reflection , transmission , resonance . . And you have already written the first three sentences of the full article: while the reflecting properties of mirrors and the focusing properties of lenses have been known since ancient times , general possibilities to tailor reflection and transmission of plane waves using thin metasurfaces have been realized only recently . in what concerns the extended control over transmission , the transmitarray ( e.g. @xcite ) is the known technique based on the use of two parallel antenna arrays . this concept has been recently generalized as the meta - transmit - array in @xcite , where subwavelength ( in the transverse plane ) elements are used .. Please generate the next two sentences of the article
another class of transmission - phase controlling layers is the phase - shifting surface @xcite . most of these structures contain several layers and are considerably thick in terms of the wavelength . but using various frequency - selective surfaces ( e.g. @xcite ) including inhomogeneous in the layer plane @xcite , transmission phase can be controlled also by electrically thin layers . eliminating reflection while controlling transmission phase is possible using huygens s metasurfaces @xcite .
3,773
Suppose that you have an abstract for a scientific paper: we present a green s function approach based on a lcao scheme to compute the elastic propagation of electrons injected from a stm tip into a metallic film . the obtained 2d current distributions in real and reciprocal space furnish a good representation of the elastic component of ballistic electron emission microscopy ( beem ) currents . since this component accurately approximates the total current in the near threshold region , this procedure allows in contrast to prior analyses to take into account effects of the metal band structure in the modeling of these experiments . the au band structure , and in particular its gaps appearing in the [ 111 ] and [ 100 ] directions , provides a good explanation for the previously irreconcilable results of nanometric resolution and similarity of beem spectra on both au / si(111 ) and au / si(100 ) . . And you have already written the first three sentences of the full article: ballistic electron emission microscopy ( beem)@xcite is a new technique based on the scanning tunneling microscope ( stm ) . it has been primarily designed for the study of buried metal - semiconductor interfaces , in particular for the investigation of the schottky barrier . the experimental setup consists of a stm injecting current in a metallic film deposited on a semiconductor material . after propagation through the metal , a fraction of these electrons still has sufficient energy to surpass the schottky barrier and may enter into the semiconductor to be finally detected as beem current . using the tunneling tip. Please generate the next two sentences of the article
as a localized electron source gives beem its unparalleled power to provide spatially resolved information on the buried interface , that can additionally be related to the surface topography via the simultaneously recorded tunneling current . the energy of the electrons contributing to the final beem current depends on the bias voltage between tip and metal , and is typically 1 to 10 ev above the fermi energy in the metal . for energies close to the threshold voltage
3,774
Suppose that you have an abstract for a scientific paper: the critical temperature of an underdoped cuprate superconductor is limited by its phase stiffness @xmath0 . in this article we argue that the dependence of @xmath0 on doping @xmath1 should be understood as a consequence of deleterious competition with antiferromagnetism at large electron densities , rather than as evidence for pairing of holes in the @xmath2 mott insulator state . @xmath0 is suppressed at small @xmath1 because the correlation energy of a @xmath3-wave superconductor has a significant pairing - wavevector dependence when antiferromagnetic fluctuations are strong . . And you have already written the first three sentences of the full article: the fascinating and rich phenomenology of high temperature cuprate superconductors has been very thoroughly studied over the past 20 years . although there is substantial variability in detail from material to material , all cuprates exhibit robust mott insulator antiferromagnetism when the hole - doping fraction @xmath1 is very small , superconductivity which appears when @xmath1 exceeds a minimum value @xmath4 , and a maximum @xmath5 in optimally doped materials with @xmath6 . in the underdoped regime , the superconducting transition temperature is limited by phase fluctuations@xcite , and experiments hint at a wide variety of ( typically ) short - range correlations associated with competing charge and spin orders . the underdoped regime poses a fundamental challenge to theory because its electronic properties are not fully consistent with any of the various well - understood _ fixed - point _. Please generate the next two sentences of the article
behaviors that often help us to classify and predict the properties of very complex materials . the phenomenological parameter @xmath0 used to characterize phase - fluctuation stiffness in a superconductor is normally expressed in terms of the superfluid density @xmath7 by writing @xmath8 , an identification that is partly justified by bcs mean - field theory .
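since the defining relation @xmath8 is elided here , one commonly used identification for a quasi - two - dimensional superconductor ( a standard convention , not necessarily the authors ' exact expression ) is

```latex
\rho_s \;=\; \frac{\hbar^{2} n_s}{4 m^{*}}\,, \qquad
k_B T_c \;\lesssim\; \frac{\pi}{2}\,\rho_s \,,
```

where n_s is the superfluid ( pair ) density , m^* an effective mass , and the second relation is the kosterlitz - thouless - type bound that makes @xmath5 phase - fluctuation limited in the underdoped regime .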
3,775
Suppose that you have an abstract for a scientific paper: we analyze the electrical characteristics of a circuit consisting of a free thin - film magnetic layer and source and drain electrodes that have opposite magnetization orientations along the free magnet's two hard directions . we find that when the circuit's current exceeds a critical value there is a sudden resistance increase which can be large in relative terms if the currents to source or drain are strongly spin polarized and the free magnet is thin . this behavior can be partly understood in terms of a close analogy between the magnetic circuit and a josephson junction . . And you have already written the first three sentences of the full article: electronic transport can usually be described in terms of effectively independent electrons . recently , with the discovery and exploitation of spin - transfer torque@xcite ( stt ) effects , magnetism has joined superconductivity as an instance in which collective and quasiparticle contributions to transport are entwined in an essential way . the similarity between the non - equilibrium properties of magnetic and superconducting@xcite systems is especially close when comparing the properties of a superconducting circuit containing a josephson junction to a magnetic circuit containing a free ferromagnetic layer with strong easy - plane anisotropy . as we explain in more detail below , the role of the josephson junction bias current in the superconducting circuit is played in the magnetic case by the component of the spin - current injected into the nanoparticle that is perpendicular to the easy plane .. Please generate the next two sentences of the article
the electrical properties of a circuit containing a josephson junction typically change drastically when the junction s critical current is exceeded . in this paper we propose that the magnetic circuit illustrated in fig .
3,776
Suppose that you have an abstract for a scientific paper: besides the well - known existence of andreev bound states , the zero - energy local density of states at the boundary of a @xmath0-wave superconductor strongly depends on the boundary geometry itself . in this work , we examine the influence of both a simple wedge - shaped boundary geometry and a more complicated polygonal or faceted boundary structure on the local density of states . for a wedge - shaped boundary geometry , we find oscillations of the zero - energy density of states in the corner of the wedge , depending on the opening angle of the wedge . furthermore , we study the influence of a single abrikosov vortex situated near a boundary , which is of either macroscopic or microscopic roughness . . And you have already written the first three sentences of the full article: the local density of states at the boundary of a superconductor is a crucial factor in many experiments , for example tunneling measurements . for conventional @xmath1-wave superconductors , the local density of states at an insulating boundary is practically the same as in the bulk . in particular , the specific boundary geometry is irrelevant . in the case of @xmath0-wave symmetry , however , the situation is completely different . due to andreev bound states @xcite , a drastic enhancement of the low - energy density of states can be observed at a straight flat surface appearing as a pronounced zero - bias conductance peak @xcite .. Please generate the next two sentences of the article
this effect is maximal if the @xmath0-wave nodal direction is perpendicular to the boundary and shrinks when the orientation is changed @xcite . for an angle of 45 degrees between nodal direction and boundary , the andreev bound states disappear completely . besides this well - known effect , it is important to realize that for @xmath0-wave symmetry also the boundary geometry itself can have strong influence on the local density of states . in this work we examine the local density of states at the surface of a @xmath0-wave superconductor for some basic examples of polygonal boundary geometries and show that andreev bound states are sensitive to the boundary geometry .
3,777
Suppose that you have an abstract for a scientific paper: thermodynamical properties of nuclear matter at sub - saturation densities were investigated using a simple van der waals - like equation of state with an additional term representing the symmetry energy . first - order isospin - asymmetric liquid - gas phase transition appears restricted to isolated isospin - asymmetric systems while the symmetric systems will undergo fragmentation decay resembling the second - order phase transition . the density dependence of the symmetry energy scaling with the fermi energy satisfactorily describes the symmetry energy at sub - saturation nuclear densities . the deconfinement - confinement phase transition from the quark - gluon plasma to the confined quark matter appears in the isolated systems continuous in energy density while discontinuous in quark density . a transitional state of the confined quark matter has a negative pressure and after hadronization an explosion scenario can take place which can offer explanation for the hbt puzzle as a signature of the phase transition . . And you have already written the first three sentences of the full article: the knowledge of the phase diagram of nuclear matter is one of the principal open questions in modern nuclear physics with far reaching cosmological consequences . detailed investigations have been carried out in the recent years in particle - nucleus and nucleus - nucleus collisions in a wide range of projectile energies . the process of multifragmentation was investigated at intermediate and high energies in order to study the properties of the expected liquid - gas phase transition at sub - saturation nuclear densities .. Please generate the next two sentences of the article
for instance , using the calorimetry of the hot quasi - projectile nuclei formed in the damped nucleus - nucleus collisions ( see ref . @xcite and ref .
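the negative pressure of the transitional state mentioned in the abstract is a generic feature of van der waals - like equations of state below the critical temperature . a minimal dimensionless sketch ( the constants a and b are illustrative , and the paper 's additional symmetry - energy term is omitted ) :

```python
def vdw_pressure(n: float, T: float, a: float = 1.0, b: float = 1.0) -> float:
    """Van der Waals pressure p = n*T/(1 - b*n) - a*n**2 in reduced units.

    'a' models the attraction, 'b' the excluded volume; both values here
    are illustrative, not the fitted nuclear-matter parameters of the paper.
    """
    if b * n >= 1.0:
        raise ValueError("density exceeds close packing: b*n >= 1")
    return n * T / (1.0 - b * n) - a * n * n

# below the critical temperature T_c = 8a/(27b) the isotherm develops a
# mechanically unstable region where the pressure turns negative:
p_cold = vdw_pressure(n=0.5, T=0.2)  # negative: unstable transitional state
p_hot = vdw_pressure(n=0.5, T=2.0)   # positive: ordinary gas-like branch
```

in the paper 's scenario it is precisely such a negative - pressure state of the confined quark matter that drives the explosion after hadronization .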
3,778
Suppose that you have an abstract for a scientific paper: we present a high - resolution elemental - abundance analysis for a sample of 23 very metal - poor ( vmp ; @xmath0 } < -2.0 $ ] ) stars , 12 of which are extremely metal - poor ( emp ; @xmath0 } < -3.0 $ ] ) , and 4 of which are ultra metal - poor ( ump ; @xmath0 } < -4.0 $ ] ) . these stars were targeted to explore differences in the abundance ratios for elements that constrain the possible astrophysical sites of element production , including li , c , n , o , the @xmath1-elements , the iron - peak elements , and a number of neutron - capture elements . this sample substantially increases the number of known carbon - enhanced metal - poor ( cemp ) and nitrogen - enhanced metal - poor ( nemp ) stars . our program stars include eight that are considered `` normal '' metal - poor stars , six cemp-@xmath2 stars , five cemp-@xmath3 stars , two cemp-@xmath4 stars , and two cemp-@xmath5 stars . one of the cemp-@xmath4 stars and one of the cemp-@xmath5 stars are possible nemp stars . we detect lithium for three of the six cemp-@xmath2 stars , all of which are li - depleted with respect to the spite plateau . the majority of the cemp stars have @xmath6}>0 $ ] . the stars with @xmath6}<0 $ ] suggest a larger degree of mixing ; the few cemp-@xmath2 stars that exhibit this signature are only found at @xmath0}<-3.4 $ ] , a metallicity below which we also find the cemp-@xmath2 stars with large enhancements in na , mg , and al . we confirm the existence of two plateaus in the absolute carbon abundances of cemp stars , as suggested by spite et al . we also present evidence for a `` floor '' in the absolute ba abundances of cemp-@xmath2 stars at @xmath7 . .
And you have already written the first three sentences of the full article: in recent years , high - resolution spectroscopic analyses of samples of stars with metallicities significantly below solar have grown to the point that one can begin to establish the general behaviors of elemental abundance ratios associated with production by the first few generations of stars to form the galaxy ( for a recent review see , e.g. , frebel & norris 2015 ) . these `` statistical '' samples are particularly valuable when the data are analysed in a self - consistent manner ( e.g. * ? ? ? * ) , so that comparisons of derived abundance ratios are not plagued by the scatter introduced from the different assumptions and procedures used by individual researchers , which can be sufficiently large as to obscure important details . of particular interest to this effort is the class of stars that , despite their overall low abundances of iron - peak elements , exhibit large over - abundances of c ( as well as n and o ) in their atmospheres , the so - called carbon - enhanced metal - poor ( cemp ) stars @xcite .. Please generate the next two sentences of the article
this class comprises a number of sub - classes ( originally defined by beers & christlieb 2005 ) , based on the behavior of their neutron - capture elements : ( 1 ) cemp-@xmath2 stars , which exhibit no over - abundances of n - capture elements , ( 2 ) cemp-@xmath3 stars , which show n - capture over - abundances consistent with the slow neutron - capture process , ( 3 ) cemp-@xmath4 stars , with n - capture over - abundances associated with the rapid neutron - capture process , and ( 4 ) cemp-@xmath5 stars , which exhibit n - capture over - abundances that suggest contribution from both the slow and rapid neutron - capture processes . each of these sub - classes appear to be associated with different element - production histories , thus their study provides insight into the variety of astrophysical sites in the early galaxy that were primarily responsible for their origin .
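the sub - classes listed above are conventionally separated by cuts on the abundance ratios [c/fe] , [ba/fe] , [ba/eu] and [eu/fe] . a rough classifier in the spirit of beers & christlieb ( 2005 ) ; the thresholds vary between studies , and this simplified version is not necessarily the exact set of cuts used in the paper :

```python
def classify_cemp(c_fe: float, ba_fe: float, eu_fe: float) -> str:
    """Rough CEMP sub-classification in the spirit of Beers & Christlieb
    (2005).  The thresholds are the commonly quoted ones; exact cuts
    differ between studies, so treat this as illustrative only.
    """
    if c_fe <= 0.7:                  # not carbon-enhanced
        return "C-normal"
    ba_eu = ba_fe - eu_fe            # [Ba/Eu] = [Ba/Fe] - [Eu/Fe]
    if ba_fe > 1.0 and ba_eu > 0.5:  # s-process dominated
        return "CEMP-s"
    if eu_fe > 1.0 and 0.0 < ba_eu < 0.5:  # mixed s- and r-process
        return "CEMP-r/s"
    if eu_fe > 1.0:                  # r-process dominated
        return "CEMP-r"
    if ba_fe < 0.0:                  # no n-capture over-abundance
        return "CEMP-no"
    return "CEMP (unclassified)"

# hypothetical example abundances showing an s-process pattern:
star = classify_cemp(c_fe=1.5, ba_fe=1.8, eu_fe=0.3)  # CEMP-s
```

since each sub - class traces a different element - production history , such cuts are what connect the observed sample to the astrophysical sites discussed in the article .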
3,779
Suppose that you have an abstract for a scientific paper: gravitational lensing effects arise from the light ray deflection by all of the mass distribution along the line of sight . it is then expected that weak lensing cluster surveys can provide us true mass - selected cluster samples . with numerical simulations , we analyze the correspondence between peaks in the lensing convergence @xmath0-map and dark matter halos . particularly we emphasize the difference between the peak @xmath0 value expected from a dark matter halo modeled as an isolated and spherical one , which exhibits a one - to - one correspondence with the halo mass at a given redshift , and that of the associated @xmath0-peak from simulations . for halos with the same expected @xmath0 , their corresponding peak signals in the @xmath0-map present a wide dispersion . at an angular smoothing scale of @xmath1 , our study shows that for relatively large clusters , the complex mass distribution of individual clusters is the main reason for the dispersion . the projection effect of uncorrelated structures does not play significant roles . the triaxiality of dark matter halos accounts for a large part of the dispersion , especially for the tail at high @xmath0 side . thus lensing - selected clusters are not really mass - selected . to better predict @xmath0-selected cluster abundance for a cosmological model , one has to take into account the triaxial mass distribution of dark matter halos . on the other hand , for a significant number of clusters , their mass distribution is even more complex than that described by the triaxial model . our analyses find that large substructures affect the identification of lensing clusters considerably . they could show up as separate peaks in the @xmath0-map , and cause a mis - association of the whole cluster with a peak resulted only from a large substructure . the lower - end dispersion of @xmath0 is attributed mostly to this substructure effect . 
for @xmath2 , the projection effect can be significant and contributes to the dispersion at both high.... And you have already written the first three sentences of the full article: because they are directly associated with the mass distribution of the universe , gravitational lensing effects are powerful probes of spatial structures of the dark matter . strong lensing phenomena , such as multiple images of background quasars and giant arcs , have been used to constrain inner mass profiles of lensing galaxies and clusters of galaxies ( e.g. , gavazzi et al . 2003 ; bartelmann & meneghetti 2004 ; ma 2003 ; zhang 2004 ) .. Please generate the next two sentences of the article
weak lensing effects , on the other hand , enable us to study mass distributions of clusters of galaxies out to large radii ( e.g. , bartelmann & schneider 2001 ) . cosmic shears , coherent shape distortions of background galaxies induced by large - scale structures in the universe , provide us a promising means to map out the dark matter distribution of the universe ( e.g. , tereno et al . 2005 ; van waerbeke 2005 ) . of many important studies on lensing effects ,
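the convergence map discussed above is , in standard weak - lensing notation , the projected surface mass density in units of its critical value ( standard definitions , quoted as background ) :

```latex
\kappa(\vec{\theta}) \;=\; \frac{\Sigma(\vec{\theta})}{\Sigma_{\rm cr}}\,, \qquad
\Sigma_{\rm cr} \;=\; \frac{c^{2}}{4\pi G}\,\frac{D_{s}}{D_{d}\,D_{ds}}\,,
```

where D_d , D_s and D_ds are angular - diameter distances to the lens , to the source , and from lens to source ; this is why peaks in the map trace all of the mass along the line of sight rather than the light .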
3,780
Suppose that you have an abstract for a scientific paper: to understand the evolutionary stage of the peculiar supergiant irc+10420 , we have been taking spectra for several years at the 6 m telescope . the optical spectrum of irc+10420 of the years from 1992 through 1996 points to the increase in the temperature : spectral class a5 instead of the former f8 , as was pointed out by humphreys et al . , ( 1973 ) . now it resembles the spectra of late - type b[e ] stars . the spectrum contains absorptions ( mainly of ions ) formed in the photosphere , apparently stationary with respect to the star center of mass , and emissions too , which can be formed in the fossil expanding envelope as well as partly in its compressing region . using our spectra and spectral data obtained by oudmaijer ( 1995 ) we estimated the atmospheric parameters @xmath0 , logg=1.0 , @xmath1 and concluded that metallicity of irc+10420 is solar : the average value @xmath2_{\odot } = -0.03}$ ] . combination of results allows us to consider irc+10420 as a massive supergiant evolving to the wr - stage . _ special astrophysical observatory , nizhnij arkhyz , 357147 russia _ * keywords : * stars : evolution stars : hypergiants stars : individual : irc+10420 . And you have already written the first three sentences of the full article: the oh / ir source irc+10420 = iras19244 + 1115 identified with the peculiar high luminosity star v1302aql is a unique object , which has been carefully and comprehensively studied over the last decades but still remains a puzzle . of the two hypotheses about its nature neither seems to be convincingly preponderant as yet . according to fix and cobb ( 1987 ) , hrivnak et al . , ( 1989 ) and others this is a degenerate core giant evolving through the proto - planetary nebula stage with a luminosity no higher than @xmath3 . according to jones et al . , ( 1993 ) , humphreys and davidson ( 1994 ) and oudmaijer et al .
( 1996 ) this is a core - burning hypergiant of @xmath4 .. Please generate the next two sentences of the article
the difficulty of choice is due to : * the uncertainty of fundamental observational parameters , such as spectral class and distance ; * the fact that with the difference in mass , age , even type of stellar population the evolutionary processes and their observational evidence are similar : in both alternatives the effective temperature of the star increases , there is a gaseous - dust envelope inherited from the red giant or supergiant phase , which interacts with the stellar wind ; * the presence of several competing models : thin chromosphere in the expanding gaseous - dust envelope so optically thick that we see the light of the star after multiple scattering by circumstellar dust ( fix and cobb , 1987 ) ; a gaseous - dust disk in a clumpy envelope ( jones et al . , 1993 ) ; jets with a small angle of opening ( oudmaijer et al . , 1994 ) ; and at last infall of circumstellar material onto photosphere ( oudmaijer , 1995 ) .
3,781
Suppose that you have an abstract for a scientific paper: in this review , we describe the physical processes driving the dynamical evolution of binary stars , namely the circularization of the orbit and the synchronization of their spin and orbital rotation . we also discuss the possible role of the elliptic instability which turns out to be an unavoidable ingredient of the evolution of binary stars . And you have already written the first three sentences of the full article: the evolution of rotation is usually associated with an evolution of angular momentum ; changing the angular momentum of any body requires torques and stars do not escape from this law of physics . in binary stars there is a permanent source of torques : the tides . hence , understanding the evolution of rotation of stars in a binary system demands the understanding of the action of tides . this compulsory exercise was started more than thirty years ago by jean - paul zahn during his `` thèse d'état '' , _ les marées dans une étoile double serrée _ ( zahn 1966 ) .. Please generate the next two sentences of the article
all the concepts needed to understand tides and their actions in the dynamical evolution of binaries are presented in this work . surely , as in isolated stars , rotation is an important ingredient of evolution through the induced mixing processes : turbulence in stably stratified radiative zones , circulations ... all these processes will influence the abundances of elements in the atmospheres or the internal profile of angular velocity , for instance .
3,782
Suppose that you have an abstract for a scientific paper: for applications regarding transition prediction , wing design and control of boundary layers , the fundamental understanding of disturbance growth in the flat - plate boundary layer is an important issue . in the present work we investigate the stability of the boundary layer in poiseuille flow . we normalize pressure and time by inertial and viscous effects . the disturbances are taken to be periodic in the spanwise direction and time . we present a set of linear governing equations for the parabolic evolution of wavelike disturbances . then , we derive modified orr - sommerfeld equations that can be applied in the layer . contrary to what one might think , we find that squire s theorem is not applicable for the boundary layer . we find also that normalization by inertial or viscous effects leads to the same order of stability or instability . for the @xmath0 disturbances flow ( @xmath1 ) , we found the same critical reynolds number for our two normalizations . this value coincides with the one we know for neutral stability of the known orr - sommerfeld equation . we noticed also that for all other values of @xmath2 in the case @xmath1 there correspond the same values of @xmath3 at @xmath4 whatever the normalization . we therefore conclude that in the boundary layer with a 2d - disturbance , we have the same neutral stability curve whatever the normalization . we find also that for a flow with high hydrodynamic reynolds number , the neutral disturbances in the boundary layer are two - dimensional . at last , we find that transition from stability to instability or the opposite can occur according to the reynolds number and the wave number . institut de mathématiques et de sciences physiques , bp : 613 porto novo , bénin ; the abdus salam international centre for theoretical physics , trieste , italy .
And you have already written the first three sentences of the full article: boundary - layer theory is crucial in understanding why certain phenomena occur . it is well known that the instability of boundary layers is sensitive to the mean velocity profile , so that a small distortion to the basic flow may have a detrimental effect on its stability . prandtl ( 1904 ) @xcite proposed that viscous effects would be confined to thin layers adjacent to boundaries in the case of the motion of fluids with very little viscosity i.e. in the case of flows for which the characteristic reynolds number , @xmath5 , is large . in a more general sense. Please generate the next two sentences of the article
we will use boundary - layer theory ( blt ) to refer to any large - reynolds - number flow . ho and denn studied low reynolds number stability for plane poiseuille flow by using a numerical scheme based on the shooting method . they found that at low reynolds numbers no instabilities occur , but the numerical method led to artificial instabilities . lee and finlayson used a similar numerical method to study both poiseuille and couette flow , and confirmed the absence of instabilities at low reynolds number .
3,783
Suppose that you have an abstract for a scientific paper: we calculate analytically the flavor non - singlet @xmath0 massive wilson coefficients for the inclusive neutral current non - singlet structure functions @xmath1 and @xmath2 and charged current non - singlet structure functions @xmath3 , at general virtualities @xmath4 in the deep - inelastic region . numerical results are presented . we illustrate the transition from low to large virtualities for these observables , which may be contrasted to basic assumptions made in the so - called variable flavor number scheme . we also derive the corresponding results for the adler sum rule , the unpolarized and polarized bjorken sum rules and the gross - llewellyn smith sum rule . there are no logarithmic corrections at large scales @xmath4 and the effects of the power corrections due to the heavy quark mass are of the size of the known @xmath5 corrections in the case of the sum rules . the complete charm and bottom corrections are compared to the approach using asymptotic representations in the region @xmath6 . we also study the target mass corrections to the above sum rules . desy 15171 + do th 15/14 + may 2016 + * the complete @xmath0 non - singlet heavy flavor corrections * johannes blümlein , giulio falcioni , and abilio de freitas + _ platanenallee 6 , d-15738 zeuthen , germany _ + . And you have already written the first three sentences of the full article: deep - inelastic scattering provides one of the most direct methods to measure the strong coupling constant from precision data on the scaling violations of the nucleon structure functions @xcite . the present accuracy of these data also allows to measure the mass of the charm , cf . @xcite , and bottom quarks due to the heavy flavor contributions .. Please generate the next two sentences of the article
the wilson coefficients are known to 2-loop order in semi - analytic form @xcite in the tagged - flavor case , i.e. for the subset in which the hadronic final state contains at least one heavy quark , having been produced in the hard scattering process . the corresponding reduced cross section does not correspond to the notion of structure functions , since those are purely inclusive quantities and terms containing massless final states contribute as well .
3,784
Suppose that you have an abstract for a scientific paper: a coalescence model using the observed properties of pre - stellar condensations ( pscs ) shows how an initially steep imf that might be characteristic of primordial cloud fragmentation can change into a salpeter imf or shallower imf in a cluster of normal density after one dynamical time , even if the pscs are collapsing on their own dynamical time . the model suggests that top - heavy imfs in some starburst clusters originate with psc coalescence . [ firstpage ] stars : formation , stars : mass function , ism : clouds . And you have already written the first three sentences of the full article: a recent study of observations of the stellar initial mass function ( imf ) suggests there are systematic variations where the imf gets flatter , or more top - heavy , in denser regions ( elmegreen 2004 ) . this paper proposes that the high mass part of the imf varies with density as a result of the coalescence of dense pre - stellar condensations ( pscs ) , such as those observed by motte , andré , & neri ( 1998 ) , testi & sargent ( 1998 ) , onishi et al . ( 2002 ) and nomura & miller ( 2004 ) .. Please generate the next two sentences of the article
these objects have densities in the range from @xmath0 @xmath1 to @xmath2 @xmath1 , and masses from 0.1 m@xmath3 to a few m@xmath3 , giving them sizes of @xmath4 au . the largest pscs may be self - gravitating ( johnstone et al . 2000 , 2001 ; motte et al .
3,785
Suppose that you have an abstract for a scientific paper: lovelock theory is the natural extension of general relativity to higher dimensions . it can be also thought of as a toy model for ghost - free higher curvature corrections in gravitational theories . it admits a family of ads vacua , which provides an appealing arena to explore different holographic aspects in a broader setup within the context of the ads / cft correspondence . we will elaborate on these features and review previous work concerning the constraints that lovelock theory entails on the cft parameters when imposing conditions like unitarity , positivity of the energy or causality . . And you have already written the first three sentences of the full article: lovelock theories are the natural extension of the general relativity theory of gravity given by the einstein - hilbert action to higher dimensions and higher curvature interactions . the equations of motion do not involve terms with more than two derivatives of the metric , avoiding the appearance of ghosts @xcite . much work has been done on the main properties of lovelock gravity due to their interest as models where our knowledge of gravity can be tested and extended .. Please generate the next two sentences of the article
for example , the vacua structure , the existence and properties of black holes such as their mass , entropy and thermodynamics , the gravitational phase transitions , the cosmological implications , etc . have been the subject of a substantial body of literature in recent years . nevertheless , the main motivation for this review article comes from the ads / cft correspondence , famously conjectured by juan maldacena some 15 years ago @xcite .
3,786
Suppose that you have an abstract for a scientific paper: we establish a connection between quantum inequalities , known from quantum field theory on curved spacetimes , and the degree of squeezing in quantum - optical experiments . we prove an inequality which binds the reduction of the electric - field fluctuations to their duration . the bigger the level of fluctuation - suppression the shorter its duration . as an example of an application of this inequality is the case of squeezed light whose phase is controlled with @xmath0 accuracy for which we derive a limit of @xmath1 on the allowed degree of squeezing . . And you have already written the first three sentences of the full article: in quantum field theory the normal - ordered energy density does not need to be positive . in other words the expectation value of the energy density at a point @xmath2 @xmath3 for certain states @xmath4 of the quantum field , can be arbitrarily negative . let us give a simple example , consider the following state @xmath5 which is a superposition of the vacuum state @xmath6 and two particle state @xmath7 . and. Please generate the next two sentences of the article
@xmath8 i.e. @xmath9 , where for simplicity the polarization was disregarded . ] a calculation shows that the energy density , at a certain point @xmath2 , contains two , generally non - vanishing , terms @xmath10 evidently we can choose the sign and the magnitude of @xmath11 in such a way , that @xmath12 becomes negative at the point @xmath2 . since the appearance of negative energies in quantum field theory was first recognized , we have learned a great deal about this phenomenon .
3,787
Suppose that you have an abstract for a scientific paper: remarkable observational advances have established a compelling cross - validated model of the universe . yet , two key pillars of this model dark matter and dark energy remain mysterious . sky surveys that map billions of galaxies to explore the ` dark universe ' , demand a corresponding extreme - scale simulation capability ; the hacc ( hybrid / hardware accelerated cosmology code ) framework has been designed to deliver this level of performance now , and into the future . with its novel algorithmic structure , hacc allows flexible tuning across diverse architectures , including accelerated and multi - core systems . on the ibm bg / q , hacc attains unprecedented scalable performance currently 13.94 pflops at 69.2% of peak and 90% parallel efficiency on 1,572,864 cores with an equal number of mpi ranks , and a concurrency of 6.3 million . this level of performance was achieved at extreme problem sizes , including a benchmark run with more than 3.6 trillion particles , significantly larger than any cosmological simulation yet performed . . And you have already written the first three sentences of the full article: modern cosmology is one of the most exciting areas in physical science . decades of surveying the sky have culminated in a cross - validated , `` cosmological standard model '' . yet , key pillars of the model dark matter and dark energy together accounting for 95% of the universe s mass - energy remain mysterious @xcite .. Please generate the next two sentences of the article
deep fundamental questions demand answers : what is the dark matter ? why is the universe s expansion accelerating ? what is the nature of primordial fluctuations ? should general relativity be modified ? to address these questions , ground and space - based observatories operating at multiple wavebands @xcite are aiming to unveil the true nature of the `` dark universe '' . driven by advances in semiconductor technology ,
3,788
Suppose that you have an abstract for a scientific paper: we study the propagation of non - classical light through arrays of coupled linear photonic waveguides and introduce some sets of refractive indices and coupling parameters that provide a closed form propagator in terms of orthogonal polynomials . we present propagation examples of non - classical states of light : single photon , coherent state , path - entangled state and two - mode squeezed vacuum impinging into two - waveguide couplers and a photonic lattice producing coherent transport . . And you have already written the first three sentences of the full article: classical light propagating through arrays of coupled waveguides has provided a fertile ground for the simulation of quantum physics @xcite . these optical analogies of quantum phenomena are changing the way photonic integrated devices are designed ; e.g. one - directional couplers @xcite , light rectifiers @xcite , isolators and polarization splitters @xcite . as the manufacturing quality for experimental devices increases @xcite , it will soon be possible to propagate non - classical light states through linear photonic devices and a full - quantum analysis of the problem is at hand . in quantum mechanics ,. Please generate the next two sentences of the article
propagation through an array of @xmath0 coupled linear waveguides is ruled by the schrödinger - like equation @xmath1 with a hamiltonian @xcite , @xmath2 where the real parameters @xmath3 and @xmath4 are related to the effective refractive index of the @xmath5th waveguide and to the distance between the @xmath5th and @xmath6th waveguides , in that order . the operators @xmath7 ( @xmath8 ) annihilate ( create ) a photon and @xmath9 gives the number of photons at the @xmath5th waveguide .
3,789
Suppose that you have an abstract for a scientific paper: we show that the carrier - mediated exchange interaction , the so - called rkky coupling , between two magnetic impurity moments in graphene is significantly modified in the presence of electron - electron interactions . using the mean - field approximation of the hubbard-@xmath0 model we show that the @xmath1-oscillations present in the bulk for non - interacting electrons disappear and the power - law decay becomes more long ranged with increasing electron interactions . in zigzag graphene nanoribbons the effects are even larger with any finite @xmath0 rendering the long - distance rkky coupling distance independent . comparing our mean - field results with first - principles results we also extract a surprisingly large value of @xmath0 indicating that graphene is very close to an antiferromagnetic instability . several novel features of graphene , such as two - dimensionality , linear energy dispersion , a tunable chemical potential by gate voltage , and a high mobility have helped raise the expectation of graphene being a serious post - silicon era candidate @xcite . in this context , functionalization of graphene , especially with magnetic atoms or defects which also opens the door to spintronics @xcite , is of large interest . one of the most important properties of magnetic impurities is their effective interaction propagated by the conduction electrons in the host , the so - called ruderman - kittel - kasuya - yoshida ( rkky ) coupling @xcite . this coupling is crucial for magnetic ordering of impurities but also offers access to the intrinsic magnetic properties of the host . several studies exist for the rkky coupling in graphene , where both the standard perturbative approach applied to a continuum field - theoretic description of graphene @xcite and exact diagonalization @xcite have been shown to give similar results .
however , consistently , the rkky coupling in graphene has been calculated for non - interacting electrons . this is in spite of growing evidence for the.... And you have already written the first three sentences of the full article: figure [ fig : bulk ] shows the magnitude of the rkky coupling as function of impurity distance @xmath21 along both the zigzag ( a ) and armchair directions ( b ) of the graphene lattice for several values of @xmath22 . the rkky coupling in the large @xmath21-limit for non - interacting graphene is @xmath23/|{\bf r}|^3 $ ] with @xmath24 for a - a sublattice coupling , i.e. for impurities on the same sublattice , ( black ) and @xmath25 and three times larger for a - b ( or different ) sublattice coupling ( red ) @xcite . here @xmath26 is the reciprocal vector for the dirac points .. Please generate the next two sentences of the article
apart from minor effects due to a small @xmath21 , these results are displayed in the lowest black and red curves in fig . [ fig : bulk ] .
3,790
Suppose that you have an abstract for a scientific paper: when comparing new wireless technologies , it is common to consider the effect that they have on the capacity of the network ( defined as the maximum number of simultaneously satisfiable links ) . for example , it has been shown that giving receivers the ability to do interference cancellation , or allowing transmitters to use power control , never decreases the capacity and can in certain cases increase it by @xmath0 , where @xmath1 is the ratio of the longest link length to the smallest transmitter - receiver distance and @xmath2 is the maximum transmission power . but there is no reason to expect the optimal capacity to be realized in practice , particularly since maximizing the capacity is known to be np - hard . in reality , we would expect links to behave as self - interested agents , and thus when introducing a new technology it makes more sense to compare the values reached at game - theoretic equilibria than the optimum values . in this paper we initiate this line of work by comparing various notions of equilibria ( particularly nash equilibria and no - regret behavior ) when using a supposedly better " technology . we show a version of braess s paradox for all of them : in certain networks , upgrading technology can actually make the equilibria _ worse _ , despite an increase in the capacity . we construct instances where this decrease is a constant factor for power control , interference cancellation , and improvements in the sinr threshold ( @xmath3 ) , and is @xmath4 when power control is combined with interference cancellation . however , we show that these examples are basically tight : the decrease is at most @xmath5 for power control , interference cancellation , and improved @xmath3 , and is at most @xmath6 when power control is combined with interference cancellation . . 
And you have already written the first three sentences of the full article: due to the increasing use of wireless technology in communication networks , there has been a significant amount of research on methods of improving wireless performance . while there are many ways of measuring wireless performance , a good first step ( which has been extensively studied ) is the notion of _ capacity_. given a collection of communication links , the capacity of a network is simply the maximum number of simultaneously satisfiable links . this can obviously depend on the exact model of wireless communication that we are using , but is clearly an upper bound on the `` usefulness '' of the network .. Please generate the next two sentences of the article
there has been a large amount of research on analyzing the capacity of wireless networks ( see e.g. @xcite ) , and it has become a standard way of measuring the quality of a network . because of this , when introducing a new technology it is interesting to analyze its effect on the capacity . for example , we know that in certain cases giving transmitters the ability to control their transmission power can increase the capacity by @xmath4 or @xmath7 @xcite , where @xmath1 is the ratio of the longest link length to the smallest transmitter - receiver distance , and can clearly never decrease the capacity . however , while the capacity might improve , it is not nearly as clear that the _ achieved _ capacity will improve . after all , we do not expect our network to actually have performance that achieves the maximum possible capacity .
3,791
Suppose that you have an abstract for a scientific paper: the background due to the direct diffractive dissociation of the photon into the @xmath0-pair to the `` elastic '' diffractive @xmath1-meson production in electron - proton collisions is calculated . at large @xmath2 the interference between resonant and non - resonant @xmath3 production changes the @xmath4 ratio with the mass of the @xmath5 ( i.e. @xmath1-meson ) state . @xmath6 * in the * @xmath1 * -meson diffractive electroproduction . * + m.g.ryskin and yu.m.shabelski + petersburg nuclear physics institute , + gatchina , st.petersburg 188350 russia + e - mail : @xmath7 [email protected] + e - mail : @xmath7 [email protected] + . And you have already written the first three sentences of the full article: it was noted many years ago that the form of the @xmath8-meson peak is distorted by the interference between resonant and non - resonant @xmath0 production . for the case of `` elastic '' @xmath1 photoproduction the effect was studied by p.söding in @xcite and s.drell @xcite ( who considered the possibility to produce the pion beam via the @xmath9 process ) . at high energies the main ( and the only ) source of background is the drell - hiida - deck process @xcite ( see fig .. Please generate the next two sentences of the article
the incoming photon fluctuates into the pion pair and then @xmath10-elastic scattering takes place . thus the amplitude for the background may be written in terms of the pion - proton cross section .
3,792
Suppose that you have an abstract for a scientific paper: we have calculated the tsallis entropy and fisher information matrix ( entropy ) of spatially - correlated nonextensive systems , by using an analytic non - gaussian distribution obtained by the maximum entropy method . effects of the correlated variability on the fisher information matrix are shown to be different from those on the tsallis entropy . the fisher information is increased ( decreased ) by a positive ( negative ) correlation , whereas the tsallis entropy is decreased with increasing an absolute magnitude of the correlation independently of its sign . this fact arises from the difference in their characteristics . it implies from the cramér - rao inequality that the accuracy of unbiased estimate of fluctuation is improved by the negative correlation . a critical comparison is made between the present study and previous ones employing the gaussian approximation for the correlated variability due to multiplicative noise . * effects of correlated variability on information entropies + in nonextensive systems * hideo hasegawa _ department of physics , tokyo gakugei university + koganei , tokyo 184 - 8501 , japan _ ( ) pacs number(s ) : 05.70.-a,05.10.gg,05.45.-a _ key words _ : tsallis entropy , fisher information , correlated variability , nonextensive systems . And you have already written the first three sentences of the full article: it is well known that the tsallis entropy and fisher information entropy ( matrix ) are very important quantities expressing information measures in nonextensive systems . the tsallis entropy for @xmath0-unit nonextensive system is defined by @xcite-@xcite @xmath1 with @xmath2^q \:\pi_i d x_i , \label{eq : a2}\end{aligned}\ ] ] where @xmath3 is the entropic index ( @xmath4 ) , and @xmath5 denotes the probability distribution of @xmath0 variables @xmath6 .
in the limit of @xmath7 , the tsallis entropy reduces to the boltzmann - gibbs - shannon entropy given by @xmath8 the boltzmann - gibbs - shannon entropy is extensive in the sense that for a system consisting of @xmath0 independent but equivalent subsystems , the total entropy is a sum of constituent subsystems : @xmath9 . in contrast , the tsallis entropy is nonextensive : @xmath10 for @xmath11 , and @xmath12 expresses the degree of the nonextensivity of a given system .. Please generate the next two sentences of the article
the tsallis entropy is a basis of the nonextensive statistical mechanics , which has been successfully applied to a wide class of systems including physics , chemistry , mathematics , biology , and others @xcite . the fisher information matrix provides us with an important measure on information .
3,793
Suppose that you have an abstract for a scientific paper: thin liquid films with floating active protein machines are considered . cyclic mechanical motions within the machines , representing microscopic swimmers , lead to molecular propulsion forces applied to the air - liquid interface . we show that , when the rate of energy supply to the machines exceeds a threshold , the flat interface becomes linearly unstable . as the result of this instability , the regime of interface turbulence , characterized by irregular traveling waves and propagating machine clusters , is established . numerical investigations of this nonlinear regime are performed . conditions for the experimental observation of the instability are discussed . . And you have already written the first three sentences of the full article: molecular machines are protein molecules which can transform chemical energy into ordered internal mechanical motions . the classical examples of such machines are molecular motors kinesin and myosin , where internal mechanical motions are used to transport cargo along microtubules and filaments . many enzymes operate as machines , using internal conformational motions to facilitate chemical reactions .. Please generate the next two sentences of the article
other kinds of machines , operating as ion pumps or involved in genetic processes , are also known . moreover , artificial nonequilibrium nanodevices , similar to protein machines , are being developed @xcite .
3,794
Suppose that you have an abstract for a scientific paper: the method of matched asymptotic expansions is applied to the problem of a collisionless plasma generated by uv illumination localized in a central part of the plasma in the limiting case of small debye length @xmath0 . a second - approximation asymptotic solution is found for the double layer positioned at the boundary of the illuminated region and for the un - illuminated plasma for the plane geometry . numerical calculations for different values of @xmath0 are reported and found to confirm the asymptotic results . the net integral space charge of the double layer is asymptotically small , although in the plane geometry it is just sufficient to shield the ambipolar electric field existing in the illuminated region and thus to prevent it from penetrating into the un - illuminated region . the double layer has the same mathematical nature as the intermediate transition layer separating an active plasma and a collisionless sheath , and the underlying physics is also the same . in essence , the two layers represent the same physical object : a transonic layer . . And you have already written the first three sentences of the full article: in the first part of this work @xcite a collisionless plasma , generated by uv illumination localized in a central part of the plasma , was analyzed . the ions were assumed to be cold and the fluid description was used . both plane and cylindrical geometries were treated . an approximate analytical solution was found under the approximation of quasi - neutrality and the exact solution was computed numerically for one value of the debye length @xmath0 for each geometry , this value being much smaller than widths of both illuminated and un - illuminated regions .. Please generate the next two sentences of the article
it was found that the ions generated in the illuminated region are accelerated up to approximately the bohm speed inside the illuminated region . in plane geometry , the ions flow across the un - illuminated region towards the near - wall positive space - charge sheath with a speed which is virtually constant and slightly exceeds the bohm speed . in cylindrical geometry , the ions continue to be accelerated in the un - illuminated region and enter the near - wall space - charge sheath with a speed significantly exceeding the bohm speed . in both geometries , a double layer forms where the illuminated and un - illuminated regions meet .
3,795
Suppose that you have an abstract for a scientific paper: a mechanical model of swimming and flying in an incompressible viscous fluid in the absence of gravity is studied on the basis of assumed equations of motion . the system is modeled as an assembly of rigid spheres subject to elastic direct interactions and to periodic actuating forces which sum to zero . hydrodynamic interactions are taken into account in the virtual mass matrix and in the friction matrix of the assembly . an equation of motion is derived for the velocity of the geometric center of the assembly . the mean power is calculated as the mean rate of dissipation . the full range of viscosity is covered , so that the theory can be applied to the flying of birds , as well as to the swimming of fish or bacteria . as an example a system of three equal spheres moving along a common axis is studied . . And you have already written the first three sentences of the full article: the swimming of fish and the flying of birds continue to pose challenging theoretical problems . the physics of bird flight was first studied in detail by otto lilienthal in the nineteenth century @xcite . since then , significant progress has been made in many years of dedicated research @xcite-@xcite .. Please generate the next two sentences of the article
the goal of theory is to calculate the time - averaged speed and power for given periodic shape variations of the body , at least for a simple model system . it is assumed that the motion of the fluid is well described by the navier - stokes equations for an incompressible viscous fluid . on average over a period the force exerted by the body on the fluid vanishes , so that thrust and drag cancel . in early work by lighthill @xcite and wu @xcite
3,796
Suppose that you have an abstract for a scientific paper: in this note we discuss the possibility of defining a space - time with a dsr based approach . we show that the strategy of defining a non linear realization of the lorentz symmetry with a consistent vector composition law can not be reconciled with the extra request of an invariant length ( time ) scale . the latter request forces us to abandon the group structure of the translations and leaves a space - time structure where points with relative distances smaller than or equal to the invariant scale can not be unambiguously defined . . And you have already written the first three sentences of the full article: it is widely believed that the space - time , where physical processes and measurements take place , might have a structure different from a continuous and differentiable manifold , when it is probed at the planck length @xmath0 . for example , the space - time could have a foamy structure @xcite , or it could be non - commutative in a sense inspired by string theory results @xcite or in the sense of @xmath1- minkowski approach @xcite . if this happens in the space - time , in the momentum space there must also be a scale , let us say @xmath2 , that signals this change of structure of the space - time , even if the interplay between length and momentum ( @xmath3 ) will presumably change when we approach such high energy scales .. Please generate the next two sentences of the article
one could argue that , if the planck length gives a limit at which one expects that quantum gravity effects become relevant , then it would be independent from observers , and one should look for symmetries that reflect this property . such an argument gave rise to the so called dsr proposals , that is , a deformation of the lorentz symmetry ( in the momentum space ) with two invariant scales : the speed of light @xmath4 and @xmath2 ( or @xmath5 ) @xcite .
3,797
Suppose that you have an abstract for a scientific paper: we discuss decrease of coherence in a massive system due to the emission of gravitational waves . in particular we investigate environmental gravitational decoherence in the context of an interference experiment . the time - evolution of the reduced density matrix is solved analytically using the path - integral formalism . furthermore , we study the impact of a tensor noise onto the coherence properties of massive systems . we discuss that a particular choice of tensor noise shows similarities to a mechanism proposed by diósi and penrose . . And you have already written the first three sentences of the full article: in recent years , there has been a growing interest in testing gravitational decoherence or possible gravitational effects on quantum mechanics ( qm ) in condensed matter and quantum - optical systems @xcite . decoherence can be studied in the framework of quantum mechanics and it does not require any additional assumptions . the dynamics of a system which is coupled with the environment follows from the schrödinger equation .. Please generate the next two sentences of the article
an observer who has only access to system degrees of freedom observes a nonunitary dynamics which can be obtained by tracing out the environmental degrees of freedom from the total density matrix . this averaging generically reduces the coherence of the reduced density matrix describing the system .
3,798
Suppose that you have an abstract for a scientific paper: being inspired by a recent study [ v. dimitriadis et al . phys . rev . b * 92 * , 064420 ( 2015 ) ] , we study the finite temperature magnetic properties of the spherical nanoparticles with core - shell structure including quenched ( i ) surface and ( ii ) interface nonmagnetic impurities ( static holes ) as well as ( iii ) roughened interface effects . the particle core is composed of ferromagnetic spins , and it is surrounded by a ferromagnetic shell . by means of monte carlo simulation based on an improved metropolis algorithm , we implement the nanoparticles using classical heisenberg hamiltonians . particular attention has also been devoted to elucidate the effects of the particle size on the thermal and magnetic phase transition features of these systems . for nanoparticles with imperfect surface layers , it is found that bigger particles exhibit lower compensation point which decreases gradually with increasing amount of vacancies , and vanishes at a critical value . in view of nanoparticles with diluted interface , our monte carlo simulation results suggest that there exists a region in the disorder spectrum where compensation temperature linearly decreases with decreasing dilution parameter . for nanoparticles with roughened interface , it is observed that the degree of roughness does not play any significant role on the variation of both the compensation point and critical temperature . however , the low temperature saturation magnetizations of the core and shell interface regions sensitively depend on the roughness parameter . . And you have already written the first three sentences of the full article: when the size of a magnetic system is reduced to a characteristic length , the system has a bigger surface to volume ratio giving rise to a great many outstanding thermal and magnetic properties compared to the conventional bulk systems @xcite . 
advanced functional magnetic nanostructures in different geometries , such as nanowires , nanotubes , nanospheres , nanocubes are at the center of interest because of their technological @xcite and scientific importance as well as biomedical applications @xcite . from the experimental point of view , many studies have been carried out to discuss and understand the origin of the fascinating physical properties observed in magnetic nanoparticles @xcite . for example ,. Please generate the next two sentences of the article
recently , multi - functional core - shell nanowires have been synthesized by a facile low - cost fabrication process @xcite . based on this study , it has been shown that a multidomain state at remanence can be obtained , which is an attractive feature for the biomedical applications . in another interesting study
3,799
Suppose that you have an abstract for a scientific paper: motivated by the presence of numerous dark matter clumps in the milky way s halo as expected from the cold dark matter cosmological model , we conduct numerical simulations to examine the heating of the disk . we construct an initial galaxy model in equilibrium , with a stable thin disk . the disk interacts with dark matter clumps for about 5 gyr . three physical effects are examined : first the mass spectrum of the dark matter clumps , second the initial thickness of the galactic disk , and third the spatial distribution of the clumps . we find that the massive end of the mass spectrum determines the amount of disk heating . thicker disks suffer less heating . there is a certain thickness at which the heating owing to the interaction with the clumps saturates . we also find that the heating produced by the model which mimics the distribution found in standard cdm cosmology is significant and too high to explain the observational constraints . on the other hand , our model that corresponds to the clump distribution in a @xmath0cdm cosmology produces no significant heating . this result suggests that the @xmath0cdm cosmology is preferable with respect to the standard cdm cosmology in explaining the thickness of the milky way . . And you have already written the first three sentences of the full article: hierarchical clustering governed by cold dark matter ( cdm ) is widely believed as a cosmological scenario which is responsible for the growth of the structures in the universe . according to the hierarchical scenario , small dark matter halos should collapse earlier , but later fall into larger structures . the process of smaller halos being assembled into a larger halo does not always destroy the smaller ones , thus hierarchical structures are seen in many objects , such as clusters of galaxies .. Please generate the next two sentences of the article
recent high - resolution simulations have successfully shown that hundreds of galaxy - size dm halos survive in clusters of galaxies @xcite . a remarkable outcome of the high - resolution cosmological simulation in standard cdm model by @xcite even shows that survival of substructures or satellites occurs not only on cluster scales , but also on galactic scales .