id (int64, 0–203k) | input (string, 66–4.29k chars) | output (string, 0–3.83k chars) |
---|---|---|
11,000 |
Suppose that you have an abstract for a scientific paper: in the conference presentation we have reviewed the theory of non - gaussian geometrical measures for the 3d cosmic web of the matter distribution in the universe and 2d sky data , such as cosmic microwave background ( cmb ) maps that was developed in a series of our papers .
the theory leverages symmetry of isotropic statistics such as minkowski functionals and extrema counts to develop post- gaussian expansion of the statistics in orthogonal polynomials of invariant descriptors of the field , its first and second derivatives .
the application of the approach to 2d fields defined on a spherical sky was suggested , but never rigorously developed . in this paper
we present such development treating effects of the curvature and finiteness of the spherical space @xmath0 exactly , without relying on the flat - sky approximation .
we present minkowski functionals , including euler characteristic and extrema counts to the first non - gaussian correction , suitable for weakly non - gaussian fields on a sphere , of which cmb is the prime example .
random fields are ubiquitous phenomena in physics appearing in areas ranging from turbulence to the landscape of string theories . in cosmology , the sky - maps of the polarized cosmic microwave background ( cmb ) radiation a focal topic of current research is a prime example of such 2d random fields , specified on @xmath0 spherical space .
modern view of the cosmos , developed primarily through statistical analysis of these fields , points to a universe that is statistically homogeneous and isotropic with a hierarchy of structures arising from small gaussian fluctuations of quantum origin .
while the gaussian limit provides the fundamental starting point in the study of random fields @xcite , non - gaussian features of the cmb fields are of great interest .
indeed , cmb inherits a high level of gaussianity from initial fluctuations , but small non - gaussian deviations may provide a unique window into the details of processes in the early universe .
....
And you have already written the first three sentences of the full article: the statistics of minkowski functionals , including the euler number , as well as extrema counts requires the knowledge of the one - point joint probability distribution function ( jpdf ) @xmath1 of the field @xmath2 ( assumed to have zero mean ) , its first , @xmath3 , and second , @xmath4 , derivatives .
let us consider a random field @xmath2 defined on a 2d sphere @xmath0 of radius @xmath5 represented as the expansion in spherical harmonics @xmath6 where for the gaussian statistically homogeneous and isotropic field random coefficients @xmath7 are uncorrelated with @xmath8-independent variances @xmath9 of each harmonic @xmath10 the variance of the field is then given by @xmath11 when considering derivatives in the curved space , we use covariant derivatives @xmath12 , @xmath13 , @xmath14 , @xmath15 , @xmath16 where it will be seen immediately that mixed version for the second derivatives is the most appropriate choice .
the 2d rotation - invariant combinations of derivatives are @xmath17 where @xmath18 is linear in the field and @xmath19 and @xmath20 are quadratic , always positive , quantities ..
Please generate the next two sentences of the article
|
the derivatives are also random gaussian variables , which variances are easily computed @xmath21 where the fundamental difference between a sphere and the 2d cartesian space is in the fact that @xmath22 . among the cross - correlations the only non - zero one is between the field and its laplacian @xmath23 .
from now on we rescale all random quantities by their variances , so that rescaled variables have @xmath24 . introducing @xmath25 ( where the spectral parameter @xmath26 ) leads to the following simple jpdf for the gaussian 2d fields @xmath27 \ , .
|
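As an illustrative note on the sample above (row 11,000): the setup it describes behind the @xmath placeholders is the textbook spherical-harmonic expansion of an isotropic Gaussian field. The following is a generic sketch with assumed symbols (f, a_{\ell m}, Y_{\ell m}, C_\ell, \sigma_0) and the usual unit-sphere normalisation, not the paper's exact equations.

```latex
% Generic sketch of the expansion described in row 11,000 (assumed notation).
\[
f(\theta,\varphi) \;=\; \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\theta,\varphi),
\qquad
\langle a_{\ell m}\, a^{*}_{\ell' m'} \rangle \;=\; C_{\ell}\,\delta_{\ell\ell'}\,\delta_{m m'},
\]
\[
\sigma_0^{2} \;\equiv\; \langle f^{2} \rangle \;=\; \frac{1}{4\pi}\sum_{\ell}\,(2\ell+1)\,C_{\ell}.
\]
```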
11,001 |
Suppose that you have an abstract for a scientific paper: we study the hydrogen lyman emission in various solar features now including ly-@xmath0 observations free from geocoronal absorption and investigate statistically the imprint of flows and of the magnetic field on the line profile and radiance distribution . as a new result
, we found that in ly-@xmath0 rasters locations with higher opacity cluster in the cell interior , while the network has a trend to flatter profiles .
even deeper self reversals and larger peak distances were found in coronal hole spectra .
we also compare simultaneous ly-@xmath0 and ly-@xmath1 profiles .
there is an obvious correspondence between asymmetry and redshift for both lines , but , most surprisingly , the asymmetries of ly-@xmath0 and ly-@xmath1 are opposite . we conclude that in both cases downflows determine the line profile , in case of ly-@xmath0 by absorption and in the case of ly-@xmath1 by emission .
our results show that the magnetically structured atmosphere plays a dominating role in the line formation and indicate the presence of a persisting downflow at both footpoints of closed loops .
we claim that this is the manifestation of a fundamental mass transportation process , which foukal back in 1978 introduced as the coronal convection. .
And you have already written the first three sentences of the full article: the basic idea of this communication is to show that the well - known net redshift of transition region ( tr ) emission and the new observations of the ly-@xmath0 and ly-@xmath1 profiles obtained by soho - sumer are different manifestations of the same fundamental massflow process .
it is known since long that in tr emission all lines appear with a net redshift , which peaks at a temperature around log@xmath2/k=5 ( e.g. , brekke 1997 ) .
dammasch et al ..
Please generate the next two sentences of the article
|
( 2008 ) have demonstrated that this redshift can be explained by the fact that closed loops higher up in the tr have both footpoints redshifted .
such coronal downflows have also been reported by hinode ( tripathi 2009 , del zanna 2008 ) .
|
11,002 |
Suppose that you have an abstract for a scientific paper: we investigate how significant the spiral structure is on calculations concerning radiative transfer in dusty spiral galaxies seen edge - on . the widely adopted exponential disk model ( i.e. both the stars and the dust are distributed exponentially in the radial direction and also perpendicular to the plane of the disk )
is now subject to a detailed comparison with a realistic model that includes spiral structure for the stars and the dust in the disk .
in particular , model images of galaxies with logarithmic spiral arms are constructed , such that the azimuthally averaged disk is exponential in radius and in height , as the observations suggest .
then , pure exponential disk models ( i.e. with no spiral structure ) are used to fit the edge - on appearance of the model images . as a result ,
the parameters derived after the fit are compared to the real values used to create the spiral - structured images .
it turns out that the plain exponential disk model is able to give a very good description of the galactic disk with its parameters varying only by a few percent from their true values . .
And you have already written the first three sentences of the full article: modeling the dust and stellar content of spiral galaxies is a very crucial procedure needed for the correct interpretation of the observations .
the amount of interstellar dust embedded inside spiral galaxies , the way that dust is distributed within spiral galaxies and also the extinction effects of the dust to the starlight are some of the questions that can be answered by performing radiative transfer modeling of individual spiral galaxies .
one very important thing that needs consideration when doing such analysis is the right choice of the stellar and dust distributions . in particular ,.
Please generate the next two sentences of the article
|
the galactic disk is a quite complex system , where stars and dust are mixed together usually in a spiral formation .
for this reason , one has to use realistic distributions able to reproduce quite accurately the observations . on the other hand , simple mathematical expressions for these distributions
|
11,003 |
Suppose that you have an abstract for a scientific paper: it is well known that the density and anisotropy profile in the inner regions of a stellar system with positive phase - space distribution function are not fully independent . here
we study the interplay between density profile and orbital anisotropy at large radii in physically admissible ( consistent ) stellar systems .
the analysis is carried out by using two - component ( , @xmath0 ) spherical self - consistent galaxy models , in which one density distribution follows a generalized @xmath1 profile with external logarithmic slope @xmath2 , and the other a standard @xmath0 profile ( with external slope 4 ) .
the two density components have different `` core '' radii , the orbital anisotropy is controlled with the osipkov - merritt recipe , and for simplicity we assume that the mass of the @xmath0 component dominates the total potential everywhere . the necessary and sufficient conditions for phase - space consistency
are determined analytically , also in presence of a dominant massive central black hole , and the analytical phase - space distribution function of ( , 1 ) models , and of models with a central black hole , is derived for @xmath3 .
it is found that the density slope in the external regions of a stellar system can play an important role in determining the amount of admissible anisotropy : in particular , for fixed density slopes in the central regions , systems with a steeper external density profile can support more radial anisotropy than externally flatter models .
this is quantified by an inequality formally identical to the `` cusp slope - central anisotropy '' theorem ( an & evans 2006 ) , relating at all radii ( and not just at the center ) the density logarithmic slope and the anisotropy indicator in all osipkov - merritt systems .
stellar dynamics galaxies : ellipticals dark matter
black holes .
And you have already written the first three sentences of the full article: observationally it is well established that elliptical galaxies have dark matter halos , and also host central supermassive black holes .
these empirical facts motivate the study of multi - component dynamical models .
when studying dynamical models of stellar systems ( single or multi - component ) , the minimal requirement to be met by a physically acceptable model is the positivity of the phase - space distribution function ( df ) of each distinct component ..
Please generate the next two sentences of the article
|
a model satisfying this essential requirement ( which is much weaker than stability , but stronger than the fact that the jeans equations have a physically acceptable solution ) is called _ consistent _ ; moreover , when the total gravitational potential is determined by the total density profile through the poisson equation , the model is called _ self consistent_. in other words , we call self consistent a consistent self - gravitating system .
two general strategies can be used to construct a ( self ) consistent model , or check whether a proposed model is ( self ) consistent : they are commonly referred to as the `` @xmath4to@xmath5 '' and the `` @xmath5to@xmath4 '' approaches , where @xmath4 is the model df ( e.g. , see bertin 2000 , binney & tremaine 2008 ) .
|
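For reference on the sample above (row 11,003): the "cusp slope-central anisotropy" theorem it cites has a compact central form. The sketch below gives the standard statement usually attributed to An & Evans (2006), written with assumed symbols; the paper's result is the radius-dependent generalisation of this inequality, which is not reproduced here.

```latex
% Standard central form of the cusp slope-central anisotropy inequality (assumed notation).
\[
\gamma_0 \;\equiv\; -\lim_{r\to 0}\frac{d\ln\rho}{d\ln r} \;\geq\; 2\,\beta_0,
\qquad
\beta(r) \;\equiv\; 1 - \frac{\sigma_\theta^{2}(r)+\sigma_\varphi^{2}(r)}{2\,\sigma_r^{2}(r)}.
\]
```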
11,004 |
Suppose that you have an abstract for a scientific paper: in this paper we prove lusztig s conjecture on based ring for an affine weyl group of type @xmath0 .
= -2truecm .
And you have already written the first three sentences of the full article: the kazhdan - lusztig theory deeply increases our understanding of coxeter groups and their representations and their role in lie representation theory .
the central concepts in kazhdan - lusztig theory include kazhdan - lusztig polynomial , kazhdan - lusztig basis , cell , based ring .
kazhdan - lusztig polynomials play an essential role in understanding certain remarkable representations in lie theory , for instances , the representations of quantum groups at roots of 1 , the modular representations of algebraic groups , the representations of kac - moody algebras ..
Please generate the next two sentences of the article
|
kazhdan - lusztig basis and cells are very useful in understanding structure and representations of coxeter groups and their hecke algebras .
the based ring of a two - sided cell of certain coxeter groups is defined through kazhdan - lusztig basis by lusztig in [ l5 ] .
|
11,005 |
Suppose that you have an abstract for a scientific paper: the sparse beyesian learning ( also referred to as bayesian compressed sensing ) algorithm is one of the most popular approaches for sparse signal recovery , and has demonstrated superior performance in a series of experiments .
nevertheless , the sparse bayesian learning algorithm has computational complexity that grows exponentially with the dimension of the signal , which hinders its application to many practical problems even with moderately large data sets . to address this issue ,
in this paper , we propose a computationally efficient sparse bayesian learning method via the generalized approximate message passing ( gamp ) technique .
specifically , the algorithm is developed within an expectation - maximization ( em ) framework , using gamp to efficiently compute an approximation of the posterior distribution of hidden variables .
the hyperparameters associated with the hierarchical gaussian prior are learned by iteratively maximizing the q - function which is calculated based on the posterior approximation obtained from the gamp .
numerical results are provided to illustrate the computational efficacy and the effectiveness of the proposed algorithm .
sparse bayesian learning , generalized approximate message passing , expectation - maximization . .
And you have already written the first three sentences of the full article: compressed sensing is a recently emerged technique for signal sampling and data acquisition which enables to recover sparse signals from much fewer linear measurements @xmath0 where @xmath1 is the sampling matrix with @xmath2 , @xmath3 denotes an @xmath4-dimensional sparse signal , and @xmath5 denotes the additive noise . such a problem has been extensively studied and a variety of algorithms , e.g. the orthogonal matching pursuit ( omp ) algorithm @xcite , the basis pursuit ( bp ) method @xcite , and the iterative reweighted @xmath6 and @xmath7 algorithms @xcite , were proposed . besides these methods , another important class of compressed sensing techniques that have received significant attention are bayesian methods , among which sparse bayesian learning ( also referred to as bayesian compressed sensing ) is considered as one of the most popular compressed sensing methods . sparse bayesian learning ( sbl )
was originally proposed by tipping in his pioneering work @xcite to address the regression and classification problems . later on in @xcite , sparse bayesian learning was adapted for sparse signal recovery , and demonstrated superiority over the greedy methods and the basis pursuit method in a series of experiments . despite its superior performance , a major drawback of the sparse bayesian learning method is that it requires to compute an inverse of an @xmath8 matrix at each iteration , and thus has computational complexity that grows exponentially with the dimension of the signal .
this high computational cost prohibits its application to many practical problems with even moderately large data sets . in this paper.
Please generate the next two sentences of the article
|
, we develop a computationally efficient generalized approximate message passing ( gamp ) algorithm for sparse bayesian learning .
gamp , introduced by donoho _ et .
|
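To make the computational bottleneck described in row 11,005 concrete, here is a minimal NumPy sketch of the classical EM update for sparse Bayesian learning (the textbook algorithm, not the paper's GAMP-based variant). The per-iteration N x N matrix inverse is exactly the cost the quoted abstract says the proposed method avoids; variable names, the fixed noise variance, and the iteration count are illustrative assumptions.

```python
import numpy as np

def sbl_em(A, y, noise_var=1e-2, n_iter=50):
    """Classical sparse Bayesian learning via EM (illustrative sketch only).

    Each iteration inverts an N x N matrix, which is the cost that
    message-passing variants such as the one in the quoted abstract avoid.
    """
    M, N = A.shape
    alpha = np.ones(N)  # per-coefficient precisions of the Gaussian prior
    for _ in range(n_iter):
        # E-step: posterior covariance and mean of x given current hyperparameters.
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(alpha))  # N x N inverse
        mu = Sigma @ A.T @ y / noise_var
        # M-step: EM update of the hyperparameters.
        alpha = 1.0 / (mu**2 + np.diag(Sigma))
    return mu

# Toy usage: recover a 5-sparse signal from 60 noisy random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = sbl_em(A, y)
```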
11,006 |
Suppose that you have an abstract for a scientific paper: a description is given of the method used to extract quasar host - galaxy parameters from the deep hubble space telescope ( hst ) quasar images presented by mclure et al .
( 1999 ) and dunlop et al .
( 2000 ) .
we then give the results of extensive testing of this technique on a wide range of simulated quasar+host combinations spanning the redshift range of our hst study ( @xmath0 ) .
these simulations demonstrate that , when applied to our deep hst images , our method of analysis can easily distinguish the morphological type of a given host galaxy , as well as determining its scalelength , luminosity , axial ratio and position angle to within an accuracy of a few percent .
we also present new infrared tip - tilt images of 4 of the most luminous quasars in our hst sample , along with the outcome of modelling these data in a similar manner .
the results provide further confidence in the accuracy of the derived host - galaxy scalelengths , and allow accurate determination of @xmath1 colours for this subset of sources .
all 4 of these quasar host galaxies have very similar red colours , @xmath2 , indicative of a well - evolved stellar population .
galaxies : active galaxies : photometry infrared : galaxies quasars : general .
And you have already written the first three sentences of the full article: in two companion papers ( mclure et al . 1999 ; dunlop et al . 2000 ) we present initial and final results from a deep hubble space telescope ( hst ) imaging survey of radio - quiet quasars ( rqqs ) , radio - loud quasars ( rlqs ) and radio galaxies ( rgs ) .
the results presented in these papers were derived from the hst images using a two - dimensional modelling technique developed to cope with such complications as central image saturation , the undersampled nature of wide field ( wf ) camera images , accurate image centering , and the precise form of the hst point spread function ( psf ) .
the primary purpose of this paper is to provide a description of this image analysis method , and to present the results of extensive testing on simulated active - nucleus+host - galaxy images constructed to span the full range of parameter space , and to mimic as closely as possible the real hst data ..
Please generate the next two sentences of the article
|
these tests on simulated data were central to the development of our modelling technique , and also provide a means of estimating the typical errors in the derived host galaxy parameters as a function of redshift .
we also present new data in the form of infrared tip - tilt images of 4 of the most luminous quasars in our hst - imaging sample , and give the results of applying our two - dimensional modelling technique to these new @xmath3-band data .
|
11,007 |
Suppose that you have an abstract for a scientific paper: we study how low - energy charge carriers scatter off periodic and linear graphene grain boundaries oriented along the zigzag direction with a periodicity three times greater than that of pristine graphene .
these defects map the two dirac points into the same position , and thus allow for intervalley scattering to occur . starting from graphene s first - neighbor tight - binding model
we show how can we compute the boundary condition seen by graphene s massless dirac fermions at such grain boundaries .
we illustrate this procedure for the 3-periodic pentagon - only grain boundary , and then work out the low - energy electronic scattering off this linear defect .
we also compute the effective generalized potential seen by the dirac fermions at the grain boundary region . .
And you have already written the first three sentences of the full article: chemical vapor deposition ( cvd ) of graphene on metal surfaces@xcite is currently viewed as one of the most promising scalable methods for economically producing large and abundant high - quality monolayer graphene sheets .
it is thus greatly important to fully understand and control the behavior of electrons on this form of graphene .
cvd graphene , as any other solid grown by chemical vapor deposition , is generally a polycrystal composed by several grains with distinct crystallographic orientations ..
Please generate the next two sentences of the article
|
these grains are separated by grain boundaries ( gbs),@xcite which due to the @xmath0 bonding structure of carbon atoms in graphene , are typically made of pentagonal , heptagonal and octagonal rings of carbon atoms.@xcite grain boundaries generally intercept each other at random angles , being neither periodic nor perfect straight lines . the properties of cvd graphene flakes are strongly influenced by the quantity , distribution and microscopic character of its grain boundaries.@xcite each type of grain boundary exhibits distinctive chemical,@xcite mechanical@xcite and electronic@xcite properties .
this is particularly evident in what concerns the electronic transport in cvd graphene .
|
11,008 |
Suppose that you have an abstract for a scientific paper: stereoscopic spectral imaging is an observing technique that affords rapid acquisition of limited spectral information over an entire image plane simultaneously .
light from a telescope is dispersed into multiple spectral orders , which are imaged separately , and two or more of the dispersed images are combined using an analogy between the @xmath0 spectral data space and conventional @xmath1 three - space . because no photons are deliberately destroyed during image acquisition ,
the technique is much more photon - efficient in some observing regimes than existing techniques such as scanned - filtergraph or scanned - slit spectral imaging .
hybrid differential stereoscopy , which uses a combination of conventional cross - correlation stereoscopy and linear approximation theory to extract the central wavelength of a spectral line , has been used to produce solar stokes - v ( line - of - sight ) magnetograms in the 617.34 nm fe i line , and more sophisticated inversion techniques are currently being used to derive doppler and line separation data from euv images of the solar corona collected in the neighboring lines of he - ii and si - xi at 30.4 nm . in this paper
we develop an analytic _ a priori _ treatment of noise in the line shift signal derived from hybrid differential stereoscopy .
we use the analysis to estimate the noise level and measurement precision in a high resolution solar magnetograph based on stereoscopic spectral imaging , compare those estimates to a test observation made in 2003 , and discuss implications for future instruments . .
And you have already written the first three sentences of the full article: spectral imaging in general , and solar spectral imaging in particular , suffer from a fundamental problem in detector physics .
spectral images have three independent variables @xmath0 , while current image detectors only support two independent variables @xmath2 and integrate over wavelength @xmath3 . conventional techniques to overcome this problem generally use time - multiplexing : in filtergraph imaging spectroscopy ,
a narrow band filter is tuned slowly across the spectral range of interest and an image collected at each discrete @xmath3 ; in conventional scanned - slit imaging spectroscopy , the light is passed through a spatial filter ( the slit ) , selecting a single @xmath4 , and the remaining light is dispersed to project @xmath3 onto the detector s @xmath4 axis ..
Please generate the next two sentences of the article
|
even more sophisticated techniques such as fourier imaging spectroscopy use time multiplexing to collect multiple two - dimensional basis images of the three - dimensional data space .
multiplexing in time is photon - inefficient as photons that are not part of the current sample are discarded .
|
11,009 |
Suppose that you have an abstract for a scientific paper: we investigate the properties of the host galaxies of x ray selected ( high frequency peaked ) bl lac objects using a large and homogeneous data set of high spatial resolution @xmath0band observations of 52 bl lacs in the emss and slew samples .
the redshift distribution of the bl lacs ranges from z = 0.04 to [email protected] , with average and median redshifts z = 0.26 and z = 0.24 , respectively .
eight objects are at unknown redshift .
we are able to resolve 45 objects out of the 52 bl lacs .
for all the well resolved sources , we find the host to be a luminous elliptical galaxy . in a few cases a disk
is not ruled out but an elliptical model is still preferred .
the average absolute magnitude of the host galaxies is @xmath2 = [email protected] , while the average scale length of the host is @xmath4r(e)@xmath1 = 9@xmath35 kpc .
there is no difference in the host properties between the emss and slew samples .
we find a good agreement between the results derived by the surveys of wurtz et al .
( ground - based data ) and urry et al .
( hst data ) , and by our new deeper imaging .
the average luminosity of the bl lac hosts is between those of f - r i and f - r ii radio galaxies in govoni et al . ,
supporting the idea that both radio galaxy types could contribute to the parent population .
the bl lac hosts follow the f - p relation for giant ellipticals and exhibit a modest luminosity evolution with redshift .
finally , we find a slight correlation between the nuclear and host luminosity and a bimodal distribution in the nuclear / host luminosity ratio . .
And you have already written the first three sentences of the full article: bl lacertae objects are the most extreme class of active galactic nuclei ( agn ) , exhibiting strong , rapidly variable polarization and continuum emission , and core - dominated radio emission with apparent superluminal motion ( see e.g. kollgaard et al .
1992 for references ) .
these properties have led to the commonly accepted view that bl lacs are dominated by doppler - boosted synchrotron emission from a relativistic jet nearly along our line - of - sight ( blandford & rees 1978 ) ..
Please generate the next two sentences of the article
|
the line emission of bl lacs is absent or weak , making their redshift determination rather difficult . in the current unified models of radio - loud agn ( e.g. urry & padovani 1995 ) ,
bl lacs are identified as low luminosity , core - dominated f - r i ( fanaroff & riley 1974 ) radio galaxies ( rg ) viewed nearly along the axis of the relativistically boosted jet .
|
11,010 |
Suppose that you have an abstract for a scientific paper: given a collection of @xmath0 linear regression problems in @xmath1 dimensions , suppose that the regression coefficients share partially common supports .
this set - up suggests the use of @xmath2-regularized regression for joint estimation of the @xmath3 matrix of regression coefficients .
we analyze the high - dimensional scaling of @xmath4-regularized quadratic programming , considering both consistency rates in @xmath5-norm , and also how the minimal sample size @xmath6 required for performing variable selection grows as a function of the model dimension , sparsity , and overlap between the supports .
we begin by establishing bounds on the @xmath5-error as well sufficient conditions for exact variable selection for fixed design matrices , as well as designs drawn randomly from general gaussian matrices .
our second set of results applies to @xmath7 linear regression problems with standard gaussian designs whose supports overlap in a fraction @xmath8 $ ] of their entries : for this problem class , we prove that the @xmath2-regularized method undergoes a phase transition that is , a sharp change from failure to success characterized by the rescaled sample size @xmath9 .
more precisely , given sequences of problems specified by @xmath10 , for any @xmath11 , the probability of successfully recovering both supports converges to @xmath12 if @xmath13 , and converges to @xmath14 for problem sequences for which @xmath15 .
an implication of this threshold is that use of @xmath16-regularization yields improved statistical efficiency if the overlap parameter is large enough ( @xmath17 ) , but has _
worse _ statistical efficiency than a naive lasso - based approach for moderate to small overlap ( @xmath18 ) .
empirical simulations illustrate the close agreement between these theoretical predictions , and the actual behavior in practice .
these results indicate that some caution needs to be exercised in the application of @xmath4 block regularization : if the data does not match its structure....
And you have already written the first three sentences of the full article: this section contains the proofs of our three theorems .
our proofs are constructive in nature , based on a procedure that constructs pair of matrices @xmath27 and @xmath28 .
the goal of the construction is to show that matrix @xmath29 is an optimal primal solution to the convex program , and that the matrix @xmath30 is a corresponding dual - optimal solution , meaning that it belongs to the sub - differential of the @xmath31-norm ( see lemma [ lemsubdiff ] ) , evaluated at @xmath29 ..
Please generate the next two sentences of the article
|
if the construction succeeds , then the pair @xmath32 acts as a witness for the success of the convex program in recovering the correct signed support in particular , success of the primal - dual witness procedure implies that @xmath29 is the unique optimal solution of the convex program , with its row support contained with @xmath33 . to be clear
, the procedure for constructing this candidate primal - dual solution is _ not _ a practical algorithm ( as it exploits knowledge of the true support sets ) , but rather a proof technique for certifying the correctness of the block - regularized program .
|
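For readers of the sample above (row 11,010): the block-regularized estimator it analyses is conventionally written as the following convex program. The notation (B, X^{(k)}, y^{(k)}, lambda_n) is assumed for illustration and beta^{(k)} denotes the k-th column of B; this is a sketch of the standard l1/l-infinity formulation, not a quotation from the paper.

```latex
% Conventional l1/l-infinity block-regularized least squares (assumed notation).
\[
\widehat{B} \;\in\; \arg\min_{B \in \mathbb{R}^{p \times r}}
\left\{ \frac{1}{2n} \sum_{k=1}^{r} \bigl\| y^{(k)} - X^{(k)} \beta^{(k)} \bigr\|_2^{2}
\;+\; \lambda_n \sum_{j=1}^{p} \max_{1 \le k \le r} \bigl| B_{jk} \bigr| \right\}.
\]
```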
11,011 |
Suppose that you have an abstract for a scientific paper: we present a comprehensive description of the population synthesis code startrack .
the original code has been significantly modified and updated .
special emphasis is placed here on processes leading to the formation and further evolution of compact objects ( white dwarfs , neutron stars , and black holes ) . both single and binary
star populations are considered .
the code now incorporates detailed calculations of all mass - transfer phases , a full implementation of orbital evolution due to tides , as well as the most recent estimates of magnetic braking .
this updated version of startrack can be used for a wide variety of problems , with relevance to many current and planned observatories , e.g. , studies of x - ray binaries ( chandra , xmm - newton ) , gravitational radiation sources ( ligo , lisa ) , and gamma - ray burst progenitors ( hete - ii , swift ) .
the code has already been used in studies of galactic and extra - galactic x - ray binary populations , black holes in young star clusters , type ia supernova progenitors , and double compact object populations . here
we describe in detail the input physics , we present the code calibration and tests , and we outline our current studies in the context of x - ray binary populations . .
And you have already written the first three sentences of the full article: the startrack population synthesis code was initially developed for the study of double compact object mergers in the context of gamma - ray burst ( grb ) progenitors ( belczynski , bulik & rudak 2002b ) and gravitational radiation inspiral sources ( belczynski , kalogera & bulik 2002c , hereafter bkb02 ) .
startrack has undergone major updates and revisions in the last few years . with this code
we are able to evolve isolated ( not dynamically interacting ) single stars and binaries for a wide range of initial conditions ..
Please generate the next two sentences of the article
|
the input physics incorporates our latest knowledge of processes governing stellar evolution , while the most uncertain aspects are parameterized to allow for systematic error analysis . during the code development ,
special emphasis was placed on the compact object populations : white dwarfs ( wds ) , neutron stars ( nss ) , and black holes ( bhs ) .
|
11,012 |
Suppose that you have an abstract for a scientific paper: a simple model based on the maximum energy that an athlete can produce in a small time interval is used to describe the high and long jump .
conservation of angular momentum is used to explain why an athlete should run horizontally to perform a vertical jump .
our results agree with world records . .
And you have already written the first three sentences of the full article: a few years ago , william harris asked how the kinetic energy acquired from running is converted in the long and high jump events.@xcite the question was whether athletes running horizontally can change their velocity into one that forms an angle of 45@xmath0 with the horizontal without changing their speed .
the three negative answers@xcite were that athletes can not convert their initial horizontal velocity into a vertical one,@xcite because it is impossible to generate the necessary power required by the task,@xcite or equivalently , athletes can not sustain the necessary acceleration to acquire the vertical velocity at takeoff.@xcite if the answer were positive , the center of mass of a world class athlete running at 10 m/s and taking off at 45@xmath0 would go a horizontal distance of about 10.2 m . because the athlete s center of mass is forward of the front edge of the runway at takeoff and behind the point where her heels hit the ground at landing ( see sec .
iii ) , the actual total jump length would be about 11 m ..
Please generate the next two sentences of the article
|
for the high jump , the athlete s center of mass would attain a maximum height of 3.5 m .
these results are much greater than actual records .
|
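The figures quoted in row 11,012 (a roughly 10.2 m horizontal range and a roughly 3.5 m peak centre-of-mass height for a 10 m/s take-off at 45 degrees) follow from elementary projectile kinematics. The worked estimate below assumes a round 1 m take-off height for the centre of mass, which is an illustrative figure rather than a value taken from the article.

```latex
% Projectile estimates behind the figures in row 11,012 (1 m take-off CM height is assumed).
\[
R \;=\; \frac{v^{2}\sin 2\theta}{g} \;=\; \frac{(10\ \mathrm{m/s})^{2}\,\sin 90^{\circ}}{9.8\ \mathrm{m/s^{2}}} \;\approx\; 10.2\ \mathrm{m},
\]
\[
h_{\max} \;\approx\; h_{\mathrm{cm}} + \frac{(v\sin 45^{\circ})^{2}}{2g} \;\approx\; 1\ \mathrm{m} + 2.55\ \mathrm{m} \;\approx\; 3.5\ \mathrm{m}.
\]
```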
11,013 |
Suppose that you have an abstract for a scientific paper: we have obtained near - infrared camera and multi - object spectrometer images of 16 radio quiet quasars observed as part of a project to investigate the `` luminosity / host - mass limit . ''
the limit results were presented in mcleod , rieke , & storrie - lombardi ( 1999 ) . in this paper , we present the images themselves , along with 1- and 2-dimensional analyses of the host galaxy properties .
we find that our model - independent 1d technique is reliable for use on ground - based data at low redshifts ; that many radio - quiet quasars live in devaucouleurs - law hosts , although some of the techniques used to determine host type are questionable ; that complex structure is found in many of the hosts , but that there are some hosts that are very smooth and symmetric ; and that the nuclei radiate at @xmath0 of the eddington rate based on the assumption that all galaxies have central black holes with a constant mass fraction of 0.6% . despite targeting hard - to - resolve hosts ,
we have failed to find any that imply super - eddington accretion rates . .
And you have already written the first three sentences of the full article: host galaxy studies got the opportunity for a real boost in february 1997 when the near - infrared camera and multi - object spectrometer ( nicmos ) was installed on the hubble space telescope ( hst ) .
nicmos combines the superb spatial resolution of hst with the benefits that long wavelengths provide for imaging the redder hosts against the overwhelming glare of the bluer quasar nuclei .
we have used nicmos to image 16 radio quiet quasars as part of a project to investigate the `` luminosity / host - mass limit , '' the results of which were presented in mcleod , rieke , & storrie - lombardi ( 1999 ; hereafter mrs ) . in this paper , we present the images themselves , along with 1- and 2-dimensional analyses of the host galaxy properties ..
Please generate the next two sentences of the article
|
the sample , listed in table [ tab-1d ] , is composed of all 10 quasars from our `` high - luminosity sample '' ( the 26 highest - luminosity pg quasars with @xmath1 ; @xcite ) that had not been previously observed with hst . to this we added 6 luminous quasars out to @xmath2 for which ground - based attempts to resolve a host galaxy had failed .
all 16 objects are in the redshift range @xmath3 with an average @xmath4 .
|
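The Eddington-rate statement in the abstract of row 11,013 is normalised by the standard Eddington luminosity; for reference, its textbook form is sketched below (symbols assumed), with the black-hole mass taken in the quoted analysis to be a constant 0.6% fraction of the host mass.

```latex
% Standard Eddington luminosity used to normalise nuclear luminosities (textbook form).
\[
L_{\mathrm{Edd}} \;=\; \frac{4\pi G M_{\mathrm{BH}} m_p c}{\sigma_T}
\;\simeq\; 1.26\times10^{38}\left(\frac{M_{\mathrm{BH}}}{M_{\odot}}\right)\ \mathrm{erg\,s^{-1}}.
\]
```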
11,014 |
Suppose that you have an abstract for a scientific paper: a systematic study of isotopic effects in the break - up of projectile spectators at relativistic energies has been performed with the aladin spectrometer at the gsi laboratory .
+ searching for signals of criticality in the fragment production we have applied the model - independent universal fluctuations theory already proposed to track criticality signals in multifragmentation to our data .
the fluctuation of the largest fragment charge and of the asymmetry of the two and three largest fragments and their bimodal distribution have also been analysed . .
And you have already written the first three sentences of the full article: one of the most fascinating phenomena in physics is that of a phase transition .
initially observed in macroscopic systems and in electromagnetic interactions , phase transitions have been seen manifesting also in strongly interacting microscopic systems and nowadays two specific areas are receiving a great deal of attention .
one involves the loss of stability of excited nuclear systems which , under certain conditions of temperature and density , may lead to the total disassembly of the nucleus into particles and fragments ..
Please generate the next two sentences of the article
|
the second , at much higher energies , concerns the transition from hadrons to quarks and gluons , and the possibility of observing new phenomena in quark matter .
+ nucleus - nucleus collisions at intermediate and relativistic energies have been shown @xcite to be an ideal tool to produce pieces of finite nuclear matter at extremely different thermodynamical conditions . in order to analyze thermodynamical properties of microscopic systems
|
11,015 |
Suppose that you have an abstract for a scientific paper: recent lattice qcd simulations of the scattering lengths of nambu - goldstone bosons off the @xmath0 mesons are studied using unitary chiral perturbation theory .
we show that the lattice qcd data are better described in the covariant formulation than in the heavy - meson formulation .
the @xmath1 can be dynamically generated from the coupled - channels @xmath2 interaction without _ a priori _ assumption of its existence .
a new renormalization scheme is proposed which manifestly satisfies chiral power counting rules and has well - defined behavior in the infinite heavy - quark mass limit . using this scheme we predict the heavy - quark spin and flavor symmetry counterparts of the @xmath1 . .
And you have already written the first three sentences of the full article: measurements of hadronic states with charm quarks such as the @xmath3 have led to extensive and still ongoing discussions about our deeper understanding of mesons and baryons @xcite , traditionally thought to be composed of a pair of quark and antiquark or three quarks in the naive quark model . with its mass ( @xmath4 mev )
about 100 mev lower than the lowest @xmath5 scalar state in the naive quark model , the @xmath1 can not be a conventional @xmath6 state @xcite .
one possible interpretation is that of a compound dynamically generated by the strong @xmath2 interaction in coupled - channels dynamics @xcite ..
Please generate the next two sentences of the article
|
such approaches have provided many useful insights into the nature of some most intriguing new resonances ( see , e.g. , refs .
@xcite for some recent applications ) . in order to clarify the nature of the @xmath3 , or of any other meson of similar kind ,
|
11,016 |
Suppose that you have an abstract for a scientific paper: the excitation of two particle - two hole final states in neutrino - nucleus scattering has been advocated by many authors as the source of the excess cross section observed by the miniboone collaboration in the quasi elastic sector . we analyse the mechanisms leading to the appearance of these final states , and illustrate their significance through the results of accurate calculations of the nuclear electromagnetic response in the transverse channel .
a novel approach , allowing for a consistent treatment of the amplitudes involving one- and two - nucleon currents in the kinematical region in which the non relativistic approximation breaks down is outlined , and its preliminary results are reported . .
And you have already written the first three sentences of the full article: experimental studies of neutrino - nucleus interactions carried out over the past decade @xcite have provided ample evidence of the inadequacy of the relativistic fermi gas model ( rfgm ) , routinely employed in event generators , to account for both the complexity of nuclear dynamics and the variety of reaction mechanisms other than single nucleon knock out contributing to the observed cross section .
a striking manifestation of the above problem is the large discrepancy between the predictions of monte carlo simulations and the double differential charged current ( cc ) quasi elastic ( qe ) cross section measured by the miniboone collaboration using a carbon target @xcite . as pointed out by the authors of ref .
@xcite , improving the treatment of nuclear effects , which turns out to be one of the main sources of systematic uncertainty in the oscillation analysis @xcite , will require the development of a _ comprehensive _ and _ consistent _ description of neutrino - nucleus interactions , _ validated _ through extensive comparison to the large body of electron - nucleus scattering data @xcite ..
Please generate the next two sentences of the article
|
the main difficulty involved in the generalisation of the approaches successfully employed to study electron scattering to the case of neutrino interactions stems from the fact that , while the energy of the electron beam is fixed , in neutrino scattering the measured cross section results from the average over different beam energies , broadly distributed according to a flux @xmath0 .
therefore , a measurement of the energy of the outgoing charged lepton in a cc qe interaction _ does not _ specify the energy transfer to the nuclear target , which largely determines the reaction mechanism . as shown in refs .
|
11,017 |
Suppose that you have an abstract for a scientific paper: the topology of the internet has typically been measured by sampling _ traceroutes _ , which are roughly shortest paths from sources to destinations .
the resulting measurements have been used to infer that the internet s degree distribution is scale - free ; however , many of these measurements have relied on sampling traceroutes from a small number of sources .
it was recently argued that sampling in this way can introduce a fundamental bias in the degree distribution , for instance , causing random ( erds - rnyi ) graphs to appear to have power law degree distributions .
we explain this phenomenon analytically using differential equations to model the growth of a breadth - first tree in a random graph @xmath0 of average degree @xmath1 , and show that sampling from a single source gives an apparent power law degree distribution @xmath2 for @xmath3 . .
And you have already written the first three sentences of the full article: the internet and the networks it facilitates including the web and email networks are the largest artificial complex networks in existence , and understanding their structural and dynamic properties is important if we wish to understand social and technological networks in general .
moreover , efforts to design novel dynamic protocols for communication and fault tolerance are well served by knowing these properties .
one structural property of particular interest is the degree distribution at the router level of the internet ..
Please generate the next two sentences of the article
|
this distribution has been inferred @xcite both by sampling _ traceroutes _ , i.e. , the paths chosen by internet routers , which approximate shortest paths in the network , and by taking `` snapshots '' of bgp ( border gateway protocol ) routing tables @xcite .
these methods have been criticized as being noisy and imperfect @xcite . however , lakhina et al .
|
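The single-source sampling bias discussed in row 11,017 is easy to reproduce numerically. The sketch below is a small NetworkX illustration under assumed parameters: a BFS tree is only a rough stand-in for shortest-path traceroute sampling, and the printed comparison merely shows the qualitative skew toward low observed degrees described in the abstract.

```python
import collections
import networkx as nx

# Illustrative reproduction of single-source sampling bias (rough sketch):
# build an Erdos-Renyi graph, take a BFS tree from one source, and compare
# the true degree distribution with the degrees seen in the sampled tree.
n, avg_deg = 20000, 6.0
G = nx.gnp_random_graph(n, avg_deg / (n - 1), seed=1)
tree = nx.bfs_tree(G, 0)  # stand-in for shortest paths from a single monitor

true_hist = collections.Counter(d for _, d in G.degree())
seen_hist = collections.Counter(d for _, d in tree.degree())

# The true distribution is Poisson-like (peaked near avg_deg), while the
# tree degrees pile up at low values, qualitatively mimicking the apparent
# heavy tail described in the quoted abstract.
for k in range(1, 11):
    print(k, true_hist.get(k, 0), seen_hist.get(k, 0))
```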
11,018 |
Suppose that you have an abstract for a scientific paper: the efficiencies of the gratings in the high energy transmission grating spectrometer ( hetgs ) were updated using in - flight observations of bright continuum sources .
the procedure first involved verifying that fluxes obtained from the @xmath0 and @xmath1 orders match , which checks that the contaminant model and the ccd quantum efficiencies agree .
then the fluxes derived using the high energy gratings ( hegs ) were compared to those derived from the medium energy gratings ( megs ) .
the flux ratio was fit to a low order polynomial , which was allocated to the megs above 1 kev or the hegs below 1 kev .
the resultant efficiencies were tested by examining fits to blazar spectra . .
And you have already written the first three sentences of the full article: this is an update to the _ chandra _ high energy transmission grating ( hetg ) calibration based on in - orbit observations .
the hetg was described by canizares et al .
( 2005 @xcite ) and previous flight calibration results were reported by marshall et al ..
Please generate the next two sentences of the article
|
( 2004 @xcite ) .
there are two grating types on the hetg : the high energy gratings ( hegs ) have @xmath22 higher dispersion than the medium energy gratings ( megs ) .
|
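The cross-calibration step described in row 11,018 (fitting the HEG/MEG flux ratio with a low-order polynomial and allocating the correction to the MEGs above 1 keV or the HEGs below 1 keV) can be sketched in a few lines of NumPy. Only the polynomial fit and the 1 keV split come from the quoted text; the array names, the degree, and the direction of the correction are illustrative assumptions.

```python
import numpy as np

def efficiency_correction(energy_kev, flux_heg, flux_meg, degree=3):
    """Illustrative sketch of the HEG/MEG cross-calibration described in row 11,018.

    Fits the flux ratio with a low-order polynomial in log(E) and returns
    multiplicative corrections, applied to the MEG above 1 keV and to the
    HEG below 1 keV as in the quoted procedure (direction assumed).
    """
    ratio = flux_heg / flux_meg
    poly = np.polynomial.Polynomial.fit(np.log(energy_kev), ratio, deg=degree)
    model = poly(np.log(energy_kev))
    corr_meg = np.where(energy_kev > 1.0, model, 1.0)        # adjust MEG above 1 keV
    corr_heg = np.where(energy_kev <= 1.0, 1.0 / model, 1.0)  # adjust HEG below 1 keV
    return corr_heg, corr_meg
```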
11,019 |
Suppose that you have an abstract for a scientific paper: the zero temperature core - level photoemission spectrum of a hubbard system is studied across the metal to mott insulator transition using dynamical mean - field theory and wilson s numerical renormalization group .
an asymmetric power - law divergence is obtained in the metallic phase with an exponent @xmath0 which depends on the strength of both the hubbard interaction @xmath1 and the core - hole potential @xmath2 . for @xmath3 ,
@xmath4 decreases with increasing @xmath1 and vanishes at the transition ( @xmath5 ) leading to a symmetric peak in the insulating phase . for @xmath6 ,
@xmath4 remains finite close to the transition , but the integrated intensity of the power - law vanishes and there is no associated peak in the insulator .
the weight and position of the remaining peaks in the spectra can be understood within a molecular orbital approach . .
And you have already written the first three sentences of the full article: when an incident x - ray photon ejects an electron from a core - level in a metal , the conduction band electrons feel a local attractive potential due to the created hole .
it was discovered by anderson @xcite that the electronic ground states before and after the creation of the hole are orthogonal to each other .
this many - body effect has dramatic consequences in x - ray photoemission spectroscopy ( xps ) experiments where an asymmetric power - law divergence is observed.@xcite for a non - interacting metal , the exponent of the power - law and the relative intensity of the peaks in the xps spectra are well understood ..
Please generate the next two sentences of the article
|
however , the behavior of the power - law divergence in a strongly interacting metal has received little theoretical attention besides the one - dimensional case .
@xcite recently , there have been several xps studies of strongly correlated transition - metal oxides,@xcite which addressed the changes in the core - level spectrum across the metal to mott insulator transition ( mit ) .
|
11,020 |
Suppose that you have an abstract for a scientific paper: radial and azimuthal features ( such as disc offsets and eccentric rings ) seen in high resolution images of debris discs , provide us with the unique opportunity of finding potential planetary companions which betray their presence by gravitationally sculpting such asymmetric features .
the young debris disc around hd 115600 , imaged recently by the _
gemini planet imager _ , is such a disc with an eccentricity @xmath0 and a projected offset from the star of @xmath1 au . using our modified n - body code which incorporates radiation forces
, we firstly aim to determine the orbit of a hidden planetary companion potentially responsible for shaping the disc .
we run a suite of simulations covering a broad range of planetary parameters using a _ monte carlo markov chain _ sampling method and create synthetic images from which we extract the geometric disc parameters to be compared with the observed and model - derived quantities .
we then repeat the study using a traditional grid to explore the planetary parameter space and aim secondly to compare the efficiency of both sampling methods .
we find a planet of 7.8 jupiter mass orbiting at 30 au with an eccentricity of @xmath2 to be the best fit to the observations of hd 115600 .
technically , such planet has a contrast detectable by direct imaging , however the system s orientation does not favour such detection . in this study , at equal number of explored planetary configurations , the monte carlo markov chain not only converges faster but provides a better fit than a traditional grid . [ firstpage ] planetary systems - circumstellar matter - methods : numerical - methods : statistical - stars : individual ( hd 115600 ) .
And you have already written the first three sentences of the full article: planets can gravitationally perturb debris discs by various dynamical processes , such as secular interactions , where an eccentric or inclined planet can force the disc eccentricity or inclination @xcite , or resonance interactions , where the planet traps dust at a specific location , resulting in the creation of dust clumps in the disc @xcite .
these processes inducing eccentricity , a disc position offset with respect to the star , or clumps into the disc can result in brightness asymmetries . due to the limitations in the detection techniques ,
most of the confirmed exoplanets are located within 10 au of their host star , and potential planets located beyond this limit ( beyond saturn in our solar system ) remain undetected ..
Please generate the next two sentences of the article
|
however , even if those distant planets are too small to be detected with our current telescopes , they can still leave an observational signature by gravitationally perturbing the dust of their debris disc . therefore investigating the dynamical relationship between debris discs and exoplanets
can not only provide some insights on the origin of debris disc asymmetries , but also provides clues to the presence of hidden planets in the outer part of stellar systems , a region currently difficult to observe .
|
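Row 11,020 compares a Markov chain Monte Carlo exploration of planet parameters against a regular grid. Purely as a generic illustration of the sampling side (not the authors' modified N-body pipeline), a bare-bones Metropolis step over two parameters might look like the following, where the log-posterior, priors, starting point, and step sizes are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_prob(theta):
    """Placeholder log-posterior over (semi-major axis in au, eccentricity).

    In the quoted study this would come from comparing synthetic disc images
    with the observed offset and eccentricity; here it is a toy Gaussian.
    """
    a, e = theta
    if not (5.0 < a < 100.0 and 0.0 <= e < 0.9):  # assumed flat priors
        return -np.inf
    return -0.5 * (((a - 30.0) / 5.0) ** 2 + ((e - 0.3) / 0.1) ** 2)

theta = np.array([50.0, 0.1])   # arbitrary starting point
step = np.array([2.0, 0.02])    # assumed proposal widths
chain = []
for _ in range(20000):
    proposal = theta + step * rng.standard_normal(2)
    if np.log(rng.random()) < log_prob(proposal) - log_prob(theta):
        theta = proposal        # accept the move
    chain.append(theta.copy())
chain = np.array(chain)         # posterior samples for (a, e)
```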
11,021 |
Suppose that you have an abstract for a scientific paper: principal component analysis ( pca ) aims at estimating the direction of maximal variability of a high - dimensional dataset .
a natural question is : does this task become easier , and estimation more accurate , when we exploit additional knowledge on the principal vector ?
we study the case in which the principal vector is known to lie in the positive orthant .
similar constraints arise in a number of applications , ranging from analysis of gene expression data to spike sorting in neural signal processing . in the unconstrained case , the estimation performances of pca has been precisely characterized using random matrix theory , under a statistical model known as the ` spiked model . '
it is known that the estimation error undergoes a phase transition as the signal - to - noise ratio crosses a certain threshold .
unfortunately , tools from random matrix theory have no bearing on the constrained problem . despite this challenge
, we develop an analogous characterization in the constrained case , within a one - spike model .
in particular : @xmath0 we prove that the estimation error undergoes a similar phase transition , albeit at a different threshold in signal - to - noise ratio that we determine exactly ; @xmath1 we prove that unlike in the unconstrained case estimation error depends on the spike vector , and characterize the least favorable vectors ; @xmath2 we show that a non - negative principal component can be approximately computed under the spiked model in nearly linear time .
this despite the fact that the problem is non - convex and , in general , np - hard to solve exactly . .
And you have already written the first three sentences of the full article: principal component analysis ( pca ) is arguably the most successful of dimensionality reduction techniques .
given samples @xmath3 from a @xmath4-dimensional distribution , @xmath5 , pca seeks the direction of maximum variability .
assuming for simplicity the @xmath6 s to be centered ( i.e. @xmath7 ) , and denoting by @xmath8 a random vector distributed as @xmath6 , the objective is to estimate the solution of @xmath9 the solution of this problem is the principal eigenvector of the covariance matrix @xmath10 ..
Please generate the next two sentences of the article
|
this is normally estimated by replacing expectation above by the sample mean , i.e. solving @xmath11 denoting by @xmath12 the matrix with rows @xmath3 , the solution is of course given by the principal eigenvector of the sample covariance @xmath13 , that we will denote by @xmath14 .
this approach is known to be consistent in low dimension .
|
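The estimator described in the opening sentences of row 11,021 (the leading eigenvector of the sample covariance) takes only a few lines; the sketch below also includes a naive clip-and-renormalise step as a toy stand-in for the non-negativity constraint discussed in the abstract, which is not the paper's near-linear-time algorithm. The toy spiked-model data at the end is purely illustrative.

```python
import numpy as np

def leading_pc(X):
    """Top eigenvector of the sample covariance (1/n) X^T X (illustrative sketch)."""
    n = X.shape[0]
    S = X.T @ X / n                        # sample covariance of centered rows
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    return eigvecs[:, -1]

def naive_nonnegative_pc(X):
    """Crude non-negative variant: clip the PC to the positive orthant and
    renormalise. A toy baseline only, not the algorithm in the quoted abstract."""
    v = leading_pc(X)
    v = v if v.sum() >= 0 else -v          # resolve the sign ambiguity first
    v = np.clip(v, 0.0, None)
    return v / np.linalg.norm(v)

# Toy spiked-model usage: rank-one non-negative signal plus Gaussian noise.
rng = np.random.default_rng(3)
d, n, snr = 200, 500, 2.0
v0 = np.abs(rng.standard_normal(d))
v0 /= np.linalg.norm(v0)
X = np.sqrt(snr) * rng.standard_normal((n, 1)) @ v0[None, :] + rng.standard_normal((n, d))
print(abs(naive_nonnegative_pc(X) @ v0))   # overlap with the true spike
```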
11,022 |
Suppose that you have an abstract for a scientific paper: differential cross sections for inclusive dijet photoproduction on a virtual pion have been calculated in next - to - leading order qcd as a function of @xmath0 , and @xmath1 .
the cross sections are compared with recent zeus data on photoproduction of dijets with a leading neutron in the final state . .
And you have already written the first three sentences of the full article: recently the zeus collaboration at hera presented differential cross section data for the neutron tagged process @xmath2 @xcite .
the cross sections have been measured in the photoproduction region with photon virtuality @xmath3 for @xmath4 center - of - mass energy @xmath5 in the interval @xmath6 , for jets with transverse energy @xmath7 , neutron energy @xmath8 , and neutron production angle @xmath9 mrad .
the cross sections were compared to predictions with the one - pion exchange model and rather good agreement was found ..
Please generate the next two sentences of the article
|
+ due to the kinematic constraints on the neutron detection the squared momentum transfer @xmath10 between ingoing proton and outgoing neutron is very small @xcite . in this case
it is expected that the @xmath11 transition amplitude , _
|
11,023 |
Suppose that you have an abstract for a scientific paper: a high statistics study of the reaction @xmath0 has been performed with the belle detector using a data sample of 26 fb@xmath1 collected at @xmath2 .
a spin - parity analysis shows dominance of the @xmath3 helicity 2 wave for three - pion invariant masses from 1 to 3 .
the invariant mass distribution exhibits @xmath4 , @xmath5 and higher mass enhancements . .
And you have already written the first three sentences of the full article: three - pion final states of two - photon interactions are restricted to quantum numbers suitable for study of resonance formation .
the @xmath6 channel is known to be dominated by the formation of @xmath4 in the helicity 2 state [ 1 - 10 ] .
the @xmath4 is a ground state of isospin 1 @xmath7 @xmath8 mesons ..
Please generate the next two sentences of the article
|
observations of higher mass resonances have been reported [ 7 - 10 ] .
study of higher mass states is important for the assignment of nonet members and for the understanding of confinement in the quark model [ 11 - 15 ] .
|
11,024 |
Suppose that you have an abstract for a scientific paper: we consider connectivity properties and asymptotic slopes for certain random directed graphs on @xmath0 in which the set of points @xmath1 that the origin connects to is always infinite .
we obtain conditions under which the complement of @xmath1 has no infinite connected component . applying these results to one of the most interesting such models
leads to an improved lower bound for the critical occupation probability for oriented site percolation on the triangular lattice in 2 dimensions .
# 1 # 1 # 1 # 1 .
And you have already written the first three sentences of the full article: the main objects of study in this paper are the _ 2-dimensional orthant model _ ( one of the most interesting examples within a class of models called _ degenerate random environments _ ) , and its dual model , a version of _ oriented site percolation_. part of the motivation for studying degenerate random environments is an interest in the behaviour of random walks in random environments that are non - elliptic . indeed , many of the results of this paper and of @xcite have immediate implications for the behaviour ( in particular , directional transience ) of random walks in certain non - elliptic environments ( see e.g. @xcite ) . for fixed @xmath2 , let @xmath3 be the set of unit vectors in @xmath4 , and let @xmath5 denote the power set of @xmath6
let @xmath7 be a probability measure on @xmath5 .
a _ degenerate random environment _ ( dre ) is a random directed graph , i.e. an element @xmath8 of @xmath9 ..
Please generate the next two sentences of the article
|
we equip @xmath9 with the product @xmath10-algebra and the product measure @xmath11 , so that @xmath12 are i.i.d . under @xmath13 .
we denote the expectation of a random variable @xmath14 with respect to @xmath13 by @xmath15 $ ] .
|
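The record above concerns the set of points reachable from the origin in a random directed graph and its relation to oriented site percolation. As an illustration only, not the paper's orthant model or its triangular-lattice setting, the sketch below simulates ordinary oriented site percolation on a finite box of the square lattice and reports the size of the set reachable from the origin; the names `n`, `p` and `reachable_from_origin` are mine.

```python
import random

def reachable_from_origin(n=200, p=0.7, seed=0):
    """Toy oriented site percolation on an n x n box of Z^2.

    Each site is independently open with probability p; directed edges
    point from (x, y) to (x+1, y) and (x, y+1).  Returns the set of open
    sites reachable from the origin through open sites."""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    if not open_site[0][0]:
        return set()
    reached, stack = {(0, 0)}, [(0, 0)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x, y + 1)):
            if nx < n and ny < n and open_site[nx][ny] and (nx, ny) not in reached:
                reached.add((nx, ny))
                stack.append((nx, ny))
    return reached

if __name__ == "__main__":
    for p in (0.55, 0.70, 0.85):
        print(p, len(reachable_from_origin(p=p)))
```

Scanning `p` in such a toy run is the simplest way to see the kind of transition that the critical occupation probability in the abstract refers to.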
11,025 |
Suppose that you have an abstract for a scientific paper: we report results of a direct imaging survey for giant planets around 80 members of the @xmath0 pic , tw hya , tucana - horologium , ab dor , and hercules - lyra moving groups , observed as part of the gemini nici planet - finding campaign . for this sample
, we obtained median contrasts of @xmath1@xmath2=13.9 mag at 1 in combined ch@xmath3 narrowband adi+sdi mode and median contrasts of @xmath1@xmath2=15.1 mag at 2 in @xmath2-band adi mode .
we found numerous ( @xmath470 ) candidate companions in our survey images .
some of these candidates were rejected as common - proper motion companions using archival data ; we reobserved with nici all other candidates that lay within 400 au of the star and were not in dense stellar fields .
the vast majority of candidate companions were confirmed as background objects from archival observations and/or dedicated nici campaign followup .
four co - moving companions of brown dwarf or stellar mass were discovered in this moving group sample : pz tel b ( 36@xmath56 m@xmath6 , [email protected] au , biller et al .
2010 ) , cd -35 2722b ( 31@xmath58 m@xmath6 , 67@xmath54 au , wahhaj et al .
2011 ) , hd 12894b ( [email protected] m@xmath7 , [email protected] au ) , and bd+07 1919c ( [email protected] m@xmath7 , [email protected] au ) . from a bayesian analysis of the achieved h band adi and asdi contrasts , using power - law models of planet distributions and hot - start evolutionary models , we restrict the frequency of 120 m@xmath6 companions at semi - major axes from 10150 au to @xmath818% at a 95.4@xmath9 confidence level using dusty models and to @xmath86% at a 95.4@xmath9 using cond models .
our results strongly constrain the frequency of planets within semi - major axes of 50 au as well .
we restrict the frequency of 120 m@xmath6 companions at semi - major axes from 1050 au to @xmath821% at a 95.4@xmath9 confidence level using dusty models and to @xmath87% at a 95.4@xmath9 using cond models .
this survey is the deepest search to date for giant planets around....
And you have already written the first three sentences of the full article: in the last decade , @xmath1010 planets and planet candidates with estimated masses @xmath813 m@xmath6 have been imaged in orbit around young stars and brown dwarfs ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ?
* ; * ? ? ?
* ; * ? ? ?.
Please generate the next two sentences of the article
|
* ) . in total ,
@xmath1030 companions with estimated masses @xmath825 m@xmath6 have been imaged .
|
11,026 |
Suppose that you have an abstract for a scientific paper: we investigate strong - coupling properties of a trapped two - dimensional normal fermi gas . within the framework of a combined @xmath0-matrix theory with the local density approximation
, we calculate the local density of states , as well as the photoemission spectrum , to see how two - dimensional pairing fluctuations affect these single - particle quantities . in the bcs ( bardeen - cooper - schrieffer)-bec ( bose - einstein condensation ) crossover region , we show that the local density of states exhibits a dip structure in the trap center , which is more remarkable than the three - dimensional case .
this pseudogap phenomenon is found to naturally lead to a double peak structure in the photoemission spectrum .
the peak - to - peak energy of the spectrum at @xmath1 agrees well with the recent experiment on a two - dimensional @xmath2 fermi gas [ m. feld , _
et al_. , nature * 480 * , 75 ( 2011 ) ] .
since pairing fluctuations are sensitive to the dimensionality of a system , our results would be useful for the study of many - body physics in the bcs - bec crossover regime of a two - dimensional fermi gas . .
And you have already written the first three sentences of the full article: the advantage of ultracold fermi gases is the existence of highly tunable physical parameters@xcite .
a tunable pairing interaction associated with a feshbach resonance@xcite enables us to study fermi superfluids from the weak - coupling bcs regime to the strong - coupling bec limit in a unified manner@xcite .
the intermediate coupling regime ( which is also referred to as the bcs - bec crossover region in the literature ) is useful for the study of strong - coupling physics ..
Please generate the next two sentences of the article
|
since correlation effects are important in high-@xmath3 cuprates@xcite , the bcs - bec crossover physics in ultracold fermi gases would be also useful for the study of this interacting electron system .
in addition to the tunable interaction , one can also adjust the atomic motion by using an optical lattice@xcite .
|
11,027 |
Suppose that you have an abstract for a scientific paper: we present deep , panoramic multi - color imaging of the distant rich cluster a851 ( @xmath0 ) using suprime - cam on subaru .
these images cover a 27@xmath1 field of view ( @xmath2mpc ) , and by exploiting photometric redshifts , we can isolate galaxies in a narrow redshift slice at the cluster redshift . using a sample of @xmath3 probable cluster members ( @xmath4 ) , we trace the network of filaments and subclumps around the cluster core .
the depth of our observations , combined with the identification of filamentary structure , gives us an unprecedented opportunity to test the influence of the environment on the properties of low luminosity galaxies .
we find an abrupt change in the colors of faint galaxies ( @xmath5 ) at a local density of 100 gal.mpc@xmath6 .
the transition in the color - local density behavior occurs at densities corresponding to subclumps within the filaments surrounding the cluster . identifying the sites where the transition occurs brings us much closer to understanding the mechanisms which are responsible for establishing the present - day relationship between environment and galaxy characteristics . .
And you have already written the first three sentences of the full article: clusters of galaxies are continuously growing through the accretion of galaxies and groups from the field . the star formation activity in the accreting galaxies must be quenched during the assimilation of the galaxies into the cluster .
this transformation is a key process in creating the environmental dependence of galaxy properties , and may also underpin the observed evolution of galaxy properties in distant clusters [ e.g. 1.2 ] however , the physical mechanism which is responsible for these changes has not yet been identified [ 3,4,5 ] . the advent of suprime - cam , a revolutionary wide - field camera on the subaru telescope , has opened a new window in this field .
its 27@xmath1 field of view can trace the variation of galaxy properties from the cluster cores out to the surrounding field in an attempt to identify the environment where the decline in the star formation in accreted galaxies begins ..
Please generate the next two sentences of the article
|
as a first step towards a systematic study of distant clusters with subaru and suprime - cam , we obtained deep ( @xmath7 ) @xmath8 imaging of the rich cluster a851 at z=0.4 .
we constructed an i - band selected sample which contains 15,055 galaxies brighter than @xmath9 . in order to assign cluster membership across our large field and to faint magnitudes
|
11,028 |
Suppose that you have an abstract for a scientific paper: * background : * networks are used to model real - world phenomena in various domains , including systems biology . since proteins carry out biological processes by interacting with other proteins , it is expected that cellular functions are reflected in the structure of protein - protein interaction ( ppi ) networks .
similarly , the topology of residue interaction graphs ( rigs ) that model proteins 3-dimensional structure might provide insights into protein folding , stability , and function . an important step towards understanding these networks is finding an adequate network model . evaluating
the fit of a model network to the data is a formidable challenge , since network comparisons are computationally infeasible and thus rely on heuristics , or `` network properties . ''
* results : * we show that it is difficult to assess the reliability of the fit of a model using any network property alone .
thus , we present an integrative approach that feeds a variety of network properties into five probabilistic methods to predict the best - fitting network model for ppi networks and rigs . we confirm that geometric random graphs ( geo ) are the best - fitting model for rigs . since geo networks model spatial relationships between objects and
are thus expected to replicate well the underlying structure of spatially packed residues in a protein , the good fit of geo to rigs validates our approach .
additionally , we apply our approach to ppi networks and confirm that the structure of merged data sets containing both binary and co - complex data that are of high coverage and confidence is also consistent with the structure of geo , while the structure of sparser and lower confidence data is not . since ppi data are noisy , we test the robustness of the five classifiers to noise and show that their robustness levels differ . *
conclusions : * we demonstrate that none of the classifiers predicts noisy scale - free ( sf ) networks as geo , whereas noisy geos can be classified as sf .
thus , it....
And you have already written the first three sentences of the full article: large - scale biological network data are increasingly becoming available due to advances in experimental biology .
we analyze protein - protein interaction ( ppi ) networks , where proteins are modeled as network nodes and interactions amongst them as network edges .
since it is the proteins that carry out almost all biological processes and they do so by interacting with other proteins , analyzing ppi network structure could lead to new knowledge about complex biological mechanisms and disease ..
Please generate the next two sentences of the article
|
additionally , we analyze network representations of protein structures , `` residue interaction graphs '' ( rigs ) , where residues are modeled as network nodes and inter - residue interactions as network edges ; an inter - residue interaction exists between residues that are in close spatial proximity .
understanding rigs might provide deeper insights into protein structure , binding , and folding mechanisms , as well as into protein stability and function . to understand these complex biological network data
|
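The record above compares network data against model graphs and finds geometric random graphs (GEO) to be the best fit for RIGs. As a rough illustration of what the GEO null model is (random points in a unit cube, edges between pairs closer than a cutoff), here is a minimal generator; the parameters `n`, `radius` and `dim` are placeholders, and the record's actual classifiers and network properties are not reproduced.

```python
import random

def geometric_random_graph(n=500, radius=0.12, dim=3, seed=1):
    """Toy GEO model: n points uniform in the unit cube, with an edge
    between every pair closer than `radius` (Euclidean distance)."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
            if d2 <= radius ** 2:
                edges.add((i, j))
    return pts, edges

if __name__ == "__main__":
    pts, edges = geometric_random_graph()
    deg = {}
    for i, j in edges:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    print("mean degree:", sum(deg.values()) / float(len(pts)))
```

Comparing summary statistics of such model graphs with those of PPI or RIG data is the flavor of model fitting the record describes, although the paper's own pipeline is more elaborate.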
11,029 |
Suppose that you have an abstract for a scientific paper: in a wide class of new - physics models , which can be motivated through generic arguments and within supersymmetry , we obtain large contributions to @xmath0@xmath1 mixing , but not to @xmath2 processes .
if we assume such a scenario , the solutions @xmath3 for the @xmath0@xmath1 mixing phase implied by @xmath4 can not be converted directly into a constraint in the @xmath5@xmath6 plane .
however , we may complement @xmath7 with @xmath8 and the recently measured cp asymmetries in @xmath9 to determine the unitarity triangle , with its angles @xmath10 , @xmath11 and @xmath12 . to this end
, we have also to control penguin effects , which we do by means of the cp - averaged @xmath13 branching ratio .
interestingly , the present data show a perfectly consistent picture not only for the `` standard '' solution of @xmath14 , but also for @xmath15 . in the latter case ,
the preferred region for the apex of the unitarity triangle is in the second quadrant , allowing us to accommodate conveniently @xmath16 , which is also favoured by other non - leptonic @xmath17 decays such as @xmath18 .
moreover , also the prediction for br@xmath19 can be brought to better agreement with experiment .
further strategies to explore this scenario with the help of @xmath20 decays are discussed as well .
* shedding light on the `` dark side '' of @xmath0@xmath1 mixing through @xmath9 , @xmath21 and @xmath20 * ( cern - th/2003 - 039 , february 2003 ) .
And you have already written the first three sentences of the full article: thanks to the efforts at the @xmath17 factories , the exploration of cp violation is now entering another exciting stage , allowing us to confront the kobayashi maskawa mechanism @xcite with data . after the discovery of mixing - induced cp violation in the `` gold - plated '' mode @xmath25 @xcite , as well as important other measurements , one of the most interesting questions is now to what extent the possible space for new physics ( np ) has already been reduced . in this context , the central target is the unitarity triangle of the cabibbo kobayashi
maskawa ( ckm ) matrix illustrated in fig .
[ fig : ut](a ) , where @xmath5 and @xmath6 are the generalized wolfenstein parameters @xcite ..
Please generate the next two sentences of the article
|
the usual fits for the allowed region for the apex of the unitarity triangle in the @xmath5@xmath6 plane the `` ckm fits '' seem to indicate that no np is required to accommodate the data @xcite .
however , this is not the complete answer to this exciting question . in order to fully address it ,
|
11,030 |
Suppose that you have an abstract for a scientific paper: a recent survey of 17 134 proteins has identified a new class of proteins which are expected to yield stretching induced force - peaks in the range of 1 nn .
such high force peaks should be due to forcing of a slip - loop through a cystine ring , i.e. by generating a cystine slipknot .
the survey has been performed in a simple coarse grained model . here , we perform all - atom steered molecular dynamics simulations on 15 cystine knot proteins and determine their resistance to stretching . in agreement with previous studies within a coarse grained structure based model ,
the level of resistance is found to be substantially higher than in proteins in which the mechanical clamp operates through shear .
the large stretching forces arise through formation of the cystine slipknot mechanical clamp and the resulting steric jamming .
we elucidate the workings of such a clamp in an atomic detail .
we also study the behavior of five top strength proteins with the shear - based mechanostability in which no jamming is involved .
we show that in the atomic model , the jamming state is relieved by moving one amino acid at a time and there is a choice in the selection of the amino acid that advances first .
in contrast , the coarse grained model also allows for a simultaneous passage of two amino acids .
And you have already written the first three sentences of the full article: single - molecule manipulation @xcite has opened new perspectives on understanding of the mechanical processes taking place in a biological cell and may offer insights into design of nanostructures and nanomachines .
examples of manipulation of a protein include stretching @xcite , mechanically controlled refolding @xcite , knot untying @xcite and knot tightening @xcite .
the experimental studies pertained to only a handful of systems and yet demonstrated richness of possible mechanical behaviors ..
Please generate the next two sentences of the article
|
experiments on stretching generate information on mechanostability .
it can be captured by providing the value of @xmath1 the largest force that is needed to unravel the tertiary structure of a protein .
|
11,031 |
Suppose that you have an abstract for a scientific paper: multi - fiber spectroscopy has been obtained for 335 galaxies in the field of the double cluster a3128/a3125 , using the 2df multi - fiber positioner on the anglo - australian telescope . when combined with previously published results , a total of 532 objects in the double cluster now have known redshifts .
we have also obtained a 20 ks _
chandra _
acis - i image of the central 16 x 16 of a3128 and radio imaging of the cluster with the molonglo observatory synthesis telescope and the australia telescope compact array .
the spatial - kinematic distribution of redshifts in the field of a3128/a3125 , when combined with the _ chandra _ acis - i image of a3128 , reveals a variety of substructures present in the galaxy distribution and in the hot intracluster medium ( icm ) . the most striking large - scale feature in the galaxy distribution is a relatively underpopulated redshift zone @xmath04000 on either side of the mean cluster velocity at @xmath017500 .
we attribute this depletion zone to the effect of the extensive horologium - reticulum ( h - r ) supercluster , within which a3128/a3125 is embedded .
in addition to this large - scale feature , numerous smaller groups of galaxies can be identified , particularly within the underpopulated region within @xmath14000 of the mean cluster redshift . due to the large gravitational influence of the h - r supercluster ,
these groups arrive at a3128 with a high infall velocity , well in excess of the local sound speed .
two of these groups appear as elongated filaments in position - velocity diagrams , indicating that they are tidally distended groups which have been disrupted after a close passage through a3128 .
in fact , a3125 itself appears to be in such a post passage condition .
we have identified a primary ne - sw merger axis connecting a3128 with a3125 , along which the filaments are also oriented .
in addition , the _ chandra _ image reveals that the x - ray emission is split into two components , each with very small core radii , that are....
And you have already written the first three sentences of the full article: the distribution of groups , clusters , and superclusters of galaxies represents a fundamental testing ground for theories of the origin and evolution of structure in the universe . until recently ,
the intersection between observation and theory has largely relied on statistical properties of galaxy spatial / kinematic clustering , along with x - ray determined global icm temperatures and azimuthally averaged radial brightness and temperature profiles .
in contrast , much of the existing observational data indicates that large - scale asymmetric , probably filamentary , structures , which are not easily subjected to statistical definition _ on an individual basis _ , are present in clusters and superclusters ( e.g. , gregory & thompson 1978 ; shandarin 1983 ; de lapparent , geller , & huchra 1986 ; west , jones , & forman 1995 ; west & blakeslee 2000 ) . multi - fiber spectroscopy of galaxies with 400-fiber positioners deployed over 2 fields , as well as the unprecedented combination of spatial resolution and sensitivity in x - rays provided by the _.
Please generate the next two sentences of the article
|
chandra _ and xmm - newton observatories , are rapidly advancing the observational view of clusters of galaxies .
in addition , numerical simulations , carried out within the framework of a cold dark matter dominated universe in which structure is built up in a hierarchical fashion , have recently reached the point where large areas , on scales comparable to the largest superclusters , can be simulated in some detail ( pearce 2001 and references therein ) .
|
11,032 |
Suppose that you have an abstract for a scientific paper: solicited public opinion surveys reach a limited subpopulation of willing participants and are expensive to conduct , leading to poor time resolution and a restricted pool of expert - chosen survey topics . in this study , we demonstrate that unsolicited public opinion polling through sentiment analysis applied to twitter correlates well with a range of traditional measures , and has predictive power for issues of global importance .
we also examine twitter s potential to canvas topics seldom surveyed , including ideas , personal feelings , and perceptions of commercial enterprises .
two of our major observations are that appropriately filtered twitter sentiment ( 1 ) predicts president obama s job approval three months in advance , and ( 2 ) correlates well with surveyed consumer sentiment . to make possible a full examination of our work and to enable others research ,
we make public over 10,000 data sets , each a seven - year series of daily word counts for tweets containing a frequently used search term . .
And you have already written the first three sentences of the full article: public opinion data can be used to determine public awareness , to predict outcomes of events , and to infer characteristics of human behaviors .
indeed , readily available public opinion data is valuable to researchers , policymakers , marketers , and many other groups , but is difficult to generate .
solicited polls can be expensive , prohibitively time consuming , and may only reach a limited number of people on a limited number of days ..
Please generate the next two sentences of the article
|
polling accuracy evidently relies on accessing representative populations and high response rates .
poor temporal sampling will weaken any poll s value as individual opinions vary in time and in response to social influence@xcite . with the continued rise of social media as a communication platform
|
11,033 |
Suppose that you have an abstract for a scientific paper: in this paper we document the modifications introduced to the previous version of the resonance chiral lagrangian current ( _ phys.rev . _ * d86 * ( 2012 ) 113008 ) of the @xmath0 decay which enable the one dimensional distributions measured by the babar collaboration to be well modeled . the main change required to model
the data is the addition of the @xmath1 resonance .
systematic errors , theoretical and experimental ones , limitations due to fits of one dimensional distributions only , and resulting difficulties and statistical / systematic errors for fitted parameters are addressed .
the current and fitting environment is ready for comparisons with the fully exclusive experimental data .
the present result for @xmath0 is encouraging for work on other @xmath2 decay modes and resonance chiral lagrangian based currents . .
And you have already written the first three sentences of the full article: in our paper @xcite we described an upgrade of the monte carlo generator tauola using the results of the resonance chiral lagrangian ( @xmath3 ) for the @xmath2 lepton decay into the most important two and three meson channels .
the necessary theoretical concepts were collected , numerical tests of the implementations were completed and documented . finally , we presented a strategy for fitting experimental data and the systematic uncertainties associated with the experimental measurement .
however , there was , and remains until now , an obvious limitation due to the fact that we are using one - dimensional projections of the invariant masses for a multi - dimensional distribution ..
Please generate the next two sentences of the article
|
the first comparison @xcite of the @xmath3 results for the @xmath4 mode with the babar data @xcite did not demonstrate satisfactory agreement for the two pion invariant mass distributions . with the recent availability of the unfolded distributions for all invariant masses constructed from observable decay products for this channel @xcite , we found ourselves in an excellent position to work on model improvement for the @xmath4 mode .
we would like to stress here that the choice of the three pion mode is not accidental .
|
11,034 |
Suppose that you have an abstract for a scientific paper: we investigate numerically the finite - temperature phase diagrams of the extended bose - hubbard model in a two - dimensional square lattice . in particular , we focus on the melting of supersolid phases of two different crystal orderings , stripe and star orders , arising from the competition of the nearest- and next - nearest -neighbor interactions in the vicinity of quarter filling .
the two crystal orders are the result of broken translational symmetry in either one or in both @xmath0 , and @xmath1 directions .
the broken gauge symmetry of the supersolids are found to be restored via a kosterlitz - thouless transition while the broken translational symmetries are restored via a single second - order phase transition , instead of two second - order transitions in the ising universality class . on the other hand ,
the phase transitions between the star and stripe orders are first order in nature . .
And you have already written the first three sentences of the full article: supersolids with both diagonal and off - diagonal long - range order are observed in various model , either with or without hard - core constraint.@xcite one obvious necessary condition for the occurrence of supersolids ( sss ) is the presence of a small but finite kinetic energy in competition with a relatively large repulsive interactions .
previous results suggest that the hard - core constraint may destroy supersolid phases by the formation of domain walls , which leads to phase separation instead.@xcite stable supersolid phases in hard - core models are found , however , in systems contain frustrated interactions.@xcite the extended bose - hubbard model is a typical example that has been proposed to demonstrate a supersold phase within the mean - field approximation @xcite , and is confirmed by quantum monte carlo simulations.@xcite as long as the next - nearest - neighbor ( nnn ) interaction @xmath2 is dominant , hard - core bosons lining up as stripes to reduce potential energy near half filling @xcite ( see fig .
[ ordering ] ) . at quarter filling , frustrations induced by competing nearest neighbor ( nn ) @xmath3 can lead to a stable star order phase that characterized by finite structure factors at wave vector @xmath4 and @xmath5 [ @xmath6 ..
Please generate the next two sentences of the article
|
the existence of the corresponding supersolid is also numerically confirmed recently.@xcite the melting of stripe supersolids has been investigated in ref .
9 where the finite - temperature phase diagrams are determined . at half filling ,
|
11,035 |
Suppose that you have an abstract for a scientific paper: we provide optimal rates of convergence to the asymptotic distribution of the ( properly scaled ) degree of a fixed vertex in two preferential attachment random graph models .
our approach is to show that these distributions are unique fixed points of certain distributional transformations which allows us to obtain rates of convergence using a new variation of stein s method . despite the large literature on these models ,
there is surprisingly little known about the limiting distributions so we also provide some properties and new representations , including an explicit expression for the densities in terms of the confluent hypergeometric function of the second kind . , .
And you have already written the first three sentences of the full article: preferential attachment random graphs are randomgraphs that evolve by sequentially adding vertices and edges in a random way so that connections to vertices with high degree are favored .
particular versions of these models were proposed by @xcite as a mechanism to explain the appearance of the so - called power law behavior observed in some real world networks ; for example , the graph derived from the world wide web by considering webpages as vertices and hyperlinks between them as edges . following the publication of @xcite , there has been an explosion of research surrounding these ( and other ) random growth models .
this work is largely motivated by the idea that many real world data structures can be captured in the language of networks [ see @xcite for a wide survey from this point of view ] ..
Please generate the next two sentences of the article
|
however , much of this work is experimental or empirical and , by comparison , the rigorous mathematical literature on these models is less developed [ see @xcite for a recent review ] . for preferential attachment models ,
the seminal reference in the mathematics literature is @xcite , in which one of the main results is a rigorous proof that the degree of a randomly chosen vertex in a particular family of preferential attachment random graph models converges to the yule
|
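The record above studies the limiting distribution of the scaled degree of a fixed vertex in preferential attachment graphs. The sketch below grows a toy preferential attachment tree, where each new vertex attaches to one existing vertex chosen proportionally to its degree, and samples the degree of an early vertex; this is only the generic growth mechanism, not the specific model variants or the Stein's-method analysis of the paper.

```python
import random

def degree_of_fixed_vertex(n_steps=10000, tracked=1, seed=2):
    """Grow a toy preferential attachment tree: at each step a new vertex
    attaches to one existing vertex chosen with probability proportional
    to its current degree.  Returns the final degree of `tracked`."""
    rng = random.Random(seed)
    # start from a single edge 0--1; `ends` lists every edge endpoint,
    # so sampling uniformly from it is degree-biased sampling
    ends = [0, 1]
    degree = {0: 1, 1: 1}
    for new_vertex in range(2, n_steps + 2):
        target = rng.choice(ends)
        degree[target] += 1
        degree[new_vertex] = 1
        ends.extend((target, new_vertex))
    return degree[tracked]

if __name__ == "__main__":
    samples = [degree_of_fixed_vertex(seed=s) for s in range(200)]
    # the degree of an early vertex grows roughly like n**(1/2) in this variant
    print(sum(samples) / len(samples))
```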
11,036 |
Suppose that you have an abstract for a scientific paper: in the two - higgs - doublet model(2hdm ) with large extra dimensions(led ) , we study the contributions of virtual kaluza - klein(kk ) gravitons to 2hdm charged higgs production , especially in the two important production processes @xmath0 and @xmath1 , at future linear colliders ( lc ) .
we find that kk graviton effects can significantly modify these total cross sections and also their differential cross sections compared to their respective 2hdm values and , therefore , can be used to probe the effective scale @xmath2 up to several tev .
for example , at @xmath3tev , the cross sections for @xmath0 and @xmath4 in the 2hdm are 7.4fb for @xmath5gev and 0.003fb for @xmath6tev and @xmath7 , while in led they are 12.1fb and 0.01fb , respectively , for @xmath8tev . .
And you have already written the first three sentences of the full article: the idea that quantum gravity can appear at the tev energy scale well below the planck mass @xmath9gev was proposed in the 1990s@xcite .
the large extra dimensions(led ) model@xcite introduced by arkani - hamed , dimopoulos and davli has attracted much attention .
it has been emphasized that the presence of large extra dimensions brings a new solution to the hierarchy problem , which can take the place of other mechanisms , for example , low - energy supersymmetry ..
Please generate the next two sentences of the article
|
however , it is also interesting to examine a scenario which combines new physics beyond the standard model(sm ) , such as the 2hdm@xcite , and led .
this new possibility leads to different phenomenology than the usual led scenario , which we explore here . in this
|
11,037 |
Suppose that you have an abstract for a scientific paper: we present a comprehensive re - analysis of stellar photometric variability in the field of the open cluster m37 following the application of a new photometry and de - trending method to mmt / megacam image archive .
this new analysis allows a rare opportunity to explore photometric variability over a broad range of time - scales , from minutes to a month .
the intent of this work is to examine the entire sample of over 30,000 objects for periodic , aperiodic , and sporadic behaviors in their light curves .
we show a modified version of the fast @xmath0 periodogram algorithm ( f@xmath0 ) and change - point analysis ( cpa ) as tools for detecting and assessing the significance of periodic and non - periodic variations .
the benefits of our new photometry and analysis methods are evident .
a total of 2306 stars exhibit convincing variations that are induced by flares , pulsations , eclipses , starspots , and unknown causes in some cases .
this represents a 60% increase in the number of variables known in this field .
moreover , 30 of the previously identified variables are found to be false positives resulting from time - dependent systematic effects .
new catalog includes 61 eclipsing binary systems , 92 multiperiodic variable stars , 132 aperiodic variables , and 436 flare stars , as well as several hundreds of rotating variables .
based on extended and improved catalog of variables , we investigate the basic properties ( e.g. , period , amplitude , type ) of all variables .
the catalog can be accessed through the web interface ( http://stardb.yonsei.ac.kr/ ) . .
And you have already written the first three sentences of the full article: the rich open cluster m37 ( ngc 2099 ) in the constellation auriga has been photometrically monitored to search for new variables with different motivations @xcite . the combination of short and deep exposures with similar sky coverage allowed time - series photometry of hundreds to thousands of stars from the brightest to the dimmest within the cluster field ( @xmath1 ) .
their studies indicate that the fraction of variable sources @xmath2 increases as photometric precision of the survey gets better .
@xcite performed the first variability survey for the central ( @xmath3 ) field of the m37 with a @xmath4-filter photometry and found 7 variable stars among 2300 stars ( @xmath5 ) ..
Please generate the next two sentences of the article
|
due to poor photometric quality , their variables were limited to objects with large variability and included three w uma - type eclipsing binaries ( ebs ) , two high - amplitude pulsating stars , and two long - period detached eb candidates .
@xcite and @xcite used the same data set to search for short - period pulsating variables ( e.g. , @xmath6 scuti- and @xmath7 doradus - type stars ) and rotating variables of later spectral types ( f k ) , respectively . using @xmath8-band time - series observations
|
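The record above relies on a modified fast chi-square (F-chi^2) periodogram together with change-point analysis. The code below is not that algorithm; it is a brute-force single-harmonic chi-square periodogram (a weighted least-squares sinusoid fit at each trial frequency) that shows the quantity such a periodogram maximizes. All parameter values in the demo are invented.

```python
import numpy as np

def chi2_periodogram(t, y, sigma, freqs):
    """For each trial frequency, fit y ~ a + b cos(2 pi f t) + c sin(2 pi f t)
    by weighted least squares and record the chi^2 reduction relative to a
    constant fit.  (Illustrative only; the F-chi^2 algorithm in the literature
    is a fast, multi-harmonic version of this idea.)"""
    w = 1.0 / sigma ** 2
    chi2_const = np.sum(w * (y - np.average(y, weights=w)) ** 2)
    reductions = []
    for f in freqs:
        phase = 2.0 * np.pi * f * t
        A = np.column_stack([np.ones_like(t), np.cos(phase), np.sin(phase)])
        Aw = A * np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(Aw, y * np.sqrt(w), rcond=None)
        reductions.append(chi2_const - np.sum(w * (y - A @ coef) ** 2))
    return np.array(reductions)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0.0, 30.0, 400))      # roughly a month of sampling
    y = 0.05 * np.sin(2.0 * np.pi * t / 2.7) + rng.normal(0.0, 0.02, t.size)
    sigma = np.full(t.size, 0.02)
    freqs = np.linspace(0.05, 5.0, 2000)
    best = freqs[np.argmax(chi2_periodogram(t, y, sigma, freqs))]
    print("recovered period ~", 1.0 / best)
```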
11,038 |
Suppose that you have an abstract for a scientific paper: we carried out multiwavelength ( 0.7 - 5 cm ) , multiepoch ( 1994 - 2015 ) very large array ( vla ) observations toward the region enclosing the bright far - ir sources fir 3 ( hops 370 ) and fir 4 ( hops 108 ) in omc-2 .
we report the detection of 10 radio sources , seven of them identified as young stellar objects .
we image a well - collimated radio jet with a thermal free - free core ( vla 11 ) associated with the class i intermediate - mass protostar hops 370 .
the jet presents several knots ( vla 12n , 12c , 12s ) of non - thermal radio emission ( likely synchrotron from shock - accelerated relativistic electrons ) at distances of @xmath0 7,500 - 12,500 au from the protostar , in a region where other shock tracers have been previously identified .
these knots are moving away from the hops 370 protostar at @xmath0 100 km s@xmath1 . the class 0 protostar hops 108 , which itself is detected as an independent , kinematically decoupled radio source , falls in the path of these non - thermal radio knots .
these results favor the previously proposed scenario where the formation of hops 108 has been triggered by the impact of the hops 370 outflow with a dense clump .
however , hops 108 presents a large proper motion velocity of @xmath230 km s@xmath1 , similar to that of other runaway stars in orion , whose origin would be puzzling within this scenario .
alternatively , an apparent proper motion could result because of changes in the position of the centroid of the source due to blending with nearby extended emission , variations in the source shape , and/or opacity effects . .
And you have already written the first three sentences of the full article: omc-2 is an active star - forming region ( e.g. , @xcite ) in the orion a molecular cloud , located at a distance of 414 @xmath37 pc @xcite .
@xcite identified six bright mm / ir sources ( fir 1 - 6 ) within a region of about @xmath4 in size that have been associated with young stellar objects ( ysos ) through subsequent studies ( @xcite , @xcite and references therein ) .
the region has been imaged at mm and submm wavelengths by @xcite and @xcite , and in the near- and mid - ir by @xcite , @xcite , and @xcite . at mm and submm.
Please generate the next two sentences of the article
|
wavelengths the brightest source is fir 4 , which has been associated with the hops 108 class 0 protostar ( @xcite , @xcite ) .
this source is connected through a filamentary cloud structure to the bright source fir 3 , also known as hops 370 , an intermediate - mass class i yso with an @xmath5 ( @xcite , @xcite ) located about @xmath6 to the ne ( see fig .
|
11,039 |
Suppose that you have an abstract for a scientific paper: changes in the level of synchronization and desynchronization in coupled oscillator systems due to an external stimulus is called event related synchronization or desynchronization ( ers / erd ) .
such changes occur in real life systems where the collective activity of the entities of a coupled system is affected by some external influence . in order to understand the role played by the external influence in the occurrence of erd and ers , we study a system of coupled nonlinear oscillators in the presence of an external stimulus signal .
we find that the phenomena of ers and erd are generic and occur in all types of coupled oscillator systems .
we also find that the same external stimulus signal can cause ers and erd depending upon the strength of the signal .
we identify the stability of the ers and erd states and also find analytical and numerical boundaries between the different synchronization regimes involved in the occurrence of erd and ers . .
And you have already written the first three sentences of the full article: synchronization is an ubiquitous natural phenomenon that occurs widely in real systems including those in physics @xcite , chemistry @xcite , biology @xcite and nano - technology @xcite .
this phenomenon is an active topic of current research and is being extensively studied @xcite . nevertheless , synchronization is not always a desirable phenomenon . in some cases it is desirable
while at other times it is undesirable , and hence a mechanism to desynchronize becomes necessary for normal behavior ..
Please generate the next two sentences of the article
|
for example , synchronization is desirable in the cases of lasers and josephson junction arrays @xcite , coupled spin torque nano - oscillators where coherent microwave power is needed @xcite , and in the brain when synchronization of neuronal oscillations facilitate cognition via temporal coding of information @xcite . on the other hand
, synchronization is undesirable when pedestrians walk on the millennium bridge @xcite and when mass synchronization of neuronal oscillators occurs at a particular frequency band resulting in pathologies like trauma , parkinson s tremor and so on @xcite .
|
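The record above probes event-related synchronization and desynchronization by applying an external stimulus to coupled nonlinear oscillators. A minimal numerical experiment in the same spirit, using two Kuramoto phase oscillators rather than the paper's models: the phase-locking value between the oscillators is compared before and during a periodic stimulus of strength `eps`. Whether synchrony rises or falls depends on the parameters, which is the qualitative point of the record; all values below are illustrative.

```python
import numpy as np

def phase_locking(K=0.2, eps=0.3, omega_drive=1.0, T=600.0, dt=0.01, seed=4):
    """Two mutually coupled Kuramoto phase oscillators; an external periodic
    stimulus of strength eps acts on both during the second half of the run.
    Returns the phase-locking value |<exp(i(theta_1 - theta_2))>| computed
    before and during the stimulus."""
    rng = np.random.default_rng(seed)
    w = np.array([1.0, 1.3])                      # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, 2)
    n = int(T / dt)
    diff = np.empty(n)
    for k in range(n):
        t = k * dt
        forcing = eps * np.sin(omega_drive * t - theta) if k >= n // 2 else 0.0
        coupling = K * np.sin(theta[::-1] - theta)
        theta = theta + dt * (w + coupling + forcing)
        diff[k] = theta[0] - theta[1]
    plv = lambda x: float(np.abs(np.mean(np.exp(1j * x))))
    return plv(diff[: n // 2]), plv(diff[n // 2:])

if __name__ == "__main__":
    for eps in (0.0, 0.3, 2.0):
        print(eps, phase_locking(eps=eps))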
11,040 |
Suppose that you have an abstract for a scientific paper: an initial measurement of the lifetime of the positive muon to a precision of 16 parts per million ( ppm ) has been performed with the fast detector at the paul scherrer institute @xcite .
the result is @xmath0 = 2.197 083 ( 32 ) ( 15 ) @xmath1s , where the first error is statistical and the second is systematic .
the muon lifetime determines the fermi constant , @xmath2 @xmath3 gev@xmath4 ( 8 ppm ) .
* fast collaboration * , _ submitted to physics letters b _ .
And you have already written the first three sentences of the full article: the standard model has three free parameters in the bosonic sector : the electromagnetic coupling constant , @xmath5 , the mass of the z boson , @xmath6 , and the fermi coupling constant , @xmath2 .
the theory becomes predictive when these and several other fundamental parameters have been determined experimentally . by progressively improving the precision of these parameters ,
the theoretical predictions become increasingly precise and , in turn , the experimental measurements are increasingly sensitive to new physics beyond the standard model . therefore , on quite general grounds , it is important for each of the fundamental parameters of the standard model to be measured with the highest possible experimental precision ..
Please generate the next two sentences of the article
|
the fermi coupling constant is determined from the measurement of the positive muon lifetime , @xmath7 , through the relationship @xmath8 in order to avoid uncertainties in the capture rate of negative muons on the target nuclei , the more precise value of @xmath2 is derived from the positive muon lifetime . in this equation
@xmath9 encapsulates the higher order qed and qcd corrections calculated in the fermi theory , in which the weak charged current is described by a contact interaction .
|
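The relation hidden behind @xmath8 in the record above, linking the positive-muon lifetime to the Fermi constant, presumably takes the standard Fermi-theory form, with the record's @xmath9 correction factor written here as Delta q:

```latex
\frac{1}{\tau_{\mu^{+}}}
  \;=\;
  \frac{G_{F}^{2}\, m_{\mu}^{5}}{192\,\pi^{3}}
  \left( 1 + \Delta q \right)
```

Even at leading order (Delta q set to zero) the quoted lifetime of about 2.197 microseconds already gives G_F of roughly 1.16 x 10^-5 GeV^-2, consistent with the value in the abstract; the corrections matter only at the ppm level of precision discussed there.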
11,041 |
Suppose that you have an abstract for a scientific paper: we use a conformal mapping method introduced in a companion paper @xcite to study the properties of bi - harmonic fields in the vicinity of rough boundaries .
we focus our analysis on two different situations where such bi - harmonic problems are encountered : a stokes flow near a rough wall and the stress distribution on the rough interface of a material in uni - axial tension .
we perform a complete numerical solution of these two - dimensional problems for any univalued rough surfaces .
we present results for sinusoidal and self - affine surface whose slope can locally reach 2.5 . beyond the numerical solution we present perturbative solutions of these problems .
we show in particular that at first order in roughness amplitude , the surface stress of a material in uni - axial tension can be directly obtained from the hilbert transform of the local slope . in case of self - affine surfaces ,
we show that the stress distribution presents , for large stresses , a power law tail whose exponent continuously depends on the roughness amplitude . .
And you have already written the first three sentences of the full article: in a companion paper@xcite , we have presented a conformal mapping technique that allows us to map any 2d medium bounded by a rough boundary onto a half - plane .
this method is based on the iterative use of fft transforms and is extremely fast and efficient provided that the local slope of the interface remains lower than one .
when the maximum slope exceeds one this algorithm , similar in spirit to a direct iteration technique well suited to circular geometries@xcite , can no longer be used in its original form ..
Please generate the next two sentences of the article
|
underrelaxation@xcite suffices however to make it convergent for boundaries having large slopes . beyond the determination of a conformal mapping for a given rough interface we have also shown in ref.@xcite how to generate directly mappings onto self - affine rough interfaces of chosen roughness exponent .
self - affine formalism is an anisotropic scaling invariance known to give a good description of real surfaces such as fracture surfaces@xcite .
|
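The record above states that, to first order in the roughness amplitude, the surface stress under uni-axial tension follows from the Hilbert transform of the local slope. The sketch below checks that statement numerically for a shallow sinusoidal profile, comparing the Hilbert-transform expression with the classical first-order stress-concentration formula; the overall sign and the factor of 2 follow the usual convention for this textbook case and may differ from the paper's normalization.

```python
import numpy as np
from scipy.signal import hilbert

# Toy check: for a shallow profile h(x) = A cos(k x) under remote uniaxial
# tension s0, the classical first-order surface stress is
#   s(x) ~ s0 * (1 - 2 A k cos(k x)),
# with the maximum stress in the valleys of the profile.  This equals
# s0 * (1 - 2 * H[dh/dx](x)) with H the Hilbert transform (conventions
# for sign and prefactor should be checked against the paper).
L, N = 2.0 * np.pi, 4096
x = np.linspace(0.0, L, N, endpoint=False)
A, k, s0 = 0.01, 5.0, 1.0
h = A * np.cos(k * x)
slope = np.gradient(h, x)

# scipy's hilbert() returns the analytic signal; its imaginary part is H[slope]
H_slope = np.imag(hilbert(slope))

stress_hilbert = s0 * (1.0 - 2.0 * H_slope)
stress_classical = s0 * (1.0 - 2.0 * A * k * np.cos(k * x))
print("max deviation (should be small):",
      np.max(np.abs(stress_hilbert - stress_classical)))
```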
11,042 |
Suppose that you have an abstract for a scientific paper: load disaggregation based on aided linear integer programming ( alip ) is proposed .
we start with a conventional linear integer programming ( ip ) based disaggregation and enhance it in several ways .
the enhancements include additional constraints , correction based on a state diagram , median filtering , and linear programming - based refinement . with the aid of these enhancements ,
the performance of ip - based disaggregation is significantly improved .
the proposed alip system relies only on the instantaneous load samples instead of waveform signatures , and hence works well on low - frequency data .
experimental results show that the proposed alip system performs better than conventional ip - based load disaggregation .
integer programming , combinatorial optimization , linear programming , load disaggregation , nilm .
And you have already written the first three sentences of the full article: load disaggregation or non - intrusive load monitoring ( nilm ) is the process of finding out how much each appliance within a household is consuming when only the aggregate current or power reading is available @xcite .
such readings are now available through smart meters , which have been , or are being , installed by most power utilities .
in addition to determining appliance consumption patterns , nilm could help balance different loads within a power network @xcite by predicting demand without the use of additional sensors ..
Please generate the next two sentences of the article
|
recent disaggregation methods make use of machine learning approaches such as clustering @xcite , fuzzy systems @xcite , and hidden markov models @xcite .
such methods might lead to practical solutions when large and sufficiently representative datasets become available for training , which is still not the case .
|
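The record above builds on integer-programming (IP) load disaggregation: each aggregate power sample is explained by an on/off state vector of appliances with known ratings. The toy below performs the core combinatorial step by exhaustive search over the binary states, as a stand-in for the IP solver; the appliance ratings and samples are invented, and none of the ALIP enhancements (extra constraints, state-diagram correction, median filtering, LP refinement) are included.

```python
from itertools import product

def disaggregate(aggregate, ratings):
    """For each aggregate power sample, choose the on/off vector that
    minimizes |sample - sum(state * rating)|.  This brute-force search
    over {0,1}^n plays the role of the integer program in IP-based
    disaggregation (toy version; no extra constraints or smoothing)."""
    states = list(product((0, 1), repeat=len(ratings)))
    result = []
    for sample in aggregate:
        best = min(states,
                   key=lambda s: abs(sample - sum(x * r for x, r in zip(s, ratings))))
        result.append(best)
    return result

if __name__ == "__main__":
    ratings = [2000, 1200, 150, 60]          # hypothetical appliance ratings (W)
    aggregate = [60, 210, 1260, 3200, 2150, 0]
    for sample, state in zip(aggregate, disaggregate(aggregate, ratings)):
        print(sample, state)
```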
11,043 |
Suppose that you have an abstract for a scientific paper: we describe a search method for fast moving ( @xmath0 ) magnetic monopoles using simultaneously the scintillator , streamer tube and track - etch subdetectors of the macro apparatus .
the first two subdetectors are used primarily for the identification of candidates while the track - etch one is used as the final tool for their rejection or confirmation . using this technique ,
a first sample of more than two years of data has been analyzed without any evidence of a magnetic monopole .
we set a @xmath1 cl upper limit to the local monopole flux of @xmath2 in the velocity range @xmath3 and for nucleon decay catalysis cross section smaller than @xmath4 .
the macro collaboration .
And you have already written the first three sentences of the full article: within the framework of grand unified theories ( gut ) , supermassive magnetic monopoles ( @xmath56gev ) would have been produced in the early universe as intrinsically stable topological defects when the symmetry of the unified fundamental interactions was spontaneously broken @xcite . at our epoch they should be searched for in the cosmic radiation as remnants of primordial phase transition(s ) .
the velocity range in which gut monopoles should be sought spreads over several decades @xcite .
if sufficiently heavy ( @xmath57gev ) , gut monopoles will be gravitationally bound to the galaxy with a velocity distribution peaked at @xmath58 . lighter monopoles ( @xmath59gev ) would be accelerated in one or more regions of coherent galactic magnetic field up to velocities of @xmath60 while other acceleration mechanisms ( e.g. in the neighborhood of a neutron star ) could bring them to relativistic velocities ..
Please generate the next two sentences of the article
|
macro was a multipurpose underground detector ( located in the hall b of the laboratori nazionali del gran sasso , italy ) optimized for the search for gut monopoles with velocity @xmath61 and with a sensitivity below the parker bound ( i.e. @xmath62@xmath63s@xmath64sr@xmath64 @xcite ) .
the apparatus was arranged in a modular structure with overall dimensions of @xmath65m@xmath66 and was made up by three subdetectors : liquid scintillation counters , limited streamer tubes and nuclear track detectors ( cr39 and lexan ) @xcite . in this work
|
11,044 |
Suppose that you have an abstract for a scientific paper: the dynamical properties of a three dimensional model glass , the frustrated ising lattice gas ( filg ) are studied by monte carlo simulations .
we present results of compression experiments , where the chemical potential is either slowly or abruptly changed , as well as simulations at constant density .
one time quantities like density and two times ones as correlations , responses and mean square displacements are measured , and the departure from equilibrium clearly characterized .
the aging scenario , particularly in the case of the density autocorrelations , is reminiscent of spin glass phenomenology with violations of the fluctuation - dissipation theorem , typical of systems with one replica symmetry breaking .
the filg , as a valid on - lattice model of structural glasses , can be described with tools developed in spin glass theory and , being a finite dimensional model , can open the way for a systematic study of activated processes in glasses .
And you have already written the first three sentences of the full article: upon cooling below the melting point , liquids may either crystallize or enter a super cooled regime . in the latter case , as the glass transition temperature @xmath0 is approached , molecular motion gets slower and slower and the viscosity increases enormously .
the relaxation time increases by several orders of magnitude and for all practical purposes the system remains out of equilibrium .
although mechanically responding as a solid , structural relaxation is still present , slowing down as the system ages ..
Please generate the next two sentences of the article
|
the response to a perturbation applied at a particular time @xmath1 will persist for very long times ( long term memory ) , preventing the system from reaching equilibrium .
while the system ages , one time quantities asymptotically tend to their equilibrium values while two times quantities depend explicitly both on the observation time and on the time when the perturbation was applied : time translation invariance ( tti ) is broken , which is a manifestation of history dependence . upon cooling the system gets trapped in long - lived metastable states which depend on the cooling rate , eventually escaping as a result of activated processes .
|
11,045 |
Suppose that you have an abstract for a scientific paper: we present a self - contained formalism for calculating the background solution , the linearized solutions , and a class of generalized frobenius solutions to a system of scale invariant differential equations .
we first cast the scale invariant model into its equidimensional and autonomous forms , find its fixed points , and then obtain power - law background solutions .
after linearizing about these fixed points , we find a second linearized solution , which provides a _ distinct _ collection of power laws characterizing the deviations from the fixed point .
we prove that generically there will be a region surrounding the fixed point in which the complete general solution can be represented as a generalized frobenius - like power series with exponents that are integer multiples of the exponents arising in the linearized problem .
this frobenius - like series can be viewed as a variant of liapunov s expansion theorem . as specific examples we apply these ideas to newtonian and relativistic isothermal stars and demonstrate ( both numerically and analytically ) that the solution exhibits oscillatory power - law behaviour as the star approaches the point of collapse .
these series solutions extend classical results ; as exemplified for instance by the work of lane , emden , and chandrasekhar in the newtonian case , and that of harrison , thorne , wakano , and wheeler in the relativistic case .
we also indicate how to extend these ideas to situations where fixed points may not exist either due to `` monotone '' flow or due to the presence of limit cycles .
monotone flow generically leads to logarithmic deviations from scaling , while limit cycles generally lead to discrete self - similar solutions . .
And you have already written the first three sentences of the full article: the presence of power - law behaviour in nature is such an extremely common phenomenon that considerable lore has now grown up concerning its genesis .
one of the most common situations in which it occurs is in the presence of scale - invariant systems .
such behaviour occurs , for instance , in any sort of thermodynamic system undergoing a second - order phase transition , and the behaviour of physical quantities ( such as susceptibilities ) in terms of the distance from criticality is typically given by a power - law : $\left( \frac{t - t_{*}}{t_{*}} \right)^{\kappa}$ . second - order phase transitions are extremely well - described by statistical field theories ..
Please generate the next two sentences of the article
|
the associated technical machinery ( the renormalization group ) , is now so well developed that it is sometimes difficult to remember that second - order phase transitions are not the _ only _ route to power - law behaviour .
the onset of power - law behaviour actually occurs at a much more primitive level , and can be analyzed directly in terms of the underlying differential equations .
|
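For the Newtonian isothermal star mentioned in the record above, the oscillatory power-law behaviour can be seen directly by integrating the isothermal Lane-Emden equation and comparing the density with the singular scale-invariant solution, which falls off as 1/xi^2; the ratio oscillates around 1 with decaying amplitude. The scalings and starting series below are the standard ones for this equation, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def isothermal_lane_emden(xi):
    """Integrate the isothermal Lane-Emden equation
        (1/xi^2) d/dxi (xi^2 dpsi/dxi) = exp(-psi),
    with psi(0) = 0, psi'(0) = 0, and return exp(-psi) (the density
    contrast) at the requested radii xi."""
    def rhs(x, y):
        psi, dpsi = y
        return [dpsi, np.exp(-psi) - 2.0 * dpsi / x]
    # start slightly off-centre with the series psi ~ x^2/6 to avoid the 1/x singularity
    x0 = 1e-6
    y0 = [x0 ** 2 / 6.0, x0 / 3.0]
    sol = solve_ivp(rhs, (x0, xi[-1]), y0, t_eval=xi, rtol=1e-10, atol=1e-12)
    return np.exp(-sol.y[0])

if __name__ == "__main__":
    xi = np.logspace(0, 4, 9)
    dens = isothermal_lane_emden(xi)
    # ratio to the singular scale-invariant solution rho_s = 2/xi^2;
    # it oscillates around 1 with decaying amplitude (cf. the abstract above)
    for x, d in zip(xi, dens):
        print(f"xi = {x:10.1f}   rho / rho_singular = {d * x**2 / 2.0:.4f}")
```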
11,046 |
Suppose that you have an abstract for a scientific paper: we have used deep high - resolution multiband images taken at the eso _ very large telescope _ to identify the optical binary companion to the millisecond pulsar ( @xmath0 ) located in the halo of the galactic globular cluster ngc6752 .
the object turns out to be a blue star whose position in the color magnitude diagram is consistent with the cooling sequence of a low mass ( @xmath1 ) , low metallicity helium white dwarf ( he - wd ) at the cluster distance .
this is the second he - wd which has been found to orbit a millisecond pulsar in ggcs .
curiously both objects have been found to lie on the same mass he - wd cooling sequence
. the anomalous position of with respect to the globular cluster center ( @xmath2 ) suggested that this system has recently ( @xmath3 gyr ) been ejected from the cluster core as the result of a strong dynamical interaction .
the data presented here allows to constrain the cooling age of the companion within a fairly narrow range ( @xmath4 gyr ) , therefore suggesting that such dynamical encounter must have acted on an already recycled millisecond pulsar . .
And you have already written the first three sentences of the full article: has been discovered on 1999 october 17 during a search for millisecond pulsar ( msps ) in galactic globular clusters ( ggcs ) in progress at the parkes radiotelescope @xcite .
it is a binary millisecond pulsar with a spin period of 3.26 ms , an orbital period of 0.84 days and very low eccentricity ( @xmath5 ) .
precise celestial coordinates ( @xmath6 , @xmath7 ) have been recently obtained for this source from pulsar timing observations @xcite ..
Please generate the next two sentences of the article
|
this position is far away ( @xmath2 ) from the cluster optical center : indeed it is the most off - centered pulsar among the sample of 44 msps whose position in the respective cluster is known , and it suggests that this object might be the result of strong interactions that occurred in the cluster core .
@xcite explored a number of possibilities for the peculiar location of + @xmath0 : a careful analysis led to discard the hypothesis of a primordial binary ( born either in the halo or in the core of the cluster ) and to reject also the possibility of a 3-body scattering or exchange event off core stars .
|
11,047 |
Suppose that you have an abstract for a scientific paper: we propose two novel experiments on the measurement of the casimir force acting between a gold coated sphere and semiconductor plates with markedly different charge carrier densities . in the first of these experiments
a patterned si plate is used which consists of two sections of different dopant densities and oscillates in the horizontal direction below a sphere . the measurement scheme in this experiment is differential , i.e. , allows the direct high - precision measurement of the difference of the casimir forces between the sphere and sections of the patterned plate or the difference of the equivalent pressures between au and patterned parallel plates with static and dynamic techniques , respectively .
the second experiment proposes to measure the casimir force between the same sphere and a vo@xmath0 film which undergoes the insulator - metal phase transition with the increase of temperature .
we report the present status of the interferometer based variable temperature apparatus developed to perform both experiments and present the first results on the calibration and sensitivity .
the magnitudes of the casimir forces and pressures in the experimental configurations are calculated using different theoretical approaches to the description of optical and conductivity properties of semiconductors at low frequencies proposed in the literature .
it is shown that the suggested experiments will aid in the resolution of theoretical problems arising in the application of the lifshitz theory at nonzero temperature to real materials .
they will also open new opportunities in nanotechnology . .
And you have already written the first three sentences of the full article: the casimir effect @xcite implies that there is a force acting between closely spaced electrically neutral bodies following from the zero - point oscillations of the electromagnetic field .
the casimir force can be viewed as an extension of the van der waals force to large separations where the retardation effects come into play . within a decade of casimir s
work , lifshitz and collaborators @xcite introduced the role of optical properties of the material into the van der waals and casimir force . in the last few years.
Please generate the next two sentences of the article
|
, the advances following from both fundamental physics and nanotechnology have motivated careful experimental and theoretical investigations of the casimir effect .
the first modern experiments were made with metal test bodies in a sphere - plate configuration , and their results are summarized in ref . @xcite . in subsequent experiments the lateral casimir force between corrugated surfaces @xcite and the pressure in the original casimir configuration @xcite have been demonstrated .
|
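For orientation, the ideal-metal, zero-temperature limits to which the Lifshitz theory reduces are the standard textbook expressions, quoted here only as a reference scale and not as the quantities measured in the proposed experiments:

\[
  P(a) \;=\; -\,\frac{\pi^{2}\hbar c}{240\,a^{4}} \quad\text{(parallel plates, separation } a\text{)},
  \qquad
  F_{\rm sp}(a) \;\simeq\; 2\pi R\,E_{\rm pp}(a) \;=\; -\,\frac{\pi^{3}\hbar c R}{360\,a^{3}} ,
\]

where \(E_{\rm pp}(a) = -\pi^{2}\hbar c/(720\,a^{3})\) is the corresponding energy per unit area and the sphere result uses the proximity-force approximation. The experiments with doped Si and VO2 films described in the row require the full temperature-dependent Lifshitz theory with real material properties.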
11,048 |
Suppose that you have an abstract for a scientific paper: we report the theoretical discovery of a novel time reversal symmetry breaking superconducting state in the @xmath0-@xmath1 model on the honeycomb lattice , based on a recently developed variational method - the grassmann tensor product state approach . as a benchmark ,
we use exact diagonalization ( ed ) and density matrix renormalization ( dmrg ) methods to check our results on small clusters .
remarkably , we find systematical consistency for the ground state energy as well as other physical quantities , such as the staggered magnetization . at low doping ,
the superconductivity coexists with anti - ferromagnetic ordering . .
And you have already written the first three sentences of the full article: since the discovery of high - temperature superconductivity in the cuprates@xcite , many strongly correlated models have been intensively studied .
one of the simplest of these is the @xmath0-@xmath1 model@xcite : @xmath2 where @xmath3 is the electron operator defined in the no - double - occupancy subspace .
this model can be derived from the strong - coupling limit of the hubbard model . despite its simplicity and extensive study , the nature of the ground states of eq ..
Please generate the next two sentences of the article
|
is still controversial .
a strong correlation view of the @xmath0-@xmath1 model was advanced by anderson , who conjectured the relevance of a resonating valence bond ( rvb ) state@xcite as a low energy state for eq . when doped . when undoped , the rvb state is a spin singlet , with no symmetry breaking , and describes a `` quantum spin liquid '' . at low temperature the mobile carriers in the doped rvb state
|
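The row above leaves the @xmath0-@xmath1 hamiltonian as a placeholder ( @xmath2 ). For reference, a commonly quoted form of the t-J hamiltonian in the no-double-occupancy subspace is the following (a standard convention, not necessarily the exact one used in the paper):

\[
  H_{tJ} \;=\; -\,t \sum_{\langle ij\rangle,\sigma}
      \bigl( \tilde c^{\dagger}_{i\sigma}\tilde c_{j\sigma} + {\rm h.c.} \bigr)
  \;+\; J \sum_{\langle ij\rangle}
      \Bigl( \mathbf S_i\cdot\mathbf S_j - \tfrac14\, n_i n_j \Bigr),
  \qquad
  \tilde c_{i\sigma} = c_{i\sigma}\,(1-n_{i,-\sigma}),
\]

with the projected operators \(\tilde c\) enforcing at most one electron per site and \(J \simeq 4t^{2}/U\) in the strong-coupling limit of the Hubbard model.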
11,049 |
Suppose that you have an abstract for a scientific paper: entanglement dynamics of a qutrit - qutrit system under the influence of global , local and multilocal decoherence introduced by phase flip , trit flip and trit phase flip channels is investigated .
the negativity and realignment criterion are used to quantify the entanglement of the system .
it is shown that entanglement sudden death and distillability sudden death can be avoided in the presence of phase flip , trit flip and trit - phase flip environments .
it is shown that certain free entangled distillable qutrit states become bound entangled or separable i.e. convert into non - distillable states under different flipping noises .
it is also seen that local operations do not have any effect on the entanglement dynamics of the system .
furthermore , no esd or dsd is seen for the case of the trit flip channel .
keywords : quantum channels ; qutrit entanglement ; global noise . .
And you have already written the first three sentences of the full article: quantum entanglement is a fundamental resource for many quantum information processing tasks , e.g. super - dense coding , quantum cryptography and quantum error correction [ 1 - 4 ] .
entangled states can be used in constructing number of protocols , e.g. teleportation [ 5 ] , key distribution and quantum computation [ 6 ] . during recent past ,
entanglement sudden death ( esd ) has been investigated by different authors for bipartite and multipartite states [ 7 - 10 ] ..
Please generate the next two sentences of the article
|
yu and eberly [ 11 , 12 ] have shown that entanglement loss occurs in a finite time under the action of pure vacuum noise in a bipartite qubit system .
a geometric interpretation of the phenomenon of esd has been given in ref .
|
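The row above quantifies qutrit-qutrit entanglement with the negativity, defined from the trace norm of the partial transpose, N(rho) = (||rho^{T_A}||_1 - 1)/2. A minimal numpy sketch of that computation is given below (illustrative only; the example state, basis ordering and normalization are assumptions, not taken from the paper):

import numpy as np

def partial_transpose(rho, dims, subsystem=0):
    """Partial transpose of a bipartite density matrix rho with local dims (dA, dB)."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    if subsystem == 0:
        r = r.transpose(2, 1, 0, 3)   # transpose the first subsystem's indices
    else:
        r = r.transpose(0, 3, 2, 1)   # transpose the second subsystem's indices
    return r.reshape(dA * dB, dA * dB)

def negativity(rho, dims):
    """N(rho) = (||rho^{T_A}||_1 - 1) / 2, via the eigenvalues of the partial transpose."""
    eigvals = np.linalg.eigvalsh(partial_transpose(rho, dims))
    trace_norm = np.sum(np.abs(eigvals))
    return (trace_norm - 1.0) / 2.0

# example: maximally entangled two-qutrit state |psi> = (|00> + |11> + |22>)/sqrt(3)
psi = np.zeros(9)
psi[[0, 4, 8]] = 1.0 / np.sqrt(3.0)
rho = np.outer(psi, psi)
print(negativity(rho, (3, 3)))   # prints 1.0 (up to rounding) for this state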
11,050 |
Suppose that you have an abstract for a scientific paper: the debye model of the specific heat of solid at low temperatures is incorporate in the entropic gravity theory ( egt ) . rather of a smooth surface ,
the holographic screen is considered as an oscillating elastic membrane , with a continuous range of frequencies , that cuts off at a maximum ( debye ) temperature , @xmath0 .
we show that at low temperatures @xmath1 , the conservation of the equivalence principle in egt requires a modification of the davies - unruh effect . while the maintenance of davies - unruh effect requires a violation of the equivalence principle .
these two possibilities are equivalents , because both can emulate the same quantity of dark matter .
however , in both cases , the central mechanism is the davies - unruh effect , this seems to indicate that the modification of the davies - unruh effect emulates dark matter which in turn can be see as a violation of the equivalence principle .
this scenario is promising to explain why mond theory works at very low temperatures ( accelerations ) regime , i. e. , the galaxies sector .
we also show that in the intermediate region , for temperatures slightly lower or slightly higher than debye temperature , egt predicts the mass - temperature relation of hot x - ray galaxy clusters . .
And you have already written the first three sentences of the full article: the davies - unruh effect ( dhe ) @xcite , essentially predict that in an accelerated frame of reference ; a vacuum state may seen as a thermal bath of photons with a black boddy spectrum at a temperature t , the main point of the due is that this temperature is proportional to the acceleration of the frame .
the connection between thermodynamic and gravity began in the 70s with bekenstein @xcite and hawking @xcite , researching the nature of black holes . in 1995
jacobson @xcite showed a thermodynamic description of gravity , obtaining einstein s equations ..
Please generate the next two sentences of the article
|
according to padmanabhan @xcite , the association between gravity and entropy leads in a natural way to describing gravity as an emergent phenomenon , and a formalism of gravity as an entropic force was derived by verlinde @xcite in 2010 .
the dependence of information on surface area , rather than volume ( the holographic principle ) @xcite , is one of the keys of black hole thermodynamic theory , as well as of egt .
|
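The mechanism invoked above rests on the Davies-Unruh temperature being linear in the acceleration of the frame. The standard expression, quoted here for reference with SI constants restored, is

\[
  T_U \;=\; \frac{\hbar\, a}{2\pi c\, k_B}
  \;\approx\; 4\times10^{-21}\,{\rm K}\times\frac{a}{1\,{\rm m\,s^{-2}}} ,
\]

which makes clear why modifications of this relation would only become relevant at the extremely small accelerations (temperatures) characteristic of the galactic regime discussed in the row.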
11,051 |
Suppose that you have an abstract for a scientific paper: the impact of particle - vibration coupling and polarization effects due to deformation and time - odd mean fields on single - particle spectra is studied systematically in doubly magic nuclei from low mass @xmath0ni up to superheavy ones .
particle - vibration coupling is treated fully self - consistently within the framework of relativistic particle - vibration coupling model .
polarization effects due to deformation and time - odd mean field induced by odd particle are computed within covariant density functional theory .
it has been found that among these contributions the coupling to vibrations makes a major impact on the single - particle structure .
the impact of particle - vibration coupling and polarization effects on calculated single - particle spectra , the size of the shell gaps , the spin - orbit splittings and the energy splittings in pseudospin doublets is discussed in detail ; these physical observables are compared with experiment .
particle - vibration coupling has to be taken into account when model calculations are compared with experiment since this coupling is responsible for observed fragmentation of experimental levels ; experimental spectroscopic factors are reasonably well described in model calculations . .
And you have already written the first three sentences of the full article: the covariant density functional theory ( cdft ) @xcite is one of standard tools of nuclear theory which offers considerable potential for further development .
built on lorentz covariance and the dirac equation , it provides a natural incorporation of spin degrees of freedom @xcite and an accurate description of spin - orbit splittings @xcite ( see also fig . 2 in ref .
@xcite ) , which has an essential influence on the underlying shell structure . note that the spin - orbit interaction is a relativistic effect , which arises naturally in the cdft theory ..
Please generate the next two sentences of the article
|
lorentz covariance of the cdft equations leads to the fact that time - odd mean fields of this theory are determined as spatial components of lorentz vectors and therefore coupled with the same constants as time - like components @xcite which are fitted to ground state properties of finite nuclei .
in addition , pseudo - spin symmetry finds a natural explanation in the relativistic framework @xcite .
|
11,052 |
Suppose that you have an abstract for a scientific paper: following my previous study of paper length vs. number of citations in astronomy ( stanek 2008 ) , some colleagues expressed an interest in knowing if any correlation exists between citations and the number of authors on an astronomical paper .
at least naively , one would expect papers with more authors to be cited more .
i test this expectation with the same sample of papers as analyzed in stanek ( 2008 ) , selecting all ( @xmath0 ) refereed papers from a&a , aj , apj and mnras published between 2000 and 2004 . these particular years
were chosen so that the papers analyzed would not be too `` fresh '' , but number of authors and length of each article could be obtained via ads .
i find that indeed papers with more authors published in these four major astronomy journals are on average cited more , but only weakly so : roughly , the number of citations doubles with ten - fold increase in the number of authors .
while the median number of citations for a 2 author paper is 17 , the median number of citations to a paper with 10 to 20 authors is 32 .
however , i find that most papers are written by a small number of authors , with a mode of 2 authors and a median of 3 authors , and 92% of all papers written have fewer than 10 authors . perhaps surprisingly
, i also find that papers with more authors are not longer than papers with fewer authors , in fact a median number of 8 to 10 pages per paper holds for any number of authors .
for the same sample of papers , a median number of citations per paper grew from 15 in june 2008 ( stanek 2008 ) to 19 in november 2009 . unlike stanek ( 2008 )
, i do not conclude with any career advice , semi - humorous or otherwise . .
And you have already written the first three sentences of the full article: there have been a number of publications analyzing citation patterns in the astronomical literature , see my previous paper for more discussion ( stanek 2008 ) . in that paper
i addressed the question of whether longer astronomical papers are cited more ( they are ) . following a number of suggestions i received after posting that seminal study , here i address the question of whether astronomical papers with more authors are also cited more .
that this would be the case is somewhat expected , for a number of reasons , but the expected size of the effect varied quite widely among my informally polled colleagues ..
Please generate the next two sentences of the article
|
i therefore decided to investigate the citation vs. number of authors correlation for astronomical papers , and i found the results interesting enough to warrant this posting . in section 2
i describe the data , namely citation and number of authors statistics for about 30,000 refereed astronomical papers published between 2000 and 2004 in apj , a&a , aj and mnras . in section 3
|
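The analysis summarized above amounts to binning roughly 30,000 refereed papers by author count and reporting the median number of citations (and pages) per bin. A minimal pandas sketch of that kind of tabulation is shown below (the file name and column names are hypothetical; the actual data were retrieved through ADS):

import pandas as pd

# hypothetical input: one row per refereed paper (2000-2004, A&A/AJ/ApJ/MNRAS)
# with columns "n_authors", "n_citations" and "n_pages"
papers = pd.read_csv("ads_papers_2000_2004.csv")

# group papers by number of authors (papers with many authors lumped together)
bins = [0, 1, 2, 3, 5, 9, 19, max(papers["n_authors"].max(), 20)]
labels = ["1", "2", "3", "4-5", "6-9", "10-19", "20+"]
papers["author_bin"] = pd.cut(papers["n_authors"], bins=bins, labels=labels)

summary = papers.groupby("author_bin", observed=True).agg(
    n_papers=("n_citations", "size"),
    median_citations=("n_citations", "median"),
    median_pages=("n_pages", "median"),
)
print(summary)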
11,053 |
Suppose that you have an abstract for a scientific paper: the miner@xmath0a experiment is aimed at precisely measuring the cross - sections for various neutrino interaction channels .
it is located at fermilab in the underground cavern in front of minos near detector .
miner@xmath0a is a fine - grained scintillator with electromagnetic and hadronic calorimetry regions .
there are various nuclear targets located inside and in front of the detector for studying nuclear medium effects in neutrino - induced interactions .
the installation was completed in march 2010 and since then the detector has been collecting data . in this paper , the method for determining the neutrino flux is described in detail with the associated uncertainties as well as the techniques for their reduction .
the general structure of the detector is given with the emphasis on the nuclear targets region .
preliminary results related to nuclear effects studies are presented followed by their discussion and future plans . .
And you have already written the first three sentences of the full article: the miner@xmath0a collaboration emerged as a result of a joint effort between the high - energy physics and medium energy nuclear physics communities .
the main goal of the experiment is to measure neutrino - nucleus interaction cross - section with a high degree of precision .
this is an important issue for both present and future neutrino oscillation experiments ..
Please generate the next two sentences of the article
|
the currently available cross - section measurements have large uncertainties due to low statistics and poor knowledge of the incoming neutrino flux , as can be seen in figure [ figure1 ] .
miner@xmath0a will collect large event samples for various interaction channels and will measure the cross - sections with negligible statistical errors and with the well - controlled beam systematic errors .
|
11,054 |
Suppose that you have an abstract for a scientific paper: we investigate the properties and clustering of halos , galaxies and blackholes to @xmath0 in the high resolution hydrodynamical simulation massiveblack - ii ( mbii ) .
mbii evolves a @xmath1cdm cosmology in a cubical comoving volume of @xmath2 and is able to resolve halos of mass @xmath3/h .
it is the highest resolution simulation of this size which includes a self - consistent model for star formation , black hole accretion and associated feedback .
we provide a simulation browser web application which enables interactive search and tagging of halos , subhalos and their properties and publicly release our galaxy catalogs to the scientific community .
our analysis of the halo mass function in mbii reveals that baryons have strong effects , with changes in the halo abundance of 20 - 35% below the knee of the mass function ( @xmath4/h at @xmath0 ) when compared to fits based on dark matter only simulations .
we provide a fitting function for the halo mass function valid for the full range of halo masses in mbii out to redshift @xmath5 and discuss how the onset of non - universal behavior in the mass function limits the accuracy of our fit .
we examine the halo occupation distribution of satellite galaxies and present results valid over 5 orders of magnitude in host halo mass .
we study the clustering of galaxies , and in particular the evolution and scale dependence of stochasticity and bias .
comparison with observational data for these quantities for samples with different stellar mass thresholds yields reasonable agreement .
using population synthesis , we find that the shape of the cosmic spectral energy distribution predicted by mbii is consistent with observations , but lower in amplitude . the galaxy stellar mass function ( gsmf )
is broadly consistent with observations at @xmath6 . at @xmath7 ,
observations probe deeper into the faint end and the population of passive low mass ( for @xmath8 ) galaxies in the simulation makes the gsmf too steep . at the high mass end (....
And you have already written the first three sentences of the full article: the cold dark matter model with a cosmological constant ( @xmath1cdm ) is well established enough ( see e.g. , @xcite,@xcite , @xcite,@xcite ) that individual large - scale simulation efforts can be carried out that focus on just this one cosmology .
we have also reached the point at which supercomputers enable numerical modeling of cosmological volumes with enough resolution to study the properties of individual galaxies . in this paper
we report on a p - gadget hydrodynamic simulation of @xmath10 cubic volume , the massiveblackii simulation ..
Please generate the next two sentences of the article
|
it has @xmath11 mass resolution , cooling , star formation , black holes and feedback , and represents the evolution of a @xmath1cdm universe to redshift @xmath0 .
numerical simulations ( see reviews by @xcite , @xcite , ) are the tool of choice to address many questions in cosmology , as galaxy formation is a complex non - linear problem .
|
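The halo mass function fit provided by the paper is not reproduced in the row above. Purely to illustrate the generic form such fits take, the widely used Sheth-Tormen multiplicity function can be evaluated as follows (standard Sheth-Tormen coefficients, not the MBII fit):

import numpy as np

def sheth_tormen_f(sigma, A=0.3222, a=0.707, p=0.3, delta_c=1.686):
    """Sheth-Tormen multiplicity function f(sigma); standard coefficients,
    shown only to illustrate the generic shape of halo mass function fits."""
    nu = delta_c / sigma
    return (A * np.sqrt(2.0 * a / np.pi) * (1.0 + (a * nu**2) ** (-p))
            * nu * np.exp(-a * nu**2 / 2.0))

# the mass function follows as dn/dlnM = f(sigma) * (rho_mean / M) * |dln(1/sigma)/dlnM|
sigma = np.linspace(0.5, 4.0, 8)
print(sheth_tormen_f(sigma))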
11,055 |
Suppose that you have an abstract for a scientific paper: we show that the nonlinear evolution of the cosmic gravitational clustering is approximately spatial local in the @xmath0-@xmath1 ( position - scale ) phase space if the initial perturbations are gaussian .
that is , if viewing the mass field with modes in the phase space , the nonlinear evolution will cause strong coupling among modes with different scale @xmath1 , but at the same spatial area @xmath0 , while the modes at different area @xmath0 remain uncorrelated , or very weakly correlated .
we first study the quasi - local clustering behavior with the halo model , and demonstrate that the quasi - local evolution in the phase space is essentially due to the self - similar and hierarchical features of the cosmic gravitational clustering .
the scaling of the mass density profile of halos ensures that the coupling between @xmath2 modes at different physical positions is substantially suppressed . using high resolution n - body simulation samples in the lcdm model
, we justify the quasi - locality with the correlation function between the dwt ( discrete wavelet transform ) variables of the cosmic mass field .
although the mass field underwent a highly non - linear evolution , and the dwt variables display significantly non - gaussian features , there are almost no correlations among the dwt variables at different spatial positions
. possible applications of the quasi - locality have been discussed . .
And you have already written the first three sentences of the full article: the large scale structure of the universe was arisen from initial fluctuations through the nonlinear evolution of gravitational instability .
gravitational interaction is of long range , and therefore , the evolution of cosmic clustering is not localized in physical space .
the typical processes of cosmic clustering , such as collapsing and falling into potential wells , the fourier mode - mode coupling and the merging of pre - virialized dark halos , are generally _ non - local_. these processes lead to a correlation between the density perturbations at different positions , even if the perturbations at that positions initially are statistically uncorrelated . for instance , in the zeldovich approximation ( zeldovich 1970 ) , the density field @xmath3 at ( eulerian ) comoving position @xmath4 and time @xmath5 is determined by the initial perturbation at ( lagrangian ) comoving position , @xmath6 , plus a displacement @xmath7 : @xmath8 the displacement @xmath9 represents the effect of density perturbations on the trajectories of self - gravitating particles ..
Please generate the next two sentences of the article
|
the intersection of particle trajectories leads to a correlation between mass fields at different spatial positions .
thus , the gravitational clustering is non - local even in weakly non - linear regime .
|
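The justification quoted above relies on correlating DWT variables of the evolved field at different spatial positions. A toy version of that diagnostic for a one-dimensional Gaussian random field might look as follows (purely illustrative; the paper works with 3D N-body density fields and its own DWT convention):

import numpy as np
import pywt

rng = np.random.default_rng(0)
n_real, n_grid, level = 2000, 256, 4

# collect one detail band of the DWT for many independent realizations of a
# 1d gaussian random field with a red power spectrum P(k) ~ 1/k
k = np.fft.rfftfreq(n_grid)
coeffs = []
for _ in range(n_real):
    spec = np.fft.rfft(rng.standard_normal(n_grid))
    spec[1:] /= np.sqrt(k[1:])          # shape the spectrum
    spec[0] = 0.0                       # remove the mean mode
    field = np.fft.irfft(spec, n=n_grid)
    coeffs.append(pywt.wavedec(field, "db4", level=level)[1])   # coarsest detail band
coeffs = np.array(coeffs)

# correlation matrix between wavelet coefficients at different positions;
# the off-diagonal entries measure the cross-position correlations discussed above
corr = np.corrcoef(coeffs, rowvar=False)
print(np.round(corr[:4, :4], 2))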
11,056 |
Suppose that you have an abstract for a scientific paper: we propose a digital quantum simulator of non - abelian pure - gauge models with a superconducting circuit setup . within the framework of quantum link models ,
we build a minimal instance of a pure @xmath0 gauge theory , using triangular plaquettes involving geometric frustration .
this realization is the least demanding , in terms of quantum simulation resources , of a non - abelian gauge dynamics .
we present two superconducting architectures that can host the quantum simulation , estimating the requirements needed to run possible experiments .
the proposal establishes a path to the experimental simulation of non - abelian physics with solid - state quantum platforms .
gauge invariance is a central concept in modern physics , being at the core of the standard model of elementary particle physics . in particular , invariances with respect to @xmath0 and @xmath1
gauge symmetries characterize the weak interaction and quantum chromodynamics @xcite . in this sense ,
gauge theories represent a cornerstone in our understanding of the physical world and lie at the heart of diverse phenomena , such as the quark - gluon plasma or quantum spin liquids . in condensed matter physics
, @xmath0 gauge fields can also emerge dynamically in relation to exotic many - body phenomena , like quantum hall systems , frustrated magnets , or superconductors @xcite .
lattice gauge theories ( lgt ) are non - perturbative discrete formulations that contribute to the analysis of key features of these models , such as color confinement or chiral symmetry breaking . starting from the seminal work by wilson in 1974 @xcite , lgt have attracted significant attention across several branches of theoretical physics . in the last decades
@xcite , quantum monte carlo simulations have achieved unprecedented accuracies in determining the whole hadronic spectrum of the standard model . however , understanding its full phase diagram from first principles , or simulating dynamical processes , remains out of reach of....
And you have already written the first three sentences of the full article: in this supplemental material , we perform additional analysis that support the results obtained in the main article . in this section ,
we explicitly derive the link operators in terms of schwinger bosons . following @xcite
, we define the operators @xmath25 , @xmath26 as _ right _ and _ left _ generators , acting on the finite - dimensional hilbert space of the link , @xmath90 where @xmath91 are bosons that implement the schwinger representation of the @xmath0 algebra , and the pauli matrices @xmath92 the bosonic operators act on two different sites @xmath93 on the link between the two adjacent sites @xmath2 and @xmath10 and obey the @xmath0 commutation rules @xmath94=i \sum_{c } \epsilon^{abc}r^{c } , ~ \ ,.
Please generate the next two sentences of the article
|
~[r^{a},l^{b}]=0 , ~ \ , ~ [ l^{a},l^{b}]=&i \sum_{c } \epsilon^{abc}l^{c } , \\ \end{split}\ ] ] taking into account that they commute on different links ( i.e. , different @xmath2 , @xmath95 ) . with the schwinger representation , there are two multiplets with well - defined @xmath0 commutation relations @xmath96 = \sum_{\gamma } \frac{\sigma^{a}_{\alpha \gamma}}{2 } c_{\gamma l } , ~ \ , ~ & \left [ \left ( \sum_{\beta } \sigma^{y}_{\alpha \beta}c^{\dagger}_{\beta
l } \right ) , l^{a}\right ] = \sum_{\gamma } \frac{\sigma^{a}_{\alpha \gamma}}{2 } \left(\sum_{\beta } \sigma^{y}_{\gamma \beta } c^{\dagger}_{\beta l }
|
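For reference, the standard Schwinger-boson construction of SU(2) link generators referred to in the row above reads as follows (a standard form, not a verbatim reconstruction of the paper's equations):

\[
  L^{a} \;=\; \tfrac12 \sum_{\alpha\beta} a^{\dagger}_{\alpha}\,\sigma^{a}_{\alpha\beta}\,a_{\beta},
  \qquad
  R^{a} \;=\; \tfrac12 \sum_{\alpha\beta} b^{\dagger}_{\alpha}\,\sigma^{a}_{\alpha\beta}\,b_{\beta},
\]
\[
  [\,L^{a},L^{b}\,] = i\sum_{c}\epsilon^{abc} L^{c},
  \qquad
  [\,R^{a},R^{b}\,] = i\sum_{c}\epsilon^{abc} R^{c},
  \qquad
  [\,L^{a},R^{b}\,] = 0 ,
\]

where \(a_{\alpha}\) and \(b_{\alpha}\) are the two boson species attached to the two ends of a link, and generators on different links commute.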
11,057 |
Suppose that you have an abstract for a scientific paper: gravitational and electromagnetic fields of an electron correspond to over - rotating kerr - newman ( kn ) solution , which has a naked singular ring and two - sheeted topology .
this solution is regularized by a solitonic source , in which singular interior is replaced by a vacuum bubble filled by the higgs field in a false - vacuum state .
field model of this kn bubble has much in common with the famous mit and slac bag models , but the geometry is `` dual '' ( turned inside out ) , leading to consistency of the kn bag model with gravity .
similar to other bag models , the kn bag is compliant to deformations , and under rotations it takes an oblate ellipsoidal form , creating a circular string along the border .
electromagnetic excitations of the kn bag generate stringy traveling waves which deform the bag , creating a traveling singular pole , included in a general bag - string - quark complex .
the dressed electron may be considered in this model as a coherent excitation of this system , confining the point - like electron ( as a quark ) in a state of zitterbewegung . .
And you have already written the first three sentences of the full article: the kerr - newman rotating black hole solution has gyromagnetic ratio @xmath0 as that of the dirac electron .
the measurable parameters of the electron : spin , mass , charge and magnetic moment determine the gravitational and electromagnetic field of the electron as the field of the over - rotating kerr - newman ( kn ) solution .
the corresponding space - time has topological defect the naked kerr singular ring , which forms a branch line of the kerr space into _ two sheets : _ the sheet of advanced and sheet of the retarded fields ..
Please generate the next two sentences of the article
|
the kerr - schild form of metric @xcite g_{\mu\nu } = \eta_{\mu\nu } + 2h k_\mu k_\nu , [ ksh ] in which @xmath1 is the metric of the auxiliary minkowski space @xmath2 and @xmath3 is a null vector field , @xmath4 forming the principal null congruence ( pnc ) @xmath5 the retarded and advanced sheets are related by a smooth transfer of the kerr pnc via the disk @xmath6 spanned by the kerr singular ring @xmath7 ( see fig.1 ) , where @xmath8 is the kerr ellipsoidal radial coordinate . the surface @xmath9 represents a disklike `` door '' from the negative sheet @xmath10 to the positive one @xmath11 .
the null vector fields @xmath12 differ on these sheets , and the two different null congruences @xmath13 create two different metrics g^\pm_{\mu\nu } = \eta_{\mu\nu } + 2h k^\pm_\mu k^\pm_\nu , [ kspm ] on the same minkowski background @xmath14 . the twosheetedness of the kerr geometry caused a search for different models of the source of the kn solution , without the mystery of the negative sheet .
|
11,058 |
Suppose that you have an abstract for a scientific paper: * abstract : * we investigate the dynamics of a neutral and a charged particle around a black hole in modified gravity immersed in magnetic field . our focus is on the scalar - tensor - vector theory as modified gravity .
we are interested to explore the conditions on the energy of the particle under which it can escape to infinity after collision with another neutral particle in the vicinity of the black hole .
we calculate escape velocity of particle orbiting in the innermost stable circular orbit ( isco ) after the collision .
we study the effects of modified gravity on the dynamics of particles .
further we discuss how the presence of magnetic field in the vicinity of black hole , effects the motion of the orbiting particle .
we show that the stability of isco increases due to presence of magnetic field . it is observed that a particle can go arbitrary close to the black hole due to presence of magnetic field .
furthermore isco for black hole is more stable as compared with schwarzschild black hole .
we also discuss the lyapunov exponent and the effective force acting on the particle in the presence of magnetic field . .
And you have already written the first three sentences of the full article: theories of modified gravity ( such as @xmath0 theory , lovelock gravity , gauss - bonnet theory etc ) are constructed by adding curvature correction terms in the usual einstein - hilbert action through which the cosmic accelerated expansion might be explained @xcite ( see also @xcite for reviews on modified gravity ) .
such correction terms give rise to solutions of the field equations without invoking the concept of dark energy . to find the dynamical equations one can vary the action according to the metric .
there is no restriction on the gravitational lagrangian to be a linear function of ricci scalar @xmath1 @xcite ..
Please generate the next two sentences of the article
|
recently some authors have taken into serious consideration lagrangians that are `` stochastic '' functions , with the requirement that they should be locally gauge invariant @xcite .
this mechanism was adopted in order to treat the quantization on curved spacetime .
|
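Since the comparison above is made against the Schwarzschild case, the standard Schwarzschild reference values may be useful (textbook results, independent of the modified-gravity model studied in the paper):

\[
  r_{s} = \frac{2GM}{c^{2}},
  \qquad
  r_{\rm isco} = \frac{6GM}{c^{2}} = 3\,r_{s},
  \qquad
  v_{\rm esc}(r) = c\,\sqrt{\frac{r_{s}}{r}} ,
\]

where \(v_{\rm esc}\) is the escape speed measured by a static local observer.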
11,059 |
Suppose that you have an abstract for a scientific paper: we present the results of calculations defining global , three - dimensional representations of the complex - valued potential - energy surfaces of the @xmath0 , @xmath1 , and @xmath2 metastable states of the water anion that underlie the physical process of dissociative electron attachment to water .
the real part of the resonance energies is obtained from configuration - interaction calculations performed in a restricted hilbert space , while the imaginary part of the energies ( the widths ) is derived from complex kohn scattering calculations .
a diabatization is performed on the @xmath1 and @xmath2 surfaces , due to the presence of a conical intersection between them .
we discuss the implications that the shapes of the constructed potential - energy surfaces will have upon the nuclear dynamics of dissociative electron attachment to h@xmath3o . .
And you have already written the first three sentences of the full article: dissociative electron attachment ( dea ) to the water molecule proceeds through a number of channels , each with a different energetic threshold , @xmath4 the production of these species occurs via three metastable born - oppenheimer electronic states of the h@xmath3o@xmath5 system , whose vertical transition energies therefore determine the incident energies at which dea occurs .
those electronic states of the anion are the @xmath0 , @xmath1 , and @xmath2 feshbach resonances , and they are responsible for the three distinct peaks in the dea cross section .
their potential - energy surfaces contain asymptotes corresponding to the product channels listed in eq.([asymptotes ] ) , with the exception of the h+oh@xmath5 channel ; this product is a result of nonadiabatic effects . here.
Please generate the next two sentences of the article
|
we report the construction of the complex - valued adiabatic potential - energy surfaces associated with these resonance states , which will be used within the local complex potential ( lcp ) model @xcite to calculate the nuclear dynamics leading to dissociation . the present study is followed by a second paper@xcite , to which we will refer as paper ii , in which we present the results of nuclear dynamics calculations under the lcp model using the calculated surfaces .
dissociative electron attachment to water was studied as early as 1930 , in the experiment of lozier@xcite , and as recently as 2006 , in the study by fedor _ _ et al.__@xcite .
|
11,060 |
Suppose that you have an abstract for a scientific paper: an integrated coherent matter wave circuit is a single device , analogous to an integrated optical circuit , in which coherent de broglie waves are created and then launched into waveguides where they can be switched , divided , recombined , and detected as they propagate .
applications of such circuits include guided atom interferometers , atomtronic circuits , and precisely controlled delivery of atoms .
here we report experiments demonstrating integrated circuits for guided coherent matter waves .
the circuit elements are created with the painted potential technique , a form of time - averaged optical dipole potential in which a rapidly - moving , tightly - focused laser beam exerts forces on atoms through their electric polarizability .
the source of coherent matter waves is a bose - einstein condensate ( bec ) .
we launch becs into painted waveguides that guide them around bends and form switches , phase coherent beamsplitters , and closed circuits .
these are the basic elements that are needed to engineer arbitrarily complex matter wave circuitry . .
And you have already written the first three sentences of the full article: it has been a longstanding goal in the field of atom optics to realize an integrated coherent matter wave circuit @xcite .
this concept envisions a matter wave analog of an integrated optical circuit : a single device in which coherent de broglie waves would be created and then launched into waveguides where they can be switched , divided , recombined , and detected as they propagate .
research to develop coherent matter wave circuits is motivated in part by the many potential applications of this technology ..
Please generate the next two sentences of the article
|
one important aim is the creation of waveguide atom interferometers @xcite , which have applications ranging from fundamental physics to various forms of sensing .
for example , when the interferometer splits the moving matter waves into two wavepackets and then recombines them after the separated wavepackets have traveled along different paths that enclose an area , the device will be sensitive to rotations through the sagnac phase @xcite .
|
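The painted potential described above is the time average of a rapidly moving, tightly focused beam acting through the optical dipole force. A schematic numpy estimate of the time-averaged potential for a painted ring is sketched below (the waist, depth and path are made-up illustrative numbers, not the experimental parameters):

import numpy as np

# illustrative parameters (not the experimental values)
w0 = 2.0          # beam waist in micrometres
depth = 1.0       # single-spot potential depth (arbitrary units, attractive)
radius = 20.0     # radius of the painted ring in micrometres

# positions of the "paint brush" along one period of the painting pattern
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
xb, yb = radius * np.cos(t), radius * np.sin(t)

# grid on which to evaluate the time-averaged dipole potential
x = np.linspace(-30.0, 30.0, 121)
y = np.linspace(-30.0, 30.0, 121)
X, Y = np.meshgrid(x, y)

# V_avg(x, y) = -(depth / N) * sum_i exp(-2 ((x - xb_i)^2 + (y - yb_i)^2) / w0^2)
V = np.zeros_like(X)
for xi, yi in zip(xb, yb):
    V -= depth * np.exp(-2.0 * ((X - xi) ** 2 + (Y - yi) ** 2) / w0 ** 2)
V /= len(t)

# depth of the time-averaged ring trap; shallower than `depth` because the
# beam spends only a fraction of each painting period near any given point
print(V.min())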
11,061 |
Suppose that you have an abstract for a scientific paper: this paper presents a method of computing a revision of a function - free normal logic program .
if an added rule is inconsistent with a program , that is , if it leads to a situation such that no stable model exists for a new program , then deletion and addition of rules are performed to avoid inconsistency .
we specify a revision by translating a normal logic program into an abductive logic program with abducibles to represent deletion and addition of rules .
to compute such deletions and additions , we propose an adaptation of our top - down abductive proof procedure to compute the abducibles relevant to an added rule .
we compute a minimally revised program by choosing a minimal set of abducibles among all the sets of abducibles computed by a top - down proof procedure . .
And you have already written the first three sentences of the full article: knowledge base is always subject to change since an environment around the knowledge base is not guaranteed to be stable forever and even some error might be included at the initial stage .
therefore , study of revision of knowledge base is very important@xcite . @xcite and
@xcite consider a revision of monotonic theories and there are a lot of researches in this direction ( see @xcite for a survey ) ..
Please generate the next two sentences of the article
|
@xcite and @xcite consider an update of nonmonotonic theories to derive a given goal or a given observation . @xcite and
@xcite consider a revision of nonmonotonic theories which is more related to the studies of revision of monotonic theories @xcite ; they consider a revision when inconsistency arises at the addition of rules . in this paper , we follow the latter approach . revision of nonmonotonic theories is especially important for ai , since it is very rare that commonsense reasoning can be represented as a monotonic theory .
|
11,062 |
Suppose that you have an abstract for a scientific paper: intermediate mass black holes play a critical role in understanding the evolutionary connection between stellar mass and super - massive black holes@xcite .
however , to date the existence of these species of black holes remains ambiguous and their formation process is therefore unknown@xcite .
it has been long suspected that black holes with masses @xmath0 should form and reside in dense stellar systems@xcite .
therefore , dedicated observational campaigns have targeted globular cluster for many decades searching for signatures of these elusive objects .
all candidates found in these targeted searches appear radio dim and do not have the x - ray to radio flux ratio predicted by the fundamental plane for accreting black holes@xcite .
based on the lack of an electromagnetic counterpart upper limits of @xmath1 and @xmath2 have been placed on the mass of a putative black hole in 47 tucanae ( ngc 104 ) from radio and x - ray observations respectively@xcite .
here we show there is evidence for a central black hole in 47 tuc with a mass of m@xmath3@xmath4 when the dynamical state of the globular cluster is probed with pulsars .
the existence of an intermediate mass black hole in the centre of one of the densest clusters with no detectable electromagnetic counterpart suggests that the black hole is not accreting at a sufficient rate and therefore contrary to expectations is gas starved .
this intermediate mass black hole might be a member of electromagnetically invisible population of black holes that are the elusive seeds leading to the formation of supermassive black holes in galaxies .
an intermediate mass black hole ( imbh ) strongly affects the spatial distribution of stars in globular clusters ( gcs ) .
massive stars sink into the centre more efficiently during relaxation in order to achieve....
And you have already written the first three sentences of the full article: the high precision timing solutions for pulsars in 47 tuc have been possible due to a long dedicated observation campaign for several decades with the parkes radio telescope@xcite .
we use the updated timing solutions in the extended table 1@xcite .
based on the observed number of pulsars , the total predicted number of neutron stars in the cluster may range between 200 and 1500 due to the integrated uncertainties in the luminosity distribution , beaming , flux densities and spectral indices of pulsars , and in the initial mass function , binary fraction , encounter rate and scintillation properties of the cluster .
Please generate the next two sentences of the article
|
our n - body simulations with a @xmath21 retention fraction predict @xmath221000 neutron stars in 47 tuc .
a grid of several hundred @xmath5-body simulations of star clusters , varying the initial density profile of the cluster , its initial half - mass radius and the mass ratio of the black hole to the total cluster mass ( @xmath23 ) , was used for our study @xcite .
|
11,063 |
Suppose that you have an abstract for a scientific paper: we report on time - resolved ccd photometry of four outbursts of a short - period su uma - type dwarf nova , v844 herculis .
we successfully determined the mean superhump periods to be 0.05584(64 ) days and 0.055883(3 ) days for the 2002 may superoutburst and the 2006 april - may superoutburst , respectively . during the 2002 october observations
, we confirmed that the outburst is a normal outburst , which is the first recorded normal outburst in v844 her .
we also examined superhump period changes during 2002 may and 2006 april - may superoutbursts , both of which showed increasing superhump period over the course of the plateau stage . in order to examine the long - term behavior of v844 her
, we analyzed archival data over the past ten years since the discovery of this binary .
although photometry is not satisfactory in some superoutbursts , we found that v844 her showed no precursors and rebrightenings .
based on the long - term light curve , we further confirmed v844 her has shown almost no normal outbursts despite the fact that the supercycle of the system is estimated to be about 300 days . in order to explain the long - term light curves of v844 her , evaporation in the accretion disk may play a role in the avoidance of several normal outbursts , which does not contradict with the relatively large x - ray luminosity of v844 her . .
And you have already written the first three sentences of the full article: dwarf novae belong to a subclass of cataclysmic variable stars that consist of a white dwarf ( primary ) and a late - type star ( secondary ) .
the secondary star fills its roche lobe and transfers mass to the primary via the inner lagrangian point ( l1 ) and the transferred matter forms an accretion disk ( for a review , see @xcite ; @xcite ; @xcite ) . among dwarf novae ,
there exist three subtypes based on their light curves ..
Please generate the next two sentences of the article
|
su uma - type dwarf novae , whose orbital periods are shorter than 0.1 days in most cases , are one of the subtypes , characterized by exhibiting two types of outbursts .
one is the normal outburst , which continues for a few days .
|
11,064 |
Suppose that you have an abstract for a scientific paper: we analyze the ground states and the elementary collective excitations ( phonons ) of a class of systems , which form cluster crystals in the absence of attractions . whereas the regime of moderate - to - high - temperatures in the phase diagram has been analyzed in detail by means of density functional considerations ( likos c n , mladek b m , gottwald d and kahl g 2007 _ j. chem . phys .
_ * 126 * 224502 ) , the present approach focuses on the complementary regime of low temperatures .
we establish the existence of an infinite cascade of isostructural transitions between crystals with different lattice site occupancy at @xmath0 and we quantitatively demonstrate that the thermodynamic instabilities are bracketed by mechanical instabilities arising from long - wavelength acoustical phonons .
we further show that all optical modes are degenerate and flat , giving rise to perfect realizations of einstein crystals .
we calculate analytically the complete phonon spectrum for the whole class of models as well as the helmholtz free energy of the systems . on the basis of the latter ,
we demonstrate that the aforementioned isostructural phase transitions must terminate at an infinity of critical points at low temperatures , brought about by the anharmonic contributions in the hamiltonian and the hopping events in the crystals .
pacs numbers : 64.70.dv , 61.20.ja , 82.30.nr , 82.70.dd .
And you have already written the first three sentences of the full article: particles interacting by means of bounded and purely repulsive interaction potentials @xmath1 can form cluster crystals at sufficiently high densities @xcite . though the existence of clustering in the full absence of attractions might seem counterintuitive at first , its existence rests on solid mathematical and physical grounds and has also been amply demonstrated by means of detailed computer simulations @xcite .
a necessary and sufficient condition for the occurrence of clustering is that the fourier transform of the interaction potential , @xmath2 , has negative parts . in this case , the properties of the system , both in the liquid and in the crystal phases , are largely determined by the position of the wavevector , @xmath3 , at which @xmath2 attains its most negative value and by the negative amplitude @xmath4 of the fourier spectrum of the potential there @xcite .
the physical realizability of such potentials as _ effective interactions _ between suitably tailored macromolecules has been demonstrated for the case of amphiphilic dendrimers as well as ring polymers @xcite . at moderate to high temperatures , the thermodynamics of the system.
Please generate the next two sentences of the article
|
is very accurately described by a mean - field density functional theory , which predicts , among other properties , that the lattice constant of the crystal is density - independent @xcite .
this is brought about by the mechanism of occupying the crystal sites by a multiple number of particles @xmath5 which scales proportionally to the density of particles , @xmath6 , in the crystal . within this crystal ,
|
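The clustering criterion quoted above is that the 3D Fourier transform of the bounded, purely repulsive pair potential has negative parts. A quick numerical check of this condition for a generalized exponential potential v(r) = epsilon * exp(-(r/sigma)^n) might look as follows (the exponent n = 4 is an illustrative choice of the kind discussed in the cluster-crystal literature):

import numpy as np

def radial_ft(v_of_r, k, r):
    """3d radial fourier transform: v~(k) = (4*pi/k) * int r v(r) sin(k r) dr."""
    return np.array([4.0 * np.pi / ki * np.trapz(r * v_of_r * np.sin(ki * r), r)
                     for ki in k])

r = np.linspace(1e-4, 10.0, 4000)          # radii in units of sigma
k = np.linspace(0.1, 15.0, 300)            # wavevectors in units of 1/sigma

v_gem4 = np.exp(-r**4)                     # GEM-4: bounded, purely repulsive
v_gauss = np.exp(-r**2)                    # Gaussian: bounded, purely repulsive

vt_gem4, vt_gauss = radial_ft(v_gem4, k, r), radial_ft(v_gauss, k, r)
print("min of FT, GEM-4 :", vt_gem4.min())    # clearly negative -> clustering expected
print("min of FT, Gauss :", vt_gauss.min())   # analytically non-negative; any tiny
                                              # negative value here is integration error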
11,065 |
Suppose that you have an abstract for a scientific paper: with the relativistic representation of the nuclear tensor force that is included automatically by the fock diagrams , we explored the self - consistent tensor effects on the properties of nuclear matter system .
the analysis were performed within the density - dependent relativistic hartree - fock ( ddrhf ) theory .
the tensor force is found to notably influence the saturation mechanism , the equation of state and the symmetry energy of nuclear matter , as well as the neutron star properties . without introducing any additional free parameters , the ddrhf approach paves a natural way to reveal the tensor effects on the nuclear matter system . .
And you have already written the first three sentences of the full article: in the past several decades , the covariant density functional theories have achieved great successes in exploring the finite nuclei and nuclear matter .
one of the most outstanding schemes is the relativistic mean field ( rmf ) theory with a limited number of free parameters @xcite . because of its covariant formulation of strong scalar and vector fields , the rmf theory is able to self - consistently describe the nuclear spin - orbit effect .
however , important degrees of freedom associated with the @xmath0 and tensor-@xmath1 fields are missing in the limit of hartree approach ..
Please generate the next two sentences of the article
|
in fact , the dominant part of one - pion exchange process is the nuclear tensor force component @xcite that plays significant roles in nuclear structure @xcite , excitation and decay modes @xcite , and symmetry energy @xcite . as an important ingredient of nuclear force
, the tensor force , together with the spin - orbit coupling , characterizes the spin dependent feature @xcite .
|
11,066 |
Suppose that you have an abstract for a scientific paper: we consider an interacting system of spin variables on a loopy interaction graph , identified by a tree graph and a set of loopy interactions .
we start from a high - temperature expansion for loopy interactions represented by a sum of nonnegative contributions from all the possible frustration - free loop configurations .
we then compute the loop corrections using different approximations for the nonlocal loop interactions induced by the spin correlations in the tree graph .
for distant loopy interactions , we can exploit the exponential decay of correlations in the tree interaction graph to compute loop corrections within an independent - loop approximation .
higher orders of the approximation are obtained by considering the correlations between the nearby loopy interactions involving larger number of spin variables . in particular
, the sum over the loop configurations can be computed `` exactly '' by the belief propagation algorithm in the low orders of the approximation as long as the loopy interactions have a tree structure .
these results might be useful in developing more accurate and convergent message - passing algorithms exploiting the structure of loopy interactions . .
And you have already written the first three sentences of the full article: the problem of computing local marginals of an arbitrary probability measure is computationally hard but essential , for example , in the study of inverse problems and in solving for solutions to random constraint satisfaction problems .
the loopy belief propagation ( bp ) algorithm is an efficient approximate algorithm that has proven very helpful in the study of random optimization problems @xcite .
the bp algorithm , relying on the bethe approximation , is exact for systems living on tree interaction graphs ..
Please generate the next two sentences of the article
|
it is also expected to be asymptotically exact for locally tree - like graphs as long as the variables are not strongly correlated . in general , the accuracy and convergence of the loopy bp algorithm are not guaranteed , especially in the presence of short loopy interactions . therefore , characterizing the algorithm performance in the presence of loopy interactions @xcite , and its generalizations @xcite , has been the subject of many studies in recent years . in fact , the loopy bp marginals are not globally consistent in the presence of loopy interactions .
this means that the algorithm performance can be improved by demanding more consistency for the bp marginals , e.g. , by ensuring that the local marginals satisfy the fluctuation - response relations @xcite .
|
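The BP algorithm referred to above is exact on trees. As a reference point for the message-passing structure under discussion, here is a minimal sum-product implementation for pairwise Ising interactions on a small tree (a generic textbook sketch, not the loop-corrected scheme proposed in the paper):

import numpy as np

# tree interaction graph: edges with couplings J_ij, fields h_i, spins s = +-1
edges = {(0, 1): 0.8, (1, 2): -0.5, (1, 3): 0.3}
h = {0: 0.1, 1: 0.0, 2: -0.2, 3: 0.0}
nodes = sorted(h)
spins = np.array([1.0, -1.0])

neighbors = {i: [] for i in nodes}
for (i, j) in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

def coupling(i, j):
    return edges.get((i, j), edges.get((j, i)))

# messages m[(i, j)][s_j]: message from i to j, initialized uniform
m = {(i, j): np.ones(2) / 2 for i in nodes for j in neighbors[i]}

for _ in range(20):   # enough sweeps to converge on a small tree
    for (i, j) in list(m):
        # m_{i->j}(s_j) ~ sum_{s_i} exp(h_i s_i + J_ij s_i s_j) * prod_{k != j} m_{k->i}(s_i)
        incoming = (np.prod([m[(k, i)] for k in neighbors[i] if k != j], axis=0)
                    if len(neighbors[i]) > 1 else np.ones(2))
        msg = np.array([np.sum(np.exp(h[i] * spins + coupling(i, j) * spins * sj)
                               * incoming) for sj in spins])
        m[(i, j)] = msg / msg.sum()

# single-site marginals b_i(s_i) ~ exp(h_i s_i) * prod_k m_{k->i}(s_i)
for i in nodes:
    b = np.exp(h[i] * spins) * np.prod([m[(k, i)] for k in neighbors[i]], axis=0)
    print(i, b / b.sum())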
11,067 |
Suppose that you have an abstract for a scientific paper: we present a model for magnetic energy dissipation in a pulsar wind nebula .
better understanding of this process is required to assess the likelihood that certain astrophysical transients may be powered by the spin - down of a `` millisecond magnetar . ''
examples include superluminous supernovae , gamma - ray bursts , and anticipated electromagnetic counterparts to gravitational wave detections of binary neutron star coalescence .
our model leverages recent progress in the theory of turbulent magnetic relaxation to specify a dissipative closure of the stationary magnetohydrodynamic ( mhd ) wind equations , yielding predictions of the magnetic energy dissipation rate throughout the nebula .
synchrotron losses are treated self - consistently . to demonstrate the model s efficacy ,
we show that it can reproduce many features of the crab nebula , including its expansion speed , radiative efficiency , peak photon energy , and mean magnetic field strength . unlike ideal mhd models of the crab ( which lead to the so - called @xmath0-problem ) our model accounts for the transition from ultra to weakly magnetized plasma flow , and for the associated heating of relativistic electrons .
we discuss how the predicted heating rates may be utilized to improve upon models of particle transport and acceleration in pulsar wind nebulae .
we also discuss implications for the crab nebula s @xmath1-ray flares , and point out potential modifications to models of astrophysical transients invoking the spin - down of a millisecond magnetar . .
And you have already written the first three sentences of the full article: a pulsar wind nebula ( pwn ) is a bubble of relativistic plasma energized by a rapidly rotating , magnetized neutron star .
the prototypical pwn is the crab nebula , which is decisively the best studied celestial object beyond our solar system , having served for decades as a testbed for theories of astrophysical outflows and their radiative processes .
pwne are also of broad interest in astroparticle physics , as potential sources of galactic positrons @xcite and ultra - high energy cosmic rays @xcite ..
Please generate the next two sentences of the article
|
more generally , pwne have ultra - energetic ( albeit hypothetical ) counterparts in the winds of so - called `` millisecond magnetars . ''
such exotic objects may be formed in the coalescence of binary neutron star systems , in which case they could yield the first electromagnetic counterparts to gravitational wave detections @xcite . or , if formed during the core - collapse of a massive star , a millisecond magnetar could re - energize the ejecta shell , helping to explain the light - curves of certain hydrogen - poor superluminous supernovae @xcite .
|
11,068 |
Suppose that you have an abstract for a scientific paper: in smooth - particle hydrodynamics ( sph ) , artificial viscosity is necessary for the correct treatment of shocks , but often generates unwanted dissipation away from shocks .
we present a novel method of controlling the amount of artificial viscosity , which uses the total time derivative of the velocity divergence as shock indicator and aims at completely eliminating viscosity away from shocks .
we subject the new scheme to numerous tests and find that the method works at least as well as any previous technique in the strong - shock regime , but becomes virtually inviscid away from shocks , while still maintaining particle order . in particular sound waves or oscillations of gas spheres
are hardly damped over many periods .
[ firstpage ] hydrodynamics methods : numerical methods : @xmath0-body simulations .
And you have already written the first three sentences of the full article: smooth - particle hydrodynamics ( sph ) is a lagrangian method for modelling fluid dynamics , pioneered by @xcite and @xcite . instead of discretising the fluid quantities , such as density , velocity , and temperature , on a fixed grid as in eulerian methods ,
the fluid is represented by a discrete set of moving particles acting as interpolation points . due to its lagrangian nature , sph models regions of higher density with higher resolution with the ability to simulate large dynamic ranges .
this makes it particularly useful in astrophysics , where it is used to model galaxy and star formation , stellar collisions , and accretion discs ..
Please generate the next two sentences of the article
|
the core of sph is the kernel estimator : the fluid density is _ estimated _ from the masses @xmath1 and positions @xmath2 of the particles via @xmath3 for the @xmath7th particle , where @xmath4 is the kernel function and @xmath5 the sph smoothing length for @xmath6 ( a hat is used to denote a local _ estimate _ , since in many sph - related publications the distinction between actual and estimated quantities is not clearly made , confusing the discussion ) .
|
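The kernel estimate quoted above, rho_i = sum_j m_j W(|x_i - x_j|, h_i), can be illustrated in a few lines of numpy with the common cubic-spline (M4) kernel (the kernel choice, particle numbers and smoothing length are illustrative, not those of the paper):

import numpy as np

def cubic_spline_w(r, h):
    """Standard 3d cubic-spline (M4) kernel with compact support r < h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)                     # 3d normalization
    w = np.where(q < 0.5, 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 np.where(q < 1.0, 2.0 * (1.0 - q) ** 3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h_i); brute-force O(N^2), periodic unit box."""
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= np.round(diff)                           # minimum-image convention
    r = np.linalg.norm(diff, axis=-1)
    return np.sum(mass[None, :] * cubic_spline_w(r, h[:, None]), axis=1)

rng = np.random.default_rng(1)
n = 1000
pos = rng.uniform(0.0, 1.0, size=(n, 3))             # particles in a periodic unit box
mass = np.full(n, 1.0 / n)                           # total mass 1 -> mean density 1
h = np.full(n, 0.3)                                  # fixed smoothing length

rho = sph_density(pos, mass, h)
print(rho.mean())   # close to 1; the self term m_i W(0, h_i) adds a small positive bias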
11,069 |
Suppose that you have an abstract for a scientific paper: we study a new s - process path through an isomer of @xmath0re to improve a @xmath1re-@xmath1os nucleo - cosmochronometer . the nucleus @xmath1re is produced by this new path of @xmath2re(n,@xmath3)@xmath0re@xmath4(n,@xmath3)@xmath1re .
we measure a ratio of neutron capture cross - sections for the @xmath2re(n,@xmath3)@xmath0re@xmath4 and @xmath2re(n,@xmath3)@xmath0re@xmath5 reactions at thermal neutron energy because the ratio with the experimental uncertainty has not been reported . using an activation method with reactor neutrons ,
we obtain the ratio of @xmath6 = 0.54 @xmath7 0.11% . from this ratio
we estimate the ratio of maxwellian averaged cross sections in a typical s - process environment at @xmath8 = 30 kev with a help of the temperature dependence given in a statistical - model calculation because the energy dependence of the isomer / ground ratio is smaller than the absolute neutron capture cross - section .
the ratio at @xmath8=30 kev is estimated to be @xmath9 = 1.3 @xmath7 0.8% .
we calculate the s - process contribution from the new path in a steady - flow model .
the additional abundance of @xmath1re through this path is estimated to be @xmath10 = 0.56 @xmath7 0.35% relative to the abundance of @xmath0os .
this additional increase of @xmath1re does not make any remarkable change in the @xmath1re-@xmath1os chronometer for an age estimate of a primitive meteorite , which has recently been found to be affected strongly by a single supernova r - process episode . .
And you have already written the first three sentences of the full article: two neutron - capture processes are important for astrophysical nucleosynthesis of heavy elements .
the first one is a rapid neutron - capture process ( r - process ) that is considered to occur in supernova ( sn ) explosions @xcite , and the other is a slow neutron - capture process ( s - process ) in low - mass asymptotic giant branch ( agb ) stars @xcite or massive stars @xcite .
long - lived radioactive nuclei are used as nucleo - cosmochronometers , which are useful for an investigation of nucleosynthesis process history along the galactic chemical evolution ( gce ) before the solar system formation ..
Please generate the next two sentences of the article
|
a general idea of the nucleo - cosmochronometer was proposed by rutherford about 70 years ago @xcite .
radioactive nuclei of cosmological significance are rare and only six chronometers with half - lives in the range of the cosmic age 1 @xmath11 100 gyr are known .
|
11,070 |
Suppose that you have an abstract for a scientific paper: _ planck_-2015 data seem to favour a large value of the lensing amplitude parameter , @xmath0 , in cmb spectra .
this result is in @xmath1 tension with the lensing reconstruction result , @xmath2 . in this paper , we simulate several cmb anisotropy and cmb lensing spectra based on _ planck_-2015 best - fit cosmological parameter values and _ planck _ blue book beam and noise specifications .
we analyse several modified gravity models within the effective field theory framework against these simulations and find that models whose effective newton constant is enhanced can modulate the cmb anisotropy spectra in a way similar to that of the @xmath3 parameter . however , in order to lens the cmb anisotropies sufficiently , like in the _
planck_-2015 results , the growth of matter perturbations is substantially enhanced and gives a high @xmath4 value .
this in turn proves to be problematic when combining these data to other probes , like weak lensing from cfhtlens , that favour a smaller amplitude of matter fluctuations . .
And you have already written the first three sentences of the full article: based on the full - mission _ planck _
observations of temperature and polarization anisotropies of the cosmic microwave background ( cmb ) radiation , _ planck_-2015 results show that the temperature and polarization power spectra are consistent with the standard spatially - flat six - parameter @xmath5cdm cosmology with a primordial power - law spectrum of adiabatic scalar perturbations .
hereafter we shall call this model the base-@xmath5cdm . on the other hand , the same data , especially the temperature - temperature ( tt ) spectrum reveals some tension with the cmb lensing deflection angle ( @xmath6 ) spectrum reconstructed from the same maps . in details , the lensing amplitude in cmb temperature and polarization spectra , @xmath7 , is in @xmath1 tension with the amplitude of the cmb trispectrum reconstructed lensing deflection angle spectrum , @xmath8 while it is expected that in the base-@xmath5cdm model both these quantities should be equal to unity ..
Please generate the next two sentences of the article
|
the _ planck _ collaboration finds that , compared with the base-@xmath5cdm model , the base-@xmath5cdm+@xmath3 model can reduce the logarithmic likelihood ( @xmath9 ) and provide a better fit to the data sets with @xmath10 or marginalized constraint @xmath7 @xcite .
more importantly , they find that there is roughly equal preference for high @xmath3 from intermediate and high multipoles ( _ i.e. _ , the ` plik ` likelihood ; @xmath11 ) and from the low-@xmath12 likelihood ( @xmath13 ) with a further small change coming from the priors .
|
11,071 |
Suppose that you have an abstract for a scientific paper: we report our comprehensive study of physical properties of a ternary intermetallic compound prirsi@xmath0 investigated by dc magnetic susceptibility @xmath1 , isothermal magnetization @xmath2 , thermo - remnant magnetization @xmath3 , ac magnetic susceptibility @xmath4 , specific heat @xmath5 , electrical resistivity @xmath6 , muon spin relaxation ( @xmath7sr ) and inelastic neutron scattering ( ins ) measurements . a magnetic phase transition is marked by a sharp anomaly at @xmath8 k in @xmath1 measured at low applied fields which is also reflected in the @xmath5 data through a weak anomaly at 12 k. an irreversibility between the zero field cooled and field cooled @xmath1 data below 12.2 k and a very large relaxation time of @xmath3 indicates the presence of ferromagnetic correlation .
the magnetic part of specific heat shows a broad schottky - type anomaly near 40 k due to the crystal electric field ( cef ) effect .
an extremely small value of magnetic entropy below 12 k suggests a cef - split singlet ground state which is confirmed from our analysis of ins data .
the ins spectra show two prominent inelastic excitations at 8.5 mev and 18.5 mev that could be well accounted by a cef model . the cef splitting energy between the ground state singlet and the first excited doublet
is found to be 92 k. our @xmath7sr data reveal a possible magnetic ordering below 30 k which is much higher than that found from the specific heat and magnetic susceptibility measurements .
this could be due to the presence of short range correlations well above the long range magnetic ordering or due to the electronic changes induced by muons .
the induced moment magnetism in the singlet ground state system prirsi@xmath0 with such a large splitting energy of 92 k is quite surprising . .
And you have already written the first three sentences of the full article: the electrostatic coupling between the @xmath9 shell of rare earths having nonzero orbital angular momentum ( @xmath10 ) and its environment , known as crystalline electric field ( cef ) , greatly influences the physical properties of a rare earth system .
the cef modifies the energy levels of rare earth atoms in a solid and tends to remove the ( @xmath11)-fold degeneracy associated with total angular momentum @xmath12 of the @xmath9 ground state multiplet . the magnetic ordering in compounds having cef - split
nonmagnetic singlet ground state is of particular interest ..
Please generate the next two sentences of the article
|
the magnetic properties of such systems critically depend on the relative strength of the crystal electric field and the exchange field between the rare earth ions . the magnetic moment associated with the @xmath9 electrons
can be completely quenched if the former dominates over the latter @xcite .
|
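As a worked illustration of the singlet ground state plus excited doublet scheme quoted above (splitting 92 K), here is a short sketch of the corresponding two-level Schottky specific heat; this is the textbook two-level expression, not a calculation from the paper, and the temperature grid and degeneracies (1 and 2) are the only inputs. For these numbers the anomaly peaks at a few tens of kelvin, in the range of the broad anomaly reported near 40 K.

```python
import numpy as np

def schottky_cv(T, delta=92.0, g0=1.0, g1=2.0):
    """Molar two-level Schottky specific heat (J mol^-1 K^-1); delta given in kelvin."""
    R = 8.314                        # gas constant
    x = delta / T
    r = (g1 / g0) * np.exp(-x)       # Boltzmann population ratio of the excited level
    return R * x**2 * r / (1.0 + r)**2

T = np.linspace(5.0, 200.0, 400)
cv = schottky_cv(T)
print(f"peak near T = {T[np.argmax(cv)]:.1f} K, C_max = {cv.max():.2f} J/mol/K")
```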
11,072 |
Suppose that you have an abstract for a scientific paper: we consider the friedberg - lee symmetry for the quark sector and show that the symmetry closely relates to both quark masses and mixing angles .
we also extend our scheme to the fourth generation quark model and find the relation @xmath0 with @xmath1 for @xmath2 and @xmath3 . .
And you have already written the first three sentences of the full article: although the standard model ( sm ) is a very successful theory , there are still some mysteries and problems . one of them is the flavor structure of fermions .
we currently know that quarks mix with each other through the cabibbo - kobayashi - maskawa ( ckm ) matrix @xcite and that there is a hierarchy among the three mixing angles of the ckm matrix : @xmath4 , @xmath5 and @xmath6 @xcite .
yet another hierarchy exists among quark masses and it can be expressed in terms of @xmath7 as @xmath8 and @xmath9 @xcite . in particular , @xmath10 and @xmath11 surprisingly coincide with each other ..
Please generate the next two sentences of the article
|
however , the origin of the hierarchies is still unclear .
this is because the yukawa sector in the sm contains a huge number of unknown parameters .
|
11,073 |
Suppose that you have an abstract for a scientific paper: we show that there exist efficient algorithms for the triangle packing problem in colored permutation graphs , complete multipartite graphs , distance - hereditary graphs , @xmath0-modular permutation graphs and complements of @xmath0-partite graphs ( when @xmath0 is fixed ) .
we show that there is an efficient algorithm for @xmath1-packing on bipartite permutation graphs and we show that @xmath1-packing on bipartite graphs is np - complete .
we characterize the cobipartite graphs that have a triangle partition . .
And you have already written the first three sentences of the full article: a triangle packing in a graph @xmath2 is a collection of vertex - disjoint triangles .
the triangle packing problem asks for a triangle packing of maximal cardinality .
the triangle partition problem asks whether the vertices of a graph can be partitioned into triangles ..
Please generate the next two sentences of the article
|
we refer to appendix [ prel tp ] for an overview of known results on triangle packing problems .
our objective is the study of the triangle partition problem on permutation graphs .
|
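To make the packing and partition problems above concrete, here is a tiny brute-force sketch (exponential time, purely illustrative; it is not one of the efficient algorithms claimed in the abstract): it lists all triangles of a small graph and searches for a largest family of vertex-disjoint triangles.

```python
from itertools import combinations

def triangles(edges, n):
    """All triangles of a graph on vertices 0..n-1 given as an edge list."""
    e = set(map(frozenset, edges))
    return [t for t in combinations(range(n), 3)
            if all(frozenset(p) in e for p in combinations(t, 2))]

def max_triangle_packing(edges, n):
    """Largest collection of vertex-disjoint triangles (brute force)."""
    tris = triangles(edges, n)
    for k in range(len(tris), 0, -1):
        for subset in combinations(tris, k):
            verts = [v for t in subset for v in t]
            if len(verts) == len(set(verts)):     # vertex-disjoint
                return list(subset)
    return []

# two disjoint triangles joined by one edge: the packing number is 2
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(max_triangle_packing(edges, 6))             # [(0, 1, 2), (3, 4, 5)]
```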
11,074 |
Suppose that you have an abstract for a scientific paper: we derive expressions for the dispersion for two classes of random variables in markov processes .
random variables like current and activity pertain to the first class , which is composed by random variables that change whenever a jump in the stochastic trajectory occurs .
the second class corresponds to the time the trajectory spends in a state ( or cluster of states ) .
while the expression for the first class follows straightforwardly from known results in the literature , we show that a similar formalism can be used to derive an expression for the second class . as an application
, we use this formalism to analyze a cellular two - component network estimating an external ligand concentration . the uncertainty related to this external concentration
is calculated by monitoring different random variables related to an internal protein .
we show that , _ inter alia _ , monitoring the time spent in the phosphorylated state of the protein leads to a finite uncertainty only if there is dissipation , whereas the uncertainty obtained from the activity of the transitions of the internal protein can reach the berg and purcell limit even in equilibrium . .
And you have already written the first three sentences of the full article: markov processes are used to model a wide variety of physical , chemical and biological phenomena @xcite . while calculating the mean of a random
variable is often relatively straightforward , obtaining its dispersion can be much harder .
a prominent example is the expression for the dispersion of the number of steps of a particle hopping in a one - dimensional lattice @xcite ..
Please generate the next two sentences of the article
|
fluctuating currents are important random variables in stochastic thermodynamics @xcite .
if their mean is nonzero in the steady state , the system is out of equilibrium . a standard method to calculate
|
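A small simulation sketch of the two classes of random variables discussed above, using an arbitrary two-state rate matrix (my own illustrative numbers, not the paper's two-component network): the activity, i.e. the number of jumps up to time t, and the time spent in one of the states, with means and dispersions estimated from Gillespie trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)
RATES = {0: {1: 2.0}, 1: {0: 1.0}}     # arbitrary two-state rates k(0->1)=2, k(1->0)=1

def trajectory(t_max):
    """Gillespie simulation; returns (number of jumps, time spent in state 0)."""
    state, t, jumps, time_in_0 = 0, 0.0, 0, 0.0
    while True:
        k_tot = sum(RATES[state].values())
        dt = rng.exponential(1.0 / k_tot)
        if t + dt >= t_max:
            time_in_0 += (t_max - t) if state == 0 else 0.0
            return jumps, time_in_0
        time_in_0 += dt if state == 0 else 0.0
        t += dt
        jumps += 1
        # choose the next state with probability proportional to its rate
        targets, k = zip(*RATES[state].items())
        state = rng.choice(targets, p=np.array(k) / k_tot)

samples = np.array([trajectory(50.0) for _ in range(2000)])
for name, col in zip(("activity", "time in state 0"), samples.T):
    print(f"{name}: mean = {col.mean():.2f}, dispersion = {col.var():.2f}")
```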
11,075 |
Suppose that you have an abstract for a scientific paper: the general philosophy for bootstrap or permutation methods for testing hypotheses is to simulate the variation of the test statistic by generating the sampling distribution which assumes both that the null hypothesis is true , and that the data in the sample is somehow representative of the population .
this philosophy is inapplicable for testing hypotheses for a single parameter like the population mean , since the two assumptions are contradictory ( e.g. , how can we assume both that the mean of the population is @xmath0 and that the individuals in the sample with a mean @xmath1 are representative of the population ? ) .
the mirror bootstrap resolves that conundrum .
the philosophy of the mirror bootstrap method for testing hypotheses regarding one population parameter is that we assume both that the null hypothesis is true , and that the individuals in our sample are as representative as they could be without assuming more extreme cases than observed .
for example , the mirror bootstrap method for testing hypotheses of one mean uses a generated symmetric distribution constructed by reflecting the original sample around the hypothesized population mean @xmath2 .
simulations of the performance of the mirror bootstrap for testing hypotheses of one mean show that , while the method is slightly on the conservative side for very small samples , its validity and power quickly approach that of the widely used t - test .
the philosophy of the mirror bootstrap is sufficiently general to be adapted for testing hypotheses about other parameters ; this exploration is left for future research . .
And you have already written the first three sentences of the full article: the general philosophy for bootstrap or permutation methods for testing hypotheses is to simulate the variation of the test statistic by generating the sampling distribution which assumes both that the null hypothesis is true , and that the data in the sample is somehow representative of the population .
this philosophy works well for testing hypotheses regarding the correlation coefficient , but is inapplicable for testing hypotheses for a single parameter like the population mean , since the two assumptions are contradictory .
for example , how can we assume both that the mean of the population is @xmath0 and that the individuals in the sample with a mean @xmath1 are representative of the population ? one naive way that has been used is the shift method @xcite , where each individual in the sample is shifted by @xmath3 , which is essentially equivalent to testing the hypotheses with a confidence interval ..
Please generate the next two sentences of the article
|
the shift method hypothesizes that the variance between the sampled individuals is representative of the population variance , yet it loses any semblance to the assumption that the sampled individuals themselves are representative of the population .
the bootstrap method provides a more elegant resolution .
|
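A minimal sketch of the mirror-bootstrap test as it is described above: the sample is reflected about the hypothesized mean @xmath0 (written mu0 below), bootstrap means are drawn from the symmetrized set, and a two-sided p-value is returned. The toy data and the number of resamples are illustrative assumptions.

```python
import numpy as np

def mirror_bootstrap_pvalue(sample, mu0, n_boot=10000, seed=0):
    """Two-sided p-value for H0: the population mean equals mu0 (mirror bootstrap)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    mirrored = np.concatenate([sample, 2.0 * mu0 - sample])   # reflect about mu0
    n = len(sample)
    boot_means = mirrored[rng.integers(0, len(mirrored), size=(n_boot, n))].mean(axis=1)
    observed = abs(sample.mean() - mu0)
    return np.mean(np.abs(boot_means - mu0) >= observed)

rng = np.random.default_rng(42)
data = rng.normal(loc=0.3, scale=1.0, size=25)       # toy sample with true mean 0.3
print("p-value for H0: mu = 0 ->", mirror_bootstrap_pvalue(data, 0.0))
```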
11,076 |
Suppose that you have an abstract for a scientific paper: 2155 is one of the brightest extragalactic source in the x ray and euv bands , and is a prototype for the bl lac class of objects . in this paper
we investigate the large scale environment of this source using new multi object as well as long slit spectroscopy , together with archival spectra and optical images .
we find clear evidence of a modest overdensity of galaxies at @xmath0 , consistent with previous determinations of the bl lac redshift .
the galaxy group has a radial velocity dispersion of 250@xmath1kms@xmath2 and a virial radius of @xmath3mpc , yielding a rule of thumb estimate of the virial mass of m@[email protected]@xmath610@xmath7m@xmath8 , i.e. , one order of magnitude less than what is observed in other similar objects .
this result hints toward a relatively wide diversity in the environmental properties of bl lac objects .
[ firstpage ] ; .
And you have already written the first three sentences of the full article: bl lac objects are a subclass of active galactic nuclei ( agn ) showing a strong , non thermal variable emission from radio to tev energies .
these properties are usually ascribed to the relativistic jet emission that is closely aligned with the line of sight @xcite .
four decades of studies of the host galaxies and of the bl lacs close environment have lead to a general consensus that they are mainly hosted by luminous elliptical galaxies embedded in small clusters or group of galaxies ( e.g. * ? ? ?.
Please generate the next two sentences of the article
|
* ; * ? ? ?
* ; * ? ? ?
|
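A back-of-the-envelope sketch of the kind of rule-of-thumb virial mass estimate quoted in the abstract, using M_vir ~ 5 sigma^2 R_vir / G; the prefactor is one common convention (conventions vary), the velocity dispersion is the 250 km/s given in the abstract, and the virial radius below is an illustrative assumption because the paper's value is masked in the extracted text.

```python
# rule-of-thumb virial mass M ~ 5 * sigma^2 * R / G (prefactor conventions vary)
G     = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # solar mass, kg
MPC   = 3.086e22           # one megaparsec, m

sigma = 250e3              # radial velocity dispersion from the abstract, m/s
R_vir = 1.0 * MPC          # assumed virial radius (illustrative; the paper's value is masked)

M_vir = 5.0 * sigma**2 * R_vir / G
print(f"M_vir ~ {M_vir / M_SUN:.1e} solar masses")
```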
11,077 |
Suppose that you have an abstract for a scientific paper: we investigate the relationship between two massive star - forming galaxy populations at redshift @xmath0 ; i.e. submillimetre galaxies ( smgs ) and bzk - selected galaxies ( bzks ) . out of 60
smgs found in the subaru / xmm - newton deep field , we collect optical
nir photometry of 28 radio counterparts for 24 smgs , based on refined sky positions with a radio map for 35 smgs ( ivison et al .
2007 ) .
we find a correlation between their @xmath1-band magnitudes and @xmath2 [ @xmath3 colours : almost all of the @xmath1-faint ( @xmath4 ) radio - detected smgs have @xmath5 , and therefore bzks .
this result gives strong support to perform direct optical identification of smgs by searching for bzks around smgs .
we calculate the formal significance ( @xmath6 value ) for each of the bzk associations around radio - undetected smgs , and find 6 new robust identifications , including one double identification . from this analysis , we obtain the current best estimate on the surface density of bzk - selected smgs , which indicates that only @xmath7 per cent of bzks are smgs .
if bzks are normal disk - like galaxies at @xmath0 as indicated by the correlation between their star formation rate ( sfr ) and stellar mass and also by dynamical properties , smgs are likely to be merging bzks . in this case
, a typical enhancement of sfr due to merging is only a factor of @xmath8 , which is an order of magnitude lower than that of local ulirgs .
this may indicate that most of the merging bzks could be observed as smgs . considering a possible high fraction of mergers at @xmath0
( at least it would be higher than the fraction at @xmath9 of @xmath10 per cent ) , it is rather puzzling to find such a low fraction of smgs in the progenitor population , i.e. bzks .
galaxies : starburst dust , extinction infrared : galaxies submillimetre galaxies : evolution . .
And you have already written the first three sentences of the full article: among many cosmological surveys , the submillimetre ( submm ) survey is very unique , in the sense that the expected flux density of sources is almost insensitive to redshift for @xmath11 1 8 , owing to the strong negative @xmath1-correction ( e.g. * ? ? ?
although the current sensitivity allows us to detect only the brightest infrared galaxies in the universe , it is possible to detect massive starbursts and gas rich qsos at extreme redshifts @xmath12 if they exist ( e.g. * ? ? ?
* ) . however , most of the submm galaxies ( smgs ) currently identified lie at @xmath13 ..
Please generate the next two sentences of the article
|
this is not because of the detection limit as noted above , but because of the ` identification limit ' , owing to a large beam size of current ( sub)mm telescopes used for surveys .
the radio emission provides a high - resolution substitute for the infrared emission observed in the submm ( e.g. * ? ? ?
|
11,078 |
Suppose that you have an abstract for a scientific paper: in the first part of this article we show how observations of the chemical evolution of the galaxy : g- and k dwarf numbers as functions of metallicity , and abundances of the light elements , d , li , be and b , in both stars and the interstellar medium ( ism ) , lead to the conclusion that metal poor gas has been accreting to the galactic disc during the whole of its lifetime , and is accreting today at a measurable rate , @xmath0 per year across the full disc .
estimates of the local star formation rate ( sfr ) using methods based on stellar activity , support this picture .
the best fits to all these data are for models where the accretion rate is constant , or slowly rising with epoch .
we explain here how this conclusion , for a galaxy in a small bound group , is not in conflict with graphs such as the madau plot , which show that the universal sfr has declined steadily from @xmath1 to the present day .
we also show that a model in which disc galaxies in general evolve by accreting major clouds of low metallicity gas from their surroundings can explain many observations , notably that the sfr for whole galaxies tends to show obvious variability , and fractionally more for early than for late types , and yields lower dark to baryonic matter ratios for large disc galaxies than for dwarfs . in the second part of the article we use ngc 1530 as a template object , showing from fabry
perot observations of its emission how strong shear in this strongly barred galaxy acts to inhibit star formation , while compression acts to stimulate it .
galaxy : evolution , galaxy : accretion , galaxies : ism , galaxies : kinematics galaxies : star formation .
And you have already written the first three sentences of the full article: the role of mergers in galaxy evolution has become increasingly recognized recently , stimulated by the central role of cdm cosmology ( navarro , frenk & white 1994 , 1995 ; power et al .
the importance of mergers was realized from purely observational arguments by toomre & toomre ( 1972 ) ; they argued that as tidal encounters generate short lived features , to yield today s `` peculiar '' galaxies a population of binary galaxies with highly eccentric orbits is required .
if these have a flat binding energy distribution , their merger rate must have declined with time as @xmath2 so that the ten obviously merging objects in the new general catalogue must be the tail end of 750 remnants ( see also toomre , 1977 ) ..
Please generate the next two sentences of the article
|
zepf & koo ( 1989 ) , prior to the hubble deep fields , and abraham ( 1999 ) using their contents , inferred that galaxy pair density grows as @xmath3 , while brinchmann et al .
( 1998 ) showed that irregular galaxies form 10% of the total at @xmath4 and 30% at @xmath5 .
|
11,079 |
Suppose that you have an abstract for a scientific paper: in semiclassical theories for chaotic systems such as gutzwiller s periodic orbit theory the energy eigenvalues and resonances are obtained as poles of a non - convergent series @xmath0 .
we present a general method for the analytic continuation of such a non - convergent series by harmonic inversion of the `` time '' signal , which is the fourier transform of @xmath1 .
we demonstrate the general applicability and accuracy of the method on two different systems with completely different properties : the riemann zeta function and the three disk scattering system .
the riemann zeta function serves as a mathematical model for a bound system .
we demonstrate that the method of harmonic inversion by filter - diagonalization yields several thousand zeros of the zeta function to about 12 digit precision as eigenvalues of small matrices . however , the method is not restricted to bound and ergodic systems , and does not require the knowledge of the mean staircase function , i.e. , the weyl term in dynamical systems , which is a prerequisite in many semiclassical quantization conditions .
it can therefore be applied to open systems as well .
we demonstrate this on the three disk scattering system , as a physical example .
the general applicability of the method is emphasized by the fact that one does not have to resort a symbolic dynamics , which is , in turn , the basic requirement for the application of cycle expansion techniques . .
And you have already written the first three sentences of the full article: since the development of _ periodic orbit theory _ by gutzwiller @xcite it has become a fundamental question as to how individual semiclassical eigenenergies and resonances can be obtained from periodic orbit quantization for classically chaotic systems .
a major problem is the exponential proliferation of the number of periodic orbits with increasing period , resulting in a divergence of gutzwiller s trace formula at real energies and below the real axis , where the poles of the green s function are located .
the periodic orbit sum is a dirichlet series @xmath2 where the parameters @xmath3 and @xmath4 are the amplitudes and periods ( actions ) of the periodic orbit contributions . in most applications eq ..
Please generate the next two sentences of the article
|
[ g ] is absolutely convergent only in the region @xmath5 with @xmath6 the entropy barrier of the system , while the poles of @xmath1 , i.e. , the bound states and resonances , are located on and below the real axis , @xmath7 .
thus , to extract individual eigenstates , the semiclassical trace formula ( [ g ] ) has to be analytically continued to the region of the quantum poles . up to now no general procedure is known for the analytic continuation of a non - convergent dirichlet series of the type of eq .
|
11,080 |
Suppose that you have an abstract for a scientific paper: we present a closed description of the charge carrier injection process from a conductor into an insulator .
common injection models are based on single electron descriptions , being problematic especially once the amount of charge - carriers injected is large .
accordingly , we developed a model , which incorporates space charge effects in the description of the injection process .
the challenge of this task is the problem of self - consistency .
the amount of charge - carriers injected per unit time strongly depends on the energy barrier emerging at the contact , while at the same time the electrostatic potential generated by the injected charge- carriers modifies the height of this injection barrier itself . in our model , self - consistency is obtained by assuming continuity of the electric displacement and the electrochemical potential all over the conductor / insulator system .
the conductor and the insulator are properly taken into account by means of their respective density of state distributions .
the electric field distributions are obtained in a closed analytical form and the resulting current - voltage characteristics show that the theory embraces injection - limited as well as bulk - limited charge - carrier transport .
analytical approximations of these limits are given , revealing physical mechanisms responsible for the particular current - voltage behavior .
in addition , the model exhibits the crossover between the two limiting cases and determines the validity of respective approximations .
the consequences resulting from our exactly solvable model are discussed on the basis of a simplified indium tin oxide / organic semiconductor system . .
And you have already written the first three sentences of the full article: once a conductor forms contact with an insulator , an energy barrier is formed between the two materials , which impedes the charge - carrier injection into the insulator .
although this injection barrier in general sways charge transport through the conductor / insulator system , only the two limiting cases of very low or very high injection barriers are often considered . for low injection barriers ,
one expects the contact to be ohmic , meaning that the contact is able to supply more charges per unit time than the bulk of the insulator can support . in this case , a space - charge region is formed and the electric field at the interface vanishes @xcite . because excess charge - carriers dominate charge transport in insulators , one observes a space - charge - limited current ( sclc ) density of the form @xmath0 ( in the absence of charge - carrier traps ) , where @xmath1 is the sample thickness and @xmath2 is the applied voltage ..
Please generate the next two sentences of the article
|
the current - voltage characteristic ( iv - characteristic ) is determined by the bulk properties of the material with no influence of the contact properties @xcite . for high injection barriers ,
one anticipates the injection rate across the conductor / insulator interface to dominate the iv - characteristic of the system .
|
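The trap-free SCLC expression referred to above is masked as @xmath0 in the extracted text; it is presumably the standard Mott-Gurney form j = 9*eps*mu*V^2 / (8*L^3), which is sketched below with arbitrary illustrative material parameters (not values from the paper).

```python
# trap-free space-charge-limited current density (Mott-Gurney law):
#   j = 9 * eps * mu * V**2 / (8 * L**3)
# the parameter values below are arbitrary numbers typical of an organic semiconductor
EPS0 = 8.854e-12           # vacuum permittivity, F/m

eps_r = 3.0                # relative permittivity (assumed)
mu    = 1e-8               # charge-carrier mobility, m^2/(V s) (assumed)
L     = 100e-9             # sample thickness, m
V     = 2.0                # applied voltage, V

j = 9.0 * eps_r * EPS0 * mu * V**2 / (8.0 * L**3)
print(f"j_SCLC = {j:.2f} A/m^2")
```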
11,081 |
Suppose that you have an abstract for a scientific paper: we develop a parametric high - resolution method for the estimation of the frequency nodes of linear combinations of complex exponentials with exponential damping .
we use kronecker s theorem to formulate the associated nonlinear least squares problem as an optimization problem in the space of vectors generating hankel matrices of fixed rank .
approximate solutions to this problem are obtained by using the alternating direction method of multipliers .
finally , we extract the frequency estimates from the con - eigenvectors of the solution hankel matrix .
the resulting algorithm is simple , easy to implement and can be applied to data with equally spaced samples with approximation weights , which for instance allows cases of missing data samples . by means of numerical simulations ,
we analyze and illustrate the excellent performance of the method , attaining the cramer - rao bound .
* keywords : * frequency estimation , nonlinear least squares , hankel matrices , kronecker s theorem , missing data , alternating direction method of multipliers .
And you have already written the first three sentences of the full article: spectral estimation constitutes a classical problem that has found applications in a large variety of fields ( including astronomy , radar , communications , economics , medical imaging , spectroscopy , , to name but a few ) .
one important category of spectral estimation problems arises for signals that can be well represented by the parametric model @xmath0 where @xmath1 is an additive noise term . given a vector of (
typically equally spaced ) samples , @xmath2 where @xmath3 is the sampling period , the goal is to estimate the complex frequency nodes @xmath4 i.e. , the damping and frequency parameters @xmath5 and @xmath6 ..
Please generate the next two sentences of the article
|
note that once the parameters @xmath7 have been computed , determining @xmath8 reduces to a simple linear regression problem .
thus , the focus lies on the estimation of the nodes @xmath9 .
|
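A sketch of recovering the complex frequency nodes z_k = exp(-beta_k + i*omega_k) from samples of the signal model above. This is a standard Hankel/shift-invariance (matrix-pencil style) estimate, not the ADMM and con-eigenvector algorithm of the paper, and the synthetic signal parameters are arbitrary.

```python
import numpy as np

def estimate_nodes(f, K):
    """Estimate K complex nodes z_k from samples f[n] = sum_k c_k * z_k**n.
    Standard Hankel / shift-invariance estimate (tolerates modest noise)."""
    N = len(f)
    L = N // 2
    H = np.array([f[i:i + L] for i in range(N - L)])   # Hankel-structured data matrix
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Uk = U[:, :K]                                      # signal subspace
    # shift invariance: Uk[1:] ~= Uk[:-1] @ A, and the eigenvalues of A are the nodes
    A = np.linalg.lstsq(Uk[:-1], Uk[1:], rcond=None)[0]
    return np.linalg.eigvals(A)

# synthetic test: two damped complex exponentials plus a little noise
rng = np.random.default_rng(0)
n = np.arange(100)
true_nodes = np.array([np.exp(-0.02 + 0.5j), np.exp(-0.01 - 1.2j)])
f = (1.0 * true_nodes[0]**n + 0.7 * true_nodes[1]**n
     + 0.01 * (rng.standard_normal(100) + 1j * rng.standard_normal(100)))
print(np.sort_complex(estimate_nodes(f, 2)))
print(np.sort_complex(true_nodes))
```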
11,082 |
Suppose that you have an abstract for a scientific paper: we present numerical simulations investigating the interaction of agn jets with galaxy clusters , for the first time taking into account the dynamic nature of the cluster gas and detailed cluster physics .
the simulations successfully reproduce the observed morphologies of radio sources in clusters .
we find that cluster inhomogeneities and large scale flows have significant impact on the morphology of the radio source and can not be ignored a - priori when investigating radio source dynamics .
morphological comparison suggests that the gas in the centres of clusters like virgo and abell 4059 show significant shear and/or rotation .
we find that shear and rotation in the intra - cluster medium move large amounts of cold material back into the path of the jet , ensuring that subsequent jet outbursts encounter a sufficient column density of gas to couple with the inner cluster gas , thus alleviating the problem of evacuated channels discussed in the recent literature .
the same effects redistribute the excess energy @xmath0 deposited by the jet , making the distribution of @xmath1 at late times consistent with being isotropic .
galaxies : clusters : general galaxies : jets galaxies : intergalactic medium .
And you have already written the first three sentences of the full article: recent observations show a multitude of physical effects that occur when active galactic nuclei ( agn ) interact with the ambient intracluster medium ( icm ) .
while these effects are widely believed to be crucial for the formation of structure in the universe , they are still poorly understood .
the central galaxy in almost every strong cooling core contains an active nucleus and a jet driven radio galaxy ..
Please generate the next two sentences of the article
|
the radio power of these cooling cores is somewhat correlated with the x - ray luminosity , although the range of the radio power is much greater than the range of the x - ray core power .
this is supported by a recently discovered correlation between the bondi accretion rates and the jet power in nearby , x - ray luminous elliptical galaxies @xcite .
|
11,083 |
Suppose that you have an abstract for a scientific paper: we derive expressions for three - body phase space that are explicitly symmetrical in the masses of the three particles , by three separate methods .
.
And you have already written the first three sentences of the full article: the phase volume in @xmath1-dimensional space - time for the decay process @xmath2 can be reduced to the integral @xmath3^{d/2 - 2}\theta(\phi),\ ] ] where @xmath4 , @xmath5 , @xmath6 are the three mandelstam variables and , in the physical region , the kibble cubic @xmath7 stays positive .
clearly @xmath8 is symmetric in the three masses .
however , when one eliminates one of the mandelstam variables ( or any linear combination ) one is left , for even @xmath1 , with an elliptic integral which is not _.
Please generate the next two sentences of the article
|
explicitly _ symmetrical , although it must be so implicitly .
the aim of this article is to exhibit three routes for getting a satisfyingly symmetrical result in the @xmath9 case , where @xmath8 is nothing but the area of the dalitz - kibble plot @xcite divided by @xmath10 .
|
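As a numerical companion to the d = 4 statement in the text, where the phase volume reduces to the area of the Dalitz plot, here is a short sketch that integrates the kinematically allowed band using the standard two-body boundary formulas; the masses are illustrative, and the symmetrical treatments developed in the paper are not reproduced here.

```python
import numpy as np

def dalitz_area(M, m1, m2, m3, n=20000):
    """Area of the Dalitz plot in the (s12, s23) plane for the decay M -> m1 m2 m3."""
    s12_min, s12_max = (m1 + m2)**2, (M - m3)**2
    s12 = np.linspace(s12_min, s12_max, n)
    # energies of particles 2 and 3 in the (1,2) rest frame (standard kinematics)
    E2 = (s12 - m1**2 + m2**2) / (2.0 * np.sqrt(s12))
    E3 = (M**2 - s12 - m3**2) / (2.0 * np.sqrt(s12))
    p2 = np.sqrt(np.clip(E2**2 - m2**2, 0.0, None))
    p3 = np.sqrt(np.clip(E3**2 - m3**2, 0.0, None))
    s23_hi = (E2 + E3)**2 - (p2 - p3)**2
    s23_lo = (E2 + E3)**2 - (p2 + p3)**2
    return float(np.sum(s23_hi - s23_lo) * (s12[1] - s12[0]))

# illustrative masses (GeV): a 1.0 GeV state decaying into three 0.14 GeV particles
print(f"dalitz plot area = {dalitz_area(1.0, 0.14, 0.14, 0.14):.4f} GeV^4")
```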
11,084 |
Suppose that you have an abstract for a scientific paper: background forces are linear long range interactions of the cantilever body with its surroundings that must be compensated for in order to reveal tip - surface force , the quantity of interest for determining material properties in atomic force microscopy .
we provide a mathematical derivation of a method to compensate for background forces , apply it to experimental data , and discuss how to include background forces in simulation .
our method , based on linear response theory in the frequency domain , provides a general way of measuring and compensating for any background force and it can be readily applied to different force reconstruction methods in dynamic afm . .
And you have already written the first three sentences of the full article: accurate and reproducible measurement of material properties at the nanoscale is the main goal of dynamic atomic force microscopy ( afm ) .
extraction of material properties from the measurable quantities in dynamic afm requires a deep understanding of both the tip - surface interaction and the dynamics of the afm cantilever when it is close to the sample surface .
we propose a method that uses fourier analysis to measure and compensate for background forces , which are long range and not local to the afm tip ..
Please generate the next two sentences of the article
|
these interactions produce artifacts in the measurement of tip - surface force , leading to overestimation of its attractive and dissipative components .
background forces are observed when measuring the quality factor of a cantilever resonance , which drops by as much as @xmath0 when the tip - sample distance becomes comparable to the cantilever width ( fig .
|
11,085 |
Suppose that you have an abstract for a scientific paper: we develop a phenomenological mapping between submonolayer polynuclear growth ( png ) and the interface dynamics at and below the depinning transition in the kardar parisi
zhang equation for a negative non - linearity @xmath0 .
this is possible since the phase transition is of first - order , with no diverging correlation length as the transition is approached from below .
the morphology of the still - active and pinned configurations and the interface velocity are compared to the png picture .
the interface mean height scales as @xmath1 .
pacs # 05.70.np , 75.50.lk , 68.35.ct , 64.60.ht .
And you have already written the first three sentences of the full article: the dynamics of driven manifolds in random media presents many examples of non - equilibrium phase transitions .
the interest lies often in how an object interacts with a quenched disorder environment .
if the driving force is small enough and the temperature zero , the manifold , e.g. a domain wall in a magnet , or an interface between two phases , can get pinned ..
Please generate the next two sentences of the article
|
this means , quite simply , that its velocity becomes zero .
therefore one can discuss the physics in terms of an order parameter ( velocity ) and a control parameter ( external force ) . at and close to the critical force value @xmath2
|
11,086 |
Suppose that you have an abstract for a scientific paper: the projectile fragmentation reactions using @xmath0 @xmath1 @xmath2 beams at 140 mev / n on targets @xmath3 @xmath1 @xmath4 are studied using the canonical thermodynamical model coupled with an evaporation code .
the isoscaling property of the fragments produced is studied using both the primary and the secondary fragments and it is observed that the secondary fragments also respect isoscaling though the isoscaling parameters @xmath5 and @xmath6 changes .
the temperature needed to reproduce experimental data with the secondary fragments is less than that needed with the primary ones .
the canonical model coupled with the evaporation code successfully explains the experimental data for isoscaling for the projectile fragmentation reactions . .
And you have already written the first three sentences of the full article: projectile fragmentation reaction is used extensively to study the reaction mechanisms in heavy ion collisions at intermediate and high energies .
this is also an important technique for the production of rare isotope beams and is used by many radioactive ion beam facilities around the world .
the fragment cross sections of projectile fragmentation reactions using primary beams of @xmath7 , @xmath8 , @xmath0 and @xmath2 at 140 mev / nucleon on @xmath3 and @xmath4 targets have been measured at the national superconducting cyclotron laboratory at michigan state university@xcite ..
Please generate the next two sentences of the article
|
the canonical thermodynamical model(ctm)@xcite has been used to calculate some of these fragment cross sections@xcite . in the present work
, an evaporation code has been developed and has been coupled with the canonical thermodynamical model .
|
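A sketch of how isoscaling parameters such as the alpha and beta mentioned above are commonly extracted: the fragment yield ratio of the n-rich to the n-poor system is fit to R21(N,Z) = C exp(alpha*N + beta*Z) by linear regression of ln R21. The data below are synthetic, not the measured or calculated yields of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic fragment yield ratios obeying R21 = C * exp(alpha*N + beta*Z) with noise
alpha_true, beta_true, C_true = 0.5, -0.6, 1.2
N = np.array([1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6])
Z = np.array([1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3])
R21 = C_true * np.exp(alpha_true * N + beta_true * Z) * np.exp(0.02 * rng.standard_normal(N.size))

# linear least squares on ln R21 = ln C + alpha*N + beta*Z
A = np.column_stack([np.ones_like(N, dtype=float), N, Z])
lnC, alpha, beta = np.linalg.lstsq(A, np.log(R21), rcond=None)[0]
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}  (true: {alpha_true}, {beta_true})")
```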
11,087 |
Suppose that you have an abstract for a scientific paper: we use information about dis and @xmath0 production on hydrogen to model the @xmath1-dependence of the @xmath2 scattering amplitude .
we investigate the profile function for elastic scattering of hadronic components of the virtual photon off both a nucleon and heavy nuclear target , and we estimate the value of the impact parameter where the black body limit is reached .
we also estimate the fraction of the cross section that is due to hadronic configurations in the virtual photon wave function that approach the unitarity limit .
we extract , from these considerations , approximate lower limits on the values of @xmath3 where the leading twist approximation in dis is violated .
we observe that the black body limit may be approached within hera kinematics with @xmath4 equal to a few gev@xmath5 and @xmath6 .
comparisons are made with earlier predictions by munier _
et al . _ , and the longitudinal structure function
is compared with preliminary hera data .
the principle advantage of our method is that we do not rely solely on the @xmath1-dependence of @xmath7-meson production data .
this allows us to extend our analysis down to very small impact parameters and dipole sizes .
finally , we perform a similar calculation with a @xmath8pb target , and we demonstrate that the black body limit is already approached at @xmath9 gev@xmath10 and @xmath6 . .
And you have already written the first three sentences of the full article: one of the current theoretical challenges in quantum chromodynamics ( qcd ) is to describe high energy interactions with hadrons in terms of fundamental field theory .
it is observed that high - energy hadron - hadron scattering interactions become completely absorptive ( black ) at small impact parameters so that elastic scattering can be viewed essentially as a shadow of the inelastic cross section in the sense of babinet s principle .
if this regime occurs at most of the impact parameters which contribute to the inelastic cross section , then the elastic and inelastic cross sections are equal ..
Please generate the next two sentences of the article
|
this limit is often referred to as the black body limit ( bbl ) in analogy with the quantum mechanical situation of scattering from an absorptive sphere of radius @xmath11 in which case the total cross section is equal to @xmath12 , ( see e.g. , problem 1 of section 131 in ref .@xcite ) .
this limit is also loosely referred to as the unitarity limit , although unitarity alone admits cross sections as large as @xmath13 provided there are no inelastic interactions ( see e.g. , problem 2 of section 132 in ref .
|
11,088 |
Suppose that you have an abstract for a scientific paper: continuing our investigation into the numerical properties of the _ hierarchical reference theory _ , we study the square well fluid of range @xmath0 from slightly above unity up to 3.6 .
after briefly touching upon the core condition and the related decoupling assumption necessary for numerical calculations , we shed some light on the way an inappropriate choice of the boundary condition imposed at high density may adversely affect the numerical results ; we also discuss the problem of the partial differential equation becoming stiff for close - to - critical and sub - critical temperatures . while agreement of the theory s predictions with simulational and purely theoretical studies of the square well system is generally satisfactory for @xmath1 , the combination of stiffness and the closure chosen is found to render the critical point numerically inaccessible in the current formulation of the theory for most of the systems with narrower wells .
the mechanism responsible for some deficiencies is illuminated at least partially and allows us to conclude that the specific difficulties encountered for square wells are not likely to resurface for continuous potentials .
.
And you have already written the first three sentences of the full article: in a large part of the density - temperature plane , integral equation theories are a reliable tool for studying thermodynamic and structural properties of , among others , simple one - component fluids @xcite ; unfortunately , in the vicinity of a liquid - vapor critical point , integral equations are haunted by a host of difficulties , leading to a variety of shortcomings such as incorrect and non - matching branches of the binodal , classical values at best for the critical exponents , or other deviations from the correct behavior at the critical singularity @xcite .
asymptotically close to the critical point , on the other hand , renormalization group ( ) theory is the instrument of choice for describing the fluid ; in general , however , approaches do not allow one to derive non - universal quantities from microscopic information only , _
i. e. _ from knowledge of the forces acting between the fluid s particles alone ..
Please generate the next two sentences of the article
|
one of the theories devised to bridge the conceptual gap between these complementary approaches is the _ hierarchical reference theory _ ( ) first put forward by parola and reatto @xcite : in this theory the introduction of a cut - off wavenumber @xmath2 inspired by momentum space theory and , for every value of @xmath2 , of a renormalized potential @xmath3 means that only non - critical systems have to be considered at any stage of the calculation ; consequently , integral equations may successfully be applied to every system with @xmath4 , and critical behavior characterized by non - classical critical exponents is recovered only in the limit @xmath5 .
while applicability of to a number of interesting systems , ranging from a lattice gas or ising model @xcite to various one - component fluids @xcite even including three - body interactions @xcite , internal degrees of freedom @xcite , or non - hard - core reference systems @xcite , was demonstrated early on , the main focus of research on has since shifted to the richer phase behavior of binary systems @xcite .
|
11,089 |
Suppose that you have an abstract for a scientific paper: we show that if a group @xmath0 acting faithfully on a rooted tree @xmath1 has a free subgroup , then either there exists a point @xmath2 of the boundary @xmath3 and a free subgroup of @xmath0 with trivial stabilizer of @xmath2 , or there exists @xmath4 and a free subgroup of @xmath0 fixing @xmath2 and acting faithfully on arbitrarily small neighborhoods of @xmath2 .
this can be used to prove absence of free subgroups for different known classes of groups .
for instance , we prove that iterated monodromy groups of expanding coverings have no free subgroups and give another proof of a theorem by s. sidki . .
And you have already written the first three sentences of the full article: it is well known that free groups are ubiquitous in the automorphism group of an infinite rooted spherically homogeneous tree , see for instance @xcite , though explicit examples ( especially ones generated by finite automata ) were not so easy to construct , see @xcite . on the other hand , many famous groups , which are defined by their action on rooted trees do not have free subgroups .
absence of free subgroups is proved in different ways . in some cases it follows from torsion or sub - exponential growth ( as in the grigorchuk groups @xcite and gupta - sidki groups @xcite ) . in other cases
it is proved using some contraction arguments ( see , for instance @xcite ) ..
Please generate the next two sentences of the article
|
s. sidki has proved in @xcite absence of free groups generated by `` automata of polynomial growth '' , which covers many examples .
an important class of groups acting on rooted trees are the _ contracting self - similar groups_. they appear naturally as iterated monodromy groups of expanding dynamical systems ( see @xcite ) . there are no known examples of contracting self - similar groups with free subgroups and it was a folklore conjecture that they do not exist .
|
11,090 |
Suppose that you have an abstract for a scientific paper: we present a new technique for the design of transformation - optics devices based on large - scale optimization to achieve the optimal effective isotropic dielectric materials within prescribed index bounds , which is computationally cheap because transformation optics circumvents the need to solve maxwell s equations at each step .
we apply this technique to the design of multimode waveguide bends ( realized experimentally in a previous paper ) and mode squeezers , in which all modes are transported equally without scattering .
in addition to the optimization , a key point is the identification of the correct boundary conditions to ensure reflectionless coupling to untransformed regions while allowing maximum flexibility in the optimization .
many previous authors in transformation optics used a certain kind of quasiconformal map which overconstrained the problem by requiring that the entire boundary shape be specified _ a priori _ while at the same time underconstraining the problem by employing `` slipping '' boundary conditions that permit unwanted interface reflections .
a. j. ward and j. b. pendry , `` refraction and geometry in maxwell s equations , '' j. mod . opt . * 43*(4):773793 ( 1996 ) .
u. leonhardt , `` optical conformal mapping , '' science * 312*:17771780 ( 2006 ) .
j. b. pendry , d. schurig , and d. r. smith , `` controlling electromagnetic fields , '' science * 312*:17801782 ( 2006 ) .
h. chen , c. t. chan , and p. sheng , `` transformation optics and metamaterials , '' nat . mater . * 9*:387396 ( 2010 ) .
y. liu and x. zhang , `` recent advances in transformation optics , '' nanoscale * 4*(17):52775292 ( 2012 ) .
n. i. landy and w. j. padilla , `` guiding light with conformal transformations , '' opt . express * 17*(17):1487214879 ( 2009 ) .
m. heiblum and j. h. harris , `` analysis of curved optical waveguides by conformal transformation , '' j. quantum electron . * 11*(2):7583 ( 1975 ) .
y.g . ma , ....
And you have already written the first three sentences of the full article: in this work , we introduce the technique of transformation inverse design , which combines the elegance of transformation optics @xcite ( to ) with the power of large - scale optimization ( inverse design ) , enabling automatic discovery of the best possible transformation for given design criteria and material constraints .
we illustrate our technique by designing multimode waveguide bends @xcite and mode squeezers @xcite , then measuring their performance with finite element method ( fem ) simulations .
most designs in transformation optics use either hand - chosen transformations @xcite ( which often require nearly unattainable anisotropic materials ) , or quasiconformal and conformal maps @xcite which can automatically generate nearly - isotropic transformations ( either by solving partial differential equations or by using grid generation techniques ) but still require _ a priori _ specification of the entire boundary shape of the transformation ..
Please generate the next two sentences of the article
|
further , neither technique can directly incorporate refractive - index bounds . on the other hand ,
most inverse design in photonics involves repeatedly solving computationally expensive maxwell equations for different designs @xcite .
|
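A brief sketch of the conformal-map special case mentioned in the passage above (this is not the paper's transformation-inverse-design optimization): for a 2d conformal map w = f(z) from physical to virtual coordinates, transformation optics reduces to an isotropic physical index n_phys(z) = n_virt(w(z)) * |dw/dz|. The logarithmic map and the uniform virtual index below are illustrative choices for bending a straight guide into an annulus.

```python
import numpy as np

def physical_index(z, n_virtual, dw_dz):
    """2d conformal transformation optics: n_phys(z) = n_virt(w(z)) * |dw/dz|."""
    return n_virtual * np.abs(dw_dz(z))

# example: bend a straight guide (virtual space w, uniform index 1.5) into an annulus.
# map from physical z to virtual w:  w = ln(z),  so dw/dz = 1/z  and  n_phys = 1.5 / |z|
r, theta = np.meshgrid(np.linspace(1.0, 2.0, 5), np.linspace(0.0, np.pi / 2, 5))
z = r * np.exp(1j * theta)
n = physical_index(z, 1.5, lambda z: 1.0 / z)
print(np.round(n[0], 3))    # the index falls off as 1/r across the bend
```

In practice such a profile must be rescaled or clipped to attainable index values (parts of it drop below 1 here), which is exactly the kind of index-bound constraint the optimization described in the passage is designed to respect.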
11,091 |
Suppose that you have an abstract for a scientific paper: a theoretical study of toroidal membranes with various degrees of intrinsic orientational order is presented at mean - field level .
the study uses a simple ginzburg - landau style free energy functional , which gives rise to a rich variety of physics and reveals some unusual ordered states .
the system is found to exhibit many different phases with continuous and first order phase transitions , and phenomena including spontaneous symmetry breaking , ground states with nodes and the formation of vortex - antivortex quartets .
transitions between toroidal phases with different configurations of the order parameter and different aspect ratios are plotted as functions of the thermodynamic parameters .
regions of the phase diagrams in which spherical vesicles form are also shown .
= msbm10 epsf .
And you have already written the first three sentences of the full article: the bilayer fluid membranes , which can form spontaneously when molecules with hydrophobic and hydrophilic parts are introduced to water , are a popular topic of research . in theoretical studies of such membranes , it is usual to work on length scales much larger than the molecular size , so that they can be considered as continuous surfaces ; two - dimensional curved spaces embedded in euclidean three - space .
features commonly incorporated into mathematical models of membranes include bending rigidity @xmath0 ( disaffinity for extrinsic curvature ) , gaussian curvature modulus @xmath1 ( disaffinity for intrinsic curvature ) , spontaneous curvature due to membrane asymmetries , and for closed surfaces , constraints of constant volume , constant area and constant difference in area between the inner and outer layers . in recent years
, interest has grown in membranes whose molecules have orientational order within the surface ..
Please generate the next two sentences of the article
|
for instance , a smectic - a liquid crystal membrane , whose molecules have aliphatic tails which point in an average direction normal to the local tangent plane of the membrane , can undergo a continuous phase transition to smectic - c phase , in which the tails tilt .
the tilt is described by a two - component vector order parameter within the surface , given by the local thermal average of vectors parallel to the tails , projected onto to local tangent plane .
|
11,092 |
Suppose that you have an abstract for a scientific paper: the geometry of the multifractional brownian motion ( mbm ) is known to present a complex and surprising form when the hurst function is greatly irregular .
nevertheless , most of the literature devoted to the subject considers sufficiently smooth cases which lead to sample paths locally similar to a fractional brownian motion ( fbm ) .
the main goal of this paper is therefore to extend these results to a more general frame and consider any type of continuous hurst function .
more specifically , we mainly focus on obtaining a complete characterization of the pointwise hlder regularity of the sample paths , and the box and hausdorff dimensions of the graph
. these results , which are somewhat unusual for a gaussian process , are illustrated by several examples , presenting in this way different aspects of the geometry of the mbm with irregular hurst functions . .
And you have already written the first three sentences of the full article: the multifractional brownian motion ( mbm ) has been independently introduced by @xcite and @xcite as a natural extension of the well - known fractional brownian motion ( fbm ) .
the main idea behind these two works was to drop the stationary assumption on the process , and allow the hurst exponent to change as time passes . in this way
, the mbm is parametrized by a function @xmath0 , usually continuous , and happens to be an interesting stochastic model for non - stationary phenomena ( e.g. signal processing , traffic on internet , terrain modelling ) ..
Please generate the next two sentences of the article
|
since its introduction , several authors have investigated sample path properties of the mbm .
for instance , @xcite and @xcite have respectively studied the law of iterated logarithm and the hlder regularity of its trajectories . in the latter , the box and hausdorff dimensions of the graph
|
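A rough simulation sketch of a multifractional Brownian motion with a prescribed Hurst function, using a Riemann-Liouville moving-average approximation; this is an illustration only, not the harmonizable representation studied in the paper, and the Hurst function below is an arbitrary choice.

```python
import numpy as np
from math import gamma

def mbm_riemann_liouville(H, n=1000, T=1.0, seed=0):
    """Approximate multifractional Brownian motion on [0, T] with Hurst function H(t)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.standard_normal(n) * np.sqrt(dt)      # Brownian increments
    t = np.arange(1, n + 1) * dt
    X = np.zeros(n)
    for i, ti in enumerate(t):
        h = H(ti)
        kernel = (ti - t[:i]) ** (h - 0.5)         # Riemann-Liouville kernel with H(ti)
        X[i] = np.sum(kernel * dW[:i]) / gamma(h + 0.5)
    return t, X

# Hurst function varying from smooth (0.8) to rough (0.3) along the path (illustrative)
H = lambda t: 0.8 - 0.5 * t
t, X = mbm_riemann_liouville(H)
print(X[::200])
```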
11,093 |
Suppose that you have an abstract for a scientific paper: we show the possibility of extracting important information on the symmetry term of the equation of state ( @xmath0 ) directly from multifragmentation reactions using stable isotopes with different charge asymmetries .
we study n - rich and n - poor @xmath1 collisions at @xmath2 using a new stochastic transport approach with all isospin effects suitably accounted for . for central collisions a chemical component in the spinodal instabilities
is clearly seen .
this effect is reduced in the neck fragmentation observed for semiperipheral collisions , pointing to a different nature of the instability . in spite of the low asymmetry tested with stable isotopes
the results are showing an interesting and promising dependence on the stiffness of the symmetry term , with an indication towards an increase of the repulsion above normal density . .
And you have already written the first three sentences of the full article: our starting point is that the key question in the physics of unstable nuclei is the knowledge of the @xmath0 for asymmetric nuclear matter away from normal conditions .
we remark the effect of the symmetry term at low densities on the neutron skin structure , while the knowledge in high densities region is crucial for supernovae dynamics and neutron star cooling @xcite .
effective interactions are obviously tuned to symmetry properties around normal conditions and any extrapolation can be quite dangerous ..
Please generate the next two sentences of the article
|
microscopic approaches based on realistic @xmath3 interactions , brueckner or variational schemes , or on effective field theories show a quite large variety of predictions , see fig.1 . in the reaction dynamics with intermediate energy radioactive beams we can probe highly asymmetric nuclear matter in compressed as well as dilute phases : the aim of this paper is to show that fragmentation events have new features due to isospin effects and that some observables are particularly sensitive to the symmetry term of the @xmath0 . * fig.1 - * @xmath0 for various effective forces , @xmath4 @xcite(dashed ) , @xmath5 @xcite(squares ) and @xmath6 @xcite(crosses ) .
top : neutron matter ( up ) , symmetric matter ( down ) ; bottom : potential symmetry term .
|
11,094 |
Suppose that you have an abstract for a scientific paper: we present optical long - slit and sparsepak integral field unit emission line spectroscopy along with optical broadband and near ir images of the edge - on spiral galaxy ngc 2683 .
we find a multi - valued , figure - of - eight velocity structure in the inner 45@xmath0 of the long - slit spectrum and twisted isovelocity contours in the velocity field .
we also find , regardless of wavelength , that the galaxy isophotes are boxy .
we argue that taken together , these kinematic and photometric features are evidence for the presence of a bar in ngc 2683 .
we use our data to constrain the orientation and strength of the bar . .
And you have already written the first three sentences of the full article: a large fraction of edge - on spiral galaxies have been observed to have boxy or peanut - shaped ( hereafter b / ps ) bulges . as the observations have improved and the samples have grown , the percentage of galaxies with these bulge shapes has increased from less than 20% @xcite to up to 45% @xcite . because the observed frequency of b / ps bulges is so high , significant effort has been invested in determining the formation mechanism responsible for these bulge shapes .
@xcite and @xcite demonstrated that the accretion of satellite galaxies could result in a galaxy bulge looking boxy or peanut - shaped .
the currently favored formation mechanism , however , is the buckling and subsequent vertical thickening of a bar ( e.g. * ? ? ?.
Please generate the next two sentences of the article
|
* ; * ? ? ?
* ; * ? ? ?
|
11,095 |
Suppose that you have an abstract for a scientific paper: we use the double exchange ( de ) model via degenerate orbitals and tight - binding approximation to study the magnetoconductivity of a canted a - phase of pseudo - cubic manganites .
it is argued that the model is applicable in a broad concentration range for manganites a@xmath0b@xmath1mno@xmath2 with the tolerance factor , @xmath3 , close to one . as for the substitutional disorder , scattering on random jahn - teller distortions of mno@xmath4 octahedra
is chosen .
we emphasize an intimate correlation between the carrier concentration and resistivity value of metallic manganites .
magnetoresistance as a function of magnetization is calculated for a canted a - phase for both in - plane and out - of - plane current directions .
a contact between two manganite phases is considered and structure of the transition region near the contact is discussed .
numerical calculations show charge re - distribution near the contact and a large screening length of the order of five inter - atomic distances .
we employed our results to interpret data obtained in recent experiments on la@xmath5sr@xmath6mno@xmath2/la@xmath7sr@xmath8mno@xmath2 superlattices .
we also briefly discuss the relative importance of the cooperative jahn - teller distortions , double exchange mechanism and super - exchange interactions for the formation of the a - phase at increasing sr concentrations @xmath9 in la@xmath0sr@xmath10mno@xmath2 to suggest that the jahn - teller contraction of octahedra , @xmath11 , plays a prevailing role . .
And you have already written the first three sentences of the full article: pseudo - cubic manganites are remarkable for their rich phase diagram although properties of various phases have not been studied equally well .
for example , the mechanisms of the `` charge ordering '' phenomenon ( co - phase ) or of the metallic anti - ferromagnetic phase ( a - phase ) , which usually appear near @xmath12 , to a large extent remain poorly understood .
it is also surprising that the potential of metallic manganites as basic elements of various hetero - structures in the development of alternative devices is practically unexplored . meanwhile.
Please generate the next two sentences of the article
|
the diversity of their phases depending on temperature and doping concentration poses interesting questions regarding phenomena that may take place at the interface of the contacts between them .
further understanding of magneto - transport properties of these materials also remains of prime importance because of several reasons .
|
11,096 |
Suppose that you have an abstract for a scientific paper: we present three - dimensional atmospheric circulation models of gj 1214b , a 2.7 earth - radius , 6.5 earth - mass super earth detected by the mearth survey . here
we explore the planet s circulation as a function of atmospheric metallicity and atmospheric composition , modeling atmospheres with a low mean - molecular weight ( i.e. , @xmath0-dominated ) and a high mean - molecular weight ( i.e. water- and @xmath1-dominated ) .
we find that atmospheres with a low mean - molecular weight have strong day - night temperature variations at pressures above the infrared photosphere that lead to equatorial superrotation .
for these atmospheres , the enhancement of atmospheric opacities with increasing metallicity leads to shallower atmospheric heating , larger day - night temperature variations and hence stronger superrotation . in comparison ,
atmospheres with a high mean - molecular weight have larger day - night and equator - to - pole temperature variations than low mean - molecular weight atmospheres , but differences in opacity structure and energy budget lead to differences in jet structure .
the circulation of a water - dominated atmosphere is dominated by equatorial superrotation , while the circulation of a @xmath1-dominated atmosphere is instead dominated by high - latitude jets . by comparing emergent flux spectra and lightcurves for 50@xmath2 solar and water - dominated compositions , we show that observations in emission can break the degeneracy in determining the atmospheric composition of gj 1214b .
the variation in opacity with wavelength for the water - dominated atmosphere leads to large phase variations within water bands and small phase variations outside of water bands .
the 50@xmath2 solar atmosphere , however , yields small variations within water bands and large phase variations at other characteristic wavelengths . these observations would be much less sensitive to clouds , condensates , and hazes than transit observations . .
And you have already written the first three sentences of the full article: as the number of extrasolar planets detected by various ground- and space - based surveys grows , so too does the number of so - called `` super earths '' , exoplanets with masses of 1 - 10 earth masses .
many of these super earths transit their host stars along our line of sight , which allow us to directly observe their atmospheres using the same techniques as for hot jupiters ( e.g. , * ? ? ?
such a case is true for gj 1214b , a 2.7 earth - radius , 6.5 earth - mass super earth detected by the mearth survey @xcite . because gj 1214a is an m - type star only 13 parsecs away , the system has proven to be a favorable target for follow - up observations ( e.g. , * ? ? ?.
Please generate the next two sentences of the article
|
* ; * ? ? ?
* ; * ? ? ?
|
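The composition dependence in this row comes down to how the mean molecular weight sets the vertical extent of the atmosphere. As a minimal, illustrative sketch (not part of the paper), the Python snippet below compares pressure scale heights for a hydrogen-dominated and a water-dominated envelope using the radius and mass quoted in the abstract; the equilibrium temperature (~550 K) and the mean molecular weights (2.3 and 18) are assumed, representative values rather than numbers taken from the paper.

```python
# Back-of-the-envelope scale heights for GJ 1214b under the two compositions
# discussed above. Radius and mass are from the abstract (2.7 R_earth,
# 6.5 M_earth); temperature and mean molecular weights are assumed values.
G       = 6.674e-11                     # gravitational constant [m^3 kg^-1 s^-2]
K_B     = 1.381e-23                     # Boltzmann constant [J K^-1]
M_H     = 1.673e-27                     # hydrogen atom mass [kg]
R_EARTH, M_EARTH = 6.371e6, 5.972e24    # Earth radius [m] and mass [kg]

radius = 2.7 * R_EARTH                  # from the abstract
mass   = 6.5 * M_EARTH                  # from the abstract
t_eq   = 550.0                          # assumed equilibrium temperature [K]
g      = G * mass / radius**2           # surface gravity [m s^-2]

def scale_height_km(mu):
    """Pressure scale height H = k_B T / (mu m_H g), returned in km."""
    return K_B * t_eq / (mu * M_H * g) / 1e3

for label, mu in [("H2-dominated (low mean molecular weight)", 2.3),
                  ("water-dominated (high mean molecular weight)", 18.0)]:
    print(f"{label:45s} mu = {mu:4.1f}   H ~ {scale_height_km(mu):5.0f} km")
```

With these assumptions the hydrogen-rich envelope is roughly eight times more extended (about 225 km versus about 30 km), which is the root of the composition degeneracy in transmission spectra that the emission observations discussed in the abstract can help break.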
11,097 |
Suppose that you have an abstract for a scientific paper: we investigated the valence electronic structure of diamondoid particles in the gas phase , utilizing valence photoelectron spectroscopy .
the samples were singly or doubly covalently bonded dimers or trimers of the lower diamondoids .
both the bond type and the combination of bonding partners are shown to affect the overall electronic structure .
for singly bonded particles , we observe a small impact of the bond on the electronic structure , whereas for doubly bonded particles , the connecting bond determines the electronic structure of the highest occupied orbitals . in the singly bonded particles a superposition of the bonding partner orbitals determines the overall electronic structure .
the experimental findings are supported by density functional theory computations at the m06 - 2x / cc - pvdz level of theory . .
And you have already written the first three sentences of the full article: the electronic structure of nanoparticles defines their electrical , chemical and optical properties .
hence , investigating these forms the basis for developing new applications in nanotechnology .
a comprehensive understanding of the various possibilities to modify the electronic structure of a particle is required when aiming to tailor compounds for specific applications ..
Please generate the next two sentences of the article
|
research in this area can advance , for instance , the development of electron photoemitters@xcite or nano - electronics.@xcite diamondoids are perfectly size- and shape - selectable , hydrogen passivated ,
_ sp _ @xmath0-hybridized carbon nanostructures.@xcite as such , they are an ideal class of particles for the study of effects induced by manipulation of geometry and chemical composition on the electronic structure in nanoscale systems . in addition , functionalization of these particles presents another possibility to tune their electronic structure.@xcite apart from the exploration of their existence in crude oil,@xcite continuous improvements in the field of synthesis of diamondoids@xcite have led to a rise in their popularity during the last ten years .
|
11,098 |
Suppose that you have an abstract for a scientific paper: previous studies of proton and neutron spectra from non - mesonic weak decay of eight @xmath0-hypernuclei ( @xmath1 ) have been revisited .
new values of the ratio of the two - nucleon and the one - proton induced decay widths , @xmath2 , are obtained from single proton spectra , , and from neutron and proton coincidence spectra , , in full agreement with previously published ones . with these values
, a method is developed to extract the one - proton induced decay width in units of the free @xmath0 decay width , @xmath3 , without resorting to intra nuclear cascade models but by exploiting only experimental data , under the assumption of a linear dependence on @xmath4 of the final state interaction contribution .
this is the first systematic determination ever done and it agrees within the errors with recent theoretical calculations .
keywords : @xmath0-hypernuclei , two - nucleon and proton - induced non - mesonic weak decay width ; pacs : 21.80.+a , 25.80.pw .
And you have already written the first three sentences of the full article: @xmath0-hypernuclei ( hypernuclei in the following ) decay through weak interaction to non - strange nuclear systems following two modes , the mesonic ( mwd ) and the non - mesonic ( nmwd ) one . the mwd is further split into two branches corresponding to the decay modes of the @xmath0 in free space : @xmath5 @xmath6 @xmath7 indicates the hypernucleus with mass number @xmath4 and atomic number @xmath8 , @xmath9 and @xmath10 the residual nuclear system , usually the daughter nucleus in its ground state , and the @xmath11 s stand for the decay widths . since the momentum released to the nucleon in mwd ( p@xmath12100 mev / c , q@xmath1337 mev ) is much lower than the fermi momentum , the mwd is strongly suppressed by the pauli exclusion principle in all but the lightest hypernuclei . in nmwd
the hypernucleus decays through weak interaction involving the constituent @xmath0 and one or more core nucleons .
the importance of such processes was pointed out for the first time in @xcite ..
Please generate the next two sentences of the article
|
if the pion emitted in the weak vertex @xmath14 is virtual , then it can be absorbed by the nuclear medium giving origin to : @xmath15 @xmath16 @xmath17 the processes ( [ gammap ] ) and ( [ gamman ] ) are globally indicated as one - nucleon induced decays ( one - proton ( [ gammap ] ) , one - neutron ( [ gamman ] ) ) while ( [ gamma2 ] ) as two - nucleon induced decay . by neglecting @xmath0 weak interactions with nuclear clusters of more than two nucleons , the total nmwd width is : @xmath18 the two - nucleon induced mechanism ( [ gamma2 ] ) was first suggested in @xcite and interpreted by assuming that the virtual pion from the weak vertex is absorbed by a pair of nucleons ( @xmath19 , @xmath20 or @xmath21 ) , correlated by the strong interaction . in ( [ gamma2 ] ) we have indicated for simplicity only the most probable process involving @xmath19 pairs .
note that the nmwd can also be mediated by the exchange of mesons more massive than the pion .
|
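For readability of this row, the decay-width bookkeeping behind the placeholders can be written out explicitly. The abstract defines @xmath2 as the ratio of the two-nucleon to one-proton induced widths and @xmath3 as the one-proton induced width in units of the free @xmath0 width; the LaTeX block below is a hedged reconstruction that gives those quantities conventional symbols, and the explicit decomposition of the total width is the standard relation rather than text quoted from the paper.

```latex
% Hedged reconstruction of the width bookkeeping used above; the mapping of
% the @xmath placeholders onto these symbols is an assumption, not a quote.
\begin{align*}
  \Gamma_{\mathrm{tot}} &= \Gamma_{\mathrm{M}} + \Gamma_{\mathrm{NM}} ,
  \qquad \Gamma_{\mathrm{M}} = \Gamma_{\pi^-} + \Gamma_{\pi^0} ,\\
  \Gamma_{\mathrm{NM}} &\simeq \Gamma_n + \Gamma_p + \Gamma_2
  \qquad \text{(one-neutron, one-proton and two-nucleon induced terms)} .
\end{align*}
```

In this notation the ratio extracted in the abstract is $\Gamma_2/\Gamma_p$ and the reported one-proton induced width is $\Gamma_p$ in units of the free $\Lambda$ decay width.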
11,099 |
Suppose that you have an abstract for a scientific paper: in this paper , we consider the problem of translating ltl formulas to büchi automata .
we first translate the given ltl formula into a special _ disjunctive normal form _ ( dnf )
. the formula will be part of the state , and its dnf normal form specifies the atomic properties that should hold immediately ( labels of the transitions ) and the _ formula _ that should hold afterwards ( the corresponding successor state ) .
surprisingly , if the given formula is until - free or release - free , the büchi automaton can be obtained directly in this manner . for a general formula ,
the construction is slightly involved : an additional component will be needed for each formula that helps us to identify the set of accepting states .
notably , our construction is an on - the - fly construction , and the resulting büchi automaton has in the worst case @xmath0 states , where @xmath1 denotes the number of subformulas .
moreover , it has a better bound @xmath2 when the formula is until- ( or release- ) free .
we explore the properties of the formula 's dnf form , and then identify the corresponding accepting states . as a result
, we present a dnf - based approach to generating büchi automata from ltl formulas . compared to the classic tableau construction
, our approach 1 ) avoids generating the gba ( generalized büchi automata ) ; 2 ) discusses many interesting properties of ltl formulas that have seldom been considered before ; 3 ) gives a more precise upper bound of @xmath0 for the general translation , and an even better bound of @xmath2 when the formula is until - free ( release - free ) . .
And you have already written the first three sentences of the full article: translating linear temporal logic ( ltl ) formulas to their equivalent automata ( usually büchi automata ) has been studied for nearly thirty years .
this translation plays a key role in the automata - based model checking @xcite : here the automaton of the negation of the ltl property is first constructed , then the verification process is reduced to the emptiness problem of the product .
gerth et al ..
Please generate the next two sentences of the article
|
@xcite proposed an on - the - fly construction approach to generating büchi automata from ltl formulas , which means that a counterexample can be detected even when only a part of the property automaton has been generated .
they called it a tableau construction approach , which became widely used ; many subsequent works @xcite that optimize the automata under construction are based on it .
|
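The construction this last row describes rests on a one-step expansion of an LTL formula into disjuncts of the form "atomic properties that must hold now, formula that must hold next". The Python sketch below is a toy illustration of that expansion only, written from the standard LTL unfolding laws; it is not the authors' construction, and it omits the extra component the abstract mentions for identifying accepting states as well as simplifications such as pruning contradictory literal sets.

```python
# Toy one-step DNF expansion for LTL in negation normal form (an illustration
# of the idea described above, not the paper's algorithm).
# Formulas are nested tuples:
#   ('tt',), ('ff',), ('ap', 'p'), ('nap', 'p')            -- literals
#   ('and', f, g), ('or', f, g), ('X', f), ('U', f, g), ('R', f, g)
from itertools import product

TT = ('tt',)

def conj(f, g):
    """Syntactic conjunction with trivial 'true' simplification."""
    if f == TT:
        return g
    if g == TT:
        return f
    return ('and', f, g)

def expand(f):
    """Return the one-step DNF of f as a list of (literals_now, formula_next)."""
    op = f[0]
    if op == 'tt':
        return [(frozenset(), TT)]
    if op == 'ff':
        return []                                   # no way to take a step
    if op in ('ap', 'nap'):
        return [(frozenset([f]), TT)]
    if op == 'X':
        return [(frozenset(), f[1])]
    if op == 'or':
        return expand(f[1]) + expand(f[2])
    if op == 'and':                                 # cross product of both sides
        return [(l1 | l2, conj(n1, n2))
                for (l1, n1), (l2, n2) in product(expand(f[1]), expand(f[2]))]
    if op == 'U':                                   # f1 U f2 = f2 | (f1 & X(f1 U f2))
        return expand(f[2]) + [(l, conj(n, f)) for (l, n) in expand(f[1])]
    if op == 'R':                                   # f1 R f2 = (f1 & f2) | (f2 & X(f1 R f2))
        return expand(('and', f[1], f[2])) + [(l, conj(n, f)) for (l, n) in expand(f[2])]
    raise ValueError(f"unknown operator: {op}")

# Each disjunct of p U q gives a transition label and a successor state:
for literals, nxt in expand(('U', ('ap', 'p'), ('ap', 'q'))):
    print(sorted(literals), '-->', nxt)
```

Running this prints one disjunct where q holds now and the obligation is discharged, and one where p holds now and the until formula itself is carried over as the successor state, which matches the "label of the transition / corresponding successor state" reading given in the abstract. Turning such expansions into a Büchi automaton additionally requires the accepting-state component for until subformulas that the paper develops.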