Dataset columns: id (int64, 0 to 203k) · input (string, lengths 66 to 4.29k) · output (string, lengths 0 to 3.83k)
4,600
Suppose that you have an abstract for a scientific paper: the importance of spatial non - locality in the description of negative refraction in electromagnetic materials has been put forward recently . we develop a theory of negative refraction in homogeneous and isotropic media , based on first principles , and that includes non locality in its full generality . the theory shows that both dissipation and spatial non locality are necessary conditions for the existence of negative refraction . it also provides a sufficient condition in materials with weak spatial non locality . these fundamental results should have broad implications in the theoretical and practical analyses of negative refraction of electromagnetic and other kinds of waves . the study and design of artificial materials ( metamaterials ) with exotic electromagnetic properties has attracted a lot of attention both at the theoretical and experimental levels . negative refraction has remained one of the most intriguing properties : in a certain range of frequencies , the energy flow is opposite to the direction of the phase velocity @xcite , offering a new potential for the design of components and devices ( the flat lens being the most famous example ) . the investigations of the conditions for negative refraction have been originally carried out using the local description of electrodynamics in continuous media in terms of the electric permittivity ( dielectric function ) @xmath0 and magnetic permeability @xmath1 ( for a review see for example ref . @xcite ) . the first metamaterials exhibiting negative refraction were severely limited by absorption , and various strategies have been explored to design almost transparent media . nevertheless , basic principles of electromagnetic waves propagation impose general constraints on the response functions of materials , and on the conditions for negative refraction . for example in ref . @xcite the principle of causality has been used to argue that , for spatially local materials , dissipation is necessary for the existence of.... And you have already written the first three sentences of the full article: in the main text we have chosen a specific formalism to discuss electrodynamics in media . this formalism encodes all the interactions of the medium with the electromagnetic radiation in the field @xmath100 through only the response function @xmath101 @xcite , without the need to introduce the macroscopic field @xmath102 . in this framework we assess necessary or sufficient conditions for negative refraction . in this short section , building on the existing literature ,. Please generate the next two sentences of the article
we briefly explain why this is the appropriate formalism to undergo the first principle analysis we perform in the main text , the relation between this formalism and more well - known formalisms in term of the macroscopic fields @xmath100 and @xmath102 , the limit to the local formalism in term of the traditional functions @xmath85 and @xmath86 , and we comment on some subtleties . the formalism for electrodynamics in media based on the response function @xmath101 is valid for any system @xcite in the linear response regime , without the need of any other specific assumption @xcite .
4,601
Suppose that you have an abstract for a scientific paper: the quantum navigation problem of finding the time - optimal control hamiltonian that transports a given initial state to a target state through quantum wind , that is , under the influence of external fields or potentials , is analysed . by lifting the problem from the state space to the space of unitary gates realising the required task , we are able to deduce the form of the solution to the problem by deriving a universal quantum speed limit . the expression thus obtained indicates that further simplifications of this apparently difficult problem are possible if we switch to the interaction picture of quantum mechanics . a complete solution to the navigation problem for an arbitrary quantum system is then obtained , and the behaviour of the solution is illustrated in the case of a two - level system . with the advances in the implementation of quantum technologies , the theoretical understanding of controlled quantum dynamics and , in particular , of their limits , is becoming increasingly important . one aspect of such limits that has been investigated extensively in the literature concerns the time - optimal manoeuvring of quantum states @xcite . if the time - evolution is unconstrained ( apart from a bound on the energy resource ) , then this amounts to finding the time - independent hamiltonian that generates maximum speed of evolution . however , in general there can be a range of constraints that prohibits the implementation of such an elementary protocol , and various optimisations will have to be applied to determine time - dependent hamiltonians that generate the dynamics achieving required tasks . an important class of problems arising in this context is the identification of the time - optimal quantum evolution under the influence of external fields or potentials that can not be easily eliminated in a laboratory . solutions to such problems are indeed relevant to practical implementations of time - optimal controlled quantum dynamics because in real laboratories external.... And you have already written the first three sentences of the full article: the purpose of this appendix is to offer a proof of the following claim in the paper : _ the squared speed of the evolution of a quantum state generated by a hamiltonian @xmath98 is bounded above by twice the hilbert - schmidt norm @xmath99 of the hamiltonian , and the bound is attained if @xmath98 is horizontal . _ we begin by establishing some properties of the space of horizontal hamiltonians . recall that a hamiltonian @xmath98 is horizontal with respect to some state @xmath39 if and only if @xmath100 for all hamiltonians @xmath101 that leave @xmath102 invariant . for simplicity. Please generate the next two sentences of the article
, we fix the state @xmath102 to be the one whose homogeneous coordinates are given by @xmath103 . it should be stressed , however , that the conclusions of the discussion below remain valid for any choice of @xmath39 , owing to the homogeneous nature of the state space @xmath104 .
4,602
Suppose that you have an abstract for a scientific paper: it is shown that the basic shape of dipion mass distributions in the two - pion transitions of both charmonia and bottomonia states are explained by an unified mechanism based on the contribution of the @xmath0 , @xmath1 and @xmath2 coupled channels including their interference . . And you have already written the first three sentences of the full article: in the analysis of practically all available data on two - pion transitions of the @xmath3 mesons from the argus , cleo , cusb , crystal ball , belle , and _ babar _ collaborations @xmath4 ( @xmath5 , @xmath6 @xmath7 ) the contribution of multi - channel @xmath0 scattering in the final - state interactions is considered . the analysis , which is aimed at studying the scalar mesons , is performed jointly considering the above bottomonia decays , the isoscalar s - wave processes @xmath8 , which are described in our model - independent approach based on analyticity and unitarity and using an uniformization procedure , and the charmonium decay processes @xmath9 , @xmath10 with data from the crystal ball , dm2 , mark ii , mark iii , and bes ii collaborations . we show that the experimentally observed interesting ( even mysterious ) behavior of the @xmath0 spectra of the @xmath3-family decays , beginning from the second radial excitation and higher , a bell - shaped form in the near-@xmath0-threshold region , smooth dips about 0.6 gev in the @xmath11 , about 0.45 gev in the @xmath12 , and about 0.7 gev in the @xmath13 , and also sharp dips about 1 gev in the @xmath11 is explained by the interference between the @xmath0 scattering , @xmath14 and @xmath15 contributions to the final states of these decays ( by the constructive one in the near-@xmath0-threshold region and by the destructive one in the dip regions ) .. Please generate the next two sentences of the article
considering multi - channel @xmath0 scattering , we shall deal with the 3-channel case ( @xmath8 ) because it was shown @xcite that this is a minimal number of coupled channels needed for obtaining correct values of @xmath17-resonance parameters . when performing our combined analysis data for the multi - channel @xmath0 scattering were taken from many papers ( see refs . in our paper @xcite ) .
4,603
Suppose that you have an abstract for a scientific paper: in this paper we describe the integral transform that allows to write solutions of one partial differential equation via solution of another one . this transform was suggested by the author in the case when the last equation is a wave equation , and then it was used to investigate several well - known equations such as tricomi - type equation , the klein - gordon equation in the de sitter and einstein - de sitter spacetimes . a generalization given in this paper allows us to consider also the klein - gordon equations with coefficients depending on the spatial variables . + _ keywords : _ klein - gordon equation ; curved spacetime ; representation of solution 8.5 in -0.15 cm -1.5 cm 6.5 in [ section ] [ theorem]lemma [ theorem]definition [ theorem]corollary [ theorem]remark [ theorem]proposition [ theorem]example [ theorem]assumption _ department of mathematics + university of texas - pan american + 1201 w. university drive + edinburg , tx 78539 usa _ . And you have already written the first three sentences of the full article: in this paper we give some generalization of the approach suggested in @xcite , which is aimed to reduce equations with variable coefficients to more simple ones . this transform was used in a series of papers @xcite , @xcite-@xcite to investigate in a unified way several equations such as the linear and semilinear tricomi equations , gellerstedt equation , the wave equation in einstein - de sitter spacetime , the wave and the klein - gordon equations in the de sitter and anti - de sitter spacetimes . the listed equations play an important role in the gas dynamics , elementary particle physics , quantum field theory in curved spaces , and cosmology .. Please generate the next two sentences of the article
consider for the smooth function @xmath0 the solution @xmath1 to the problem @xmath2\subseteq { \mathbb r } , \,\ , x \in \omega \subseteq { \mathbb r}^n,\ ] ] with the parameter @xmath3 \subseteq { \mathbb r}$ ] , @xmath4 , and with @xmath5 . here @xmath6 is a domain in @xmath7 , while @xmath8 is the partial differential operator @xmath9 .
4,604
Suppose that you have an abstract for a scientific paper: we examine the leading order corrections to the nambu effective action for the motion of a cosmic string , which appear at fourth order in the ratio of the width to radius of curvature of the string . we determine the numerical coefficients of these extrinsic curvature corrections , and derive the equations of motion of the worldsheet . using these equations , we calculate the corrections to the motion of a collapsing loop , a travelling wave , and a helical breather . from the numerical coefficients we have calculated , we discuss whether the string motion can be labelled as ` rigid ' or ` antirigid , ' and hence whether cusp or kink formation might be suppressed or enhanced . . And you have already written the first three sentences of the full article: the study of topological or vacuum defects is of importance in many areas of contemporary physics . in high energy physics , a defect will generically occur during a symmetry breaking process where different parts of a medium choose different vacuum energy configurations , and the non - compatibility of these different vacua forces a sheet , line , or point of energy where these non - compatible vacua meet . the relevant vacuum order parameter then becomes indeterminate this is the defect .. Please generate the next two sentences of the article
a defect may be topological @xcite , in that it is the topology of the vacuum that simultaneously allows formation , and prevents dissipation , of these objects but other types of defect are also possible . for instance , a defect may be stable dynamically ( i.e. classically , due to energy considerations ) but not topologically , as it happens for semilocal @xcite or electroweak @xcite defects .
4,605
Suppose that you have an abstract for a scientific paper: a local and parallel algorithm based on the multilevel discretization is proposed in this paper to solve the eigenvalue problem by the finite element method . with this new scheme , solving the eigenvalue problem in the finest grid is transferred to solutions of the eigenvalue problems on the coarsest mesh and a series of solutions of boundary value problems by using the local and parallel algorithm . the computational work in each processor can reach the optimal order . therefore , this type of multilevel local and parallel method improves the overall efficiency of solving the eigenvalue problem . some numerical experiments are presented to validate the efficiency of the new method . 0.3 cm * keywords . * eigenvalue problem , multigrid , multilevel correction , local and parallel method , finite element method . 0.2 cm * ams subject classifications . * 65n30 , 65n25 , 65l15 , 65b99 . . And you have already written the first three sentences of the full article: solving large scale eigenvalue problems becomes a fundamental problem in modern science and engineering society . however , it is always a very difficult task to solve high - dimensional eigenvalue problems which come from physical and chemistry sciences . xu and zhou @xcite give a type of two - grid discretization method to improve the efficiency of the solution of eigenvalue problems .. Please generate the next two sentences of the article
by the two - grid method , the solution of eigenvalue problem on a fine mesh is reduced to a solution of eigenvalue problem on a coarse mesh ( depends on the fine mesh ) and a solution of the corresponding boundary value problem on the fine mesh @xcite . for more details , please read @xcite . combing the two - grid idea and the local and parallel finite element technique @xcite , a type of local and parallel finite element technique to solve the eigenvalue problems is given in @xcite ( also see @xcite ) .
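To illustrate the two-grid reduction described in the row above (a coarse-mesh eigensolve plus a single boundary-value solve on the fine mesh, followed by a Rayleigh quotient), here is a minimal sketch. It is not from the paper: it uses a 1D finite-difference Dirichlet Laplacian instead of finite elements, and the grid sizes and names are assumptions.

```python
import numpy as np

def laplacian_1d(n):
    """Dirichlet finite-difference Laplacian on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A / h**2

nH, nh = 15, 255                      # coarse and fine interior grid sizes (assumed)

# step 1: eigensolve on the coarse grid only
lamH_all, UH = np.linalg.eigh(laplacian_1d(nH))
lamH, uH = lamH_all[0], UH[:, 0]

# step 2: prolong the coarse eigenvector and solve ONE linear system on the fine grid
xH = np.linspace(0.0, 1.0, nH + 2)
xh = np.linspace(0.0, 1.0, nh + 2)
u0 = np.interp(xh, xH, np.concatenate(([0.0], uH, [0.0])))[1:-1]
Ah = laplacian_1d(nh)
uh = np.linalg.solve(Ah, lamH * u0)   # boundary-value solve replaces the fine eigensolve

# step 3: Rayleigh quotient recovers a fine-grid-accurate eigenvalue
lam_h = uh @ (Ah @ uh) / (uh @ uh)
print(lam_h, np.pi**2)                # both close to the exact value pi^2
```

The point of the scheme is visible in step 2: the expensive fine-grid eigensolve is replaced by a linear solve, which is also the part that the local and parallel algorithm of the abstract distributes across processors.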
4,606
Suppose that you have an abstract for a scientific paper: the masses of supermassive black holes ( sbhs ) show correlations with bulge properties in disk and elliptical galaxies . we study the formation of galactic structure within flat - core _ triaxial _ haloes and show that these correlations can be understood within the framework of a baryonic component modifying the orbital structure in the underlying potential . in particular , we find that terminal properties of bulges and their central sbhs are constrained by the destruction of box orbits in the harmonic cores of dark haloes and the emergence of progressively less eccentric loop orbits there . sbh masses , @xmath0 , should exhibit a tighter correlation with bulge velocity dispersions , @xmath1 , than with bulge masses , @xmath2 , in accord with observations , if there is a significant scatter in the @xmath3 relation for the halo . in the context of this model the observed @xmath4 relation implies that haloes should exhibit a faber - jackson type relationship between their masses and velocity dispersions . the most important prediction of our model is that halo properties determine the bulge and sbh parameters . the model also has important implications for galactic morphology and the process of disk formation . 3mark iii # 1#2#3#4=.24 = .24 = .24 = .24 . And you have already written the first three sentences of the full article: the possibility that supermassive black holes ( sbhs ) inhabit the centers of many if not most galaxies , and the observed correlation between sbh masses and galactic bulge properties , has potentially a fundamental significance for our understanding of galaxy formation and evolution . the relationships between black hole and bulge properties include a loose relationship between sbh and bulge masses , @xmath5 , and an apparently much tighter one between the sbh mass and the velocity dispersion in the corresponding bulge , @xmath6 ( e.g. , ferrarese & merritt 2000 ; gebhardt et al . 2000 ; tremaine et al . 2002 ; cf .. Please generate the next two sentences of the article
reviews by kormendy & gebhardt 2001 ; merritt & ferrarese 2001 ) . in this paper we attempt to provide a physical explanation for these relationships between sbhs and their host galaxies . our model is based on the interaction between the dark haloes of galaxies and the baryonic components settling in their midst . as baryonic matter accumulates to form the bulge and sbh , the orbital structure of the underlying gravitational potential is modified .
4,607
Suppose that you have an abstract for a scientific paper: we study the two - qubit controlled - not gate operating on qubits encoded in the spin state of a pair of electrons in a double quantum dot . we assume that the electrons can tunnel between the two quantum dots encoding a single qubit , while tunneling between the quantum dots that belong to different qubits is forbidden . therefore , the two qubits interact exclusively through the direct coulomb repulsion of the electrons . we find that entangling two - qubit gates can be performed by the electrical biasing of quantum dots and/or tuning of the tunneling matrix elements between the quantum dots within the qubits . the entangling interaction can be controlled by tuning the bias through the resonance between the singly - occupied and doubly - occupied singlet ground states of a double quantum dot . . And you have already written the first three sentences of the full article: the spin-1/2 of a single electron trapped in a quantum dot ( qd ) is a promising candidate for a carrier of quantum information in a quantum computer @xcite . to perform a quantum computation we need to have all the unitary operations from some universal set of quantum gates at our disposal @xcite . one such universal set consists of all the single qubit quantum gates and a two - qubit controlled - not ( cnot ) quantum gate .. Please generate the next two sentences of the article
quantum computation over the single - spin qubits with the logical states corresponding to the spin orientations @xmath0 and @xmath1 can in principle be achieved using an external magnetic field or with g - factor engineering for the single qubit operations , and with the time - dependent isotropic exchange interaction @xmath2 for manipulating a pair of qubits encoded into spins @xmath3 and @xmath4 @xcite . control of electron spins in quantum dots is in the focus of many intense experimental investigations .
4,608
Suppose that you have an abstract for a scientific paper: we study the role resonant scattering plays in the transport of @xmath0 photons in accreting protoplanetary disk systems subject to varying degrees of dust settling . while the intrinsic stellar fuv spectrum of accreting t tauri systems may already be dominated by a strong , broad @xmath0 line ( @xmath180% of the fuv luminosity ) , we find that resonant scattering further enhances the @xmath0 density in the deep molecular layers of the disk . @xmath0 is scattered downwards efficiently by the photodissociated atomic hydrogen layer that exists above the molecular disk . in contrast , fuv - continuum photons pass unimpeded through the photodissociation layer , and ( forward-)scatter inefficiently off dust grains . using detailed , adaptive grid monte carlo radiative transfer simulations we show that the resulting @xmath0/fuv - continuum photon density ratio is strongly stratified ; fuv - continuum dominated in the photodissociation layer , and @xmath0-dominated field in the molecular disk . the enhancement is greatest in the interior of the disk ( @xmath2au ) but is also observed in the outer disk ( @xmath3au ) . the majority of the total disk mass is shown to be increasingly @xmath0 dominated as dust settles towards the midplane . . And you have already written the first three sentences of the full article: t tauri stars ( tts ) are pre - main sequence low mass stars frequently described as young analogs of our solar system . surrounded by circumstellar disks of gas and dust , these systems mark the earliest stages of planet - formation , a process now understood to be common in our galaxy @xcite . due to their close proximity to the parent star , intense ultraviolet and x - ray radiation fields greatly impact the evolution of the protoplanetary disk .. Please generate the next two sentences of the article
the far - ultraviolet ( fuv , @xmath4ev ) field is of particularly broad interest as it provides thermodynamical gas heating through the photoelectric effect @xcite , and affects disk chemistry directly through the photo - dissociation and -ionization of molecules and atoms ( * ? ? ? * and references therein ) . fuv photodesorption of ices contributes to the transport of material between solid and gas phases , whilst enabling complex molecule formation on grains surfaces @xcite . alongside x - rays and cosmic - rays ,
4,609
Suppose that you have an abstract for a scientific paper: in this paper possibilities of a stabilization of large amplitude fluctuations in an intracavity - doubled solid - state laser are studied . the modification of the cross - saturation coefficient by the effect of spatial hole - burning is taken into account . the stabilization of the laser radiation by an increase of the number of modes , as proposed in [ james et al . , 1990b , magni et al . , 1993 ] , is analyzed . it is found that when the cross - saturation coefficient is modulated by the spatial hole - burning the stabilization is not always possible . we propose a new way of obtaining a stable steady - state configuration based on an increase of the strength of nonlinearity , which leads to a strong cancellation of modes , so that during the evolution all of the modes , but a single one , are canceled . such a steady - state solution is found to be stable with respect to small perturbations . 16.5 cm 23.cm -1.5 cm i i v elimination of chaos in multimode , intracavity - doubled lasers in the presence of spatial hole - burning . + faculty of physics , warsaw university of technology + warsaw , poland + + laser laboratory , sincrotrone + trieste , italy + short title : elimination of chaos in multimode , intracavity - doubled ... . And you have already written the first three sentences of the full article: solid - state lasers containing frequency - doubling crystals are efficient and compact sources of coherent visible optical radiation . unfortunately , when they operate in multimode regime , one observes irregular fluctuations of the output intensity . this behavior , referred to as the green problem , has been reported for the first time by baer [ baer , 1986 ] . he found that these instabilities arise from a coupling between longitudinal modes of the laser due to sum - frequency generation .. Please generate the next two sentences of the article
in particular , when such a laser operates in a single longitudinal mode , its output is stable [ kennedy & barry , 1974 ] . in the case of two oscillating longitudinal modes , the output intensity is stable only for small values of nonlinearity , otherwise both modes tend to pulse on and off out of phase [ baer , 1986 ] . when the number of lasing modes is larger than two , the laser can exhibit , depending on the parameters describing it , various types of behaviour such as : antiphase dynamics [ james et al .
4,610
Suppose that you have an abstract for a scientific paper: in this paper , we explore how numerical calculations can be accelerated by implementing several numerical methods of fractional - order systems using parallel computing techniques . we investigate the feasibility of parallel computing algorithms and their efficiency in reducing the computational costs over a large time interval . particularly , we present the case of adams - bashforth - mouhlton predictor - corrector method and measure the speedup of two parallel approaches by using gpu and hpc cluster implementations . * keywords : * fractional - order systems , parallel numerical algorithms , gpu processing , hpc processing . And you have already written the first three sentences of the full article: it is well understood that fractional - order derivatives provide a good tool for the description of memory and hereditary properties of various processes , fractional - order systems being characterized by infinite memory . generalizations of dynamical systems using fractional - order derivatives instead of classical integer - order derivatives have proved to be useful and more accurate in the mathematical modeling of real world phenomena arising from several interdisciplinary areas such as : diffusion and wave propagation , viscoelastic liquids , fractional kinetics , boundary layer effects in ducts , electromagnetic waves , electrode - electrolyte polarization . theoretical characterization of chaos in fractional - order dynamical systems is yet to be investigated .. Please generate the next two sentences of the article
however , chaotic behavior has been observed by numerical simulations in many systems such as : a fractional - order van der pol system @xcite , fractional - order chua and chen s systems @xcite , a fractional - order rossler system @xcite and a fractional - order financial system @xcite . nevertheless , it is worth noting that numerical simulations are limited by the fact that they only reveal the chaotic behavior of discrete - time dynamical systems that are obtained by discretizing the fractional - order systems . in order to assess chaotic behavior of fractional - order dynamical systems ,
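For the Adams-Bashforth-Moulton predictor-corrector named in the abstract of the row above, the following serial NumPy sketch of the standard Diethelm-Ford-Freed scheme for a scalar Caputo equation D^alpha y = f(t, y), 0 < alpha <= 1, is illustrative only; it is not the paper's GPU/HPC implementation, and the function names and test parameters are assumptions. The O(n) history sums inside each step are the part such papers parallelise.

```python
import numpy as np
from math import gamma

def fabm(f, alpha, y0, T, N):
    """Predictor-corrector (Adams-Bashforth-Moulton) for the scalar Caputo
    equation D^alpha y = f(t, y), 0 < alpha <= 1, y(0) = y0, on N uniform steps."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1); y[0] = y0
    fv = np.empty(N + 1); fv[0] = f(t[0], y0)
    c_pred = h**alpha / (alpha * gamma(alpha))
    c_corr = h**alpha / gamma(alpha + 2.0)
    for n in range(N):
        j = np.arange(n + 1)
        # predictor weights b_{j,n+1} and explicit predictor value
        b = (n + 1 - j)**alpha - (n - j)**alpha
        y_pred = y0 + c_pred * np.dot(b, fv[:n + 1])
        # corrector weights a_{j,n+1} (the j = 0 weight has its own formula)
        a = (n - j + 2.0)**(alpha + 1) + (n - j)**(alpha + 1) - 2.0 * (n - j + 1.0)**(alpha + 1)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        y[n + 1] = y0 + c_corr * (f(t[n + 1], y_pred) + np.dot(a, fv[:n + 1]))
        fv[n + 1] = f(t[n + 1], y[n + 1])
    return t, y

# usage: fractional relaxation D^0.8 y = -y, y(0) = 1 (solution is a Mittag-Leffler function)
t, y = fabm(lambda t, y: -y, 0.8, 1.0, 5.0, 2000)
```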
4,611
Suppose that you have an abstract for a scientific paper: the renormalised value of @xmath0 is calculated for a massless , conformally coupled scalar field in the hartle - hawking vacuum state . this calculation is a first step towards the calculation of the gravitational back reaction of the field in a black cosmic string spacetime which is asymptotically anti - desitter and possesses a non constant dilaton field . it is found that the field is divergence free throughout the spacetime and attains its maximum value near the horizon . . And you have already written the first three sentences of the full article: a useful question to ask when one studies quantum fields in general relativity is the following : given that all matter is inherently quantum in nature , will quantum effects remove the singularity at the centre of a black hole spacetime ? to answer this question using a scalar field and semi - classical perturbation theory , one must first calculate the expectation value @xmath1 where @xmath2 is the stress - energy tensor operator of the scalar field @xmath3 . in this paper @xmath0 is computed in preparation for the computation of @xmath1 which is used as the source term for the einstein field equations : @xmath4 a review of handling quantum fields in the presence of strong gravitational fields can been found in articles by wipf @xcite and dewitt @xcite and in books by birrel and davies @xcite and wald @xcite . @xmath0 for massless fields has been computed for both the interior and exterior of a schwarzschild black hole @xcite @xcite . these calculations have also been extended by anderson to accommodate massive fields in general spherically symmetric , asymptotically flat spacetimes @xcite . a method has also been developed by anderson , hiscock and samuel @xcite @xcite to calculate the expectation value of the stress - energy operator , @xmath1 in spherically symmetric , static spacetimes .. Please generate the next two sentences of the article
they use this method to calculate @xmath1 in schwarzschild and reissner - nordstrm geometries . @xmath1 has also previously been computed by howard and candelas @xcite in schwarzschild spacetime .
4,612
Suppose that you have an abstract for a scientific paper: the statistical properties of coherent radiation scattered from phase - ordering materials are studied in detail using large - scale computer simulations and analytic arguments . specifically , we consider a two - dimensional model with a nonconserved , scalar order parameter ( model a ) , quenched through an order - disorder transition into the two - phase regime . for such systems it is well established that the standard scaling hypothesis applies , consequently the average scattering intensity at wavevector @xmath0 and time @xmath1 is proportional to a scaling function which depends only on a rescaled time , @xmath2 . we find that the simulated intensities are exponentially distributed , and the time - dependent average is well approximated using a scaling function due to ohta , jasnow , and kawasaki . considering fluctuations around the average behavior , we find that the covariance of the scattering intensity for a single wavevector at two different times is proportional to a scaling function with natural variables @xmath3 and @xmath4 . in the asymptotic large-@xmath5 limit this scaling function depends only on @xmath6 . for small values of @xmath7 , the scaling function is quadratic , corresponding to highly persistent behavior of the intensity fluctuations . we empirically establish that the intensity covariance ( for @xmath8 ) equals the square of the spatial fourier transform of the two - time , two - point correlation function of the order parameter . this connection allows sensitive testing , either experimental or numerical , of existing theories for two - time correlations in systems undergoing order - disorder phase transitions . comparison between theoretical scaling functions and our numerical results requires no adjustable parameters . . And you have already written the first three sentences of the full article: a scattering experiment , using neutrons or x - rays for example , is one of the most direct measures of the structure of materials . naively , this comes about because in the born approximation , which usually applies for x - rays and neutrons , the intensity in scattering measurements is proportional to the fourier transform of a density - density correlation function . it is the wavelike properties of the scattering probe which produces the fourier transform . for. Please generate the next two sentences of the article
a deeper understanding of the relationship between scattering intensity and structure one must realize that this direct correspondence applies precisely only for coherent waves . indeed , for conventional sources , a given point in the incident wave is only coherent within a small volume of neighboring points .
4,613
Suppose that you have an abstract for a scientific paper: the formation history of the small magellanic cloud ( smc ) is unraveled based on the results of our new chemical evolution models constructed for the smc , highlighting the observed anomaly in the age - metallicity relation for star clusters in the smc . we first propose that evidence of a major merger is imprinted in the age - metallicity relation as a dip in [ fe / h ] . our models predict that the major merger with a mass ratio of 1:1 to 1:4 occurred at @xmath07.5 gyr ago , with a good reproduction of the abundance distribution function of field stars in the smc . furthermore , our models predict a relatively large scatter in [ mg / fe ] for @xmath1 } -1.1 $ ] as a reflection of a looping feature resulting from the temporally inverse progress of chemical enrichment , which can be tested against future observational results . given that the observed velocity dispersion ( @xmath2 km s@xmath3 ) of the smc is much smaller than that ( @xmath4 km s@xmath3 ) of the galactic halo , our finding strongly implies that the predicted merger event happened in a small group environment that was far from the galaxy and contained a number of small gas - rich dwarfs comparable to the smc . this theoretical view is extensively discussed in the framework that considers a connection with the formation history of the large magellanic cloud . . And you have already written the first three sentences of the full article: merging plays a key role in the hierarchical galaxy formation in the cold dark matter ( cdm ) universe @xcite . numerical simulations based on the cdm scenario demonstrate how the galaxies build up hierarchically ( e.g. , * ? ? ? * ; * ? ? ?. Please generate the next two sentences of the article
in addition to the theoretical ground , the observed mass function of galaxies as well as that for cluster of galaxies have evolved with redshifts , suggestive of the merging process over the cosmic time , though some discrepancy between the prediction and the observation exists ( e.g. , * ? ? ? * ; * ? ? ? * ; * ? ? ? furthermore , some fraction of galaxies at all redshifts show an ongoing merger . for high - z galaxies ,
4,614
Suppose that you have an abstract for a scientific paper: let @xmath0 be coprime positive integers . we empirically study the record gaps @xmath1 between primes @xmath2 of the form @xmath3 . extensive computations suggest that @xmath4 almost always ; more precisely , @xmath5 , where @xmath6 is the expected average gap between primes @xmath7 , and @xmath8 is a correction term . the distribution of @xmath1 near its trend is close to the gumbel extreme value distribution . however , the question whether there exists a _ limiting distribution _ of @xmath1 is open . 0.5 cm [ theorem]corollary [ theorem]lemma [ theorem]proposition [ theorem]definition [ theorem]example [ theorem]conjecture [ theorem]remark 2.7 cm * on the distribution of maximal gaps + .1 in between primes in residue classes * 0.7 cm alexei kourbatov + www.javascripter.net/math + [email protected] .2 in . And you have already written the first three sentences of the full article: let @xmath9 and @xmath10 be fixed positive integers such that @xmath11 and @xmath12 . dirichlet proved in 1837 that the integer @xmath13 is prime infinitely often ; this is dirichlet s theorem on arithmetic progressions . the prime number theorem tells us that the total number of primes @xmath14 is asymptotic to @xmath15 , i.e. , @xmath16 .. Please generate the next two sentences of the article
moreover , the generalized riemann hypothesis implies that primes are distributed approximately equally among the @xmath17 residue classes modulo @xmath9 corresponding to specific values of @xmath10 . thus each residue class contains a positive proportion , about @xmath18 , of all primes below @xmath19 .
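The record-gap statistic studied in the row above can be reproduced with a brute-force sketch. The Python code below is a hypothetical illustration, not the author's program: it sieves primes up to a limit, restricts to the residue class r (mod q) with gcd(q, r) = 1, and records each new maximal gap between consecutive such primes.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [p for p, is_p in enumerate(sieve) if is_p]

def record_gaps(q, r, limit):
    """Maximal gaps between consecutive primes congruent to r (mod q) below `limit`."""
    records, prev, best = [], None, 0
    for p in primes_up_to(limit):
        if p % q != r:
            continue
        if prev is not None and p - prev > best:
            best = p - prev
            records.append((best, p))   # record gap and the prime that ends it
        prev = p
    return records

print(record_gaps(6, 1, 10**6))
```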
4,615
Suppose that you have an abstract for a scientific paper: interstellar dust appears in a number of roles in the interstellar medium . historically , the most familiar one is as a source of extinction in the optical . absorbed optical and ultraviolet light heats the dust , whence infrared ( including near - infrared to submillimeter ) emission , the particular theme of this review . in some distant galaxies , most of the luminosity of the galaxy is thus converted into the infrared , though in the milky way the situation is not generally as extreme , except in localized regions of star formation . i briefly review the range of physical conditions in which the dust emits in the interstellar medium and the various sizes of dust probed . of interest astrophysically are observations of both the spectrum and the spatial distribution of the emission , preferably in combination . in the past fifteen year probes of dust emission have advanced significantly : through iras , kao , cobe experiments , irts , iso , and msx . satellite and stratospheric observations of dust emission are complemented by ground - based studies , such as imaging and polarimetry in the submillimeter with the jcmt or imaging of reflected light from dust in the near infrared with 2mass . looking ahead , the next decade promises to be equally exciting . i give an overview of some of the distinctive features of facilities anticipated in the near term ( wire ) , early in the new millennium ( sofia , sirtf ) , and somewhat further out ( iris , first , planck ) . . And you have already written the first three sentences of the full article: interstellar dust is fairly cold , emitting in the infrared and submillimeter , and so as major observational facilities have become available at these wavelengths unique data have been gathered and great progress has been made through their analysis . i describe the basic processes involved in emission by dust and large molecules . observations of both the emission spectrum and the spatial distribution of the emission are of interest .. Please generate the next two sentences of the article
i enumerate what data bases are available now and then turn to the `` candy shop , '' wherein one finds an enticing array of powerful new facilities that will become available within the next decade . the variety of instrumentation offers broad - band photometric information , higher resolution spectroscopy for smaller areas of the sky , and wide - field imaging with increased resolution . thus dust in a range of environments from point - like protostellar environments ,
4,616
Suppose that you have an abstract for a scientific paper: we study coherent multiple andreev reflections in quantum sns junctions of finite length and arbitrary transparency . the presence of superconducting bound states in these junctions gives rise to great enhancement of the subgap current . the effect is most pronounced in low - transparency junctions , @xmath0 , and in the interval of applied voltage @xmath1 , where the amplitude of the current structures is proportional to the first power of the junction transparency @xmath2 . the resonant current structures consist of steps and oscillations of the two - particle current and also of multiparticle resonance peaks . the positions of the two - particle current structures have pronounced temperature dependence which scales with @xmath3 , while the positions of the multiparticle resonances have weak temperature dependence , being mostly determined by the junction geometry . despite the large resonant two - particle current , the excess current at large voltage is small and proportional to @xmath4 . + pacs : 74.50.+r , 74.80.fp , 74.20.fg , 73.23.ad . And you have already written the first three sentences of the full article: transport properties of small conducting structures are strongly influenced by size effects . oscillation of magnetoresistance in thin metallic films , and quantization of conductance in narrow wires and point contacts are examples of such effects . size effects in superconducting tunneling have attracted attention since early experiments by tomasch @xcite . in these experiments , oscillations of the tunnel conductance as a function of applied voltage were found for tunneling from a superconductor to a thin superconducting film of an ns proximity bilayer .. Please generate the next two sentences of the article
the geometric resonance nature of the effect was clearly indicated by the dependence of the period of oscillations on the thickness of the superconducting film . similar conductance oscillations for tunneling into a normal metal film of ns bilayers were reported by rowell and mcmillan @xcite . later on an even more pronounced effect
4,617
Suppose that you have an abstract for a scientific paper: we have used neutron scattering techniques that probe time scales from @xmath0s to @xmath1s to characterize the diffuse scattering and low - energy lattice dynamics in single crystals of the relaxor pbmg@xmath2nb@xmath3o@xmath4 from 10k to 900k . our study extends far below @xmath5k , where long - range ferroelectric correlations have been reported under field - cooled conditions , and well above the nominal burns temperature @xmath6k , where optical measurements suggest the development of short - range polar correlations known as polar nanoregions " ( pnr ) . we observed two distinct types of diffuse scattering . the first is weak , relatively temperature independent , persists to at least 900k , and forms bow - tie - shaped patterns in reciprocal space centered on @xmath7 bragg peaks . we associate this primarily with chemical short - range order . the second is strong , temperature dependent , and forms butterfly - shaped patterns centered on @xmath7 bragg peaks . this diffuse scattering has been attributed to the pnr because it responds to an electric field and vanishes near @xmath6k when measured with thermal neutrons . surprisingly , it vanishes at 420k when measured with cold neutrons , which provide @xmath8 times superior energy resolution . that this onset temperature depends so strongly on the instrumental energy resolution indicates that the diffuse scattering has a quasielastic character and demands a reassessment of the burns temperature @xmath9 . neutron backscattering measurements made with 300 times better energy resolution confirm the onset temperature of @xmath10k . the energy width of the diffuse scattering is resolution limited , indicating that the pnr are static on timescales of at least 2ns between 420k and 10k . transverse acoustic ( ta ) phonon lifetimes , which are known to decrease dramatically for wave vectors @xmath11 @xmath12 and @xmath13 , are temperature independent up to 900k for @xmath14 close to the zone center . this motivates a physical picture.... And you have already written the first three sentences of the full article: the concept of nanometer - scale regions of polarization , randomly embedded within a non - polar cubic matrix , has become central to attempts to explain the remarkable physical properties of relaxors such as pbmg@xmath2nb@xmath3o@xmath4 ( pmn ) and pbzn@xmath2nb@xmath3o@xmath4 ( pzn ) . @xcite the existence of these so - called `` polar nanoregions '' ( pnr ) was first inferred from the optic index of refraction studies of burns and dacol on pmn , pzn , and other related systems , @xcite and later confirmed using many different experimental techniques including x - ray and neutron diffraction , @xcite @xmath16pb nmr , @xcite and piezoresponse force microscopy . @xcite early small - angle x - ray scattering and neutron pair distribution function ( pdf ) measurements on pmn by egami _. Please generate the next two sentences of the article
et al_. cast doubt on the nano - domain model of relaxors . @xcite however , the recent pdf analysis of jeong _ et al_. , which shows the formation of polar ionic shifts in pmn below @xmath17k , and which occupy only one third of the total sample volume at low temperatures , provides convincing support for the existence of pnr .
4,618
Suppose that you have an abstract for a scientific paper: we investigate the evolution of dust formed in population iii supernovae ( sne ) by considering its transport and processing by sputtering within the sn remnants ( snrs ) . we find that the fates of dust grains within snrs heavily depend on their initial radii @xmath0 . for type ii snrs expanding into the ambient medium with density of @xmath1 @xmath2 , grains of @xmath3 @xmath4 m are detained in the shocked hot gas and are completely destroyed , while grains of @xmath5 @xmath4 m are injected into the surrounding medium without being destroyed significantly . grains with @xmath0 = 0.050.2 @xmath4 m are finally trapped in the dense shell behind the forward shock . we show that the grains piled up in the dense shell enrich the gas up to 10@xmath610@xmath7 @xmath8 , high enough to form low - mass stars with 0.11 @xmath9 . in addition , [ fe / h ] in the dense shell ranges from @xmath10 to @xmath11 , which is in good agreement with the ultra - metal - poor stars with [ fe / h ] @xmath12 . we suggest that newly formed dust in a population iii sn can have great impacts on the stellar mass and elemental composition of population ii.5 stars formed in the shell of the snr . . And you have already written the first three sentences of the full article: the first dust in the universe plays critical roles in the subsequent formation processes of stars and galaxies . dust grains provide additional pathways for cooling of gas in metal - poor molecular clouds through their thermal emission and formation of h@xmath13 molecules on the surface ( e.g. , cazaux & spaans 2004 ) . in particular , the presence of dust decreases the values of the critical metallicity to @xmath14@xmath15 @xmath8 ( omukai et al .. Please generate the next two sentences of the article
2005 ; schneider et al . 2006 ; tsuribe & omukai 2006 ) , where the transition of star formation mode from massive population iii stars to low - mass population ii stars occurs . since absorption and thermal emission by dust grains strongly depend on their composition , size distribution , and amount , it is essential to clarify the properties of dust in the early epoch of the universe , in order to elucidate the evolutional history of stars and galaxies .
4,619
Suppose that you have an abstract for a scientific paper: nowadays it is experimentally feasible to create artificial , and in particular , non - abelian gauge potentials for ultracold atoms trapped in optical lattices . motivated by this fact , we investigate the fundamental properties of an ultracold fermi gas in a non - abelian @xmath0 gauge potential characterized by a _ constant _ wilson loop . under this specific condition , the energy spectrum exhibits a robust band structure with large gaps and reveals a new fractal figure . the transverse conductivity is related to topological invariants and is shown to be quantized when the fermi energy lies inside a gap of the spectrum . we demonstrate that the analogue of the integer quantum hall effect for neutral atoms survives the non - abelian coupling and leads to a striking fractal phase diagram . moreover , this coupling induces an anomalous hall effect as observed in graphene . . And you have already written the first three sentences of the full article: ultracold atoms in optical lattices offer unprecedented possibilities of controlling quantum matter and mimicking the systems of condensed - matter and high - energy physics @xcite . particularly fascinating is the possibility to study ultracold atoms under the influence of strong artificial abelian and non - abelian magnetic " fields . the experimental realization of artificial abelian magnetic " fields , which reproduce the physics of electrons in strong magnetic fields , is currently achieved through diverse schemes : for atoms in a trap the simplest way is to rotate the trap @xcite , while for atoms in optical lattices this can be accomplished by combining laser - assisted tunneling and lattice acceleration methods @xcite , by the means of lattice rotations @xcite , or by the immersion of atoms in a lattice within a rotating bose - einstein condensate ( bec ) @xcite .. Please generate the next two sentences of the article
several phenomena were predicted to occur in these arrangements such as the hofstadter butterfly " @xcite and the escher staircase " @xcite in single - particle spectra , vortex formation @xcite , quantum hall effects @xcite , as well as other quantum correlated liquids @xcite . as shown by one of us in ref .
4,620
Suppose that you have an abstract for a scientific paper: we determine the total enclosed mass profile from 0.7 to 35 kpc in the elliptical galaxy ngc 4636 based on the hot interstellar medium temperature profile measured using the _ chandra _ x - ray observatory , and other x - ray and optical data . the total mass increases as @xmath0 to a good approximation over this range in radii , attaining a total of @xmath1 m@xmath2 ( corresponding to @xmath3 ) at 35 kpc . we find that at least half , and as much as 80% , of the mass within the optical half - light radius is non - luminous , implying an exceptionally low baryon fraction in ngc 4636 . the large inferred dark matter concentration and central dark matter density , consistent with the upper end of the range expected for standard cold dark matter halos , imply that mechanisms proposed to explain low dark matter densities in less massive galaxies are not effective in elliptical galaxies . . And you have already written the first three sentences of the full article: according to recent estimates , 8090% of the matter in the universe is non - baryonic . with the presence of extended dark matter halos in galaxies of all morphological types now well - established , attention is focusing on comparing the detailed mass distribution with theoretical predictions of galactic dark halo structure . the shape of the dark matter distribution is determined by the initial density perturbation spectrum , the coupled dynamical evolution of baryonic and non - baryonic constituents , and the nature of the dark matter itself .. Please generate the next two sentences of the article
therefore , its measurement represents a powerful diagnostic of fundamental astrophysical processes and parameters . the standard cold dark matter ( cdm ) model is highly successful in explaining the distribution of mass in the universe on scales ranging from galaxies on up , but is undergoing a critical re - examination due in large part to its confrontation with measurements of _ late - type _ galaxy mass distributions indicating that dark matter is less concentrated than expected . in this work @xcite
4,621
Suppose that you have an abstract for a scientific paper: a set of weakly interacting spin-@xmath0 fermions , confined by a harmonic oscillator potential , and interacting with each other via a contact potential , is a model system which closely represents the physics of a dilute gas of two - component fermionic atoms confined in a magneto - optic trap . in the present work , our aim is to present a fortran 90 computer program which , using a basis set expansion technique , solves the hartree - fock ( hf ) equations for spin-@xmath0 fermions confined by a three - dimensional harmonic oscillator potential , and interacting with each other via pair - wise delta - function potentials . additionally , the program can also account for those anharmonic potentials which can be expressed as a polynomial in the position operators @xmath1 @xmath2 , and @xmath3 . both the restricted - hf ( rhf ) , and the unrestricted - hf ( uhf ) equations can be solved for a given number of fermions , with either repulsive or attractive interactions among them . the option of uhf solutions for such systems also allows us to study possible magnetic properties of the physics of two - component confined atomic fermi gases , with imbalanced populations . using our code we also demonstrate that such a system exhibits shell structure , and follows hund s rule . trapped fermi gases , hartree - fock equation numerical solutions 02.70.-c , 02.70.hm , 03.75.ss , 73.21.la * program summary * + _ title of program : _ trap.x + _ catalogue identifier : _ + _ program summary url : _ + _ program obtainable from : _ cpc program library , queen s university of belfast , n. ireland + _ distribution format : _ tar.gz + _ computers : _ pcs / linux , sun ultra 10/solaris , hp alpha / tru64 , ibm / aix + _ programming language used : _ mostly fortran 90 + _ number of bytes in distributed program , including test data , etc . : _ size of the gzipped tar file 371074 bytes + _ card punching code : _ ascii + _ nature of physical problem : _ the simplest description of a spin.... And you have already written the first three sentences of the full article: over the last several years , there has been an enormous amount of interest in the physics of dilute fermi gases confined in magneto - optic traps@xcite . with the possibility of tuning the atomic scattering lengths from the repulsive regime to an attractive one using the feshbach resonance technique , there has been considerable experimental activity in looking for phenomenon such as superfluidity , and other phase transitions in these systems@xcite . this has led to equally vigorous theoretical activity starting from the studies of so - called bec - bcs crossover physics@xcite , search for shell - structure in these systems@xcite , to the study of more complex phases@xcite .. Please generate the next two sentences of the article
as far as the spin of the fermions is concerned , most attention has been given to the cases of two - component gases which can be mapped to a system of spin-@xmath0 atoms@xcite . therefore , in our opinion , a quantum - mechanical study of spin-@xmath0 fermions moving in a harmonic oscillator potential , and interacting via a pair - wise delta function potential , can help us achieve insights into the physics of dilute gases of trapped fermionic atoms . with the aforesaid
4,622
Suppose that you have an abstract for a scientific paper: we discuss the full counting statistics of non - commuting variables with the measurement of successive spin counts in non - collinear directions taken as an example . we show that owing to an irreducible detector back - action , the fcs in this case may be sensitive to the dynamics of the detectors , and may differ from the predictions obtained with using a naive version of the projection postulate . we present here a general model of detector dynamics and path - integral approach to the evaluation of fcs . we concentrate further on a simple diffusive " model of the detector dynamics where the fcs can be evaluated with transfer - matrix method . the resulting probability distribution of spin counts is characterized by anomalously large higher cumulants and substantially deviates from gaussian statistics . . And you have already written the first three sentences of the full article: in the past years , there has been a growing interest in noise in mesoscopic systems @xcite . normally , noise is an unwanted feature , and , according to classical physics , in principle can be made arbitrarily small by lowering the temperature ; according to quantum physics , however , noise is uneliminable due to the intrinsic randomness of elementary processes . furthermore , noise , rather than being a hindrance , contains valuable information which adds to the one carried by the mean value of the quantity observed .. Please generate the next two sentences of the article
simple probability distributions , like e.g. the gaussian ones , are determined by the mean values and noise . even though gaussian distributions are ubiquitous , there are interesting physical processes which are described by non - gaussian distributions .
4,623
Suppose that you have an abstract for a scientific paper: in these proceedings we present cms results on hard diffraction . diffractive dijet production in @xmath0 collisions at @xmath1=7 tev is discussed . the cross section for dijet production is presented as a function of @xmath2 , representing the fractional momentum loss of the scattered proton in single - diffractive events . the observation of w and z boson production in events with a large pseudo - rapidity gap is also presented . . And you have already written the first three sentences of the full article: diffractive processes contribute a significant fraction to the total inelastic proton - proton cross sections at high energies . these reactions can be described in terms of the exchange of a @xmath3 , a hypothetical object with the quantum numbers of the vacuum . the experimental signatures of diffractive events are the presence of non - exponentially suppressed large rapidity gaps and/or presence of the intact leading protons .. Please generate the next two sentences of the article
diffractive events with a hard parton - parton scattering , so called _ hard diffractive events _ , subject of these proceedings , are of particular interest since they can be studied in terms of perturbative qcd . the measurements presented here are based on the data collected by the cms experiment during 2010 at a @xmath47 tev . the detailed description of the cms experiment can be found elsewhere @xcite .
4,624
Suppose that you have an abstract for a scientific paper: we demonstrate a novel technology that combines the power of the multi - object spectrograph with the spatial multiplex advantage of an integral field spectrograph ( ifs ) . the sydney - aao multi - object ifs ( sami ) is a prototype wide - field system at the anglo - australian telescope ( aat ) that allows 13 imaging fibre bundles ( `` hexabundles '' ) to be deployed over a 1degree diameter field of view . each hexabundle comprises 61 lightly fused multimode fibres with reduced cladding and yields a 75 percent filling factor . each fibre core diameter subtends 1.6 arcseconds on the sky and each hexabundle has a field of view of 15 arcseconds diameter . the fibres are fed to the flexible aaomega double beam spectrograph , which can be used at a range of spectral resolutions ( @xmath0 170013000 ) over the optical spectrum ( 37009500 ) . we present the first spectroscopic results obtained with sami for a sample of galaxies at @xmath1 . we discuss the prospects of implementing hexabundles at a much higher multiplex over wider fields of view in order to carry out spatially resolved spectroscopic surveys of @xmath2 galaxies . instrumentation : spectrographs techniques : imaging spectroscopy surveys galaxies : general galaxies : kinematics and dynamics . And you have already written the first three sentences of the full article: galaxies are intrinsically complex with multiple components and varied formation histories . this complexity is the primary reason that unravelling the physics of galaxy formation and evolution is so challenging . galaxies are made up of baryons confined to dark matter haloes , and often have multiple distinct kinematic components ( e.g.bulge and/or disc ) .. Please generate the next two sentences of the article
there are complex interactions between the stars , gas , dust , dark matter and super - massive black holes . these can lead to both positive and negative feedback on the formation rate of stars .
4,625
Suppose that you have an abstract for a scientific paper: in this work we investigate the effect of the convolutional network depth on its accuracy in the large - scale image recognition setting . our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small ( @xmath0 ) convolution filters , which shows that a significant improvement on the prior - art configurations can be achieved by pushing the depth to 16 - 19 weight layers . these findings were the basis of our 2014 submission , where our team secured the first and the second places in the localisation and classification tracks respectively . we also show that our representations generalise well to other datasets , where they achieve state - of - the - art results . we have made our two best - performing convnet models publicly available to facilitate further research on the use of deep visual representations in computer vision . . And you have already written the first three sentences of the full article: convolutional networks ( convnets ) have recently enjoyed a great success in large - scale image and video recognition @xcite which has become possible due to the large public image repositories , such as imagenet @xcite , and high - performance computing systems , such as gpus or large - scale distributed clusters @xcite . in particular , an important role in the advance of deep visual recognition architectures has been played by the imagenet large - scale visual recognition challenge ( ilsvrc ) @xcite , which has served as a testbed for a few generations of large - scale image classification systems , from high - dimensional shallow feature encodings @xcite ( the winner of ilsvrc-2011 ) to deep convnets @xcite ( the winner of ilsvrc-2012 ) . with convnets becoming more of a commodity in the computer vision field , a number of attempts have been made to improve the original architecture of @xcite in a bid to achieve better accuracy . for instance , the best - performing submissions to the ilsvrc-2013 @xcite utilised smaller receptive window size and smaller stride of the first convolutional layer . another line of improvements dealt with training and testing the networks densely over the whole image and over multiple scales @xcite . in this paper. Please generate the next two sentences of the article
, we address another important aspect of convnet architecture design : its depth . to this end , we fix other parameters of the architecture , and steadily increase the depth of the network by adding more convolutional layers , which is feasible due to the use of very small ( @xmath1 ) convolution filters in all layers . as a result ,
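The two preceding lines describe pushing convnet depth to 16 - 19 weight layers while keeping every filter at 3x3. As a rough sketch only (PyTorch, the channel widths, and the split into 13 convolutional plus 3 fully connected layers are assumptions for illustration, not the authors' released configuration files), such a stack can be written as:

```python
# Hypothetical sketch of a VGG-style network built from 3x3 convolutions.
# Widths and depths are illustrative assumptions, not the paper's exact config.
import torch
import torch.nn as nn

def conv_stack(in_ch, out_ch, n_convs):
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halve spatial size
    return nn.Sequential(*layers)

features = nn.Sequential(          # 13 convolutional (weight) layers
    conv_stack(3, 64, 2), conv_stack(64, 128, 2), conv_stack(128, 256, 3),
    conv_stack(256, 512, 3), conv_stack(512, 512, 3),
)
classifier = nn.Sequential(        # 3 fully connected (weight) layers
    nn.Flatten(), nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, 1000),
)

x = torch.randn(1, 3, 224, 224)       # ImageNet-sized input
print(classifier(features(x)).shape)  # -> torch.Size([1, 1000])
```

Deeper variants in the same spirit are obtained simply by adding more 3x3 layers inside each stack, which is the depth experiment the abstract refers to.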
4,626
Suppose that you have an abstract for a scientific paper: we present a theoretical study of the transport characteristics of molecular junctions , where first - row diatomic molecules are attached to ( 001 ) gold and platinum electrodes . we find that the conductance of all of these junctions is of the order of the conductance quantum unit @xmath0 , spelling out that they belong to the transparent regime . we further find that the transmission coefficients show wide plateaus as a function of the energy , instead of the usual sharp resonances that signal the molecular levels in the tunneling regime . we use caroli s model to show that this is a rather generic property of the transparent regime of a junction , which is driven by a strong effective coupling between the delocalized molecular levels and the conduction channels at the electrodes . we analyse the transmission coefficients and chemical bonding of gold / benzene and gold / benzene - dithiolate ( bdt ) junctions to understand why the later show large resistances , while the former are highly conductive . . And you have already written the first three sentences of the full article: the field of molecular electronics was arisen by the early realization that organic molecules could act as rectifiers@xcite when attached to conducting electrodes to form tunnel junctions . many experiments with a large variety of organic molecules have been performed@xcite , typically finding values of the conductance @xmath1 several orders of magnitude smaller than @xmath0 ( @xmath2 is the conductance quantum ) and a large variability , which hinder the reproducibility of the experiments . molecular junctions can be understood in terms of resonant tunneling models@xcite , where the conduction is carried through the highest occupied and lowest unoccupied molecular orbitals ( homo and lumo , respectively ) .. Please generate the next two sentences of the article
these are revealed as sharp resonances in either the densities of states ( dos ) of the molecule , or the transmission coefficients @xmath3 of the junction , and are usually located 1 or 2 ev above or below the fermi level of the molecule , respectively . conductance values of the order of @xmath0 can only be achieved by pinning one of those resonances to the fermi level of the electrodes .
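Both lines of this record quote conductances in units of the conductance quantum and reason in terms of transmission coefficients. For orientation, the standard Landauer relation is reproduced below (a textbook result; the identification of the record's @xmath0 with 2e^2/h is an assumption here):

```latex
% conductance in terms of transmission eigenvalues T_n at the Fermi energy E_F
G \;=\; \frac{2e^{2}}{h}\sum_{n} T_{n}(E_{F}),
\qquad
G_{0} \;\equiv\; \frac{2e^{2}}{h} \;\approx\; 77.5\ \mu\mathrm{S} \;\approx\; \bigl(12.9\ \mathrm{k}\Omega\bigr)^{-1}.
```

A "transparent" junction therefore has at least one transmission eigenvalue of order unity, whereas the tunneling regime corresponds to all T_n being much smaller than one.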
4,627
Suppose that you have an abstract for a scientific paper: the development and decay of a turbulent vortex tangle driven by the gross - pitaevskii equation is studied . using a recently - developed accurate and robust tracking algorithm , all quantised vortices are extracted from the fields . the vinen s decay law for the total vortex length with a coefficient that is in quantitative agreement with the values measured in helium ii is observed . the topology of the tangle is then studied showing that linked rings may appear during the decay . the tracking also allows for determining the statistics of small - scales quantities of vortex lines , exhibiting large fluctuations of curvature and torsion . finally , the temporal evolution of the kelvin wave spectrum is obtained providing evidence of the development of a weak - wave turbulence cascade . the full understanding of turbulence in a fluid is one of the oldest yet still unsolved problems in physics . a fluid is said to be turbulent when it manifests excitations occurring at several length - scales . due to the large number of degrees of freedom and the nonlinearity of the governing equations of motion , the problem is usually tackled statistically by introducing assumptions and closures in terms of correlators . this is the case in the seminal work of kolmogorov in 1941 based on the idea of richardson s energy cascade , where energy in classical fluids is transferred from large to small scales @xcite . superfluids form a particular class among fluids characterised essentially by two main ingredients : the lack of dissipation and the evidence that vortex circulation takes only discrete values multiple of the quantum of circulation @xcite . superfluid examples which are routinely created in laboratories are superfluid liquid helium ( he ii ) and bose - einstein condensates ( becs ) made of dilute alkali gases . here the superfluid phase is usually modelled via a complex field describing the order parameter of the system and quantised vortices appear as topological defects where the.... And you have already written the first three sentences of the full article: we have recently developed a robust and accurate algorithm to track vortex lines in the gross - pitaevskii equation ( gp ) with arbitrary geometries in a periodic domain . the full details of the algorithm and the case studies to check its validity can be found in @xcite . we recall here briefly the basic ideas .. Please generate the next two sentences of the article
a quantised vortex line is defined by the nodal lines of the wavefunction . in three dimensions this corresponds to a line defined by @xmath67 = im[\psi(x , y , z)] = 0 . the algorithm is based on a newton - raphson method to find zeros of @xmath2 and on the knowledge of the _ pseudo - vorticity _ field @xmath68 \times \nabla im[\psi ] , always tangent to the line , to follow vortex lines ( in the spirit of rorai et al .
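The output above tracks quantised vortices as the nodal lines re(psi) = im(psi) = 0, using the pseudo-vorticity field as the local tangent. A toy numerical illustration of those two ingredients follows (numpy on a uniform periodic grid is an assumption; this is not the cited tracking algorithm, which additionally refines the line positions with Newton - Raphson steps):

```python
import numpy as np

def pseudo_vorticity(psi, dx=1.0):
    """w = grad(Re psi) x grad(Im psi); tangent to a quantised vortex line."""
    gr = np.gradient(psi.real, dx)   # [d/dx, d/dy, d/dz] of Re(psi)
    gi = np.gradient(psi.imag, dx)   # [d/dx, d/dy, d/dz] of Im(psi)
    wx = gr[1] * gi[2] - gr[2] * gi[1]
    wy = gr[2] * gi[0] - gr[0] * gi[2]
    wz = gr[0] * gi[1] - gr[1] * gi[0]
    return np.stack([wx, wy, wz], axis=-1)

def candidate_vortex_cells(psi, eps=1e-2):
    """Grid cells where |psi| is small are candidates crossed by a vortex line;
    a Newton-Raphson search for Re(psi)=Im(psi)=0 (not shown) would refine them."""
    return np.argwhere(np.abs(psi) < eps)

# toy example: a straight vortex line along z in a periodic box
L, N = 2 * np.pi, 64
x = np.linspace(0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
psi = (X - L / 2) + 1j * (Y - L / 2)   # zero, with 2*pi phase winding, on the centre line
w = pseudo_vorticity(psi, dx=L / N)
print(candidate_vortex_cells(psi, eps=0.2).shape)
```

In this toy field the pseudo-vorticity points along z everywhere on the line x = y = L/2, as expected for a straight vortex.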
4,628
Suppose that you have an abstract for a scientific paper: the spectral energy distribution ( sed ) of a galaxy contains information on the galaxy s physical properties , and multi - wavelength observations are needed in order to measure these properties via sed fitting . in planning these surveys , optimization of the resources is essential . the fisher matrix formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the sed fitting parameters . however , because it relies on the assumption of a gaussian likelihood function , it is in general less accurate than other slower techniques that reconstruct the probability distribution function ( pdf ) from the direct comparison between models and data . we compare the uncertainties on sed fitting parameters predicted by the fisher matrix to the ones obtained using the more thorough pdf fitting techniques . we use both simulated spectra and real data , and consider a large variety of target galaxies differing in redshift , mass , age , star formation history , dust content , and wavelength coverage . we find that the uncertainties reported by the two methods agree within a factor of two in the vast majority ( @xmath0 ) of cases . if the age determination is uncertain , the top - hat prior in age used in pdf fitting to prevent each galaxy from being older than the universe needs to be incorporated in the fisher matrix , at least approximately , before the two methods can be properly compared . we conclude that the fisher matrix is a useful tool for astronomical survey design . . And you have already written the first three sentences of the full article: the fisher information matrix ( fm , @xcite ) is a statistical instrument of paramount importance in parameter estimation problems . its function is to _ predict _ the precision with which the parameters of interest can be measured by a given experiment , starting from the expected observational uncertainties on the data . the utility of the fm comes from the fact that it can be used to optimize the experimental setup , according to the desired target results , _ before _ actually collecting the data .. Please generate the next two sentences of the article
the cramér - rao inequality @xcite states that the uncertainties predicted by the fm formalism are , in general , optimistic ; they set the upper limit for the precision that an experiment can attain . in particular , the fm method is expected to be accurate only if the probability distribution is a gaussian function of the parameters .
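Since this record is about comparing Fisher-matrix forecasts with full PDF fitting, a minimal sketch of the forecast side may help (it assumes independent gaussian errors, central finite-difference derivatives, and a user-supplied model function; none of these choices are taken from the record):

```python
import numpy as np

def fisher_matrix(model, theta0, sigma, h=1e-5):
    """F_ij = sum_k (dm_k/dtheta_i)(dm_k/dtheta_j) / sigma_k^2 for a model m(theta)
    with independent gaussian errors sigma; derivatives by central differences."""
    theta0 = np.asarray(theta0, dtype=float)
    derivs = []
    for i in range(theta0.size):
        step = np.zeros_like(theta0)
        step[i] = h
        derivs.append((model(theta0 + step) - model(theta0 - step)) / (2 * h))
    D = np.array(derivs)                  # shape (n_params, n_data)
    return (D / np.asarray(sigma) ** 2) @ D.T

# example: a straight line m(x) = a + b*x observed at 20 points with 0.1 errors
x = np.linspace(0.0, 1.0, 20)
sigma = np.full_like(x, 0.1)
F = fisher_matrix(lambda th: th[0] + th[1] * x, [1.0, 2.0], sigma)
marginalised = np.sqrt(np.diag(np.linalg.inv(F)))   # forecast 1-sigma uncertainties
print(marginalised)
```

The forecast 1-sigma marginalised uncertainty on parameter i is sqrt((F^-1)_ii), which is what the record compares against the PDF-based estimates.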
4,629
Suppose that you have an abstract for a scientific paper: we present two numerical methods for the fully nonlinear elliptic monge - ampre equation . the first is a pseudo transient continuation method and the second is a pure pseudo time marching method . the methods are proved to converge to a convex solution of a natural discrete variational formulation with @xmath0 conforming approximations . the assumption of existence of a convex solution to the discrete problem is proven for smooth solutions of the continuous problem and supported by numerical evidence for non smooth solutions . . And you have already written the first three sentences of the full article: we are interested in numerical solutions of the fully nonlinear elliptic monge - ampre equation @xmath1 on a convex bounded domain @xmath2 of @xmath3 with boundary @xmath4 . the unknown @xmath5 is a real valued function and @xmath6 are given functions with @xmath7 in the non degenerate case and @xmath8 in the degenerate case . we will also assume that @xmath9 and @xmath10 in @xmath11 .. Please generate the next two sentences of the article
starting with @xcite , interest has grown in finite element methods which are able to capture non smooth solutions of second order fully nonlinear equations . for smooth solutions , the problem was studied in the context of @xmath0 conforming approximations by böhmer @xcite and in the context of lagrange elements by brenner et al . @xcite .
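For readers who want the equation behind this record spelled out, the Dirichlet problem for the elliptic Monge - Ampere equation is usually written as below (standard textbook form, given here on the assumption that this is what the record's @xmath placeholders denote):

```latex
\det D^{2}u \;=\; f \quad \text{in } \Omega,
\qquad u \;=\; g \quad \text{on } \partial\Omega,
\qquad u \ \text{convex},
```

with f >= 0 ( f > 0 in the non - degenerate case ) ; the convexity constraint is what makes the problem elliptic and selects the relevant solution.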
4,630
Suppose that you have an abstract for a scientific paper: in a warm dark matter ( wdm ) cosmology , the first objects to form at @xmath0 are one dimensional filaments with mean length on the order of the wdm free - streaming scale . gao and theuns recently claimed by using high - resolution hydrodynamic simulations that the eventual collapse of these wdm filaments along their longest axes may seed the supermassive black holes that power high-@xmath1 quasars . in this picture , it is supposed that the high-@xmath1 quasar luminosity function should reflect how abundant the wdm filaments are in the early universe . we derive analytically the mass function of early - universe filaments with the help of the zeldovich approximation . then , we determine the rate of its decrease in the mass section corresponding to the free streaming scale of a wdm particle of mass @xmath2 . adjusting the value of @xmath2 , we fit the slope of the analytic model to that of the high-@xmath1 quasar luminosity function measured from the sloan digital sky survey dr3 . a new wdm constraint from this feasibility study is found to be consistent with the lightest super - symmetric partner . . And you have already written the first three sentences of the full article: the large - scale features of the observed universe are strikingly consistent with the theoretical predictions based on the cold dark matter model . the combined analyses of the recent data from the observations of cosmic microwave background ( cmb ) , galaxy power spectrum and type ia supernovae ( e.g. , * ? ? ? * and references therein ) have been capable of measuring the key cosmological parameters that characterize the cdm model with surprisingly high precision . this has opened an era of precision cosmology , echoing the triumph of the cdm model .. Please generate the next two sentences of the article
nevertheless , the status of the cdm model as the standard paradigm has recently been shaken from both observational and theoretical perspectives . observations have reported several mismatches between the predictions of the cdm model and the real phenomena on galactic and subgalactic scales .
4,631
Suppose that you have an abstract for a scientific paper: we introduce an experimental procedure for the detection of quantum entanglement of an unknown quantum state with a small number of measurements . the method requires neither a priori knowledge of the state nor a shared reference frame between the observers and can thus be regarded as a perfectly state independent entanglement witness . the scheme starts with local measurements , possibly supplemented with suitable filtering , which essentially establishes the schmidt decomposition for pure states . alternatively we develop a decision tree which reveals entanglement within few steps . these methods are illustrated and verified experimentally for various entangled states of two and three qubits . _ introduction._entanglement is the distinguishing feature of quantum mechanics and it is the most important resource for quantum information processing @xcite . for any experiment it is thus of utmost importance to easily reveal entanglement , best with as little effort as possible . common methods suffer from disadvantages . on the one hand , employing the peres - horodecki criterion @xcite or evaluating entanglement measures , one can identify entanglement in arbitrary states , however , it requires full state tomography . on the other hand , various entanglement witnesses @xcite can be determined with much fewer measurements but give conclusive answers only if the state under investigation is close to the witness - state , i.e. , they require a priori knowledge . recently , it has been shown that the existence of entanglement can be inferred from analyzing correlations between the measurement results on the subsystems of a quantum state . only if the state is entangled , the properly weighted sum of correlations will overcome characteristic thresholds @xcite . here we further develop this approach to obtain a simple and practical method to detect entanglement of all pure states and some mixed states by measuring only a small number of correlations . since the method is adaptive it.... And you have already written the first three sentences of the full article: this section is devoted to study the efficiency of the decision tree algorithm described in the main text . some results in this section are analytical and some are numerical . in all our numerical investigations ( unless explicitly stated otherwise ) we used the decision tree of the main text ( for two qubits ) and in cases when going through the whole tree did not reveal entanglement we augmented it with additional measurements of those correlations which were not performed until that moment .. Please generate the next two sentences of the article
the order of the additional measurements also results from the correlation complementarity ( anti - commutation relations ) @xcite . with every remaining measurement we associate the `` priority '' parameter @xmath110 that depends on the measured correlation tensor elements of the decision tree in the following way : @xmath111 . according to the correlation complementarity , there is a bigger chance that this correlation is significant if the value of the corresponding parameter is small .
4,632
Suppose that you have an abstract for a scientific paper: we propose a new method of classifying documents into categories . we define for each category a _ finite mixture model _ based on _ soft clustering _ of words . we treat the problem of classifying documents as that of conducting statistical hypothesis testing over finite mixture models , and employ the em algorithm to efficiently estimate parameters in a finite mixture model . experimental results indicate that our method outperforms existing methods . . And you have already written the first three sentences of the full article: we are concerned here with the issue of classifying documents into categories . more precisely , we begin with a number of categories ( e.g. , ` tennis , soccer , skiing ' ) , each already containing certain documents . our goal is to determine into which categories newly given documents ought to be assigned , and to do so on the basis of the distribution of each document s words . many methods have been proposed to address this issue , and a number of them have proved to be quite effective ( e.g.,@xcite ) .. Please generate the next two sentences of the article
the simple method of conducting hypothesis testing over word - based distributions in categories ( defined in section 2 ) is not efficient in storage and suffers from the _ data sparseness problem _ , i.e. , the number of parameters in the distributions is large and the data size is not sufficiently large for accurately estimating them . in order to address this difficulty , @xcite have proposed using distributions based on what we refer to as _ hard clustering _ of words , i.e. , in which a word is assigned to a single cluster and words in the same cluster are treated uniformly .
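The record above fits finite mixture models with the EM algorithm. The sketch below shows the generic E-step / M-step alternation for a plain mixture of multinomials over the vocabulary (a simplification for illustration: it is not the paper's two-level model based on soft word clusters, and all variable names are placeholders):

```python
import numpy as np

def em_mixture_multinomial(X, K, n_iter=50, seed=0):
    """X: (n_docs, vocab_size) word-count matrix. Fit a K-component mixture of
    multinomials with EM; returns mixing weights and per-component word distributions."""
    rng = np.random.default_rng(seed)
    n_docs, vocab = X.shape
    pi = np.full(K, 1.0 / K)                       # mixing weights
    theta = rng.dirichlet(np.ones(vocab), size=K)  # word distributions, shape (K, vocab)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each document
        log_r = np.log(pi) + X @ np.log(theta).T   # (n_docs, K), up to a constant
        log_r -= log_r.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments
        pi = r.mean(axis=0)
        theta = r.T @ X + 1e-12                    # tiny smoothing to avoid log(0)
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta

# toy usage with random counts
X = np.random.default_rng(1).integers(0, 5, size=(100, 30))
pi, theta = em_mixture_multinomial(X, K=3)
print(pi, theta.shape)
```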
4,633
Suppose that you have an abstract for a scientific paper: the cornerstone of boltzmann - gibbs ( @xmath0 ) statistical mechanics is the boltzmann - gibbs - jaynes - shannon entropy @xmath1 , where @xmath2 is a positive constant and @xmath3 a probability density function . this theory has exibited , along more than one century , great success in the treatment of systems where short spatio / temporal correlations dominate . there are , however , anomalous natural and artificial systems that violate the basic requirements for its applicability . different physical entropies , other than the standard one , appear to be necessary in order to satisfactorily deal with such anomalies . one of such entropies is @xmath4^q)/(1-q)$ ] ( with @xmath5 ) , where the entropic index @xmath6 is a real parameter . it has been proposed as the basis for a generalization , referred to as _ nonextensive statistical mechanics _ , of the @xmath0 theory . @xmath7 shares with @xmath8 four remarkable properties , namely _ concavity _ ( @xmath9 ) , _ lesche - stability _ ( @xmath9 ) , _ finiteness of the entropy production per unit time _ ( @xmath10 ) , and _ additivity _ ( for at least a compact support of @xmath6 including @xmath11 ) . the simultaneous validity of these properties suggests that @xmath7 is appropriate for bridging , at a macroscopic level , with classical thermodynamics itself . in the same natural way that exponential probability functions arise in the standard context , power - law tailed distributions , even with exponents _ out _ of the lvy range , arise in the nonextensive framework . in this review , we intend to show that many processes of interest in economy , for which fat - tailed probability functions are empirically observed , can be described in terms of the statistical mechanisms that underly the nonextensive theory . . And you have already written the first three sentences of the full article: the concept of `` entropy '' ( from the greek @xmath12 , transformation ) , was introduced in @xmath13 by rudolf julius emmanuel clausius in the context of thermodynamics@xcite . this was motivated by his studies on reversible and irreversible transformations , as a measure of the amount of energy in a physical system , that can not be used to perform work . more specifically , clausius defined _ change in entropy _ of a thermodynamic system , during some reversible process where a certain amount of heat @xmath14 is transported at constant temperature @xmath15 , as @xmath16 . we can consider _ entropy _ as the cornerstone of thermodynamics , since all the thermodynamical principles involve , directly or indirectly , this fundamental concept . the first connection between the macroscopic clausius entropy of a system and its microscopic configurations was done by ludwig boltzmann in @xmath17 @xcite .. Please generate the next two sentences of the article
studying the approach to equilibrium of an `` ideal '' gas @xcite , he realized that the entropy could be related to the number of possible microstates compatible with the thermodynamic properties of the gas . for an isolated system in its terminal stationary state ( _ thermal equilibrium _ ) , boltzmann 's observation can be expressed as @xmath18 , where @xmath2 is a positive constant and @xmath19 the number of microstates consistent with the macroscopic state .
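As a side note to the two records above, the entropic forms under discussion are, in their standard textbook notation (presumed readings of the @xmath placeholders, with k a positive constant and p a probability density):

```latex
S \;=\; k\ln W \ \ \text{(equiprobable microstates)},\qquad
S_{BG} \;=\; -\,k\!\int\! \mathrm{d}x\; p(x)\ln p(x),\qquad
S_{q} \;=\; k\,\frac{1-\int \mathrm{d}x\,[p(x)]^{q}}{q-1},
```

with S_q recovering S_BG in the limit q -> 1.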
4,634
Suppose that you have an abstract for a scientific paper: an extensive study of the magnetic properties of fete@xmath0se@xmath1 crystals in the superconducting state is presented . we show that weak collective pinning , originating from spatial variations of the charge carrier mean free path ( @xmath2 pinning ) , rules in this superconductor . our results are compatible with the nanoscale phase separation observed on this compound and indicate that in spite of the chemical inhomogeneity spatial fluctuations of the critical temperature are not important for pinning . a power law dependence of the magnetization vs time , generally interpreted as signature of single vortex creep regime , is observed in magnetic fields up to 8 t. for magnetic fields applied along the @xmath3 axis of the crystal the magnetization curves exhibit a clear peak effect whose position shifts when varying the temperature , following the same dependence as observed in yba@xmath4cu@xmath5o@xmath6 . the time and temperature dependence of the peak position has been investigated . we observe that the occurrence of the peak at a given magnetic field determines a specific vortex configuration that is independent on the temperature . this result indicates that the influence of the temperature on the vortex - vortex and vortex - defect interactions leading to the peak effect in fete@xmath0se@xmath1 is negligible in the explored range of temperatures . . And you have already written the first three sentences of the full article: the study of the vortex properties in type - ii superconductors is of extreme interest both for investigating the basic physics of the superconductivity and for evaluating the quality of the materials in view of practical applications . many aspects of this subject are still topical , especially with regard to the high temperature superconductors ( hts ) and the recently discovered iron - based superconductors . to date , five families of fe - based superconductors have been discovered : _ _ re__ofeas , ( 1111 , _ _ re__=rare earth),@xcite _ _ a__fe@xmath4as@xmath4 ( 122 , _ _ a__=alkaline earth ) , @xcite _ _ x__feas ( 111 , _ _ x__=li ; na),@xcite fe(se , ch ) ( 11 , ch = s , te ) @xcite and the most recently discovered 21311 family of sr@xmath4mo@xmath5fepn ( m = sc , v , cr and pn = pnictogen).@xcite among these families , iron chalcogenides are considered of particular interest because of their simple crystal structure consisting of fe ions tetrahedrally coordinated by se and te arranged in layers stacked along the c - axis , without any other interlayer cations , as occurs in the pnictides .. Please generate the next two sentences of the article
for this reason , iron chalcogenides are generally considered an ideal candidate for understanding some open issues of high - temperature superconductivity . one of the most intriguing phenomena observed in the study of the vortex properties in type - ii superconductors is the so called peak ( or fishtail ) effect .
4,635
Suppose that you have an abstract for a scientific paper: by making use of the weak gravitational field approximation , we obtain a linearized solution of the gravitational vacuum field equation in an anisotropic spacetime . the plane - wave solution and dispersion relation of the gravitational wave are presented explicitly . there is a possibility that the speed of the gravitational wave is larger than the speed of light and causality still holds . we show that the energy - momentum of the gravitational wave in the anisotropic spacetime is still well defined and conserved . . And you have already written the first three sentences of the full article: lorentz invariance is one of the foundations of the standard model of particle physics . the constraints on possible lorentz violating phenomenology are quite severe , see , for example , the summary tables provided by kostelecky _ et al_.@xcite .. Please generate the next two sentences of the article
the gravitational interaction is far weaker compared to the other fundamental interactions . this allows one to study possible lorentz violating effects in certain gravity theories , such as einstein - aether theory @xcite and horava - lifshitz theory @xcite .
4,636
Suppose that you have an abstract for a scientific paper: this paper presents a unified mathematical framework for inference in graphical models , building on the observation that graphical models are algebraic varieties . from this geometric viewpoint , observations generated from a model are coordinates of a point in the variety , and the sum - product algorithm is an efficient tool for evaluating specific coordinates . the question addressed here is how the solutions to various inference problems depend on the model parameters . the proposed answer is expressed in terms of tropical algebraic geometry . a key role is played by the newton polytope of a statistical model . our results are applied to the hidden markov model and to the general markov model on a binary tree . . And you have already written the first three sentences of the full article: this paper presents a unified mathematical framework for probabilistic inference with statistical models , such as graphical models . our approach is summarized as follows : * ( a ) statistical models are algebraic varieties . * * ( b ) every algebraic variety can be tropicalized . * * ( c ) tropicalized statistical models are fundamental for parametric inference . * by a _ statistical model _ we mean a family of joint probability distributions for a collection of discrete random variables @xmath0 .. Please generate the next two sentences of the article
thesis ( a ) states that many families of interest can be characterized by polynomials in the joint probabilities @xmath1 . the emerging field of algebraic statistics @xcite offers algorithms for this polynomial representation .
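To make the tropicalization step of this record slightly more concrete (standard material in this framework rather than something stated in the record itself): writing a model coordinate as a polynomial in the parameters and passing to negative logarithms, u_i = -log theta_i, turns products into sums and sums into minima,

```latex
p(\theta) \;=\; \sum_{a} c_{a}\,\theta_{1}^{a_{1}}\cdots\theta_{d}^{a_{d}}
\quad\longrightarrow\quad
\operatorname{trop}(p)(u) \;=\; \min_{a}\Bigl(\langle a,u\rangle \;-\; \log c_{a}\Bigr),
```

so evaluating the tropical polynomial is the max-product (Viterbi-style) dynamic program for the most probable explanation, and the exponent vectors a that can achieve the minimum for some parameter values are vertices of the Newton polytope conv{a}.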
4,637
Suppose that you have an abstract for a scientific paper: with modern large scale spectroscopic surveys , such as the sdss and lss - gac , galactic astronomy has entered the era of millions of stellar spectra . taking advantage of the huge spectroscopic database , we propose to use a `` standard pair '' technique to a ) estimate multi - band extinction towards sightlines of millions of stars ; b ) detect and measure the diffuse interstellar bands in hundreds of thousands sdss and lamost low - resolution spectra ; c ) search for extremely faint emission line nebulae in the galaxy ; and d ) perform photometric calibration for wide field imaging surveys . in this contribution , we present some results of applying this technique to the sdss data , and report preliminary results from the lamost data . . And you have already written the first three sentences of the full article: dust grains produce extinction and reddening of stellar light from the ultraviolet ( uv ) to the infrared ( ir ) ( draine 2003 ) . accurate determination of reddening to a star is vital for reliable derivation of its basic stellar parameters , such as effective temperature and distance . constructing a 3d galactic extinction map plays an essential role in galactic astronomy , particularly in achieving the driving goals of the lamost spectroscopic survey of the galactic anti - center ( lss - gac ; liu et al . this volume ) . the sloan digital sky survey ( sdss ; york et al .. Please generate the next two sentences of the article
2000 ) has delivered low - resolution spectra for about 0.7 m stars in its data release 9 ( dr9 ; ahn et al . 2012 ) . the lamost galactic surveys ( deng et al . 2012 and
4,638
Suppose that you have an abstract for a scientific paper: the radial velocity ( rv ) method for detecting extrasolar planets has been the most successful to date . the rv signal imprinted by a few earth - mass planet around a cool star is at the limit of the typical single measurement uncertainty obtained using state - of - the - art spectrographs . this requires relying on statistics in order to unearth signals buried below noise . artifacts introduced by observing cadences can produce spurious signals or mask genuine planets that should be easily detected otherwise . here we discuss a particularly confusing statistical degeneracy resulting from the yearly aliasing of the first eccentric harmonic of an already - detected planet . this problem came sharply into focus after the recent announcement of the detection of a 3.1 earth mass planet candidate in the habitable zone of the nearby low mass star gj 581 . the orbital period of the new candidate planet ( gj 581@xmath0 ) corresponds to an alias of the first eccentric harmonic of a previously reported planet , gj 581@xmath1 . although the star is stable , the combination of the observing cadence and the presence of multiple planets can cause period misinterpretations . in this work , we determine whether the detection of gj 581@xmath0 is justified given this degeneracy . we also discuss the implications of our analysis for the recent bayesian studies of the same dataset , which failed to confirm the existence of the new planet . performing a number of statistical tests , we show that , despite some caveats , the existence of gj 581@xmath0 remains the most likely orbital solution to the currently available rv data . . And you have already written the first three sentences of the full article: the recently reported planet candidate around the nearby m dwarf gj 581 ( * ? ? ? * hereafter v10 ) has generated much public enthusiasm and a similar amount of skepticism within part of the scientific community @xcite . if confirmed , it will be the first planet potentially capable of hosting life as we know it now @xcite . the possible existence of this planet was announced based on analysis of the new hires / keck precision rv measurements combined with harps / eso data published by @xcite ( hereafter m09 ) .. Please generate the next two sentences of the article
v10 reported that the candidate planet gj 581@xmath0 ( hereafter _ planet g _ ) has a minimum mass of @xmath2 @xmath3 and a period of 36.5 days . the data for gj 581 contain the signal of at least 4 other low mass planets ( planets @xmath4 ) .
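Because the record's argument turns on yearly aliasing, the generic alias relation may be useful to keep in mind (a standard sampling fact, not taken from the record):

```latex
f_{\mathrm{alias}} \;=\; \bigl|\,f \;\pm\; n\,f_{\mathrm{w}}\,\bigr|,
\qquad n = 1,2,\dots,
\qquad f_{\mathrm{w}} \;\simeq\; \frac{1}{365.25\ \mathrm{d}}\ \ \text{(yearly observing window)},
```

so a periodicity near half the orbital period of an eccentric planet (its first harmonic, at twice the orbital frequency) can reappear displaced by roughly 1/yr in frequency and mimic a separate planet.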
4,639
Suppose that you have an abstract for a scientific paper: we propose a two - level stochastic context - free grammar ( scfg ) architecture for parametrized stochastic modeling of a family of rna sequences , including their secondary structure . a stochastic model of this type can be used for maximum a posteriori estimation of the secondary structure of any new sequence in the family . the proposed scfg architecture models rna subsequences comprising paired bases as stochastically weighted dyck - language words , i.e. , as weighted balanced - parenthesis expressions . the length of each run of unpaired bases , forming a loop or a bulge , is taken to have a phase - type distribution : that of the hitting time in a finite - state markov chain . without loss of generality , each such markov chain can be taken to have a bounded complexity . the scheme yields an overall family scfg with a manageable number of parameters . . And you have already written the first three sentences of the full article: in biological sequence analysis , probability distributions over finite ( @xmath0-dimensional ) sequences of symbols , representing nucleotides or amino acids , play a major role . they specify the probability of a sequence belonging to a specified family , and are usually generated by markov chains . these include the stochastic finite - state moore machines called hidden markov models ( hmms ) ; or infinite - state markov chains such as stochastic push - down automata ( spdas ) . by computing the most probable path through the markov chain. Please generate the next two sentences of the article
, one can answer such questions as `` what hidden ( e.g. , phylogenetic ) structure does a sequence have ? '' , and `` what secondary structure will a sequence give rise to ? '' . the number of markov model parameters should ideally be kept to a minimum , to facilitate parameter estimation and model validation .
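The article opening above appeals to computing "the most probable path through the markov chain". For the HMM case this is the Viterbi recursion, sketched generically below in log space (it does not implement the record's SCFG machinery, which would need a CYK-style parser, and the toy numbers are arbitrary):

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most probable hidden-state path of an HMM.
    obs: sequence of observation indices; log_pi[i]: initial log-probabilities;
    log_A[i, j]: log P(state j | state i); log_B[i, o]: log P(obs o | state i)."""
    T, S = len(obs), log_pi.size
    delta = np.empty((T, S))               # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (S, S): from-state x to-state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# toy 2-state example
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1], log_pi, log_A, log_B))
```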
4,640
Suppose that you have an abstract for a scientific paper: we report the first detection of the prebiotic complex organic molecule ch@xmath0nco in a solar - type protostar , iras16293 - 2422 b. this species is one of the most abundant complex organic molecule detected on the surface of the comet 67p / churyumov - gerasimenko , and in the insterstellar medium it has only been found in hot cores around high - mass protostars . we have used multi - frequency alma observations from 90 ghz to 350 ghz covering 11 unblended transitions of ch@xmath0nco and 40 more transitions that appear blended with emission from other molecular species . our local thermodynamic equilibrium analysis provides an excitation temperature of 232@xmath141 k and a column density of ( [email protected])@xmath210@xmath3 @xmath4 , which implies an abundance of ( 7@xmath12)@xmath210@xmath5 with respect to molecular hydrogen . the derived column density ratios ch@xmath0nco / hnco , ch@xmath0nco / nh@xmath6cho , and ch@xmath0nco / ch@xmath0cn are @xmath70.3 , @xmath70.8 , and @xmath70.2 , respectively , which are different from those measured in hot cores and in comet 67p / churyumov - gerasimenko . our chemical modelling of ch@xmath0nco reproduces well the abundances and column density ratios ch@xmath0nco / hnco and ch@xmath0nco / nh@xmath6cho measured in iras16293 - 2422 b , suggesting that the production of ch@xmath0nco could occur mostly via gas - phase chemistry after the evaporation of hnco from dust grains . . And you have already written the first three sentences of the full article: understanding the origin of life is one of the main challenges of modern science . it is believed that some basic prebiotic chemistry could have developed in space , likely tranferring prebiotic molecules to the solar nebula , which were finally delivered to earth . studies of the chemical composition of comets have indeed reported that these objects exhibit a rich chemistry in complex organic molecules ( or coms ) that are commonly detected in the ism ( see e.g. , * ? ? ?. Please generate the next two sentences of the article
recently , the spacecraft rosetta studied the chemical composition of the comet 67p / churyumov - gerasimenko , and found 16 different coms of prebiotic interest such as glycolaldehyde ( ch@xmath6ohcho ) , formamide ( nh@xmath6cho ) @xcite , and even the simplest amino acid glycine @xcite . interestingly , a new molecule was detected by the rosetta mission with a relatively high abundance compared to other coms present in the comet : the simplest isocyanate ( methyl isocyanate , ch@xmath0nco ; * ? ? ?
4,641
Suppose that you have an abstract for a scientific paper: the magnetic moments of heavy sextet @xmath0 baryons are calculated in the framework of the light cone qcd sum rules method . linearly independent relations among the magnetic moments of these baryons are obtained . the results for the magnetic moments of heavy baryons obtained in this work are compared with the predictions of the other approaches . pacs numbers : 11.55.hx , 13.40.em , 14.20.mr , 14.20.lq . And you have already written the first three sentences of the full article: the quark model predicts the existence of heavy baryons composed of single , double and triple quarks . essential improvement has been achieved in heavy baryon spectroscopy in recent years . all baryons with a single charm quark that are predicted by the quark model have been observed in experiments .. Please generate the next two sentences of the article
moreover , heavy baryons with a single bottom quark , such as @xmath2 , @xmath3 , @xmath4 and @xmath5 , have also been discovered ( for a review see @xcite ) . this experimental progress has stimulated further investigation of the properties of these baryons at the lhc , as well as further theoretical studies on this subject . remarkable information about the internal structure of baryons can be gained by studying their magnetic moments .
4,642
Suppose that you have an abstract for a scientific paper: we consider the gravitational model with additional spatial dimensions and anisotropic pressure which is nonzero only in these dimensions . cosmological solutions in this model include accelerated expansion of the universe at late age of its evolution and dynamical compactification of extra dimensions . this model describes observational data for type ia supernovae on the level or better than the @xmath0cdm model . we analyze two equations of state resulting in different predictions for further evolution , but in both variants the acceleration epoch is finite . . And you have already written the first three sentences of the full article: the most important event of last 15 years in astrophysics is conclusion about accelerated expansion of our universe at late stage of its evolution . this conclusion was based on observations of luminosity distances and redshifts for the type ia supernovae @xcite , cosmic microwave background @xcite , large - scale galaxy clustering @xcite , and other evidence @xcite . to explain accelerated evolution of the universe various mechanisms have been suggested , including the most popular cosmological model @xmath0cdm with a @xmath0 term ( dark energy ) and cold dark matter ( see reviews @xcite ) .. Please generate the next two sentences of the article
the @xmath0cdm model , with a 4% fraction of visible ( baryonic ) matter nowadays , a 23% fraction of dark matter and a 73% fraction of dark energy @xcite , describes the type ia supernovae data rather well and satisfies the observational evidence connected with rotational curves of galaxies , galaxy clusters and anisotropies of the cosmic microwave background . however , the @xmath0cdm model ( along with the vague nature of dark matter and dark energy ) has some problems with the fine tuning of the observed value of @xmath0 , which is many orders of magnitude smaller than the expected vacuum energy density , and with the different time dependence of the dark energy @xmath1 and matter @xmath2 fractions ( we have @xmath3 nowadays ) . therefore a large number of alternative cosmological models have been proposed .
4,643
Suppose that you have an abstract for a scientific paper: early universe ; inflation ; cosmic microwave background ; cosmological parameters ; large scale structure ; gravitational waves ; baryogenesis ; dark matter cosmology is nowadays one of the most active areas of research in fundamental science . we are going through a true revolution in the observations that are capable of providing crucial information about the origin and evolution of the universe . in the first years of the next millenium we will have , for the first time in the history of such an ancient science as cosmology , a precise knowledge about a handful of parameters that determine our standard cosmological model . this standard model is based on the inflationary paradigm , a period of exponential expansion in the early universe responsible for the large scale homogeneity and flatness of our observable patch of the universe . a spectrum of density perturbations , seen in the microwave background as temperature anisotropies , could have been produced during inflation from quantum fluctuations that were stretched to cosmological size by the expansion , and later gave rise , via gravitational collapse , to the observed large scale structure of clusters and superclusters of galaxies . furthermore , the same theory predicts that all the matter and radiation in the universe today originated at the end of inflation from an explosive production of particles that could also have been the origin of the present baryon asymmetry , before the universe reached thermal equilibrium at a very large temperature . from there on , the universe cooled down as it expanded , in the way described by the standard hot big bang model . with the observations that will soon become available in the next millenium , we will be able to test the validity of the inflationary paradigm , and determine with unprecedented accuracy the parameters of a truly standard model of cosmology . epsf [ firstpage ] . And you have already written the first three sentences of the full article: our present understanding of the universe is based upon the successful hot big bang theory , which explains its evolution from the first fraction of a second to our present age , around 13 billion years later . this theory rests upon four strong pillars , a theoretical framework based on general relativity , as put forward by albert einstein and alexander a. friedmann in the 1920s , and three strong observational facts . first , the expansion of the universe , discovered by edwin p. hubble in the 1930s , as a recession of galaxies at a speed proportional to their distance to us .. Please generate the next two sentences of the article
second , the relative abundance of light elements , explained by george gamow in the 1940s , mainly that of helium , deuterium and lithium [ see fig . 1 ] , which were cooked from the nuclear reactions that took place at around a second to a few minutes after the big bang , when the universe was a hundred times hotter than the core of the sun . third , the cosmic microwave background ( cmb ) , the afterglow of the big bang , discovered in 1965 by arno a. penzias and robert w. wilson as a very isotropic blackbody radiation at a temperature of about 3 degrees kelvin ( degrees centigrade above absolute zero ) , emitted when the universe was cold enough to form neutral atoms , and photons decoupled from matter , approximately 300000 years after the big bang .
4,644
Suppose that you have an abstract for a scientific paper: we use a quantum monte carlo method to investigate various classes of 2d spin models with long - range interactions at low temperatures . in particular , we study a dipolar xxz model with @xmath0 symmetry that appears as a hard - core boson limit of an extended hubbard model describing polarized dipolar atoms or molecules in an optical lattice . tunneling , in such a model , is short - range , whereas density - density couplings decay with distance following a cubic power law . we investigate also an xxz model with long - range couplings of all three spin components - such a model describes a system of ultracold ions in a lattice of microtraps . we describe an approximate phase diagram for such systems at zero and at finite temperature , and compare their properties . in particular , we compare the extent of crystalline , superfluid , and supersolid phases . our predictions apply directly to current experiments with mesoscopic numbers of polar molecules and trapped ions . . And you have already written the first three sentences of the full article: quantum simulators , as first proposed by feynman @xcite , which are devices built to evolve according to a postulated quantum hamiltonian and thus `` compute '' its properties , are one of the hot ideas which may provide a breakthrough in many - body physics . while one must be aware of possible difficulties ( see , e.g. , @xcite ) , impressive progress has been achieved in recent years in different systems employing cold atoms and molecules , nuclear magnetic resonance , superconducting qubits , and ions . the latter are extremely well controlled and already it has been demonstrated that , indeed , quantum spin systems may be simulated with cold - ion setups @xcite . quantum spin systems on a lattice constitute some of the most relevant cases for quantum simulations as there are many instances where standard numerical techniques to compute their dynamics or even their static behavior fail , especially for two- or three - dimensional systems .. Please generate the next two sentences of the article
chronologically the first proposition to use trapped ions to simulate lattice spin models came from porras and cirac @xcite who derived the effective spin - hamiltonian for the system , @xmath1 - \mu\sum_{i} s_{i}^{z} , where @xmath2 is the chemical potential ( which in this case acts as an external magnetic field ) , @xmath3 is the interaction strength , and @xmath4 are the spin operators at site @xmath5 . all the long - ranged interactions fall off with a @xmath6 dipolar decay , which for the ions is due to the fact that they are both induced by the same mechanism , namely lattice vibrations mediated by the coulomb force @xcite .
4,645
Suppose that you have an abstract for a scientific paper: parking spaces are resources that can be pooled together and shared , especially when there are complementary day - time and night - time users . we answer two design questions . first , given a quality of service requirement , how many spaces should be set aside as contingency during day - time for night - time users ? next , how can we replace the first - come - first - served access method by one that aims at optimal efficiency while keeping user preferences private ? . And you have already written the first three sentences of the full article: it was recently reported that over one year in a small los angeles business district , cars cruising for parking burned 47,000 gallons of gasoline and produced 730 tons of carbon dioxide @xcite . meanwhile , the consulting firm mckinsey recently claimed that the average car owner in paris spends four years of his or her life searching for a parking space @xcite . the parking assignment problem associated with electric vehicles becomes even more acute .. Please generate the next two sentences of the article
due to the limited range of these vehicles , the marginal cost of expending energy to search for spaces may , in some cities , be prohibitively high . thus , there is a real and compelling societal and economic need to revisit parking .
4,646
Suppose that you have an abstract for a scientific paper: a further development of the evolutionary picture of _ a+a _ collisions , which we call the integrated hydrokinetic model ( ihkm ) , is proposed . the model comprises a generator of the initial state glissando , pre - thermal dynamics of _ a+a _ collisions leading to thermalization , subsequent relativistic viscous hydrodynamic expansion of quark - gluon and hadron medium ( vhlle ) , its particlization , and finally hadronic cascade ultrarelativistic qmd . we calculate mid - rapidity charged - particle multiplicities , pion , kaon , and antiproton spectra , charged - particle elliptic flows , and pion interferometry radii for pb+pb collisions at the energies available at the cern large hadron collider , @xmath0 tev , at different centralities . we find that the best description of the experimental data is reached when the initial states are attributed to the very small initial time 0.1 fm / c , the pre - thermal stage ( thermalization process ) lasts at least until 1 fm / c , and the shear viscosity at the hydrodynamic stage of the matter evolution has its minimal value , @xmath1 . at the same time it is observed that the various momentum anisotropies of the initial states , different initial and relaxation times , as well as even a treatment of the pre - thermal stage within just viscous or ideal hydrodynamic approach , leads sometimes to worse but nevertheless similar results , _ if _ the normalization of maximal initial energy density in most central events is adjusted to reproduce the final hadron multiplicity in each scenario . this can explain a good enough data description in numerous variants of hybrid models without a prethermal stage when the initial energy densities are defined up to a common factor . . And you have already written the first three sentences of the full article: hydrodynamics is considered now as the basic part of a spatiotemporal picture of the matter evolution in the processes of ultrarelativistic heavy ion collisions ( see recent reviews @xcite ) . to complete the description of _ a+a _ collision processes , hydrodynamics must be supplied with a generator of an initial non - equilibrated state , pre - thermal dynamics which forms the initial near locally equilibrated conditions for hydro - evolution , and prescription for particle production during the breakup of the continuous medium at the final stage of the matter expansion . as for the initial state , since it fluctuates on an event - by - event basis , monte carlo event generators are widely used to simulate it in relativistic @xmath2 collisions . the most commonly used models of initial state are the mc - glauber ( monte carlo glauber ) @xcite , mc - kln ( monte carlo kharzeev - levin - nardi ) @xcite , epos ( parton - based gribov - regge model ) @xcite , ekrt ( perturbative qcd + saturation model ) @xcite , and ip - glasma ( impact parameter dependent glasma ) @xcite . @xcite . ] the last model also includes non - trivial non - equilibrium dynamics of the gluon fields which , however , does not lead to a proper equilibration . to apply these models to data description. Please generate the next two sentences of the article
some thermalization process has to be assumed . evidently , in order to reduce uncertainties of results obtained by means of hydrodynamical models , one needs to convert a far - from - equilibrium initial state of matter in a nucleus - nucleus collision to a close to locally equilibrated one by means of a reasonable pre - equilibrium dynamics . a relaxation time method @xcite , initially developed for the post - hydrodynamic stage ,
4,647
Suppose that you have an abstract for a scientific paper: we use large cosmological n - body simulations to study the subhalo population in galaxy group sized halos . in particular , we look for fossil group candidates with typical masses @xmath0 10 - 25% of virgo cluster but with an order of magnitude less substructure . we examine recent claims that the earliest systems to form are deficient enough in substructure to explain the luminosity function found in fossil groups . although our simulations show a correlation between the halo formation time and the number of subhalos , the maximum suppression of subhalos is a factor of 2 - 2.5 , whereas a factor of 6 is required to match fossil groups and galaxies . while the number of subhalos depends weakly on the formation time , the slope of the halo substructure velocity function does not . the satellite population within cold dark matter ( cdm ) halos is self - similar at scales between galaxies and galaxy clusters regardless of mass , whereas current observations show a break in self - similarity at a mass scale corresponding to group of galaxies . galaxies : formation galaxies : halos galaxies : structure cosmology : theory dark matter large - scale structure of universe methods : numerical , n - body simulation . And you have already written the first three sentences of the full article: in the current paradigm for cosmological structure formation , dark halos collapse from initial gaussian density fluctuations and grow by accretion and merging in a hierarchical fashion . a longstanding prediction of the theory is that the subhalo ( or satellite ) population is self - similar , meaning simply that low mass systems , such as galaxies are scale - down versions of larger systems , like galaxy clusters . a galaxy such as the milky way is predicted to have nearly the same scaled distribution of substructures as a more massive galaxy cluster such as virgo ( moore et al .. Please generate the next two sentences of the article
1999 , klypin et al . this prediction is tested by using the substructure velocity distribution function that expresses the number of sub - halos with circular rotational velocity @xmath9 greater than a certain fraction of the circular velocity of the parent halo @xmath10 .
4,648
Suppose that you have an abstract for a scientific paper: for a class @xmath0 of countable relational structures , a countable borel equivalence relation @xmath1 is said to be @xmath0-structurable if there is a borel way to put a structure in @xmath0 on each @xmath1-equivalence class . we study in this paper the global structure of the classes of @xmath0-structurable equivalence relations for various @xmath0 . we show that @xmath0-structurability interacts well with several kinds of borel homomorphisms and reductions commonly used in the classification of countable borel equivalence relations . we consider the poset of classes of @xmath0-structurable equivalence relations for various @xmath0 , under inclusion , and show that it is a distributive lattice ; this implies that the borel reducibility preordering among countable borel equivalence relations contains a large sublattice . finally , we consider the effect on @xmath0-structurability of various model - theoretic properties of @xmath0 . in particular , we characterize the @xmath0 such that every @xmath0-structurable equivalence relation is smooth , answering a question of marks . = 1 . And you have already written the first three sentences of the full article: * ( a ) * a countable borel equivalence relation on a standard borel space @xmath2 is a borel equivalence relation @xmath3 with the property that every equivalence class @xmath4_e$ ] , @xmath5 , is countable . we denote by @xmath6 the class of countable borel equivalence relations . over the last 25 years there has been an extensive study of countable borel equivalence relations and their connection with group actions and ergodic theory .. Please generate the next two sentences of the article
an important aspect of this work is an understanding of the kind of countable ( first - order ) structures that can be assigned in a uniform borel way to each class of a given equivalence relation . this is made precise in the following definitions ; see @xcite , section 2.5 .
4,649
Suppose that you have an abstract for a scientific paper: the presently observed cosmological baryon asymmetry has been finally determined at the time of the electroweak phase transition , when baryon and lepton number violating interactions fell out of thermal equilibrium . we discuss the thermodynamics of the phase transition based on the free energy of the su(2 ) higgs model at finite temperature , which has been studied in perturbation theory and lattice simulations . the results suggest that the baryon asymmetry has been generated by lepton number violating interactions in the symmetric phase of the standard model , i.e. , at temperatures above the critical temperature of the electroweak transition . the observed value of the baryon asymmetry , @xmath0 , is naturally obtained in an extension of the standard model with right - handed neutrinos where @xmath1 is broken at the unification scale @xmath2 gev . the corresponding pattern of masses and mixings of the light neutrinos @xmath3 , @xmath4 and @xmath5 is briefly described . . And you have already written the first three sentences of the full article: in the standard model of electroweak interactions all masses are generated by the higgs mechanism . as a consequence , at high temperatures a transition occurs from a massive low - temperature phase to a ` massless ' high - temperature phase , where the higgs vacuum expectation value ` evaporates ' and the electroweak symmetry is ` restored ' @xcite . due to the chiral nature of the weak interactions baryon number ( @xmath7 ) and lepton number ( @xmath8 ) are not conserved in the standard model @xcite . at zero temperature this has no observable effect due to the smallness of the weak coupling . however , as the temperature approaches the critical temperature @xmath9 of the electroweak phase transition , @xmath7 and @xmath8 violating processes come into thermal equilibrium @xcite .. Please generate the next two sentences of the article
their rate is determined by the free energy of sphaleron - type field configurations which carry topological charge . in the standard model they induce an effective interaction of all left - handed fermions ( cf .
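For concreteness, the effective interaction of all left-handed fermions alluded to at the end of this excerpt is conventionally the sphaleron-induced 't Hooft vertex (a standard schematic form, not quoted from this paper),
\[
O_{B+L} \;\sim\; \prod_{i=1}^{3} \left( q_{L i}\, q_{L i}\, q_{L i}\, \ell_{L i} \right) ,
\]
which changes baryon number and lepton number by three units each while conserving their difference B - L.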
4,650
Suppose that you have an abstract for a scientific paper: in the classic view introduced by r.a . fisher , a quantitative trait is encoded by many loci with small , additive effects . recent advances in qtl mapping have begun to elucidate the genetic architectures underlying vast numbers of phenotypes across diverse taxa , producing observations that sometimes contrast with fisher s blueprint . despite these considerable empirical efforts to map the genetic determinants of traits , it remains poorly understood how the genetic architecture of a trait should evolve , or how it depends on the selection pressures on the trait . here we develop a simple , population - genetic model for the evolution of genetic architectures . our model predicts that traits under moderate selection should be encoded by many loci with highly variable effects , whereas traits under either weak or strong selection should be encoded by relatively few loci . we compare these theoretical predictions to qualitative trends in the genetics of human traits , and to systematic data on the genetics of gene expression levels in yeast . our analysis provides an evolutionary explanation for broad empirical patterns in the genetic basis of traits , and it introduces a single framework that unifies the diversity of observed genetic architectures , ranging from mendelian to fisherian . * the evolution of genetic architectures underlying quantitative traits * + etienne rajon@xmath0 , joshua b. plotkin@xmath1 + @xmath2 department of biology , university of pennsylvania , philadelphia , pa 19104 , usa + @xmath3 e - mail : [email protected] + a quantitative trait is encoded by a set of genetic loci whose alleles contribute directly the trait value , interact epistatically to modulate each others contributions , and possibly contribute to other traits . the resulting genetic architecture of a trait @xcite influences its variational properties @xcite and therefore affects a population s capacity to adapt to new environmental conditions @xcite . over longer timescales , genetic.... And you have already written the first three sentences of the full article: our approach to understanding the evolution of genetic architectures combines standard models from quantitative genetics @xcite with the wright - fisher model from population genetics @xcite . in its simplest version , our model considers a continuous trait whose value , @xmath4 , is influenced by @xmath5 loci . each locus @xmath6 contributes additively an amount @xmath7 , so that the trait value is defined as the mean of the @xmath7 values across contributing loci .. Please generate the next two sentences of the article
this trait definition means that a gene s contribution to a trait is diluted when @xmath5 is large , which prevents direct selection on gene copy numbers when genes have similar contributions @xcite . we discuss this definition below , along with alternatives such as the sum .
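A minimal sketch of the trait definition just described, writing L for the number of contributing loci and a_i for the per-locus contributions (both are stand-ins for the paper's @xmath symbols):
\[
z \;=\; \frac{1}{L} \sum_{i=1}^{L} a_i ,
\]
so that adding loci dilutes each gene's individual contribution, which is the dilution effect noted in the sentence above.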
4,651
Suppose that you have an abstract for a scientific paper: the nearest site of massive star formation in orion is dominated by the trapezium subsystem , with its four ob stars and numerous companions . the question of how these stars came to be in such close proximity has implications for our understanding of massive star formation and early cluster evolution . a promising route toward rapid mass segregation was proposed by @xcite , who showed that the merger product of faster - evolving sub clusters can inherit their apparent dynamical age from their progenitors . in this paper we briefly consider this process at a size and time scale more suited for local and perhaps more typical star formation , with stellar numbers from the hundreds to thousands . we find that for reasonable ages and cluster sizes , the merger of subclusters can indeed lead to compact configurations of the most massive stars , a signal seen both in nature and in large - scale hydrodynamic simulations of star formation from collapsing molecular clouds , and that sub - virial initial conditions can make an un - merged cluster display a similar type of mass segregation . additionally , we discuss a variation of the minimum spanning tree mass - segregation technique introduced by @xcite . [ firstpage ] methods:_n_-body simulations stars : formation stellar dynamics . And you have already written the first three sentences of the full article: the question of how dense , massive groupings like the trapezium are formed has yet to find a convincing answer . ignoring for the moment the striking multiplicity of the system @xcite , a first - order glance at the trapezium shows the most massive stars in the cluster arranged in a central , compact configuration . observations of a possible proto - trapezium , w3 irs 5 @xcite , seem to show a massive subcluster still in the embedded phase , which could be suggestive of either formation as a compact cluster or gas - driven migration during formation .. Please generate the next two sentences of the article
however , if a compact system of massive stars were to form _ in situ _ , it would be dynamically unstable @xcite ; if the stars in the trapezium formed in such a fashion , perhaps an order of magnitude more stars were originally in the system in order to leave behind the 4 ob stars today . alternatively , the stars may have migrated there by some combination of dynamical mass segregation or gas dynamical effects during formation .
4,652
Suppose that you have an abstract for a scientific paper: supermassive black hole binaries ( smbhbs ) in galactic nuclei are thought to be a common by product of major galaxy mergers . we use simple disk models for the circumbinary gas and for the binary - disk interaction to follow the orbital decay of smbhbs with a range of total masses ( @xmath0 ) and mass ratios ( @xmath1 ) , through physically distinct regions of the disk , until gravitational waves ( gws ) take over their evolution . prior to the gw driven phase , the viscous decay is generically in the stalled `` secondary dominated '' regime . smbhbs spend a non negligible fraction of a fiducial time of @xmath2 years at orbital periods between days @xmath3 year , and we argue that they may be sufficiently common to be detectable , provided they are luminous during these stages . a dedicated optical or x ray survey could identify coalescing smbhbs statistically , as a population of periodically variable quasars , whose abundance obeys the scaling @xmath4 within a range of periods around @xmath5 tens of weeks . smbhbs with @xmath6 , with @xmath7 , would probe the physics of viscous orbital decay , whereas the detection of a population of higher mass binaries , with @xmath8 , would confirm that their decay is driven by gws . the lowest mass smbhbs ( @xmath9 ) enter the gw - driven regime at short orbital periods , when they are already in the frequency band of the _ laser interferometric space antenna _ ( _ lisa _ ) . while viscous processes are negligible in the last few years of coalescence , they could reduce the amplitude of any unresolved background due to near stationary _ lisa _ sources . we discuss modest constraints on the smbhb population already available from existing data , and the sensitivity and sky coverage requirements for a detection in future surveys . smbhbs may also be identified from velocity shifts in their spectra ; we discuss the expected abundance of smbhbs as a function of their orbital velocity . . And you have already written the first three sentences of the full article: supermassive black holes ( smbhs ) appear to be present in the nucleus of most , and perhaps all , nearby galaxies ( see , e.g. , reviews by @xcite and @xcite ) . the correlations between the masses of the smbhs and various global properties of the host galaxies suggest that evolution of smbhs is closely related to the evolution of galaxies . in particular , in hierarchical structure formation models , galaxies are built up by mergers between lower mass progenitors .. Please generate the next two sentences of the article
each merger event is expected to deliver the nuclear smbhs ( e.g. @xcite ) , along with a significant amount of gas @xcite , to the central regions of the new post merger galaxy . there is some evidence for nuclear supermassive black hole binaries ( smbhbs ) , which would be expected to be produced in galaxy mergers .
4,653
Suppose that you have an abstract for a scientific paper: absolute cross sections for charge exchange , ionization , stripping and excitation in k@xmath0he collisions were measured in the ion energy range @xmath1 kev . the experimental data and the schematic correlation diagrams are used to analyze and determine the mechanisms for these processes . the increase of the excitation probability of inelastic channels with the angle of scattering is revealed . an exceptionally highly excited state of he is observed and a peculiarity for the excitation function of the resonance line is explained . the intensity ratio for the excitation of the k ii @xmath2 nm and @xmath3 nm lines is 5:1 which indicates the high probability for excitation of the singlet resonance level @xmath4p@xmath5 compared to the triplet level @xmath6p@xmath5 . the similarity of the population of the 4p state of the potassium ion and atom as well as the anomalously small values of the excitation cross sections are explained . . And you have already written the first three sentences of the full article: ion - atom collisions have been an attractive subject and are of considerable interest in atomic physics due to both their importance in fundamental physics and their application in many fields , such as laboratory and astrophysical plasmas @xcite , heavy ion inertial fusion @xcite , radiation physics , collisional and radioactive processes in the earth s upper atmosphere @xcite and many other technological areas . in recent decades , ion - atom collisions have been studied in detail experimentally as well as theoretically from low to relativistic collision energies ( see , for example , refs . there has been an increased need the evaluation of ion - atom cross sections of different processes for many accelerator applications shevelko .. Please generate the next two sentences of the article
for example , a beam interaction with the remaining background gas and gas desorbed from walls limits the intensity of bunches at the rhic ( relativistic heavy ion collider ) @xcite and a pressure rise from ion losses at the low - energy antiproton ring brought concerns for the lhc ( large hadron collider ) @xcite . at moderate energies , collisions of closed - shell ions with closed - shell atoms for various inelastic channels such as the ionization , charge - exchange , stripping , and excitation are well understood .
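Assuming the projectile is a singly charged potassium ion (the charge state is hidden behind the @xmath0 placeholder in the abstract), the four inelastic channels named there can be sketched as
\[
\mathrm{K^{+} + He \to K + He^{+}} \ \text{(charge exchange)} , \qquad \mathrm{K^{+} + He \to K^{+} + He^{+} + e^{-}} \ \text{(ionization)} ,
\]
\[
\mathrm{K^{+} + He \to K^{2+} + He + e^{-}} \ \text{(stripping)} , \qquad \mathrm{K^{+} + He \to K^{+*} + He \ \ \text{or} \ \ K^{+} + He^{*}} \ \text{(excitation)} .
\]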
4,654
Suppose that you have an abstract for a scientific paper: we review and further develop an analytical model that describes how thermodynamic constraints on the stability of the native state influence protein evolution in a site - specific manner . to this end , we represent both protein sequences and protein structures as vectors : structures are represented by the principal eigenvector ( pe ) of the protein contact matrix , a quantity that resembles closely the effective connectivity of each site ; sequences are represented through the `` interactivity '' of each amino acid type , using novel parameters that are correlated with hydropathy scales . these interactivity parameters are more strongly correlated than the other hydropathy scales that we examine with : ( 1 ) the change upon mutations of the unfolding free energy of proteins with two - states thermodynamics ; ( 2 ) genomic properties as the genome - size and the genome - wide gc content ; ( 3 ) the main eigenvectors of the substitution matrices . the evolutionary average of the interactivity vector correlates very strongly with the pe of a protein structure . using this result , we derive an analytic expression for site - specific distributions of amino acids across protein families in the form of boltzmann distributions whose `` inverse temperature '' is a function of the pe component . we show that our predictions are in agreement with site - specific amino acid distributions obtained from the protein data bank , and we determine the mutational model that best fits the observed site - specific amino acid distributions . interestingly , the optimal model almost minimizes the rate at which deleterious mutations are eliminated by natural selection . . And you have already written the first three sentences of the full article: the need to maintain the thermodynamic stability of the native state is an important determinant of protein evolution . this requirement has different effects on the various positions of the protein , depending on their structural environment . therefore it is crucial to consider the effect of site - specific structural constraints in models of protein evolution , such as those used to estimate phylogenetic distances from the comparison of protein sequences ( nei and kumar , 2000 ) or to reconstruct phylogenetic trees through maximum likelihood methods ( felsenstein , 1981 ) .. Please generate the next two sentences of the article
the first and simplest models of this kind assumes a gamma distribution of site - specific substitution rates ( see nei and kumar , 2000 ) . the gamma distribution is flexible enough to interpolate between broad and narrow distributions , and its free parameter improves considerably the fit between models of evolution and multiple sequence alignments .
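As a reminder of the model mentioned in the last sentence, site-specific rates r are drawn from a gamma density whose single shape parameter \alpha (notation assumed here) controls the heterogeneity, e.g. with the mean fixed to one,
\[
p(r) \;=\; \frac{\alpha^{\alpha}}{\Gamma(\alpha)}\, r^{\alpha - 1} e^{-\alpha r} , \qquad \langle r \rangle = 1 ,
\]
so that small \alpha gives strongly heterogeneous rates across sites while large \alpha approaches a single uniform rate.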
4,655
Suppose that you have an abstract for a scientific paper: we consider the large deviation function for a classical harmonic chain composed of @xmath0 particles driven at the end points by heat reservoirs , first derived in the quantum regime by saito and dhar @xcite and in the classical regime by saito and dhar @xcite and kundu et al . @xcite . within a langevin description we perform this calculation on the basis of a standard path integral calculation in fourier space . the cumulant generating function yielding the large deviation function is given in terms of a transmission green s function and is consistent with the fluctuation theorem . we find a simple expression for the tails of the heat distribution which turn out to decay exponentially . we , moreover , consider an extension of a single particle model suggested by derrida and brunet @xcite and discuss the two - particle case . we also discuss the limit for large @xmath0 and present a closed expression for the cumulant generating function . finally , we present a derivation of the fluctuation theorem on the basis of a fokker - planck description . this result is not restricted to the harmonic case but is valid for a general interaction potential between the particles . . . And you have already written the first three sentences of the full article: there is a current interest in the thermodynamics and statistical mechanics of fluctuating systems in contact with heat reservoirs and driven by external forces . the current focus stems from the recent possibility of direct manipulation of nano - systems and bio - molecules . these techniques permit direct experimental access to the probability distribution functions for the work or for the heat exchanged with the environment @xcite .. Please generate the next two sentences of the article
these methods have also yielded access to the experimental verification of the recent fluctuation theorems which relate the probability of observing entropy - generated trajectories with that of observing entropy - consuming trajectories @xcite . in recent works we studied the motion of a brownian particle in a general potential with a view to the distribution function for the heat exchange with the surroundings @xcite and a single bound brownian particle driven by two heat reservoirs @xcite . in the present paper we consider the harmonic chain driven by heat reservoirs at temperatures @xmath1 and @xmath2 @xcite . here the distribution of positions and momenta is given by a gaussian form with a correlation matrix with elements given by the static position and momentum correlations @xcite
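A hedged sketch of the Gaussian form referred to in the final sentence, writing z = (x, p) for the phase-space coordinates and C for the matrix of static position and momentum correlations (symbols chosen here for illustration only):
\[
P(z) \;\propto\; \exp\!\left( -\tfrac{1}{2}\, z^{\mathsf T} C^{-1} z \right) , \qquad C_{ij} = \langle z_i z_j \rangle ,
\]
and the fluctuation theorem mentioned in the abstract then constrains the heat distribution through the usual exchange form P(Q)/P(-Q) = \exp(\Delta\beta\, Q), up to sign conventions for Q, with \Delta\beta = 1/k_B T_2 - 1/k_B T_1.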
4,656
Suppose that you have an abstract for a scientific paper: the observed long - term spin - down evolution of isolated radio pulsars can not be explained by the standard magnetic dipole radiation with a constant braking torque . however , how and why the torque varies still remains controversial , which is a major issue in understanding neutron stars . many pulsars have been observed with significant long - term changes of their spin - down rates modulated by quasi - periodic oscillations . applying the phenomenological model of pulsar timing noise we developed recently to the observed precise pulsar timing data of isolated neutron stars , here we show that the observed long - term evolutions of their spin - down rates and quasi - periodic modulations can be explained by hall effects in their crusts . therefore the evolution of their crustal magnetic fields , rather than that in their cores , dominates the observed long term spin - down evolution of these young pulsars . understanding of the nature of pulsar timing noise not only reveals the interior physics of neutron stars , but also allows physical modeling of pulsar spin - down and thus improves the sensitivity of gravitational wave detections with pulsars . . And you have already written the first three sentences of the full article: pulsars are very stable natural clocks with observed steady pulses . however , many pulsars exhibit significant timing irregularities , i.e. , unpredicted arrival times of pulses . hobbs et al . ( 2010 , hereafter h2010 ) carried out so far the most extensive study of the long - term timing irregularities of 366 pulsars , and ruled out some timing noise models in terms of observational imperfections , random walks ( boynton 1972 ; alpar et al . 1986 ; cheng 1987a , 1987b ) , and planetary companions ( cordes & shannon 2008 ) . lyne et al . ( 2010 , hereafter l2010 ) found that timing behaviors often result from two different spin - down rates , and pulsars switch abruptly between these states , often quasi - periodically , leading to the observed spin - down patterns .. Please generate the next two sentences of the article
the deviations of pulsars spin frequency @xmath0 from its long - term trend are correlated with changes in the pulse shapes ( see fig . 3 and 4 of l2010 ) , and therefore are magnetospheric in origin . by modeling the observed precise timing data of pulsars , we will show that the long - term linear change of @xmath1 is consistent with the timescale of hall drift , and the oscillatory structure of @xmath1 and the changes in pulse shapes can be produced by the hall waves in the crust of a neutron star ( ns ) . consequently the mechanism of magnetic field evolution and the origin of the timing noise of pulsars are better understood , which may improve the sensitivity of gravitational wave detections with pulsars .
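For orientation, the Hall timescale invoked in this passage is usually estimated with the standard expression (not quoted from this paper)
\[
\tau_{\mathrm{Hall}} \;\simeq\; \frac{4 \pi n_e e L^{2}}{c\, B} ,
\]
where n_e is the electron density in the crust, L the length scale of the field structure and B the field strength, so stronger fields and smaller scales evolve faster.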
4,657
Suppose that you have an abstract for a scientific paper: we analyze the behaviour of the high - energy scattering amplitude within the brane world scenario in extra dimensions . we argue that contrary to the popular opinion based on the kaluza - klein approach , the cross - section does not increase with energy , but changes the slope close to the compactification scale and then decreases like in the 4-dimensional theory . a particular example of the quark - antiquark scattering due to the gluon exchange in the bulk is considered . _ @xmath0bogoliubov laboratory of theoretical physics , joint institute for nuclear research , dubna , russia + @xmath1institute for theoretical and experimental physics , moscow , russia + @xmath2moscow institute of physics and technology , moscow , russia _ . And you have already written the first three sentences of the full article: extra dimensional theories @xcite have attracted considerable attention in recent years . various brane world models provide wide possibilities for phenomenological applications ( for review see , e.g. @xcite ) . however , the lack of a consistent field theory in extra dimensions compels one to stick to the kaluza - klein approach at the tree level and assume that the string theory cures the problems with divergences at high energy .. Please generate the next two sentences of the article
one of the immediate consequences of the k - k approach is the increase in the scattering cross section with energy due to the exchange of an infinite tower of k - k modes @xcite . to get a finite result , one usually needs to introduce some cutoff , thus making the amplitude essentially cutoff dependent .
4,658
Suppose that you have an abstract for a scientific paper: direct detection experiments have reached the sensitivity to detect dark matter wimps . demonstrating that a putative signal is due to wimps , and not backgrounds , is a major challenge however . the direction dependence of the wimp scattering rate provides a potential wimp ` smoking gun ' . if the wimp distribution is predominantly smooth , the galactic recoil distribution is peaked in the direction opposite to the direction of solar motion . previous studies have found that , for an ideal detector , of order 10 wimp events would be sufficient to reject isotropy , and rule out an isotropic background . we examine how the median recoil direction could be used to confirm the wimp origin of an anisotropic recoil signal . specifically we determine the number of events required to confirm the direction of solar motion as the median inverse recoil direction at 95% confidence . we find that for zero background 31 events are required , a factor of @xmath0 more than are required to simply reject isotropy . we also investigate the effect of a non - zero isotropic background . as the background rate is increased the number of events required increases , initially fairly gradually and then more rapidly , once the signal becomes subdominant . we also discuss the effect of features in the speed distribution at large speeds , as found in recent high resolution simulations , on the median recoil direction . . And you have already written the first three sentences of the full article: weakly interacting massive particles ( wimps ) , and in particular the lightest neutralino in supersymmetric models , are a well motivated dark matter candidate @xcite . wimps can be detected directly , in the lab , via the elastic scattering of wimps on detector nuclei @xcite . experiments now have the sensitivity required to probe the theoretically favoured regions of parameter space @xcite and the cdms experiment has recently observed two events in its wimp signal region @xcite .. Please generate the next two sentences of the article
neutrons , from cosmic - ray induced muons or natural radioactivity , can produce nuclear recoils which ( on an event by event basis ) are indistinguishable from wimp induced recoils . furthermore perfect rejection of other backgrounds is impossible , for instance ` surface events ' ( electron recoils close to the detector surface ) in the case of cdms .
4,659
Suppose that you have an abstract for a scientific paper: a line defect on a metallic surface induces standing waves in the electronic local density of states ( ldos ) . asymptotically far from the defect , the wave number of the ldos oscillations at the fermi energy is usually equal to the distance between nesting segments of the fermi contour , and the envelope of the ldos oscillations shows a power - law decay as moving away from the line defect . here , we theoretically analyze the ldos oscillations close to a line defect on the surface of the topological insulator bi@xmath0te@xmath1 , and identify an important preasymptotic contribution with wave - number and decay characteristics markedly different from the asymptotic contributions . the calculated energy dependence of the wave number of the preasymptotic ldos oscillations is in quantitative agreement with the result of a recent scanning tunneling microscopy experiment [ phys . rev . lett . * 104 * , 016401 ( 2010 ) ] . . And you have already written the first three sentences of the full article: distinct surface - electronic properties , potentially relevant for spintronic applications , arise from the strong spin - orbit interaction in three - dimensional topological insulators ( 3dtis ) @xcite . although the bulk electronic structure of these materials resembles that of standard band insulators with electronic bands separated by an energy gap , the valence and conduction bands of the surface states form a conical dispersion and touch at the center of the surface brillouin zone . these gapless surface states lack the standard twofold spin degeneracy , they are protected against backscattering , and the spin orientation of each plane - wave surface state is determined unambiguously by its momentum vector . in the past few years , surface - sensitive experimental techniques. Please generate the next two sentences of the article
have been utilized to explore the remarkable properties of the surface electrons in 3dtis . the linear , dirac - cone - like electronic dispersion and deviations from that were observed in various 3dti materials using angle - resolved photoemission spectroscopy @xcite ( arpes ) , and the correlation between spin and momentum was demonstrated by the spin - resolved version of the same technique @xcite .
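The helical surface states described here are commonly modelled by the two-dimensional Dirac Hamiltonian (a generic sketch; the effective model actually used for the bi@xmath0te@xmath1 surface of the abstract also contains hexagonal warping corrections)
\[
H(\mathbf{k}) \;=\; \hbar v_F \left( \sigma_x k_y - \sigma_y k_x \right) ,
\]
whose eigenstates have the spin locked perpendicular to the momentum, which is the spin-momentum locking responsible for the suppression of backscattering mentioned above.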
4,660
Suppose that you have an abstract for a scientific paper: using the `` teleparallel '' equivalent of general relativity as the gravitational sector , which is based on torsion instead of curvature , we add a canonical scalar field , allowing for a nonminimal coupling with gravity . although the minimal case is completely equivalent to standard quintessence , the nonminimal scenario has a richer structure , exhibiting quintessence - like or phantom - like behavior , or experiencing the phantom - divide crossing . the richer structure is manifested in the absence of a conformal transformation to an equivalent minimally - coupled model . . And you have already written the first three sentences of the full article: the `` teleparallel '' equivalent of general relativity ( tegr ) @xcite is an equivalent formulation of classical gravity , in which instead of using the torsionless levi - civita connection one uses the curvatureless weitzenbck one . the dynamical objects are the four linearly independent vierbeins ( these are _ parallel _ vector fields represented by the appellation `` teleparallel '' ) . the advantage of this framework is that the torsion tensor is formed solely from products of first derivatives of the tetrad .. Please generate the next two sentences of the article
as described in @xcite , the lagrangian density @xmath0 can be constructed from this torsion tensor under the assumptions of invariance under general coordinate transformations , global lorentz transformations , and the parity operation , along with requiring the lagrangian density to be second order in the torsion tensor . thus , apart from possible conceptual differences , tegr is completely equivalent and indistiguishable form general relativity ( gr ) at the level of equations , both background and perturbation ones .
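For reference, the lagrangian density @xmath0 referred to in this passage is, in the teleparallel literature, the torsion scalar, conventionally written (up to the vierbein determinant and a gravitational prefactor; index conventions vary between papers) as
\[
T \;=\; \frac{1}{4}\, T^{\rho}{}_{\mu\nu}\, T_{\rho}{}^{\mu\nu} \;+\; \frac{1}{2}\, T^{\rho}{}_{\mu\nu}\, T^{\nu\mu}{}_{\rho} \;-\; T^{\rho}{}_{\mu\rho}\, T^{\nu\mu}{}_{\nu} .
\]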
4,661
Suppose that you have an abstract for a scientific paper: the binary system , , is unusual both because of the dramatic , periodic , radio outbursts , and because of its possible association with the 100 mev gamma - ray source , 2cg 135 + 01 . we have performed simultaneous radio and _ rossi x - ray timing explorer _ x - ray observations at eleven intervals over the 26.5 day orbit , and in addition searched for variability on timescales ranging from milliseconds to hours . we confirm the modulation of the x - ray emission on orbital timescales originally reported by taylor _ et al . _ ( 1996 ) , and in addition we find a significant offset between the peak of the x - ray and radio flux . we argue that based on these results , the most likely x - ray emission mechanism is inverse compton scattering of stellar photons off of electrons accelerated at the shock boundary between the relativistic wind of a young pulsar and the be star wind . in these observations we also detected 2 - 150 kev flux from the nearby low - redshift quasar qso 0241 + 622 . comparing these measurements to previous hard x - ray and gamma - ray observations of the region containing both and qso 0241 + 622 , it is clear that emission from the qso dominates . And you have already written the first three sentences of the full article: the be binary system ( associated with the radio source gt 0236 + 610 ) is remarkable for dramatic radio outbursts occurring with the 26.5 day orbital cycle ( @xcite ) , and for its possible association with the 100 mev gamma - ray source 2cg 135 + 01 ( @xcite ) . radio flares lasting several days occur every orbit , with the peak flux varying in phase by up to half the orbital cycle . the binary is a weak , variable x - ray source ( @xcite ) with a non - thermal spectrum .. Please generate the next two sentences of the article
egret measurements indicate that the 100 mev emission from the region may also be variable ( @xcite ) . the gamma - ray error box contains no other likely counterparts .
4,662
Suppose that you have an abstract for a scientific paper: in this paper we first apply the general analysis described in our first paper to a binary mixture of cyclohexane and @xmath0-hexane . we use the square gradient model for the continuous description of a non - equilibrium surface and obtain numerical profiles of various thermodynamic quantities in various stationary state conditions . in the second part of this paper we focus on the verification of local equilibrium of the surface as described with excess quantities . we give a definition of the temperature and chemical potential difference for the surface and verify that these quantities are independent of the choice of the dividing surface . we verify that the non - equilibrium surface can be described in terms of gibbs excess densities which are in good approximation equal to their equilibrium values at the temperature and chemical potential difference of the surface . . And you have already written the first three sentences of the full article: in a previous article @xcite , referred to as paper i , we have established the general approach for the square gradient description of the interface between two phases in non - equilibrium mixtures . we considered phenomena like temperature , density and mass fraction gradients ; heat and diffusion fluxes as well as evaporation or condensation fluxes through the interface . some profiles were given , without going into details of the numerical procedures used to obtain them .. Please generate the next two sentences of the article
in this paper we will do this . in the general description of the interface one uses contributions to the helmholtz free energy density proportional to the square of the density and mass fraction gradients . these contribution imply that it is not possible to use _ continuous local equilibrium thermodynamics _ in the interface , i.e. to calculate the local values of the various thermodynamic parameters in terms of the local density , mass fractions and temperature only .
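A minimal sketch of the square gradient contribution described here, for a binary mixture with density \rho and mass fraction \xi (illustrative symbols, not the paper's notation):
\[
f \;=\; f_{0}(\rho, \xi, T) \;+\; \tfrac{1}{2}\, \kappa_{\rho\rho}\, |\nabla \rho|^{2} \;+\; \kappa_{\rho\xi}\, \nabla \rho \cdot \nabla \xi \;+\; \tfrac{1}{2}\, \kappa_{\xi\xi}\, |\nabla \xi|^{2} ,
\]
which makes the local free energy depend on the gradients themselves and is the reason, as the passage notes, that a purely local equilibrium description cannot be used inside the interface.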
4,663
Suppose that you have an abstract for a scientific paper: gravitational waves from inspiraling , compact binaries will be searched for in the output of the ligo / virgo interferometric network by the method of `` matched filtering''i.e . , by correlating the noisy output of each interferometer with a set of theoretical waveform templates . these search templates will be a discrete subset of a continuous , multiparameter family , each of which approximates a possible signal . the search might be performed _ hierarchically _ , with a first pass through the data using a low threshold and a coarsely - spaced , few - parameter template set , followed by a second pass on threshold - exceeding data segments , with a higher threshold and a more finely spaced template set that might have a larger number of parameters . alternatively , the search might involve a single pass through the data using the larger threshold and finer template set . this paper extends and generalizes the sathyaprakash - dhurandhar ( s - d ) formalism for choosing the discrete , finely - spaced template set used in the final ( or sole ) pass through the data , based on the analysis of a single interferometer . the s - d formalism is rephrased in geometric language by introducing a metric on the continuous template space from which the discrete template set is drawn . this template metric is used to compute the loss of signal - to - noise ratio and reduction of event rate which result from the coarseness of the template grid . correspondingly , the template spacing and total number @xmath0 of templates are expressed , via the metric , as functions of the reduction in event rate . the theory is developed for a template family of arbitrary dimensionality ( whereas the original s - d formalism was restricted to a single nontrivial dimension ) . the theory is then applied to a simple post@xmath1-newtonian template family with two nontrivial dimensions . for this family , the number of templates @xmath0 in the finely - spaced grid is related to the spacing - induced fractional loss.... And you have already written the first three sentences of the full article: compact binary star systems are likely to be an important source of gravitational waves for the broadband laser interferometric detectors now under construction @xcite , as they are the best understood of the various types of postulated gravity wave sources in the detectable frequency band and their waves should carry a large amount of information . within our own galaxy , there are three known neutron star binaries whose orbits will decay completely under the influence of gravitational radiation reaction within less than one hubble time , and it is almost certain that there are many more as yet undiscovered . current estimates of the rate of neutron star / neutron star ( ns / ns ) binary coalescences @xcite based on these ( very few ) known systems project an event rate of three per year within a distance of roughly 200 mpc ; and estimates based on the evolution of progenitor , main - sequence binaries suggest a distance of as small as roughly 70 mpc for three events per year . these distances correspond to a signal strength which is within the target sensitivities of the ligo and virgo interferometers @xcite .. Please generate the next two sentences of the article
however , to find the signals within the noisy ligo / virgo data will require a careful filtering of the interferometer outputs . because the predicted signal strengths lie so close to the level of the noise , it will be necessary to filter the interferometer data streams in order to detect the inspiral events against the background of spurious events generated by random noise .
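As a hedged aside on the geometric language summarized in the abstract, the fractional loss of signal-to-noise incurred by filtering with a slightly mismatched template is quadratic in the parameter offset, which is what defines the metric on template space (generic form; normalizations differ between conventions):
\[
1 - \mathcal{O}(\theta, \theta + \Delta\theta) \;\simeq\; g_{ij}(\theta)\, \Delta\theta^{i}\, \Delta\theta^{j} ,
\]
so that fixing a maximum allowed mismatch fixes the template spacing and hence the total number of templates @xmath0.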
4,664
Suppose that you have an abstract for a scientific paper: we study the effective 3-d su(2 ) gauge - higgs model at finite temperature for higgs - masses in the range from @xmath0 gev up to @xmath1 gev . the first order electroweak phase transition weakens with increasing higgs - mass and terminates at a critical end - point . for higgs - mass values larger than about @xmath2 gev the thermodynamic signature of the transition is described by a crossover . close to this higgs - mass value we investigate the vector boson propagator in landau gauge . the calculated w - boson screening masses are compared with predictions based on gap equations . And you have already written the first three sentences of the full article: the standard model of electroweak interactions predicts the existence of a phase transition in between a low temperature symmetry broken and a high temperature symmetric phase @xcite . its thermodynamic properties lead to cosmological consequences . one might hope that the baryon asymmetry can be generated at the electroweak phase transition , if the transition is of strong first order . for values of the zero temperature higgs - mass @xmath3. Please generate the next two sentences of the article
gev the phase transition is of first order @xcite . as the higgs - mass is increased further , the thermodynamic singularity at the electroweak phase transition weakens .
4,665
Suppose that you have an abstract for a scientific paper: we perform a comparative study of the quantum and classical transport probabilities of low - energy quasiparticles ballistically traversing normal and andreev two - dimensional open cavities with a sinai - billiard shape . we focus on the dependence of the transport on the strength of an applied magnetic field @xmath0 . with increasing field strength the classical dynamics changes from mixed to regular phase space . averaging out the quantum fluctuations , we find an excellent agreement between the quantum and classical transport coefficients in the complete range of field strengths . this allows an overall description of the non - monotonic behavior of the average magnetoconductance in terms of the corresponding classical trajectories , thus , establishing a basic tool useful in the design and analysis of experiments . . And you have already written the first three sentences of the full article: ballistic transport of particles across billiards is a field of major importance due to its fundamental properties as well as physical applications ( see for example the reviews @xcite ) . in such systems , a two - dimensional cavity is defined by a steplike single - particle potential where confined particles can propagate freely between bounces at the billiard walls . for open systems the possibility of particles being injected and escaping through holes in the boundary is also allowed . as an example , we consider the open geometry of the extensively studied sinai billiard shown in fig . [ fig : fig1 ] .. Please generate the next two sentences of the article
experimental realizations are based on exploiting the analogy between quantum and wave mechanics in either microwave and acoustic cavities or vibrating plates @xcite , and on structured two - dimensional electron gases in artificially tailored semiconductor heterostructures @xcite . in the latter case , the particles are also charge carriers making these nanostructures relevant to applied electronics . ( figure caption : the open geometry of the sinai billiard considered in this study . ) focussing the attention on the electronic analogues , more recently the possibility to couple a superconductor to a ballistic quantum dot has been considered both theoretically @xcite and experimentally @xcite , so that some part of the billiard boundary exerts the additional property of andreev reflection @xcite . during this process particles with energies much smaller than the superconducting gap @xmath1
4,666
Suppose that you have an abstract for a scientific paper: bd + 30@xmath0 3639 , the brightest planetary nebula at x - ray energies , was observed with _ suzaku _ , an x - ray observatory launched on 2005 july 10 . using the x - ray imaging spectrometer , the k - lines from c vi , o vii , and o viii were resolved for the first time , and c / o , n / o , and ne / o abundance ratios determined . the c / o and ne / o abundance ratios exceed the solar value by a factor of at least 30 and 5 , respectively . these results indicate that the x - rays are emitted mainly by helium shell - burning products . . And you have already written the first three sentences of the full article: intermediate - mass stars , with initial masses @xmath1 , are thought to contribute significantly to the synthesis of c , n , o , and ne through the cno cycle and he burning . these nuclear fusion products are ejected through mass loss as the stars evolve from their agb ( asymptotic giant branch ) phase into planetary nebulae ( pne ) . hence , a pn can be regarded as a messenger bearing information on the nucleosynthesis within these stars . however , the optically - visible material in pne represents matter accumulated over the pn lifetime , making it difficult to extract information on , for instance , pure he - burning products just by observing the pne shells .. Please generate the next two sentences of the article
soft x - rays , detected from several pne , are thought to originate in hot plasmas produced by shocks in fast stellar winds @xcite . these fast winds develop during the later evolutionary stages of the central star , so the x - rays are thought to be emitted by the star s late - phase products .
4,667
Suppose that you have an abstract for a scientific paper: the evolving galaxy is considered as a system of baryonic fragments embedded into the static dark nonbaryonic ( dh ) and baryonic ( bh ) halo and subjected to gravitational and viscous interactions . although the chemical evolution of each separate fragment is treated in the frame of one zone close box model with instantaneous recycling , its star formation ( sf ) activity is a function of mean local gas density and , therefore , is strongly influenced by other interacting fragments . in spite of its simplicity this model provides a realistic description of the process of galaxy formation and evolution over the hubble timescale . . And you have already written the first three sentences of the full article: recent advances in extragalactic astrophysics show the close link of a disk galaxy dynamical evolution and its chemical and photometric behaviour over the hubble timescale . in spite of remarkable succes of the modern theory of galaxy chemical evolution in explaining the properties of evolving galaxies ( @xcite ) its serious shortcomings concern the multiparameter character and practical neglecting of dynamical effects . the inclusion of simplified dynamic into the chemical network ( @xcite ) and vice versa the inclusion of simplified chemical scheme into the sophisticated 3d hydrodynamical code ( @xcite ) gives very promising results and allows to avoid a formal approach typical to standard theory . in this paper the interplay between a disk galaxy dynamical evolution and its chemical behaviour is studied in a frame of a simplified model which provides a realistic description of the process of galaxy formation and evolution over the cosmological timescale .. Please generate the next two sentences of the article
the evolving galaxy is treated as a system of baryonic fragments embedded into the extended halo composed of dark nonbaryonic and baryonic matter . the halo is modelled as a static structure with dark ( dh ) and diluted baryonic ( bh ) halo components having plummer type density profiles ( @xcite ) : @xmath0 and @xmath1 where @xmath2 @xmath3 the dense baryonic matter ( future galaxy disk and bulge ) of total mass of @xmath4 is assumed to be distributed among @xmath5 particles fragments .
4,668
Suppose that you have an abstract for a scientific paper: we consider the electromagnetic waves propagating in the system of coupled waveguides . one of the system components is a standard waveguide fabricated from nonlinear medium having positive refraction and another component is a waveguide produced from an artificial material having negative refraction . the metamaterial constituting the second waveguide has linear characteristics and a wave propagating in the waveguide of this type propagates in the direction opposite to direction of energy flux . it is found that the coupled nonlinear solitary waves propagating both in the same direction are exist in this oppositely - directed coupler due to linear coupling between nonlinear positive refractive waveguide and linear negative refractive waveguide . the corresponding analytical solution is found and it is used for numerical simulation to illustrate that the results of the solitary wave collisions are sensible to the relative velocity of the colliding solitary waves . . And you have already written the first three sentences of the full article: the waveguide structure fabricated from the two closely placed waveguides is of common use in the fiber and integrated optics . coupling between the waveguides is due to tunnel penetration of light from one waveguide into another waveguide @xcite . this coupler preserves direction of light propagation , and for this reason it is named a directed coupler .. Please generate the next two sentences of the article
it was found @xcite that the steady state pair of electromagnetic pulses can exist in the extended directed coupler or twin - core fibers @xcite . sometimes it termes soliton .
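For orientation, linear coupling between a forward wave in the nonlinear guide and a backward wave in the negative-refraction guide is often written in a contradirectional coupled-mode form such as (a schematic steady-state sketch with illustrative symbols, omitting the nonlinear terms)
\[
i\,\frac{d A_1}{d z} + \kappa A_2 = 0 , \qquad -\, i\,\frac{d A_2}{d z} + \kappa A_1 = 0 ,
\]
where the opposite signs of the propagation terms encode that energy in the two channels flows in opposite directions; this is what distinguishes the oppositely-directed coupler of the abstract from the usual directed coupler recalled in this passage.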
4,669
Suppose that you have an abstract for a scientific paper: we propose a new general approach for estimating the effect of a binary treatment on a continuous and potentially highly skewed response variable , the _ generalized quantile treatment effect _ ( gqte ) . the gqte is defined as the difference between a function of the quantiles under the two treatment conditions . as such , it represents a generalization over the standard approaches typically used for estimating a treatment effect ( i.e. , the average treatment effect and the quantile treatment effect ) because it allows the comparison of any arbitrary characteristic of the outcome s distribution under the two treatments . following @xcite , we assume that a pre - specified transformation of the two quantiles is modeled as a smooth function of the percentiles . this assumption allows us to link the two quantile functions and thus to borrow information from one distribution to the other . the main theoretical contribution we provide is the analytical derivation of a closed form expression for the likelihood of the model . exploiting this result we propose a novel bayesian inferential methodology for the gqte . we show some finite sample properties of our approach through a simulation study which confirms that in some cases it performs better than other nonparametric methods . as an illustration we finally apply our methodology to the @xmath0 national medicare expenditure survey data to estimate the difference in the single hospitalization medical cost distributions between cases ( i.e. , subjects affected by smoking attributable diseases ) and controls . And you have already written the first three sentences of the full article: the effect of a treatment on an outcome is often the main parameter of interest in many scientific fields . the standard approach used to estimate it is the so called average treatment effect ( ate ) , the difference between the expected values of the response s distributions under the two treatment regimes . while intuitive and useful in many situations , it suffers from some limitations ; in particular , it becomes highly biased when the response is skewed .. Please generate the next two sentences of the article
a further drawback of the ate is its coarseness as a summary of the distance between the expected value of the response s distributions under the two treatments . it is a matter of fact indeed that the effect of the treatment on the outcome often varies as we move from the lower to the upper tail of the outcome s distribution .
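To make the comparison concrete, with Y(1) and Y(0) the potential outcomes and F_1, F_0 their distribution functions (notation assumed here), the estimands discussed above can be sketched as
\[
\mathrm{ATE} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)] , \qquad \mathrm{QTE}(\tau) = F_{1}^{-1}(\tau) - F_{0}^{-1}(\tau) , \qquad \mathrm{GQTE} = g\!\left( F_{1}^{-1} \right) - g\!\left( F_{0}^{-1} \right)
\]
for a pre-specified functional g of the quantile functions, which is the sense in which the gqte of the abstract generalizes both standard estimands.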
4,670
Suppose that you have an abstract for a scientific paper: signed directed social networks , in which the relationships between users can be either positive ( indicating relations such as trust ) or negative ( indicating relations such as distrust ) , are increasingly common . thus the interplay between positive and negative relationships in such networks has become an important research topic . most recent investigations focus upon edge sign inference using structural balance theory or social status theory . neither of these two theories , however , can explain an observed edge sign well when the two nodes connected by this edge do not share a common neighbor ( e.g. , common friend ) . in this paper we develop a novel approach to handle this situation by applying a new model for node types . initially , we analyze the local node structure in a fully observed signed directed network , inferring underlying node types . the sign of an edge between two nodes must be consistent with their types ; this explains edge signs well even when there are no common neighbors . we show , moreover , that our approach can be extended to incorporate directed triads , when they exist , just as in models based upon structural balance or social status theory . we compute bayesian node types within empirical studies based upon partially observed wikipedia , slashdot , and epinions networks in which the largest network ( epinions ) has 119k nodes and 841k edges . our approach yields better performance than state - of - the - art approaches for these three signed directed networks . signed directed social networks ; node types ; bayesian node features ; edge sign prediction . . And you have already written the first three sentences of the full article: with the rapid emergence of social networking websites , e.g. , facebook , twitter , linkedin , epinions , etc . , a considerable amount of attention has been devoted to investigating the underlying social mechanisms in order to enhance users experiences @xcite@xcite@xcite@xcite . traditional social network analysis concerns itself primarily with unsigned social networks such as facebook or myspace which can be modeled as graphs , with nodes representing entities , and positively weighted edges representing the existence of relationships between pairs of entities .. Please generate the next two sentences of the article
recently , signed directed social networks , in which the relationships between users can be either positive ( indicating relations such as trust ) or negative ( indicating relations such as distrust ) , are increasingly common . for instance , in epinions @xcite , which is a product review website with an active user community , users can indicate whether they trust or distrust other users based upon their reviews ; in slashdot @xcite@xcite , which is a technology - related news website , users can tag each other as `` friend '' or `` foe '' based upon their comments . such a signed directed network can be modeled as a graph expressed as an asymmetric adjacency matrix in which an entry is @xmath0 ( or @xmath1 ) if the relationship is positive ( or negative ) and 0 if the relationship is absent .
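A toy example of the asymmetric signed adjacency matrix described in the last sentence, for three users where user 1 trusts user 2, user 2 distrusts user 3, and user 3 trusts user 1 (the \pm 1 coding is an assumption, since the excerpt hides the actual values behind @xmath placeholders):
\[
A \;=\; \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & -1 \\ 1 & 0 & 0 \end{pmatrix} ,
\]
with A_{ij} read as the sign of the directed edge from user i to user j and 0 meaning that no relationship is present.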
4,671
Suppose that you have an abstract for a scientific paper: we report on experimental studies of synchronization phenomena in a pair of analog electronic neurons ( ens ) . the ens were designed to reproduce the observed membrane voltage oscillations of isolated biological neurons from the stomatogastric ganglion of the california spiny lobster _ panulirus interruptus_. the ens are simple analog circuits which integrate four dimensional differential equations representing fast and slow subcellular mechanisms that produce the characteristic regular / chaotic spiking - bursting behavior of these cells . in this paper we study their dynamical behavior as we couple them in the same configurations as we have done for their counterpart biological neurons . the interconnections we use for these neural oscillators are both direct electrical connections and excitatory and inhibitory chemical connections : each realized by analog circuitry and suggested by biological examples . we provide here quantitative evidence that the ens and the biological neurons behave similarly when coupled in the same manner . they each display well defined bifurcations in their mutual synchronization and regularization . we report briefly on an experiment on coupled biological neurons and four dimensional ens which provides further ground for testing the validity of our numerical and electronic models of individual neural behavior . our experiments as a whole present interesting new examples of regularization and synchronization in coupled nonlinear oscillators . And you have already written the first three sentences of the full article: synchronization of nonlinear oscillators is widely studied in physical and biological systems @xcite for underlying interests ranging from novel communications strategies @xcite to understanding how large and small neural assemblies efficiently and sensitively achieve desired functional goals @xcite . the analysis of biological systems may , beyond their intrinsic interest , often provide physicists with novel dynamical systems possessing interesting properties in their component oscillators or in the nature of the interconnections . we have presented our analysis of the experimental synchronization of two biological neurons as the electrical coupling between them is changed in sign and magnitude @xcite .. Please generate the next two sentences of the article
subsequent to that analysis we have developed computer simulations of the dynamics of the neurons which are based on conductance based hodgkin - huxley ( hh ) @xcite neuron models . these numerical simulations quantitatively reproduced the observations in the laboratory @xcite .
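As background to the conductance-based models mentioned here, such descriptions evolve the membrane potential V through a current-balance equation of the generic Hodgkin-Huxley form (a sketch only; the stomatogastric neuron models add specific calcium and slow currents)
\[
C_m \frac{dV}{dt} \;=\; -\sum_{k} g_k\, m_k^{p_k} h_k^{q_k} \left( V - E_k \right) \;+\; I_{\mathrm{ext}} ,
\]
where each gating variable relaxes toward a voltage-dependent steady state with its own voltage-dependent time constant.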
4,672
Suppose that you have an abstract for a scientific paper: we present two rosat pspc observations of the radio - loud , lobe - dominated quasar 3c 351 , which shows an ` ionized absorber ' in its x - ray spectrum . the factor 1.7 change in flux in the @xmath02 years between the observations allows a test of models for this ionized absorber . the absorption feature at @xmath1 kev ( quasar frame ) is present in both spectra but with a lower optical depth when the source intensity - and hence the ionizing flux at the absorber - is higher , in accordance with a simple , single - zone , equilibrium photoionization model . detailed modeling confirms this agrement quantitatively . the maximum response time of 2 years allows us to limit the gas density : @xmath2 @xmath3 ; and the distance of the ionized gas from the central source r @xmath4 19 pc . this produces a strong test for a photoionized absorber in 3c 351 : a factor 2 flux change in @xmath01 week in this source _ must _ show non - equilibrium effects in the ionized absorber . . And you have already written the first three sentences of the full article: the most luminous known agn with an ionized absorber is the radio - loud lobe - dominated quasar 3c351 ( l@xmath5 erg s@xmath6 , z=0.371 , fiore et al . , 1993 ) . ionized absorbers are common in low redshift , low luminosity seyfert 1 galaxies ( reynolds , 1997 ) , but rare in higher redshift , higher luminosity quasars . high luminosity agns are very likely physically larger and so may exhibit slower time variability and have different physical conditions .. Please generate the next two sentences of the article
3c 351 has a very high uv to x - ray ratio ( @xmath7 , tanambaun et al . , 1989 ) and its ir to uv spectrum does not show any evidence of reddening , in contrast to several low redshift seyfert 1 galaxies thought to host dusty / warm absorbers .
4,673
Suppose that you have an abstract for a scientific paper: we report the magnetic properties of mechanically milled co@xmath0zn@xmath1fe@xmath2o@xmath3 spinel oxide . after 24 hours milling of the bulk sample , the xrd spectra show nanostructure with average particle size @xmath4 20 nm . the as milled sample shows an enhancement in magnetization and ordering temperature compared to the bulk sample . if the as milled sample is annealed at different temperatures for the same duration , recrystallization process occurs and approaches to the bulk structure on increasing the annealing temperatures . the magnetization of the annealed samples first increases and then decreases . at higher annealing temperature ( @xmath5 1000@xmath6c ) the system shows two coexisting magnetic phases _ i.e. _ , spin glass state and ferrimagnetic state , similar to the as prepared bulk sample . the room temperature mssbauer spectra of the as milled sample , annealed at 300@xmath6c for different durations ( upto 575 hours ) , suggest that the observed change in magnetic behaviour is strongly related with cations redistribution between tetrahedral ( a ) and octahedral ( o ) sites in the spinel structure . apart from the cation redistribution , we suggest that the enhancement of magnetization and ordering temperature is related with the reduction of b site spin canting and increase of strain induced anisotropic energy during mechanical milling . . And you have already written the first three sentences of the full article: in recent years , several research groups @xcite are involved in the investigations of nanoparticle spinel oxides because of their potential applications in magnetic devices , in micro wave technology @xcite , in high density magnetic recording media @xcite , in magnetic fluids as drug carrier etc . @xcite . various types of nanoparticle materials such as , metal : fe , co , ni @xcite , metallic alloys : fe - cu @xcite , and metallic oxides : mnfe@xmath2o@xmath3 @xcite and znfe@xmath2o@xmath3 @xcite , are under current research activity . while metal and inter metallic nanoparticles suffer from stability problems in atmospheric condition , metallic oxides are highly stable under ambient conditions @xcite . various factors such as ,. Please generate the next two sentences of the article
particle size distribution @xcite , inter - particle interactions @xcite , grain ( core ) and grain boundary ( shell ) structure @xcite and metastable structure of the system @xcite control the properties of nanoparticles . some of the specific properties of the nanoparticles which are of interest are quantum magnetic tunneling @xcite , various magnetic orders like ferrimagnet / ferromagnet , spin glass / superparamagnet and spin canting effects @xcite . the interesting aspect of magnetism in spinel oxides is that the magnetic order is strongly dependent on the competition between various superexchange type interactions i.e. , j@xmath7 ( a - o - b ) and j@xmath8 ( b - o - b ) , where a denotes the tetrahedral ( a ) site moments , b the octahedral ( b ) site moments , and o the o@xmath9 ions @xcite .
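In the two-sublattice (Néel) picture implicit in this record, the net moment of a ferrimagnetic spinel is the difference of the antiparallel sublattice magnetizations; the relation below is a standard result quoted for context, not a formula from the paper:

M_{\rm net} = \left| M_{B} - M_{A} \right| ,

which is why a milling-induced redistribution of cations between the a and b sites, or a reduction of b-site spin canting, changes the saturation magnetization directly.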
4,674
Suppose that you have an abstract for a scientific paper: we present a stellar population analysis of the nearby , face - on , sa(s)c galaxy , ngc 628 , which is part of the ppak ifs nearby galaxies survey ( pings ) . the data cover a field of view of @xmath0 6 arcmin in diameter with a sampling of @xmath02.7 arcsec per spectrum and a wavelength range ( 3700 - 7000 ) . we apply spectral inversion methods to derive 2-dimensional maps of star formation histories and chemical enrichment . we present maps of the mean ( luminosity- and mass - weighted ) age and metallicity that reveal the presence of structures such as a nuclear ring , previously seen in molecular gas . the disk is dominated in mass by an old stellar component at all radii sampled by our data , while the percentage of young stars increases with radius . the mean stellar age and metallicity profiles have two defined regions , an inner one with flatter gradients ( even slightly positive ) and an external one with a negative , steeper one , separated at @xmath060 arcsec . this break in the profiles is more prominent in the old stellar component . the young component shows a metallicity gradient that is very similar to that of the gas , and that is flatter in the whole disc . the agreement between the metallicity gradient of the young stars and the gas , and the recovery of the measured colours from our derived star formation histories validate the techniques to recover the age - metallicity and the star formation histories in disc galaxies from integrated spectra . we speculate about the possible origin of the break and conclude that the most likely scenario is that we are seeing , in the center of ngc 628 , a dissolving bar , as predicted in some numerical simulations . [ firstpage ] galaxies : abundances ; galaxies : evolution ; galaxies : formation ; galaxies : spiral ; galaxies : stellar content . And you have already written the first three sentences of the full article: quantifying the star formation histories of galaxies constitutes one of the major unsolved issues towards a complete understanding of galaxy formation . although this task is difficult , analyses of stellar populations constitute a step forward toward the achievement of this goal . in the majority of cases , these studies have to be made using integrated spectra or colours , as we can only resolve stars in a limited number of galaxies . while studies of stellar populations for early - type galaxies are abundant in the literature ( e.g. , trager et al .. Please generate the next two sentences of the article
2000 ; kuntschner 2000 , thomas et al . 2005 ; sánchez - blázquez et al . 2006abc ; smith , lucy & hudson 2007 , among many others ) , these studies are much more sparse for disc galaxies . until very recently , the study of the stellar component in disk galaxies outside the local group was restricted to broadband photometric data ( bell & de jong 2000 ; macarthur et al .
4,675
Suppose that you have an abstract for a scientific paper: the ngc 1023 group is one of the most studied nearby groups . we want to give an insight into the evolution of its innermost region by means of ultraviolet observations and proper models . we used the fuv and nuv galex archival data as well as a large set of sph simulations with chemo - photometric implementation . from the uv observations we found that several , already known , dwarf galaxies very close to ngc 1023 are also detected in uv and two more objects ( with no optical counterpart ) can be added to the group . using these data we construct exhaustive models to account for their formation . we find that the whole sed of ngc 1023 and its global properties are well matched by a simulation which provides a minor merger with a companion system 5 times less massive . the strong interaction phase started 7.7gyr ago and the final merger 1.8gyr ago . [ firstpage ] galaxies : structure galaxies : individual : ngc 1023 . And you have already written the first three sentences of the full article: the ngc 1023 group is one of the most extensively studied systems ( 300 related references are listed in the nasa / ipac extragalactic database ) . the first large scale study of it has been performed by @xcite . later on the presence of several outstanding dwarf galaxies drew the attention of @xcite who studied in detail four of them in the optical bands .. Please generate the next two sentences of the article
the most peculiar feature of ngc 1023 is its proximity to the ( likely tidally interacting ) fainter galaxy ngc 1023a . the latter galaxy appears as a small companion located at the east end of ngc 1023 .
4,676
Suppose that you have an abstract for a scientific paper: it is possible to discuss the propagation of an electronic current through certain layered nanostructures modeling them as a collection of random one - dimensional interfaces , through which a coherent signal can be transmitted or reflected while being scattered at each interface . we present a simple model in which a persistent random walk ( the t - r model in 1-d ) is used as a representation of the propagation of a signal in a medium with such random interfaces . in this model all the possible paths through the system leading to transmission or reflection can be enumerated in an expansion in the number of loops described by the path . this expansion allows us to conduct a statistical analysis of the length of the paths for different geometries and boundary conditions and understand their scaling with the size of the system . by tuning the parameters of the model it is possible to interpolate smoothly between the ballistic and the diffusive regimes of propagation . an extension of this model to higher dimensions is presented . we show monte carlo simulations that support the theoretical results obtained . . And you have already written the first three sentences of the full article: the seminal work of anderson@xcite raising the possibility that disorder can lead to non - diffusive behavior ( the so called localized regime ) refocused the attention of the physics community on the problem of the propagation of waves in disordered systems . in the last two decades new theoretical ideas ( like the scaling theory of localization@xcite , weak localization@xcite , universal conductance fluctuations@xcite and wigner dwelling times@xcite ) were advanced , and a new field ( soon called mesoscopic physics ) emerged . it reached and influenced many experimental areas , among them electronic systems@xcite , microwaves@xcite , optics@xcite , acoustics@xcite , geophysics@xcite , laser physics@xcite , medical physics@xcite and atomic physics@xcite . it has become an extremely important problem in this field to understand what should be the signature of the propagation of a signal in the different regimes ( ballistic , diffusive , localized ) since concomitant phenomena , like absorption can complicate the interpretation of experimental results@xcite .. Please generate the next two sentences of the article
it is for that reason that theoretical analyses of the characteristics of the propagation , and in particular its statistical properties@xcite@xcite , are of great interest , since those properties have recently become experimentally accessible@xcite@xcite . when the inelastic scattering length in a system is large compared to its size , the wave propagates coherently in the sense of its phase being preserved while its direction is randomized by elastic scattering processes with the impurities constituting the random medium .
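The T-R (persistent random walk) picture described in this record lends itself to a very small Monte Carlo illustration. The sketch below is not the authors' code; it simply simulates a 1-d walker that keeps its direction with probability t and reverses it with probability 1 - t at each interface, and estimates the transmission probability and mean path length through a stack of interfaces. All function and parameter names are made up for the example.

import random

def simulate_walker(n_interfaces, t, rng):
    """Walk one signal through a 1-d stack of interfaces.
    Returns (transmitted?, number_of_steps)."""
    pos = 0          # index of the interface the walker is about to cross
    direction = +1   # +1 moving forward, -1 moving backward
    steps = 0
    while True:
        steps += 1
        if rng.random() >= t:      # reflected: reverse direction
            direction = -direction
        pos += direction
        if pos > n_interfaces:     # left the stack on the far side
            return True, steps
        if pos < 0:                # came back out of the entry side
            return False, steps

def estimate(n_interfaces=50, t=0.9, n_walkers=20000, seed=1):
    rng = random.Random(seed)
    transmitted, total_steps = 0, 0
    for _ in range(n_walkers):
        ok, steps = simulate_walker(n_interfaces, t, rng)
        transmitted += ok
        total_steps += steps
    return transmitted / n_walkers, total_steps / n_walkers

if __name__ == "__main__":
    T, mean_len = estimate()
    print(f"transmission ~ {T:.3f}, mean path length ~ {mean_len:.1f} steps")

Tuning t toward 1 pushes the walk toward the ballistic regime, while t well below 1 gives diffusive growth of the path length with system size, mirroring the interpolation described in the abstract.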
4,677
Suppose that you have an abstract for a scientific paper: we measure carbon and nitrogen abundances to @xmath0dex for @xmath1 giant stars from their low - resolution ( @xmath2 ) lamost dr2 survey spectra . we use these and measurements , together with empirical relations based on the apokasc sample , to infer stellar masses and implied ages for 230,000 of these objects to @xmath3dex and @xmath4dex respectively . we use _ the cannon _ , a data - driven approach to spectral modeling , to construct a predictive model for lamost spectra . our reference set comprises 8125 stars observed in common between the apogee and lamost surveys , taking seven apogee dr12 labels ( parameters ) as ground truth : , , , , , , and . we add seven colors to the cannon model , based on the _ g , r , i , j , h , k , w1 , w2 _ magnitudes from apass , 2mass & wise , which improves our constraints on and by up to 20% and on by up to 70% . cross - validation of the model demonstrates that , for high- objects , our inferred labels agree with the apogee values to within 50k in temperature , 0.04 magnitudes in , and @xmath5dex in , , , , and . we apply the model to 450,000 giants in lamost dr2 that have _ not _ been observed by apogee . this demonstrates that precise individual abundances can be measured from low - resolution spectra , and represents the largest catalog of , , masses and ages to date . as a result , we greatly increase the number and sky coverage of stars with mass and age estimates . . And you have already written the first three sentences of the full article: an empirical description of the milky way s present structure and formation history requires accurate and consistent age estimates for large samples of stars distributed throughout the galaxy . although we have recently entered an era of extensive spatial , kinematic , and chemical information beyond the solar neighborhood , comparably extensive age constraints remain elusive . stellar age is a property that must be inferred from observations with the help of stellar evolution models ; generally , it can not be `` measured directly '' .. Please generate the next two sentences of the article
therefore , results are inherently limited by the applicability and accuracy of the model used ( see @xcite for a comprehensive review ) . as stellar ages are difficult to measure directly , abundances such as and are commonly used as an age - dating proxy ( e.g. via making maps of mono - age populations ; see @xcite and @xcite ) because the determination of photospheric abundances from spectra is more straightforward . unfortunately for milky way studies , the population of stars that is most readily observable throughout the galaxy , red giant stars , is also the one for which it is particularly challenging to estimate ages .
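Since this record describes a data-driven label-transfer pipeline (train on a reference set with known labels, cross-validate, then apply to unlabeled spectra), a toy sketch may help fix ideas. The code below is not The Cannon itself; it is a deliberately simplified stand-in that fits an independent quadratic-in-the-labels model to every pixel of a reference set and then recovers labels for a new spectrum by least squares. All shapes, names and the synthetic data are illustrative.

import numpy as np
from scipy.optimize import least_squares

def design(labels):
    """Quadratic design vector in the (standardized) labels."""
    l = np.atleast_1d(np.asarray(labels, dtype=float))
    quad = np.outer(l, l)[np.triu_indices(l.size)]
    return np.concatenate(([1.0], l, quad))

def train(ref_labels, ref_fluxes):
    """Per-pixel linear fit: flux ~ design(labels) . theta."""
    D = np.array([design(l) for l in ref_labels])            # (n_stars, n_terms)
    theta, *_ = np.linalg.lstsq(D, ref_fluxes, rcond=None)   # (n_terms, n_pixels)
    return theta

def infer(theta, flux, n_labels, x0=None):
    """Recover labels for one spectrum by minimizing pixel residuals."""
    x0 = np.zeros(n_labels) if x0 is None else x0
    return least_squares(lambda l: design(l) @ theta - flux, x0).x

# tiny synthetic test: 2 labels, 300 pixels, 50 reference stars
rng = np.random.default_rng(0)
true_theta = rng.normal(size=(design(np.zeros(2)).size, 300))
ref_labels = rng.normal(size=(50, 2))
ref_fluxes = np.array([design(l) @ true_theta for l in ref_labels])
theta_hat = train(ref_labels, ref_fluxes)
print(infer(theta_hat, design([0.3, -1.2]) @ true_theta, n_labels=2))

The real survey application adds noise models, many more labels and photometric colors, but the train / cross-validate / apply structure is the same as in the record above.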
4,678
Suppose that you have an abstract for a scientific paper: in this paper quasi - stationary , two - and - a - half - dimensional magnetic reconnection is studied in the framework of incompressible resistive magnetohydrodynamics ( mhd ) . a new theoretical approach for calculation of the reconnection rate is presented . this approach is based on local analytical derivations in a thin reconnection layer , and it is applicable to the case when resistivity is anomalous and is an arbitrary function of the electric current and the spatial coordinates . it is found that a quasi - stationary reconnection rate is fully determined by a particular functional form of the anomalous resistivity and by the local configuration of the magnetic field just outside the reconnection layer . it is also found that in the special case of constant resistivity reconnection is sweet - parker and not petschek . . And you have already written the first three sentences of the full article: magnetic reconnection is the physical process of breaking and rearrangement of magnetic field lines , which changes the topology of the field . it is one of the most fundamental processes of plasma physics and is believed to be at the core of many dynamic phenomena in laboratory experiments and in cosmic space . unfortunately , in spite of being so important , magnetic reconnection is still relatively poorly understood from the theoretical point of view .. Please generate the next two sentences of the article
the reason is that plasmas usually have very high temperatures and low densities . in such plasmas , the spitzer resistivity is extremely small and magnetic fields are almost perfectly frozen into cosmic plasmas . as a result , simple theoretical models , such as the sweet - parker reconnection model @xcite predict that the magnetic reconnection processes should be extremely slow and insignificant throughout the universe . on the other hand ,
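As background for the Sweet-Parker model named in this record, the classic current-sheet scaling (a textbook result quoted for context, not a derivation from this paper) is

\frac{v_{\rm in}}{v_A} \sim S^{-1/2} , \qquad S = \frac{L\, v_A}{\eta} ,

where S is the Lundquist number, L the sheet length, v_A the Alfvén speed and \eta the magnetic diffusivity; the enormous values of S in hot, dilute astrophysical plasmas are exactly why the predicted reconnection rate is far too slow, as the continuation above explains.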
4,679
Suppose that you have an abstract for a scientific paper: we investigate the time evolution of the mass distribution of pre - stellar cores ( pscs ) and their transition to the initial stellar mass function ( imf ) in the central parts of a molecular cloud ( mc ) under the assumption that the coalescence of cores is important . our aim is to explain the observed shallow imf in dense stellar clusters such as the arches cluster . the initial distributions of pscs at various distances from the mc center are those of gravitationally unstable cores resulting from the gravo - turbulent fragmentation of the mc . as time evolves , there is a competition between the pscs rates of coalescence and collapse . whenever the local rate of collapse is larger than the rate of coalescence in a given mass bin , cores are collapsed into stars . with appropriate parameters , we find that the coalescence - collapse model reproduces very well all the observed characteristics of the arches stellar cluster imf ; namely , the slopes at high and low mass ends and the peculiar bump observed at @xmath0 @xmath1 . our results suggest that today s imf of the arches cluster is very similar to the primordial one and is prior to the dynamical effects of mass segregation becoming important . [ firstpage ] galaxies : star clusters - galaxy : centre - turbulence - ism : clouds - open clusters and associations : individual : arches . And you have already written the first three sentences of the full article: understanding the origin of the initial stellar mass function ( imf ) remains one of the most challenging issues in modern astrophysics . when averaged over the total volume of galaxies or whole stellar clusters , the imf is observed to follow a nearly uniform behavior which consists in an increased number of stars counted when going from the most massive stars up to @xmath2 @xmath1 , followed by a shallower increase between @xmath2 and @xmath3 @xmath1 and a decline in the number of stars at masses @xmath4 @xmath5 . this standard imf has been described , with continuous refinements , by several analytical functions ( e.g. , salpeter 1955 ; miller - scalo 1979 ; kroupa 2002 ; chabrier 2003 ) . yet. Please generate the next two sentences of the article
, deviations from the standard imf at low and high mass ends have been reported in many observations ( see review in elmegreen 2004 ) . at high mass , the imf is observed to be generally top - heavy in dense cluster cores such as in the arches cluster ( e.g. , stolte et al . 2005 ; kim et al .
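For reference, the 'standard' IMF discussed in this record is usually written as a (piecewise) power law; the classic Salpeter form, quoted here from general knowledge rather than from the paper, is

\frac{dN}{dm} \propto m^{-\alpha} , \qquad \alpha \simeq 2.35 \ {\rm for}\ m \gtrsim 1\, M_{\odot} ,

and a 'top-heavy' IMF such as the one reported for the Arches cluster corresponds to a shallower (smaller) \alpha at the high-mass end.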
4,680
Suppose that you have an abstract for a scientific paper: we introduce winding numbers @xmath0 and free winding numbers @xmath1 of regular closed curves on surfaces with a nice euclidean or hyperbolic geometry . such surfaces include the euclidean plane , the annulus , the möbius band , the torus , the klein bottle , closed hyperbolic surfaces , and complete hyperbolic surfaces . we show that ( 1 ) regular closed curves with the same base point are regularly homotopic fixing the base point if and only if they represent the same element of the fundamental group and have the same winding number @xmath0 and ( 2 ) two regular closed curves are regularly homotopic if and only if they are freely homotopic and have the same free winding number @xmath1 . these two winding numbers are integers if the curve is orientation preserving , and are integers modulo 2 if the curve is orientation reversing . we also give a combinatorial formula for the ` winding numbers ' of regular nonclosed plane curves , which is useful for computing these winding numbers of closed curves . . And you have already written the first three sentences of the full article: whitney - graustein theorem says that two regular closed curves on the euclidean plane @xmath2 are regularly homotopic if and only if they have the same ` rotation number ' ( _ a.k.a . _ ` winding number')@xcite . smale classified regular closed curves on an arbitrary surface up to regular homotopy ( fixing the base point and the base direction ) @xcite .. Please generate the next two sentences of the article
but he did not try to define the winding numbers ; he was more interested in the differences . winding numbers of closed curves on surfaces were introduced by reinhart @xcite when the surface is orientable , and later extended by chillingworth in the non - orientable case @xcite .
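For the classical plane case recalled in this record, the Whitney-Graustein rotation number of a regular closed curve \gamma(t) = (x(t), y(t)) has an explicit integral form (standard formula, included for context rather than taken from the paper):

W(\gamma) = \frac{1}{2\pi} \oint \frac{x' y'' - y' x''}{x'^{2} + y'^{2}} \, dt ,

i.e. the total turning of the unit tangent divided by 2\pi; the winding numbers on surfaces discussed in the abstract generalize exactly this quantity.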
4,681
Suppose that you have an abstract for a scientific paper: we report the discovery of a threshold in the hi column density of galactic gas clouds below which the formation of the cold phase of hi is inhibited . this threshold is at @xmath0 per @xmath1 ; sightlines with lower hi column densities have high spin temperatures ( median @xmath2 k ) , indicating low fractions of the cold neutral medium ( cnm ) , while sightlines with @xmath3 per @xmath1 have low spin temperatures ( median @xmath4 k ) , implying high cnm fractions . the threshold for cnm formation is likely to arise due to inefficient self - shielding against ultraviolet photons at lower hi column densities . the threshold is similar to the defining column density of a damped lyman-@xmath5 absorber ; this indicates a physical difference between damped and sub - damped lyman-@xmath5 systems , with the latter class of absorbers containing predominantly warm gas . . And you have already written the first three sentences of the full article: the diffuse interstellar medium ( ism ) contains gas over a wide range of densities , temperatures and ionization states . these can be broadly sub - divided into the molecular phase ( e.g. @xcite ) , the neutral atomic phase ( e.g. @xcite ) , and the warm and hot ionized phases ( e.g. @xcite ) . the neutral atomic medium ( mostly neutral hydrogen , hi ) is further usually sub - divided into `` cold '' and `` warm '' phases ( the `` cnm '' and `` wnm '' , respectively ) . typical cnm temperatures and densities are @xmath6 k and @xmath7 @xmath8 , respectively , with corresponding wnm values of @xmath9 k and @xmath10 @xmath8 .. Please generate the next two sentences of the article
this was originally an observational definition , to distinguish between phases producing strong narrow absorption lines towards background radio - loud quasars and smooth broad emission lines @xcite . later , this separation into cold and warm phases was found to arise naturally in the context of models in which the atomic and ionized phases are in pressure equilibrium .
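The link between column density, 21 cm optical depth and spin temperature that underlies this record is the standard radiative-transfer relation (textbook form, not quoted from the paper):

N_{\rm HI} = 1.823 \times 10^{18} \int T_s\, \tau(v)\, dv \ \ {\rm cm^{-2}} ,

with T_s in K and v in km/s; at fixed N_HI a higher spin temperature therefore implies a lower optical depth and a smaller cold-gas fraction, which is the diagnostic used in the abstract.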
4,682
Suppose that you have an abstract for a scientific paper: the non - chiral edge excitations of quantum spin hall systems and topological insulators are described by means of their partition function . the stability of topological phases protected by time - reversal symmetry is rediscussed in this context and put in relation with the existence of discrete anomalies and the lack of modular invariance of the partition function . the @xmath0 characterization of stable topological insulators is extended to systems with interacting and non - abelian edge excitations . partition functions and stability criteria of topological insulators . andrea cappelli ( infn , via g. sansone 1 , 50019 sesto fiorentino - firenze , italy ) and enrico randellini ( dipartimento di fisica , via g. sansone 1 , 50019 sesto fiorentino - firenze , italy ) . And you have already written the first three sentences of the full article: the study of topological phases of matter has considerably grown in recent years and new systems have been investigated both theoretically and experimentally @xcite . it is now apparent that some remarkable topological features of quantum hall states can occur in a wider set of systems and thus be more universal and robust . in particular , non - chiral topological states , such as those of the quantum spin hall effect , of topological insulators and of topological superconductors , do not require strong magnetic fields and exist in three space dimensions @xcite@xcite . a characteristic feature of topological states is the existence of massless edge excitations that are well accounted for by low - energy effective field theory descriptions @xcite . the response of topological states to external disturbances does not occur in the gapped bulk , but manifests itself through the edge dynamics .. Please generate the next two sentences of the article
while the edge excitations of chiral states , such as quantum hall states and chern insulators , are absolutely stable , those of non - chiral topological states can interact and become gapful , leading to the decay into topologically trivial phases . in some cases , edge interactions are forbidden by the presence of ( discrete ) symmetries : we then speak of symmetry protected topological phases of matter @xcite .
4,683
Suppose that you have an abstract for a scientific paper: we found a molecular cloud connecting from the outer region to the galactic center mini - spiral ( gcms ) " which is a bundle of the ionized gas streams adjacent to sgr a@xmath0 . the molecular cloud has a filamentary appearance which is prominent in the cs @xmath1 emission line and is continuously connected with the gcms . the velocity of the molecular cloud is also continuously connected with that of the ionized gas in the gcms observed in the h42@xmath2 recombination line . the morphological and kinematic relations suggest that the molecular cloud is falling from the outer region to the vicinity of sgr a@xmath0 , being disrupted by the tidal shear of sgr a@xmath0 and ionized by uv emission from the central cluster . we also found the sio @xmath1 emission in the boundary area between the filamentary molecular cloud and the gcms . there seems to exist shocked gas in the boundary area . . And you have already written the first three sentences of the full article: the galactic center is the nuclear region of the nearest spiral galaxy , milky way . the environment is unique in the galaxy because the region contains several peculiar objects . first , sagittarius a@xmath0 ( sgr a@xmath0 ) is a counter part of the galactic center black hole ( gcbh ) in the regime from radio to x - ray , which is located very near the dynamical center of the galaxy ( e.g. ( * ? ? ?. Please generate the next two sentences of the article
* reid 2003 ) ) and has a mass of @xmath3m@xmath4 ( e.g. ghez 2008 ; gillessen 2009 ) .
4,684
Suppose that you have an abstract for a scientific paper: the nuclear matrix elements @xmath0 of the neutrinoless double beta decay ( @xmath1 ) of most nuclei with known @xmath2-decay rates are systematically evaluated using the quasiparticle random phase approximation ( qrpa ) and renormalized qrpa ( rqrpa ) . the experimental @xmath2-decay rate is used to adjust the most relevant parameter , the strength of the particle - particle interaction . new results confirm that with such procedure the @xmath0 values become essentially independent on the size of the single - particle basis . furthermore , the matrix elements are shown to be also rather stable with respect to the possible quenching of the axial vector strength parametrized by reducing the coupling constant @xmath3 , as well as to the uncertainties of parameters describing the short range nucleon correlations . theoretical arguments in favor of the adopted way of determining the interaction parameters are presented . furthermore , a discussion of other implicit and explicit parameters , inherent to the qrpa method , is presented . comparison is made of the ways these factors are chosen by different authors . it is suggested that most of the spread among the published @xmath1 decay nuclear matrix elements can be ascribed to these choices . . And you have already written the first three sentences of the full article: inspired by the spectacular discoveries of oscillations of atmospheric @xcite , solar @xcite , and reactor neutrinos@xcite ( for recent reviews see @xcite ) the physics community worldwide is embarking on the next challenging problem , finding whether neutrinos are indeed majorana particles as many particle physics models suggest . study of the neutrinoless double beta decay ( @xmath1 ) is the best potential source of information about the majorana nature of the neutrinos @xcite . moreover , the rate of the @xmath1 decay , or limits on its value , can be used to constrain the neutrino mass pattern and the absolute neutrino mass scale , i.e. , information not available by the study of neutrino oscillations .. Please generate the next two sentences of the article
( the goals , and possible future directions of the field are described , e.g. , in the recent study @xcite . the issues particularly relevant for the program of @xmath1 decay search are discussed in @xcite . ) the observation of @xmath1 decay would immediately tell us that neutrinos are massive majorana particles .
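The role played by the nuclear matrix elements in this record is easiest to see in the standard factorization of the neutrinoless double beta decay half-life (a generic light-neutrino-exchange expression, not specific to this paper):

\left[ T_{1/2}^{0\nu} \right]^{-1} = G^{0\nu}(Q,Z)\, \left| M^{0\nu} \right|^{2} \left( \frac{\langle m_{\beta\beta} \rangle}{m_e} \right)^{2} ,

so any spread in the calculated M^{0\nu} propagates directly into the effective Majorana mass extracted from a measured or limiting half-life.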
4,685
Suppose that you have an abstract for a scientific paper: retinal image of surrounding objects varies tremendously due to the changes in position , size , pose , illumination condition , background context , occlusion , noise , and nonrigid deformations . but despite these huge variations , our visual system is able to invariantly recognize any object in just a fraction of a second . to date , various computational models have been proposed to mimic the hierarchical processing of the ventral visual pathway , with limited success . here , we show that the association of both biologically inspired network architecture and learning rule significantly improves the models performance when facing challenging invariant object recognition problems . our model is an asynchronous feedforward spiking neural network . when the network is presented with natural images , the neurons in the entry layers detect edges , and the most activated ones fire first , while neurons in higher layers are equipped with spike timing - dependent plasticity . these neurons progressively become selective to intermediate complexity visual features appropriate for object categorization . the model is evaluated on _ 3d - object _ and _ eth-80 _ datasets which are two benchmarks for invariant object recognition , and is shown to outperform state - of - the - art models , including deepconvnet and hmax . this demonstrates its ability to accurately recognize different instances of multiple object classes even under various appearance conditions ( different views , scales , tilts , and backgrounds ) . several statistical analysis techniques are used to show that our model extracts class specific and highly informative features . * keywords : * view - invariant object recognition , visual cortex , stdp , spiking neurons , temporal coding . And you have already written the first three sentences of the full article: humans can effortlessly and rapidly recognize surrounding objects @xcite , despite the tremendous variations in the projection of each object on the retina @xcite caused by various transformations such as changes in object position , size , pose , illumination condition and background context @xcite . this invariant recognition is presumably handled through hierarchical processing in the so - called ventral pathway . such hierarchical processing starts in v1 layers , which extract simple features such as bars and edges in different orientations @xcite , continues in intermediate layers such as v2 and v4 , which are responsive to more complex features @xcite , and culminates in the inferior temporal cortex ( it ) , where the neurons are selective to object parts or whole objects @xcite . by moving from the lower layers to the higher layers , the feature complexity , receptive field size and transformation invariance increase , in such a way that the it neurons can invariantly represent the objects in a linearly separable manner @xcite .. Please generate the next two sentences of the article
another amazing feature of the primates visual system is its high processing speed . the first wave of image - driven neuronal responses in it appears around 100 ms after the stimulus onset @xcite . recordings from monkey it cortex have demonstrated that the first spikes ( over a short time window of 12.5 ms ) , about 100 ms after the image presentation , carry accurate information about the nature of the visual stimulus @xcite .
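Because this record describes a feedforward spiking network whose intermediate layers learn through spike-timing-dependent plasticity, a minimal sketch of the usual pair-based STDP rule may be useful. This is a generic textbook rule, not the authors' exact (simplified) implementation; all constants and names are illustrative.

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise. Spike times are in milliseconds."""
    dt = t_post - t_pre
    if dt >= 0:                                  # pre before post -> LTP
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                                        # post before pre -> LTD
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

# example: a synapse that repeatedly sees causal pre -> post pairings strengthens
w = 0.5
for _ in range(20):
    w = stdp_update(w, t_pre=10.0, t_post=15.0)
print(f"weight after 20 causal pairings: {w:.3f}")

Combined with a first-spike (temporal) code, repeated updates of this kind make neurons selective to the early, most reliable input features, which is the mechanism the record appeals to.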
4,686
Suppose that you have an abstract for a scientific paper: matrix perturbation inequalities , such as weyl s theorem ( concerning the singular values ) and the davis - kahan theorem ( concerning the singular vectors ) , play essential roles in quantitative science ; in particular , these bounds have found application in data analysis as well as related areas of engineering and computer science . in many situations , the perturbation is assumed to be random , and the original matrix has certain structural properties ( such as having low rank ) . we show that , in this scenario , classical perturbation results , such as weyl and davis - kahan , can be improved significantly . we believe many of our new bounds are close to optimal and also discuss some applications . . And you have already written the first three sentences of the full article: the singular value decomposition of a real @xmath0 matrix @xmath1 is a factorization of the form @xmath2 , where @xmath3 is a @xmath4 orthogonal matrix , @xmath5 is a @xmath0 rectangular diagonal matrix with non - negative real numbers on the diagonal , and @xmath6 is an @xmath7 orthogonal matrix . the diagonal entries of @xmath5 are known as the _ singular values _ of @xmath1 . the @xmath8 columns of @xmath3 are the _ left - singular vectors _ of @xmath1 , while the @xmath9 columns of @xmath10 are the _ right - singular vectors _ of @xmath1 . if @xmath1 is symmetric , the singular values are given by the absolute value of the eigenvalues , and the singular vectors are just the eigenvectors of @xmath1 . here , and in the sequel , whenever we write _ singular vectors _ , the reader is free to interpret this as left - singular vectors or right - singular vectors provided the same choice is made throughout the paper .. Please generate the next two sentences of the article
consider a real ( deterministic ) @xmath0 matrix @xmath1 with singular values @xmath11 and corresponding singular vectors @xmath12 we will call @xmath1 the data matrix . in general , the vector @xmath13 is not unique .
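The classical bound that this record takes as a starting point can be stated compactly in generic notation (A the data matrix, E the perturbation; this is the standard statement, not a result of the paper):

\left| \sigma_i(A+E) - \sigma_i(A) \right| \le \| E \|_{2} \quad {\rm for\ all\ } i ,

with \| E \|_{2} the spectral norm; Davis-Kahan-type bounds play the analogous role for the singular vectors, and the paper's claim is that both can be sharpened substantially when E is random and A has low rank.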
4,687
Suppose that you have an abstract for a scientific paper: we study the algebraic boundary of a convex semi - algebraic set via duality in convex and algebraic geometry . we generalize the correspondence of facets of a polytope with the vertices of the dual polytope to general semi - algebraic convex sets . in this case , exceptional families of extreme points might exist and we characterize them semi - algebraically . we also give an algorithm to compute a complete list of exceptional families , given the algebraic boundary of the dual convex set . . And you have already written the first three sentences of the full article: the algebraic boundary of a semi - algebraic set is the smallest algebraic variety containing its boundary in the euclidean topology . for a full - dimensional polytope @xmath0 , it is the hyperplane arrangement associated to its facets which has been studied extensively in discrete geometry and complexity theory in linear programming @xcite . the algebraic boundary of a convex set which is not a polytope has recently been considered in other special cases , most notably the convex hull of a variety by ranestad and sturmfels , cf .. Please generate the next two sentences of the article
@xcite and @xcite . this class includes prominent families such as the moment matrices of probability distributions and the highly symmetric orbitopes .
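The duality invoked in this record is that of the polar (dual) convex body; for completeness, the standard definition (not taken from the paper) is

K^{\circ} = \{\, y \in \mathbb{R}^{n} : \langle x , y \rangle \le 1 \ {\rm for\ all\ } x \in K \,\} ,

and for a full-dimensional polytope containing the origin in its interior the facets of K correspond to the vertices of K^{\circ}, which is exactly the correspondence the abstract says is being generalized to semi-algebraic convex sets.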
4,688
Suppose that you have an abstract for a scientific paper: the formation of stars is usually accompanied by the launching of protostellar outflows . observations with the atacama large millimetre / sub - millimetre array ( alma ) will soon revolutionalise our understanding of the morphologies and kinematics of these objects . in this paper , we present synthetic alma observations of protostellar outflows based on numerical magnetohydrodynamic collapse simulations . we find significant velocity gradients in our outflow models and a very prominent helical structure within the outflows . we speculate that the disk wind found in the alma science verification data of hd 163296 presents a first instance of such an observation . . And you have already written the first three sentences of the full article: protostellar outflows are generally byproducts of star formation in the full range from low- to high - mass star - forming regions @xcite . here we focus our attention on outflows from intermediate - mass protostars of a few solar masses , with typical mass - loss rates of @xmath0 to a few @xmath1yr@xmath2 @xcite and outflow momentum rates from @xmath3 to several @xmath4kms@xmath2yr@xmath2 @xcite . intermediate - mass outflows are typically elongated with collimation factors between 1 and 10 @xcite , but recent observations in w75n have revealed an apparently very young , spherical outflow @xcite .. Please generate the next two sentences of the article
there are two essentially independent mechanisms that can drive protostellar outflows with the help of magnetic fields . first , the disk material can be accelerated centrifugally and launch a disk wind ( e.g. @xcite )
4,689
Suppose that you have an abstract for a scientific paper: the surface structure of converging thin fluid films displays self - similar behavior , as was shown in the work by diez et al [ q. appl . math 210 , 155 , 1990 ] . extracting the related similarity scaling exponents from either numerical or experimental data is non - trivial . here we provide two such methods . we apply them to experimental and numerical data on converging fluid films driven by both surface tension and gravitational forcing . in the limit of pure gravitational driving , we recover diez semi - analytic result , but our methods also allow us to explore the entire regime of mixed capillary and gravitational driving , up to entirely surface tension driven flows . we find scaling forms of smoothly varying exponents up to surprisingly small bond numbers . our experimental results are in reasonable agreement with our numerical simulations , which confirm theoretically obtained relations between the scaling exponents . . And you have already written the first three sentences of the full article: thin layers of fluid on a solid substrate display surprisingly rich dynamics , due to the interplay of forces at many lengthscales @xcite . much progress has been achieved on the study of thin fluid film systems . they are mostly well characterized by the lubrication approximation of the navier - stokes equations .. Please generate the next two sentences of the article
this elegant approximate formalism allows for tractable analysis of a wide range of fluid dynamics problems on many lengthscales , such as liquids spreading on flat surfaces @xcite , inclined surfaces @xcite , spin coating applications @xcite , dam breaks @xcite and geophysical @xcite contexts : `` thin '' here means that the height @xmath0 of the film is small with respect to the typical spreading lengthscale . the dynamics of the spatiotemporal evolution of the height field @xmath1 is of a very general form , essentially a nonlinear diffusion equation .
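The lubrication approximation invoked in this record reduces the flow to a single evolution equation for the height field. A standard form for a film driven by both surface tension and gravity (textbook equation, with symbols chosen here rather than reproduced from the paper) is

\partial_t h + \nabla \cdot \left[ \frac{h^{3}}{3 \mu} \nabla \left( \gamma \nabla^{2} h - \rho g h \right) \right] = 0 ,

where the \gamma term is the capillary driving and the \rho g term the gravitational driving; their ratio defines a Bond number, which is the parameter scanned between the two limits discussed in the abstract.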
4,690
Suppose that you have an abstract for a scientific paper: we have characterized the physical properties ( electron temperature , density , metallicity ) of the ionized gas and the ionizing population ( age , metallicity , presence of wr stars ) in the lynx arc , a hii galaxy at @xmath03.36 . the uv doublets ( ciii ] , siiii ] and niv ) imply the existence of a density gradient in this object , with a high density region ( 0.1 - 1.0 @xmath1 10@xmath2 @xmath3 ) and a lower density region ( @xmath43200 @xmath3 ) . the temperature sensitive ratio [ oiii]@xmath51661,1666/@xmath65007 implies an electron temperature @xmath7=17300@xmath8 k , in agreement within the errors with photoionization model predictions . nebular abundance determination using standard techniques and the results from photoionization models imply a nebular metallicity of o / h@xmath910@xmath103% ( o / h)@xmath11 , in good agreement with fosbury et al ( 2003 ) . both methods suggest that nitrogen is overabundant relative to other elements , with [ n / o]@xmath92.0 - 3.0 @xmath1 [ n / o]@xmath11 . we do not find evidence for si overabundance , as fosbury et al . ( 2003 ) . photoionization models imply that the ionizing stellar population in the lynx arc has an age of @xmath125 myr . if he@xmath13 is ionized by wr stars , then the ionizing stars in the lynx arc have metallicities z@xmath14@xmath155% z@xmath11 and ages @xmath92.8 - 3.4 myr ( depending on z@xmath14 ) , when wr stars appear and are responsible for the he@xmath16 emission . however , alternative excitation mechanisms for this species are not discarded . since the emission lines trace the properties of the present burst only , nothing can be said about the possible presence of an underlying old stellar population . the lynx arc is a low metallicity hii galaxy that is undergoing a burst of star formation of @xmath125 myr age . one possible scenario that explains the emission line spectrum of the lynx arc , the large strength of the nitrogen lines and the he@xmath16 emission is that the object has experienced a merger.... And you have already written the first three sentences of the full article: h ii galaxies are dwarf emission - line galaxies undergoing a burst of star formation . they are characterized by strong and narrow emission lines originated in a giant star - forming region which dominate their observable properties at optical wavelengths ( e.g. @xcite ) . most are blue compact galaxies ( bcgs ) .. Please generate the next two sentences of the article
they have very low metallicities , high rates of star formation and a very young stellar content . many are compact and isolated . one of the reasons why these objects have attracted significant attention is the possibility that they are very young galaxies in the process of formation .
4,691
Suppose that you have an abstract for a scientific paper: a light front formalism for deep inelastic lepton scattering from finite nuclei is developed . in particular , the nucleon plus momentum distribution and a finite system analog of the hugenholtz - van hove theorem are presented . using a relativistic mean field model , numerical results for the plus momentum distribution and ratio of bound to free nucleon structure functions for oxygen , calcium and lead are given . we show that we can incorporate light front physics with excellent accuracy while using easily computed equal time wavefunctions . assuming nucleon structure is not modified in - medium we find that the calculations are not consistent with the binding effect apparent in the data not only in the magnitude of the effect , but in the dependence on the number of nucleons . 5.5innt@uw-02 - 001 . And you have already written the first three sentences of the full article: the nuclear structure function @xmath0 is smaller than @xmath1 times the free nucleon structure function @xmath2 for values of @xmath3 in the regime where valence quarks are dominant . this phenomenon , known as the european muon collaboration ( emc ) effect @xcite , has been known for almost twenty years . nevertheless , the significance of this observation remains unresolved even though there is a clear interpretation within the parton model : a valence quark in a bound nucleon carries less momentum than a valence quark in a free one .. Please generate the next two sentences of the article
there are many possible explanations , but no universally accepted one . the underlying mechanism responsible for the transfer of momentum within the constituents of the nucleus has not yet been specified .
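For readers unfamiliar with the terminology, the EMC effect named in this record is usually quantified through a per-nucleon structure-function ratio; in generic notation (a standard definition, with symbols chosen here rather than taken from the excerpt):

R_A(x) = \frac{F_2^{A}(x)}{A\, F_2^{N}(x)} ,

and the observation that R_A(x) dips below unity at valence-quark values of x is precisely the binding-versus-modified-structure puzzle the paper addresses.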
4,692
Suppose that you have an abstract for a scientific paper: geometric sigma model is a purely geometric theory in which spacetime coordinates are seen as scalar fields coupled to gravity . although it looks like ordinary sigma model , it has the peculiarity that its complete matter content can be gauged away . the remaining geometric theory possesses a vacuum solution that is predefined in the process of constructing the theory . the fact that vacuum configuration is specified in advance is another peculiarity of geometric sigma models . in this paper , i construct geometric sigma models based on the arbitrarily chosen geometry of the universe . whatever geometry is chosen , the dynamics of its small perturbations is shown to be classically stable . this way , any desirable metric is made a stable solution of a particular model . the inflationary universe and the standard model universe are considered as examples of how this is done in practice . . And you have already written the first three sentences of the full article: the latest astronomical observations have given a substantial boost to the development of modern cosmology @xcite . in particular , the accelerating expansion of the universe has drawn much attention . the early time acceleration is widely known as _ inflation _ , while the late time acceleration is usually referred to as the epoch of _ dark energy _. Please generate the next two sentences of the article
presently , the @xmath0cdm model , in which the cosmological constant @xmath0 plays the role of dark energy , is accepted as a standard cosmological model . there is an extensive literature on other forms of dark energy , too @xcite .
4,693
Suppose that you have an abstract for a scientific paper: gaia observations of eclipsing binary stars will have a large impact on stellar astrophysics . accurate parameters , including absolute masses and sizes will be derived for @xmath0 systems , orders of magnitude more than what has ever been done from the ground . observations of 18 real systems in the gaia - like mode as well as with devoted ground - based campaigns are used to assess binary recognition techniques , orbital period determination , accuracy of derived fundamental parameters and the need to automate the whole reduction and interpretation process . And you have already written the first three sentences of the full article: gaia observations of eclipsing binary stars will be of utmost importance to advances in stellar astrophysics . for no other class of objects one could determine fundamental stellar parameters , i.e. absolute mass , size and surface temperature distribution with a comparable accuracy . solutions of wide detached binaries can be used to accurately position them on the absolute h - r diagram .. Please generate the next two sentences of the article
identical age of both components places useful constraints on the theoretical isochrones for the given metallicity and rotational velocity which will also be derived from gaia observations . components in short period systems are closer and mutually disturbed , so their evolution is different from that of single stars .
4,694
Suppose that you have an abstract for a scientific paper: the first results from heavy ion collisions at the large hadron collider for charged particle spectra and elliptic flow are compared to an event - by - event hybrid approach with an ideal hydrodynamic expansion . this approach has been shown to successfully describe bulk observables at rhic . without changing any parameters of the calculation the same approach is applied to pb+pb collisions at @xmath0 tev . this is an important test if the established understanding of the dynamics of relativistic heavy ion collisions is also applicable at even higher energies . specifically , we employ the hybrid approach with two different equations of state and the pure hadronic transport approach to indicate sensitivities to finite viscosity . the centrality dependence of the charged hadron multiplicity , @xmath1 spectra and differential elliptic flow are shown to be in reasonable agreement with the alice data . furthermore , we make predictions for the transverse mass spectra of identified particles and triangular flow . the eccentricities and their fluctuations are found to be surprisingly similar to the ones at lower energies and therefore also the triangular flow results are very similar . any deviations from these predictions will indicate the need for new physics mechanisms responsible for the dynamics of heavy ion collisions . recently , the first results from heavy ion collisions at @xmath2 tev have been published by the alice collaboration @xcite . to study strongly interacting matter at high temperatures has been the goal of the relativistic heavy ion program at the relativistic heavy ion collider ( rhic ) since more than a decade . the 10 times higher beam energies at the large hadron collider ( lhc ) allow for the investigation of the dynamical evolution of nucleus - nucleus collisions that have been established in au+au collisions at @xmath3 gev in a different kinematic range @xcite . it is especially interesting , if the matter still behaves as a almost perfect liquid or if the quark.... And you have already written the first three sentences of the full article: we are grateful to the open science grid for the computing resources . the author thanks dirk rischke for providing the 1 fluid hydrodynamics code . h.p . acknowledges a feodor lynen fellowship of the alexander von humboldt foundation . this work was supported in part by u.s . department of energy grant de - fg02 - 05er41367 .. Please generate the next two sentences of the article
thanks jan steinheimer for help with the extension of the equation of state , guangyou qin for providing the glauber calculation of the number of participants and steffen a. bass and berndt müller for fruitful discussions . k. aamodt _ et al . _
4,695
Suppose that you have an abstract for a scientific paper: we present new data on the mass of the light and strange quarks from sesam / t@xmath0l . the results were obtained on lattice - volumes of @xmath1 and @xmath2 points , with the possibility to investigate finite - size effects . since the sesam / t@xmath0l ensembles at @xmath3 have been complemented by configurations with @xmath4 , moreover , we are now able to attempt the continuum extrapolation ( ce ) of the quark masses with standard wilson fermions . . And you have already written the first three sentences of the full article: the precise determination of light quark masses , @xmath5 , from full qcd simulations is of considerable interest since the lattice provides the only known access to their absolute scale . we have pointed out some time ago @xcite that vacuum polarization effects have sizeable impact on extracting @xmath5 from the empirical @xmath6 ratio . this became manifest in a substantial ambiguity in the determination of @xmath5 from the vector ward identities .. Please generate the next two sentences of the article
indeed , there is a freedom of chiral extrapolation ( @xmath0e ) on the lattice : at @xmath7 _ e.g. _ , the extrapolation along the line of equal valence and sea quark hopping parameters , @xmath8 ( direct extrapolation ) , yields a value for @xmath9 which is roughly a factor of two below the result from a semi - quenched , two - step extrapolation procedure . in the latter one first determines the values of @xmath10 for each ensemble with fixed value of @xmath11 ( similar to quenched simulations ) . for each @xmath12 @xmath13
4,696
Suppose that you have an abstract for a scientific paper: we study the strain response to steady imposed stress in a spatially homogeneous , scalar model for shear thickening , in which the local rate of yielding @xmath0 of mesoscopic ` elastic elements ' is not monotonic in the local strain @xmath1 . despite this , the macroscopic , steady - state flow curve ( stress vs. strain rate ) is monotonic . however , for a broad class of @xmath0 , the response to steady stress is not in fact steady flow , but spontaneous oscillation . we discuss this finding in relation to other theoretical and experimental flow instabilities . within the parameter ranges we studied , the model does not exhibit rheo - chaos . the flow behaviour of shear - thickening materials such as dense colloidal suspensions can be complex @xcite . for example , imposition of a steady mean strain rate can lead to large , possibly chaotic , variations in the mean stress @xcite . the same occurs in some types of shear - thickening micellar surfactant solutions , where true temporal chaos seems now to be established @xcite ( and also in shear thinning systems ; see @xcite ) . other unexpected behaviour , such as a bifurcation to an oscillatory state , has also been seen in shear - thickening ` onion ' phases of surfactant @xcite . it is not yet known to what extent such unsteady flow is generic in shear - thickening systems ; in this letter we attempt to shed some light on the issue by studying a much - simplified , generic model . in this model we find , for a wide range of parameters , spontaneous rheological oscillation of the strain rate at fixed stress . rheo - chaos is , however , not found for the parameters studied so far . a feature that distinguishes the rheological instabilities encountered in shear - thickening from those arising in newtonian fluids is that the nonlinearity is not inertial ( not from the advective term of the navier stokes equation ) : the reynolds number is essentially zero @xcite . instead it arises from anharmonic elastic responses at large.... And you have already written the first three sentences of the full article: the steady state solution @xmath21 of ( [ e : master ] ) is found as ( setting @xmath22 from now on , for convenience ) @xmath23 \label{e : st3}\ ] ] where @xmath24 , and @xmath25 , the asymptotic jump rate , is fixed by normalization of @xmath26 . it is straightforward to show from ( [ e : st3 ] ) that in the limit of slow flows @xmath27 , the steady state stress response @xmath28 ( in an obvious notation ) is always linear : @xmath29 . hence there is no yield stress for any choice of @xmath30 .. Please generate the next two sentences of the article
this contrasts with a model having an exponential distribution of barrier heights @xmath11 for different elements , which does show onset of a yield stress , connected with the presence of a glass transition , as @xmath17 is reduced @xcite . for monodisperse @xmath11 , as here , there is no such transition . we now show that the steady state flow curve , for _ any _ choice of function @xmath20 , has a monotonically increasing @xmath31 .
4,697
Suppose that you have an abstract for a scientific paper: measurements of , hi , and co distributions in 61 normal spiral galaxies are combined with published far - infrared and co observations of 36 infrared - selected starburst galaxies , in order to study the form of the global star formation law , over the full range of gas densities and star formation rates ( sfrs ) observed in galaxies . the disk - averaged sfrs and gas densities for the combined sample are well represented by a schmidt law with index @xmath0 . the schmidt law provides a surprisingly tight parametrization of the global star formation law , extending over several orders of magnitude in sfr and gas density . an alternative formulation of the star formation law , in which the sfr is presumed to scale with the ratio of the gas density to the average orbital timescale , also fits the data very well . both descriptions provide potentially useful recipes " for modelling the sfr in numerical simulations of galaxy formation and evolution . . And you have already written the first three sentences of the full article: a key ingredient in the understanding and modelling of galaxy evolution is the relationship between the large - scale star formation rate ( sfr ) and the physical conditions in the interstellar medium ( ism ) . most current galaxy formation and evolution models treat star formation using simple ad hoc parametrizations , and our limited understanding of the actual form and nature of the sfr - ism interaction remains as one of the major limitations in these models ( e.g. , navarro & steinmetz 1997 ) . measurements of the star formation law in nearby galaxies can address this problem in two important respects , by providing empirical recipes " that can be incorporated into analytical models and numerical simulations , and by providing clues to the physical mechanisms that underlie the observed correlations .. Please generate the next two sentences of the article
the most widely applied star formation law remains the simple gas density power law introduced by schmidt ( 1959 ) , which for external galaxies is usually expressed in terms of the observable surface densities of gas and star formation : @xmath2 the validity of the schmidt law has been tested in dozens of empirical studies , with most measured values of @xmath3 falling in the range 1 @xmath4 2 , depending on the tracers used and the linear scales considered ( kennicutt 1997 ) . on large scales the star formation law shows a more complex character , with a schmidt law at high gas densities , and a sharp decline in the sfr below a critical threshold density ( kennicutt 1989 , hereafter k89 ) . these thresholds appear to be associated with large - scale gravitational stability thresholds for massive cloud formation ( e.g. , quirk 1972 ; fall & efstathiou 1980 ; k89 ) . at high gas densities , well above the stability threshold
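The two parametrizations compared in this record can be written out explicitly. In generic notation (the fitted index is reported in the abstract only as a placeholder, so it is left symbolic here):

\Sigma_{\rm SFR} = A\, \Sigma_{\rm gas}^{\,N} \qquad {\rm and} \qquad \Sigma_{\rm SFR} \propto \frac{\Sigma_{\rm gas}}{\tau_{\rm dyn}} ,

the first being the Schmidt power law whose index the paper fits, and the second the alternative formulation in which the star formation rate surface density scales as the gas surface density divided by the local orbital (dynamical) timescale.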
4,698
Suppose that you have an abstract for a scientific paper: in contrast with entanglement , as measured by concurrence , in general , quantum discord does not possess the property of monogamy , that is , there is no tradeoff between the quantum discord shared by a pair of subsystems and the quantum discord that both of them can share with a third party . here , we show that , as far as monogamy is considered , quantum discord of pure states is equivalent to the entanglement of formation . this result allows one to analytically prove that none of the pure three - qubit states belonging to the subclass of w states is monogamous . a suitable physical interpretation of the meaning of the correlation information as a quantifier of monogamy for the total information is also given . finally , we prove that , for rank 2 two - qubit states , discord and classical correlations are bounded from above by single - qubit von neumann entropies . . And you have already written the first three sentences of the full article: entanglement , first recognized as the characteristic trait of quantum mechanics @xcite , has been used for a long time as the main indicator of the quantumness of correlations . indeed , as shown in ref . @xcite , for pure - state computation , exponential speed - up occurs only if entanglement grows with the size of the system . however , the role played by entanglement in mixed - state computation is less clear .. Please generate the next two sentences of the article
for instance , in the so - called deterministic quantum computation with one qubit ( dqc1 ) protocol @xcite , quantum speed - up can be achieved using factorized states . as shown in ref . @xcite , speed - up could be due to the presence of another quantifier , the so called quantum discord @xcite , which is defined as the difference between two quantum analogs of the classical mutual information .
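The monogamy property whose failure for discord is at issue in this record is, for concurrence, the Coffman-Kundu-Wootters inequality (standard form, quoted from general knowledge rather than from the paper):

C^{2}_{A|B} + C^{2}_{A|C} \le C^{2}_{A|BC} ,

i.e. the entanglement A shares with B and with C separately is bounded by the entanglement A shares with BC jointly; the paper asks whether the analogous tradeoff survives when the squared concurrences are replaced by quantum discord.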
4,699
Suppose that you have an abstract for a scientific paper: we present a new , dynamical way to study powers ( that is , repetitions ) in sturmian words based on results from diophantine approximation theory . as a result , we provide an alternative and shorter proof of a result by damanik and lenz characterizing powers in sturmian words [ powers in sturmian sequences , eur . j. combin . 24 ( 2003 ) , 377390 ] . further , as a consequence , we obtain a previously known formula for the fractional index of a sturmian word based on the continued fraction expansion of its slope . : sturmian word , standard word , power , combinatorics on words , continued fraction . And you have already written the first three sentences of the full article: in 2003 damanik and lenz @xcite completely described factors of length @xmath0 of a sturmian word which occur as @xmath1 powers for every @xmath2 and @xmath3 . damanik and lenz prove a series of results concerning how factors of a sturmian word align to the corresponding ( finite ) standard words . by a careful analysis of the alignment , they obtain the complete description of powers thanks to known results on powers of standard words .. Please generate the next two sentences of the article
our method is based on the dynamical view of sturmian words as codings of irrational rotations . translating word - combinatorial concepts into corresponding dynamical concepts allows us to apply powerful results from diophantine approximation theory ( such as the three distance theorem ) providing a more geometric proof of the result of damanik and lenz .
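The dynamical viewpoint used in this record, namely a Sturmian word as the coding of an irrational rotation, is easy to make concrete. The sketch below is illustrative rather than taken from the paper: it generates a prefix of the Sturmian word of slope alpha and intercept rho via the standard floor-function coding and checks the n + 1 factor-complexity property on that prefix.

from math import floor, sqrt

def sturmian_prefix(alpha, rho, n):
    """s_k = floor((k+1)*alpha + rho) - floor(k*alpha + rho), k = 0..n-1."""
    return [floor((k + 1) * alpha + rho) - floor(k * alpha + rho) for k in range(n)]

def complexity(word, n):
    """Number of distinct factors of length n occurring in the word."""
    return len({tuple(word[i:i + n]) for i in range(len(word) - n + 1)})

alpha = (sqrt(5) - 1) / 2            # golden-ratio slope
word = sturmian_prefix(alpha, 0.0, 2000)
print("".join(map(str, word[:30])))
for n in (1, 2, 3, 4, 5):
    print(n, complexity(word, n))    # a Sturmian word has exactly n + 1 factors of length n

Replacing alpha by other irrationals, and reading repetitions off the continued fraction expansion of alpha, is exactly the kind of computation the fractional-index formula in the abstract packages up.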