Dataset columns:
  id       int64    values 0 to 203k
  input    string   lengths 66 to 4.29k
  output   string   lengths 0 to 3.83k
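Each record below is listed as its id, then the input prompt, then the reference output. Every input embeds a paper abstract plus the first three sentences of the article and asks for the next two sentences; the output holds those two sentences (occasionally empty). As a minimal sketch of how such records might be represented and assembled, assuming nothing beyond the template visible in the rows themselves (the `Record` dataclass and `build_input` helper are illustrative names, not part of any published loader):

```python
from dataclasses import dataclass


@dataclass
class Record:
    """One row of the dump: integer id, prompt string, reference continuation."""
    id: int        # int64 identifier, roughly 0 to 203k in this dump
    input: str     # prompt: abstract + first three sentences + instruction
    output: str    # reference "next two sentences" (may be empty)


def build_input(abstract: str, first_three_sentences: str) -> str:
    """Assemble a prompt string following (approximately) the template seen in the rows below."""
    return (
        "Suppose that you have an abstract for a scientific paper: "
        f"{abstract} . And you have already written the first three sentences "
        f"of the full article: {first_three_sentences} .. "
        "Please generate the next two sentences of the article"
    )


if __name__ == "__main__":
    # Toy example only; the abstract and sentences are truncated placeholders.
    rec = Record(
        id=11700,
        input=build_input(
            "statistical design of experiments is widely used ...",
            "this interdisciplinary work tailors and whets ...",
        ),
        output="a factorial design @xmath4 with @xmath5 runs ...",
    )
    print(rec.input)
```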
11,700
Suppose that you have an abstract for a scientific paper: statistical design of experiments is widely used in scientific and industrial investigations . an optimum orthogonal array can extract as much information as possible at a fixed cost . finding ( optimum ) orthogonal arrays is an open ( yet fundamental ) problem in design of experiments because constructing ( optimum ) orthogonal arrays or proving non - existence becomes intractable as the number of runs and factors increase . enumerating orthogonal arrays is equivalent to finding all feasible solutions to a class of inherently symmetric constraint satisfaction problems . the inherent symmetry implies that the solutions remain feasible under the action of a group permuting the variables . this causes the number of feasible solutions to be huge . we develop algorithms for enumerating orthogonal arrays that call the margot ( 2007 ) isomorphism pruning algorithm and bring the enumeration of oa@xmath0 for @xmath1 and oa@xmath2 for @xmath3 within computational reach . a catalog of the newly found optimum orthogonal arrays obtained using our algorithms is also presented . constraint programming ; extension algorithm ; integer linear programming ; isomorphism rejection ; ` nauty ` ; orthogonal designs . . And you have already written the first three sentences of the full article: this interdisciplinary work tailors and whets some state - of - the - art operations research methods to find solutions in a fundamental class of problems in design of experiments . our exact algorithms call the margot ( 2007 ) integer linear programming ( ilp ) solver for enumerating orthogonal arrays ( oas ) or finding optimum oas . we make comparisons to the efficient constraint programming ( cp ) with isomorphism rejection oa enumeration algorithm of schoen , eendebak and nguyen ( 2010 ) and exploit equivalence of orthogonal designs ( ods ) to speed up our exact algorithms .. Please generate the next two sentences of the article
a factorial design @xmath4 with @xmath5 runs and @xmath6 factors each having @xmath7-levels is an orthogonal array of strength @xmath8 , @xmath9 , denoted by @xmath10 , if each of the @xmath11 level combinations appears exactly @xmath12 times when @xmath4 is projected onto any @xmath8 factors . clearly , an @xmath10 is also an oa@xmath13 for @xmath14 . the index @xmath15 of an @xmath10 is defined as @xmath12 .
11,701
Suppose that you have an abstract for a scientific paper: we propose a simple algorithm which produces a new category of networks , high dimensional random apollonian networks , with small - world and scale - free characteristics . we derive analytical expressions for their degree distributions and clustering coefficients which are determined by the dimension of the network . the values obtained for these parameters are in good agreement with simulation results and comparable to those coming from real networks . we estimate also analytically that the average path length of the networks increases at most logarithmically with the number of vertices . complex networks , scale - free networks , small - world networks , disordered systems , networks 89.75.da,89.75.fb,89.75.hc . And you have already written the first three sentences of the full article: since the pioneering papers by watts and strogatz on small - world networks @xcite and barabsi and albert on scale - free networks @xcite , complex networks have received considerable attention as an interdisciplinary subject @xcite . complex networks describe many systems in nature and society , such as internet @xcite , world wide web @xcite , metabolic networks @xcite , protein networks in the cell @xcite , co - author networks @xcite and sexual networks @xcite , most of which share three apparent features : power - law degree distribution , small average path length ( apl ) and high clustering coefficient . in recent years , many evolving models @xcite have been proposed to describe real - life networks .. Please generate the next two sentences of the article
the original ba model @xcite captures the two main mechanisms responsible for the power - law degree distribution of degrees , namely growth and preferential attachment . dorogovtsev , mendes , and samukhin @xcite gave an exact solution for a class of growing network models thanks to the use of a `` master - equation '' .
11,702
Suppose that you have an abstract for a scientific paper: these notes show and comment the examples that have been used to validate the cosmicfish code . we compare the results obtained with the code to several other results available in literature finding an overall good level of agreement . we will update this set of notes when relevant modifications to the cosmicfish code will be released or other validation examples are worked out . + the cosmicfish code and the package to produce all the validation results presented here are publicly available at http://cosmicfish.github.io . + the present version is based on cosmicfish jun16 . . And you have already written the first three sentences of the full article: in @xcite we introduced the cosmicfish code as a powerful tool to perform forecast on many different models with future cosmological experiments . + in this set of notes we show the validation pipeline that was used for the code . we compared the results obtained with the cosmicfish code to other results in literature .. Please generate the next two sentences of the article
we find an overall good level of agreement . + together with these notes we release a cosmicfish package that contains the relevant code to produce all the results presented here . this package is going to be updated as new validation results become available .
11,703
Suppose that you have an abstract for a scientific paper: this work generalizes the additively partitioned runge - kutta methods by allowing for different stage values as arguments of different components of the right hand side . an order conditions theory is developed for the new family of generalized additive methods , and stability and monotonicity investigations are carried out . the paper discusses the construction and properties of implicit - explicit and implicit - implicit , methods in the new framework . the new family , named gark , introduces additional flexibility when compared to traditional partitioned runge - kutta methods , and therefore offers additional opportunities for the development of flexible solvers for systems with multiple scales , or driven by multiple physical processes . computational science laboratory technical report csl - tr-5/2013 + adrian sandu and michael gnther `` a class of generalized + additive runge - kutta methods '' computational science laboratory + computer science department + virginia polytechnic institute and state university + blacksburg , va 24060 + phone : ( 540)-231 - 2193 + fax : ( 540)-231 - 6075 + email : [email protected] + web : http://csl.cs.vt.edu [ cols="^,^,^ " , ] partitioned runge - kutta methods , nb - series , algebraic stability , absolute monotonicity , implicit - explicit , implicit - implicit methods 65l05 , 65l06 , 65l07 , 65l020 . . And you have already written the first three sentences of the full article: in many applications , initial value problems of ordinary differential equations are given as _ additively _ partitioned systems @xmath0 where the right - hand side @xmath1 is split into @xmath2 different parts with respect to , for example , stiffness , nonlinearity , dynamical behavior , and evaluation cost . additive partitioning also includes the special case of _ component _ partitioning where the solution vector is split into @xmath2 disjoint sets , @xmath3 , with the @xmath4-th set containing the components @xmath5 with indices @xmath6 .. Please generate the next two sentences of the article
one defines a corresponding partitioning of the right hand side @xmath7 where @xmath8 is the @xmath9-th column of the identity matrix , and superscripts ( without parentheses ) represent vector components . a particular case is _ coordinate _ partitioning where @xmath10 and @xmath11 , @xmath12 : @xmath13 the development of runge - kutta ( rk ) methods that are tailored to the partitioned system started with the early work of rice @xcite .
11,704
Suppose that you have an abstract for a scientific paper: x ray emission from large scale extragalactic jets is likely to be due to inverse compton scattering of relativistic particles off seed photons of both the cosmic microwave background field and the blazar nucleus . the first process dominates the observed high energy emission of large scale jets if the plasma is moving at highly relativistic speeds and if the jet is aligned with the line of sight , i.e. in powerful flat radio spectrum quasars . the second process is relevant when the plasma is moving at mildly bulk relativistic speeds , and can dominate the high energy emission in misaligned sources , i.e. in radio galaxies . we show that this scenario satisfactorily accounts for the spectral energy distribution detected by @xmath0 from the jet and core of pks 0637752 . 0_h_@xmath1 0_q_@xmath1 galaxies : active - galaxies : jets - galaxies : quasars : individual : pks 0637752 - radiation mechanisms : non thermal . And you have already written the first three sentences of the full article: the @xmath0 x ray observatory ( cxo ) is providing us with unprecedented high spatial ( and spectral ) resolution data on extended structures . the very first observation of a radio loud quasar , pks 0637752 , revealed the presence of an x - ray jet extending for @xmath2 10 arcsec ( chartas et al . 2000 ; schwartz et al . 2000 ) , and since then other x ray jets have been detected with high spatial resolution ( cen a , pictor a , see cxo www page http://chandra.harvard.edu/photo/cycle1.html ) . while the detection of an x ray jet is not unprecedented ( e.g. m87 , biretta , stern & harris , 1991 ; 3c273 , harris & stern 1987 ; rser et al . 2000 ) , it is interesting to notice that both radio galaxies and blazar like sources ( i.e. with a jet oriented close to the line of sight ) have been observed . models involving inverse compton scattering of the cosmic microwave background ( cmb ) and nuclear hidden quasar radiation to produce large scale x . Please generate the next two sentences of the article
rays have been proposed ( e.g. brunetti , setti & comastri 1997 in the case of emission from the lobes ) . however it has been found that the local broad band spectral energy distributions from the knots in both pks 0637752 and m87 , are not straightforward to interpret , since the level of the x ray emission is higher than what simple models predicts ( e.g. chartas et al .
11,705
Suppose that you have an abstract for a scientific paper: we study propagating alfvn waves by solving the time - dependent equations of magnetohydrodynamics ( mhd ) in one dimension numerically . in a homogeneous medium the circularly polarized alfvn wave is an exact solution of the ideal mhd equations , and therefore it does not suffer from any dissipation . a high - amplitude linearly polarized alfvn wave , on the other hand , steepens and form current sheets , in which the poynting flux is lost . in a stratified medium , however , a high - amplitude circularly polarized alfvn wave can also lose a significant fraction of its poynting flux . # 1_#1 _ # 1_#1 _ = # 1 1.25 in .125 in .25 in . And you have already written the first three sentences of the full article: the acceleration of the fast solar wind is a long - standing problem in solar physics . it is likely that alfvn waves play a significant role in this process ( e.g. leer et al . . they can propagate over long distance , which allows them to reach the outer corona , because to lowest order they are incompressible and do not dissipate .. Please generate the next two sentences of the article
however dissipative damping is required at some point to avoid too high wind velocities ( e.g. holzer et al . a closer examination shows though that a linearly polarized alfvn wave is compressible to second order , since the magnetic pressure , @xmath0 , varies with half the wave length of the alfvn wave itself ( alfvn & flthammar 1963 ) . in a circularly polarized alfvn wave , on the other hand , the magnetic pressure is constant along the wave , which is the physical reason why the circularly polarized alfvn wave in a homogeneous medium is an exact solution to the nonlinear mhd equations .
11,706
Suppose that you have an abstract for a scientific paper: we numerically implement the variational approach for reconstruction in the inverse crack and cavity problems developed by one of the authors . the method is based on a suitably adapted free - discontinuity problem . its main features are the use of phase - field functions to describe the defects to be reconstructed and the use of perimeter - like penalizations to regularize the ill - posed problem . the numerical implementation is based on the solution of the corresponding optimality system by a gradient method . numerical simulations are presented to show the validity of the method . * ams 2000 mathematics subject classification * primary 35r30 . secondary 65n21 , 65k10 . * keywords * inverse problems , cracks , cavities , phase - field , perimeter penalization , optimality system . . And you have already written the first three sentences of the full article: we consider a homogenous and isotropic conducting body , assumed to be contained in @xmath0 , a bounded , lipschitz domain of @xmath1 , @xmath2 . we assume that there exist @xmath3 , a lipschitz domain contained in , and different from , @xmath0 , and a closed set @xmath4 such that the interior of @xmath5 is not empty and @xmath5 has a positive distance from @xmath6 . we assume that @xmath5 is known and accessible to measurements. Please generate the next two sentences of the article
. in the body there might be present some defects , which we assume to be perfectly insulating and outside @xmath3 . namely , we model these defects by a closed set @xmath7 such that @xmath8 is empty .
11,707
Suppose that you have an abstract for a scientific paper: antares is a project aiming at the operation of an underwater detector at a depth of 2.5 km close to toulon in the south of france . the detector is expected to be completed at the beginning of 2007 . the main purpose of the experiment is the detection of high energy neutrinos produced in astrophysical sources . being weakly interacting , neutrinos could potentially be more powerful messengers of the universe compared to photons , but their detection is challenging . the technique employs phototubes to detect the arrival time and the amplitude of photons emitted by neutrino charged secondaries due to the cherenkov effect . antares will contribute significantly in the field of neutrino astronomy , observing the galactic centre with unprecedented pointing capabilities . . And you have already written the first three sentences of the full article: neutrinos are considered possible new messengers of the universe since their properties differ from those of photons , that currently provide most information . they could improve our understanding of the sources and mechanisms capable of accelerating cosmic rays up to energies larger than @xmath0 ev , that are observed by extensive air shower arrays . photons of energies larger than 10 tev can not bring information from distances @xmath1 mpc since they interact by pair production with background photons . a deep connection between high energy @xmath2 emissions and @xmath3 production exists according to the model of the `` beam dump '' .. Please generate the next two sentences of the article
protons or nuclei accelerated by an engine , interact on a gas of matter or photons . these interactions result mainly in neutral and charged pions which decay into photons and neutrinos .
11,708
Suppose that you have an abstract for a scientific paper: u.s . tsunami warning centers use real - time bottom pressure ( bp ) data transmitted from a network of buoys deployed in the pacific and atlantic oceans to tune source coefficients of tsunami forecast models . for accurate coefficients and therefore forecasts , tides at the buoys must be accounted for . in this study , five methods for coefficient estimation are compared , each of which accounts for tides differently . the first three subtract off a tidal prediction based on ( 1 ) a localized harmonic analysis involving 29 days of data immediately preceding the tsunami event , ( 2 ) 68 pre - existing harmonic constituents specific to each buoy , and ( 3 ) an empirical orthogonal function fit to the previous 25 hrs of data . method ( 4 ) is a kalman smoother that uses method ( 1 ) as its input . these four methods estimate source coefficients after detiding . method ( 5 ) estimates the coefficients simultaneously with a two - component harmonic model that accounts for the tides . the five methods are evaluated using archived data from eleven dart^^ buoys , to which selected artificial tsunami signals are superimposed . these buoys represent a full range of observed tidal conditions and background bp noise in the pacific and atlantic , and the artificial signals have a variety of patterns and induce varying signal - to - noise ratios . the root - mean - square errors ( rmses ) of least squares estimates of sources coefficients using varying amounts of data are used to compare the five detiding methods . the rmse varies over two orders of magnitude between detiding methods , generally decreasing in the order listed , with method ( 5 ) yielding the most accurate estimate of source coefficient . the rmse is substantially reduced by waiting for the first full wave of the tsunami signal to arrive . as a case study , the five method are compared using data recorded from the devastating 2011 japan tsunami . . And you have already written the first three sentences of the full article: to collect data needed to provide coastal communities with timely tsunami warnings , the national oceanic and atmospheric administration ( noaa ) has deployed an array of deep - ocean assessment and reporting of tsunamis ( dart^^ ) buoys at strategic locations in the pacific and atlantic oceans ( gonzlez _ et al . _ , 2005 ; titov _ et al . _ , 2005 ; spillane _ et al . _ , 2008 ; mofjeld , 2009 ) .. Please generate the next two sentences of the article
when a tsunami event occurs , data from these buoys are analyzed at u.s . tsunami warning centers ( twcs ) using the short - term inundation forecast for tsunamis ( sift ) application ( gica _ et al .
11,709
Suppose that you have an abstract for a scientific paper: the nrao vla and vlba were used from 1988 november to 1998 june to monitor the radio continuum counterpart to the optical broad line region ( blr ) in the seyfert galaxy ngc5548 . photometric and astrometric observations were obtained at 21 epochs . the radio nucleus appeared resolved , so comparisons were limited to observations spanning 10 - 60 days and 3 - 4 yr , and acquired at matched resolutions of 1210 mas @xmath0 640 pc ( 9 vla observations ) , 500 mas @xmath0 260 pc ( 9 vla observations ) , or 2.3 mas @xmath0 1.2 pc ( 3 vlba observations ) . the nucleus is photometrically variable at 8.4 ghz by @xmath1% and @xmath2% between vla observations separated by 41 days and 4.1 yr , respectively . the 41-day changes are milder ( @xmath3% ) at 4.9 ghz and exhibit an inverted spectrum ( @xmath4 , @xmath5 ) from 4.9 to 8.4 ghz . the nucleus is astrometrically stable at 8.4 ghz , to an accuracy of 28 mas @xmath0 15 pc between vla observations separated by 4.1 yr and to an accuracy of 1.8 mas @xmath0 0.95 pc between vlba observations separated by 3.1 yr . such photometric variability and astrometric stability is consistent with a black hole being the ultimate energy source for the blr , but is problematic for star cluster models that treat the blr as a compact supernova remnant and , for ngc5548 , require a new supernova event every 1.7 yr within an effective radius @xmath6 42 mas @xmath0 22 pc . a deep image at 8.4 ghz with resolution 660 mas @xmath0 350 pc was obtained by adding data from quiescent vla observations . this image shows faint bipolar lobes straddling the radio nucleus and spanning 12 arcsec @xmath0 6.4 kpc . these synchrotron - emitting lobes could be driven by twin jets or a bipolar wind from the seyfert 1 nucleus . . And you have already written the first three sentences of the full article: the seyfert 1 galaxy ngc5548 was the target of intensive space- and ground - based monitoring designed to measure the light travel time across its broad optical - line - emitting region ( @xcite , and references therein ) . these reverberation studies imply that the broad line region ( blr ) has a characteristic diameter @xmath7 40 light days , or @xmath8as for h@xmath9 km s@xmath10 mpc@xmath10 and the galaxy s recession velocity relative to the 3 k background ( 5354 km s@xmath10 ; @xcite ) . given this small size , is it possible to constrain models for the ultimate energy source for the blr in ngc5548 ?. Please generate the next two sentences of the article
the common view is that the energy source is a supermassive black hole fuelled by an accretion disk ( e.g. , @xcite ) , and three recent findings support this scenario for ngc5548 . first , analysis of variations in the emission line profiles of ngc5548 suggests that the blr material moves along randomly inclined keplerian orbits defined by a central mass of order @xmath11 ( @xcite ) .
11,710
Suppose that you have an abstract for a scientific paper: we present spectral and timing analysis of observations of the accreting x - ray pulsar rxp . the source was serendipitously observed during a campaign focused on the gamma - ray binary and was later targeted for a dedicated observation . the spectrum has a typical shape for accreting x - ray pulsars , consisting of a simple power law with an exponential cutoff starting at @xmath0 kev with a folding energy of @xmath1 kev . there is also an indication of the presence of a 6.4 kev iron line in the spectrum at the @xmath2 significance level . measurements of the pulsation period reveal that the pulsar has undergone a strong and steady spin - up for the last 20 years . the pulsed fraction is estimated to be @xmath3 , and is constant with energy up to 40 kev . the power density spectrum shows a break towards higher frequencies relative to the current spin period . this , together with steady persistent luminosity , points to a long - term mass accretion rate high enough to bring the pulsar out of spin equilibrium . . And you have already written the first three sentences of the full article: 2rxp j130159.6 - 635806 , first discovered by the _ rosat _ observatory , was later rediscovered in hard x - rays by the _ integral_/ibis telescope and designated with the name igrj13020 - 6359 @xcite . the first comprehensive analysis of the temporal and spectral x - ray properties of this source was done by @xcite using data from the _ asca _ , _ bepposax _ , _ integral _ and _ xmm - newton _ observatories . in particular , _ xmm - newton _ data showed coherent pulsations with a period of around 700 s. joint spectral analysis of _ xmm - newton _ and _ integral _ data demonstrated that the spectral shape is very typical for accretion - powered x - ray pulsars ( namely , an absorbed power law with a high - energy cut - off ) . based on 2mass archival data , @xcite proposed that the binary companion to 2rxp j130159.6 - 635806 is a be star at a distance of 47 kpc . this suggestion was later confirmed by , who reported the presence of emission lines of hei @xmath42.0594 @xmath5 m and br(74 ) @xmath42.1663 @xmath5 m , which are typical for a be star .. Please generate the next two sentences of the article
the spectral type of the optical counterpart was determined to be b0.5ve . the orbital period of the binary remains unknown .
11,711
Suppose that you have an abstract for a scientific paper: we use a simple nonlinear scaler displacement model to calculate the distribution of effect created by a shear stress on a double stranded dna ( dsdna ) molecule and the value of shear force @xmath0 which is required to separate the two strands of a molecule at a given temperature . it is shown that for molecules of base pairs less than 21 , the entire single strand moves in the direction of applied force whereas for molecules having base pairs more than 21 , part of the strand moves in the opposite direction under the influence of force acting on the other strand . this result as well as the calculated values of @xmath0 as a function of length of dsdna molecules are in very good agreement with the experimental values of hatch et al . ( phys . rev . e @xmath1 , 011920 ( 2008 ) ) . . And you have already written the first three sentences of the full article: a double stranded dna ( dsdna ) molecule consists of two polynucleotide strands connected loosely by hydrogen bonds through the base pairs and base - stacking between nearest neighbour pairs of base pairs and wound around each other to make a helix . the constraints of this helical structure require that the two base sequences on opposite strands must be complementary , with adenine ( a ) always binding to thymine ( t ) and guanine ( g ) binding to cytosine ( c ) @xcite . the force that holds the complementary strands of dna together is an important regulator of life s processes because the binding of regulatory proteins to dna often involves the procedure of mechanical separation of its strands .. Please generate the next two sentences of the article
the intermolecular forces of dna have been studied extensively using a variety of techniques , which cover a broad range of forces from a few piconewton(pn ) up to several hundred piconewtons ( pns ) . these techniques use either atomic force microscopy ( afm ) or laser optical traps and magnetic tweezers @xcite .
11,712
Suppose that you have an abstract for a scientific paper: hd170582 is an interacting binary of the double periodic variable ( dpv ) type , showing ellipsoidal variability with a period of 16.87 days along with a long photometric cycle of 587 days . it was recently studied by mennickent et al . ( 2015 ) , who found a slightly evolved b - type star surrounded by a luminous accretion disc fed by a roche - lobe overflowing a - type giant . here we extend their analysis presenting new spectroscopic data and studying the balmer emission lines . we find orbitally modulated double - peak h@xmath0 and h@xmath1 emissions whose strength also vary in the long - term . in addition , doppler maps of the emission lines reveal sites of enhanced line emission in the 1st and 4th velocity quadrants , the first one consistent with the position of one of the bright zones detected by the light curve analysis . we find a difference between doppler maps at high and low stage of the long cycle ; evidence that the emission is optically thicker at high state in the stream - disc impact region , possibly reflecting a larger mass transfer rate . we compare the system parameters with a grid of synthetic binary evolutionary tracks and find the best fitting model . the system is found to be semi - detached , in a conservative case - b mass transfer stage , with age 7.68 @xmath2 10@xmath3 yr and mass transfer rate 1.6 @xmath2 10@xmath4 @xmath5 . for 5 well - studied dpvs , the disc luminosity scales with the primary mass and is much larger than the theoretical accretion luminosity . [ firstpage ] stars : early - type - stars : evolution - stars : mass - loss - stars : emission - line - stars : variables - others . And you have already written the first three sentences of the full article: the interacting binary hd170582 ( bd-14 5085 , asas i d 183048 - 1447.5 , @xmath6 = 18:30:47.5 , @xmath7 = -14:47:27.8 , @xmath8 = 9.66 mag , @xmath9 = 0.41 mag , spectral type a9v ) is a member of the class of double periodic variables , algol - related binaries showing a long photometric cycle lasting about 33 times the orbital period with a period ratio mostly between 25 and 45 ( mennickent et al . 2003 , mennickent et al . 2008 , poleski et al .. Please generate the next two sentences of the article
2010 , mennickent et al . 2012a , 2012b , mennickent 2013 , garrido et al .
11,713
Suppose that you have an abstract for a scientific paper: stephan s quintet ( sq ) , discovered more than 100 years ago , is the most famous and well studied compact galaxy group . it has been observed in almost all wavebands , with the most advanced instruments including spitzer , galex , hst , chandra , vla , and various large mm / submm telescopes / arrays such as the iram 30 m and bima . the rich multi - band data reveal one of the most fascinating pictures in the universe , depicting a very complex web of interactions between member galaxies and various constituents of the intragroup medium ( igm ) , which in turn trigger some spectacular activities such as a 40 kpc large scale shock and a strong igm starburst . in this talk i will give a review on these observations . . And you have already written the first three sentences of the full article: as an interacting galaxy system , stephan s quintet ( hereafter sq ) is not as famous as arp 220 or m82 . taken as a popularity index , the number of references listed by ned ( until dec . 18th , 2005 ) under stephan s quintet is only 104 , far less than that under m82 ( 1355 ) and arp 220 ( 563 ) . however , its fame as a very special case of multi - galaxy collision has been rising steadily recently , thanks mostly to the ever increasing number of observational windows opened one after another in this space astronomy era .. Please generate the next two sentences of the article
sq has been observed in almost all wavebands , and it keeps revealing surprises when being looked at by new instruments . different from binary galaxies and mergers which have been thoroughly discussed elsewhere in this conference , sq belongs to a different class of interacting systems of galaxies : the compact groups ( hickson @xcite ) that are characterized by aggregates of 4 8 galaxies in implied space densities as high as those in cluster cores .
11,714
Suppose that you have an abstract for a scientific paper: a critical component of exoplanetary studies is an exhaustive characterization of the host star , from which the planetary properties are frequently derived . of particular value are the radius , temperature , and luminosity , which are key stellar parameters for studies of transit and habitability science . here we present the results of new observations of wolf 1061 , known to host three super - earths . our observations from the center for high angular resolution astronomy ( chara ) interferometric array provide a direct stellar radius measurement of @xmath0 @xmath1 , from which we calculate the effective temperature and luminosity using spectral energy distribution models . we obtained seven years of precise , automated photometry that reveals the correct stellar rotation period of @xmath2 days , finds no evidence of photometric transits , and confirms the radial velocity signals are not due to stellar activity . finally , our stellar properties are used to calculate the extent of the habitable zone for the wolf 1061 system , for which the optimistic boundaries are 0.090.23 au . our simulations of the planetary orbital dynamics shows that the eccentricity of the habitable zone planet oscillates to values as high as @xmath30.15 as it exchanges angular momentum with the other planets in the system . . And you have already written the first three sentences of the full article: it is frequently stated that we understand exoplanets only as well as we understand the host star . such a statement is particularly true for low - mass dwarf stars , whose atmospheres often diverge from blackbody models . there has been a concerted effort in recent years to obtain observational constraints on the stellar models for low - mass stars @xcite , especially for those monitored by the _ mission @xcite .. Please generate the next two sentences of the article
a further challenge includes the confusion that can be caused by the stellar rotation period of low - mass stars since that can often coincide with the range of orbital periods of planets that may exist in the habitable zone ( hz ) of those stars @xcite . even so , there have been several successful detections of terrestrial planets in or near the hz of low - mass stars , such as kepler-186 f @xcite , k2 - 3 d @xcite , and the recent discovery of proxima centauri b @xcite .
11,715
Suppose that you have an abstract for a scientific paper: we discuss certain general features of the pentaquark picture for the @xmath0 , its @xmath1 partner , @xmath2 , and possible heavy quark analogues . models employing spin - dependent interactions based on either effective goldstone boson exchange or effective color magnetic exchange are also used to shed light on possible corrections to the jaffe - wilczek and karliner - lipkin scenarios . some model - dependent features of the pentaquark picture ( splitting patterns and relative decay couplings ) are also discussed in the context of these models . . And you have already written the first three sentences of the full article: as a basis for the discussion which follows , i adopt the following version of the current `` experimental situation '' with regard to reported pentaquark resonances : * a significant number of medium - energy experiments report positive evidence for the @xmath0 . its existence is thus plausible . if these experiments are correct , the @xmath0 mass and width are @xmath3 mev and @xmath4 mev .. Please generate the next two sentences of the article
see ref . @xcite for a critical assessment of both positive and negative experimental searches , as well as references to the experimental literature . * the na49 exotic @xmath5 , @xmath6 ( @xmath2 ) signal with @xmath7 mev and narrow width
11,716
Suppose that you have an abstract for a scientific paper: we propose a simple method of combined synchronous modulations to generate the analytically exact solutions for a parity - time symmetric two - level system . such exact solutions are expressible in terms of simple elementary functions and helpful for illuminating some generalizations of appealing concepts originating in the hermitian system . some intriguing physical phenomena , such as stabilization of a non - hermitian system by periodic driving , non - hermitian analogs of coherent destruction of tunneling ( cdt ) and complete population inversion ( cpi ) , are demonstrated analytically and confirmed numerically . in addition , by using these exact solutions we derive a pulse area theorem for such non - hermitian cpi in the parity - time symmetric two - level system . our results may provide an additional possibility for pulse manipulation and coherent control of the parity - time symmetric two - level system . . And you have already written the first three sentences of the full article: two - level system ( tls ) has been an important paradigmatic model in many branches of contemporary physics ranging from radiation - matter interactions to collision physics@xcite . it lies at the heart of modern applications such as quantum control and quantum information processing as well as constitutes the foundation for fundamental studies of quantum mechanics . coherent manipulation of two - level systems can be achieved and optimized by introducing external fields , e.g. , the most commonly used types of periodic@xcite and pulse - shaped@xcite driving fields . in the past few decades , the two - level quantum system ( or a qubit ) involving single and/or combined modulations has proven a fertile ground for a variety of intriguing phenomena , including self - induced transparency@xcite , complete population inversion ( cpi)@xcite , dynamical stabilization@xcite , and coherent destruction of tunneling ( cdt)@xcite , to name only a few. Please generate the next two sentences of the article
. in certain situations , analytically exact solutions of driven two - level problems exist , offering many advantages in developing analytical approaches to the design of qubit control operations . unfortunately , it is usually extremely difficult to acquire exactly soluble two - level evolutions so far .
11,717
Suppose that you have an abstract for a scientific paper: we prove necessary optimality conditions for problems of the calculus of variations on time scales with a lagrangian depending on the free end - point . agnieszka b. malinowska delfim f. m. torres . And you have already written the first three sentences of the full article: the calculus on time scales was introduced by bernd aulbach and stefan hilger in 1988 @xcite . the new theory unify and extends the traditional areas of continuous and discrete analysis and the various dialects of @xmath0-calculus @xcite into a single theory @xcite , and is finding numerous applications in such areas as engineering , biology , economics , finance , and physics @xcite . the present work is dedicated to the study of problems of calculus of variations on a generic time scale @xmath1 .. Please generate the next two sentences of the article
as particular cases , one gets the classical calculus of variations @xcite by choosing @xmath2 ; the discrete - time calculus of variations @xcite by choosing @xmath3 ; and the @xmath0-calculus of variations @xcite by choosing @xmath4 , @xmath5 . the calculus of variations on time scales was born with the works @xcite and @xcite and seems to have interesting applications in economics @xcite .
11,718
Suppose that you have an abstract for a scientific paper: we define a renormalized energy " as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line . the definition is inspired by ideas of @xcite . roughly speaking , it is obtained by subtracting two leading terms from the coulomb potential on a growing number of charges . the functional is expected to be a good measure of disorder of a configuration of points . we give certain formulas for its expectation for general stationary random point processes . for the random matrix @xmath0-sine processes on the real line ( @xmath1 ) , and ginibre point process and zeros of gaussian analytic functions process in the plane , we compute the expectation explicitly . moreover , we prove that for these processes the variance of the renormalized energy vanishes , which shows concentration near the expected value . we also prove that the @xmath2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels . * keywords : * renormalized energy , random point processes , random matrices , sine processes , ginibre ensemble , zeros of gaussian analytic functions . + * msc classification : * 60g55 , 60g10 , 15b52 . And you have already written the first three sentences of the full article: the aim of this paper is to introduce and compute a function , called the renormalized energy " , for some specific random point processes that arise in random matrix models , and in this way to associate to each of these processes a unique number , which is expected to measure its disorder " . our renormalized energy " , that we denote @xmath3 , is defined over configurations of points lying either on the real line or on the plane , as the limit as @xmath4 of @xmath5 } \log \left|2\sin \frac{\pi(a_i - a_j ) } n\right| + \log n\qquad \text{in dimension 1},\\ \wc(\{a_i\ } ) & = \frac{1}{2\pi n^2 } \sum_{i\neq j , a_i , a_j \in [ 0,n]^2 } e_n(a_i - a_j ) + \log \frac{n}{2\pi \eta(i)^2}\qquad \text{in dimension 2},\end{aligned}\ ] ] where @xmath6 is an explicit eisenstein series , and @xmath7 is the dedekind eta function .. Please generate the next two sentences of the article
this definition is inspired by that of the renormalized energy " , denoted @xmath8 , introduced by sandier and the second author in @xcite in the case of points in the plane and in @xcite in the case of points on the real line . the definitions for @xmath8 and @xmath3 coincide when the point configuration has some periodicity ( this is where our new definition originates ) , and in that case they amount to computing a sum of pairwise interactions @xmath9 where @xmath10 are the points and @xmath11 is a suitable logarithmic kernel ( the green s function on the underlying torus ) ; however they are not identical in general
11,719
Suppose that you have an abstract for a scientific paper: recently oconnell introduced an interacting diffusive particle system in order to study a directed polymer model in 1 + 1 dimensions . the infinitesimal generator of the process is a harmonic transform of the quantum toda - lattice hamiltonian by the whittaker function . as a physical interpretation of this construction , we show that the oconnell process without drift is realized as a system of mutually killing brownian motions conditioned that all particles survive forever . when the characteristic length of interaction killing other particles goes to zero , the process is reduced to the noncolliding brownian motion ( the dyson model ) . 0.5 cm * keywords * mutually killing brownian motions @xmath0 survival probability @xmath0 quantum toda lattice @xmath0 whittaker functions @xmath0 the dyson model . And you have already written the first three sentences of the full article: in this paper we introduce a system of finite number of one - dimensional brownian motions which kill each other , and evaluate long - term asymptotics of the probability that all particles survive . then we define a process of mutually killing brownian motions _ conditioned that all particles survive forever_. we show that this conditional process is equivalent to a special case of the process recently introduced by oconnell in order to analyze a directed polymer model in 1 + 1 dimensions @xcite . as an introduction of our study of many - particle systems , here we consider a family of one - particle systems with a parameter @xmath1 .. Please generate the next two sentences of the article
let @xmath2 be such a one - dimensional brownian motion that its survival probability @xmath3 decays following the equation @xmath4 with a decay rate function @xmath5 the function ( [ eqn : v1 ] ) implies that , if the brownian particle moves in the positive region far from the origin @xmath6 , decay of survival probability is negligible , while as it approaches the origin the decay becomes large . note that the brownian particle is able to penetrate a negative region @xmath7 , but there the decay rate of survival probability grows exponentially as a function of @xmath8 .
11,720
Suppose that you have an abstract for a scientific paper: the schrdinger equation in @xmath0-dimensions for the manning - rosen potential with the centrifugal term is solved approximately to obtain bound states eigensolutions ( eigenvalues and eigenfunctions ) . the nikiforov - uvarov ( @xmath1 ) method is used in the calculations . we present numerical calculations of energy eigenvalues to two- and four - dimensional systems for arbitrary quantum numbers @xmath2 and @xmath3 with three different values of the potential parameter @xmath4 it is shown that because of the interdimensional degeneracy of eigenvalues , we can also reproduce eigenvalues of a upper / lower dimensional sytem from the well - known eigenvalues of a lower / upper dimensional system by means of the transformation @xmath5 . this solution reduces to the hulthn potential case . keywords : bound states ; manning - rosen potential ; nikiforov - uvarov method . pacs number(s ) : 03.65.-w ; 02.30.gp ; 03.65.ge ; 34.20.cf [ theorem]acknowledgement [ theorem]algorithm [ theorem]axiom [ theorem]claim [ theorem]conclusion [ theorem]condition [ theorem]conjecture [ theorem]corollary [ theorem]criterion [ theorem]definition [ theorem]example [ theorem]exercise [ theorem]lemma [ theorem]notation [ theorem]problem [ theorem]proposition [ theorem]remark [ theorem]solution [ theorem]summary . And you have already written the first three sentences of the full article: one of the important tasks of quantum mechanics is to find exact solutions of the wave equations ( nonrelativistic and relativistic ) for certain potentials of physical interest since they contain all the necessary information regarding the quantum system under consideration . it is well known that the exact solutions of these wave equations are only possible in a few simple cases such as the coulomb , the harmonic oscillator , pseudoharmonic and mie - type potentials [ 1 - 8 ] . for an arbitrary @xmath3-state , most quantum systems could be only treated by approximation methods . for the rotating morse potential. Please generate the next two sentences of the article
some semiclassical and/or numerical solutions have been obtained by using pekeris approximation [ 9 - 13 ] . in recent years , many authors have studied the nonrelativistic and relativistic wave equations with certain potentials for the @xmath6- and @xmath3-cases . the exact and approximate solutions of these models have been obtained analytically [ 10 - 14 ] .
11,721
Suppose that you have an abstract for a scientific paper: corot , which was launched successfully on the 27@xmath0 of december 2006 , is the first space mission to have the search for planetary transits at the heart of its science programme . it is expected to be able to detect transits of planets with radii down to approximately two earth radii and periods up to approximately a month . thus , corot will explore the hereto uncharted area of parameter space which spans the transition between the gaseous giant planets discovered in large numbers from the ground , and terrestrial planets more akin to our own . this papers briefly sketches out the main technical characteristics of the mission before summarising estimates of its detection potential and presenting the data analysis and follow - up strategy . . And you have already written the first three sentences of the full article: corot ( convection , rotation and transits ) is a modest scale satellite funded primarily by the french space agency cnes , with additional contributions from the european space agency ( esa ) , belgium , austria , germany , spain and brazil . the project was first proposed to cnes as an asteroseismology mission in 1993 . an exoplanet programme was added to the mission soon after the discovery of the first exoplanet around a sun - like star @xcite , and it soon became clear that a long baseline , high - precision stellar photometry mission had major potential in exo - planetology as well as in stellar structure and evolution ( see @xcite ) .. Please generate the next two sentences of the article
the new mission concept was formally selected by cnes in 2001 , and corot was launched on the @xmath1 of december 2006 after a successful development and integration phase . the main technical characteristics of the mission , and in particular of the instrument , are described in detail in the corot instrument handbook and the reference esa publication on the pre - launc status of the corot mission ( see section [ aig_concl ] ) and are thus only briefly sketched out here .
11,722
Suppose that you have an abstract for a scientific paper: the ultrahigh energy neutrino cross section is a crucial ingredient in the calculation of the event rate in high energy neutrino telescopes . currently there are several approaches which predict different behaviors for its magnitude for ultrahigh energies . in this contribution is presented a summary of current predictions based on the non - linear qcd evolution equations , the so - called perturbative saturation physics . in particular , predictions are shown based on the parton saturation approaches and the consequences of geometric scaling property at high energies are discussed . the scaling property allows an analytical computation of the neutrino scattering on nucleon / nucleus at high energies , providing a theoretical parameterization . . And you have already written the first three sentences of the full article: the investigation of ultrahigh energy ( uhe ) cosmic neutrinos provides an opportunity for study particle physics beyond the reach of the lhc @xcite . as an example , nowadays the pierre auger observatory is sensitive to neutrinos of energy @xmath0 gev @xcite . a crucial ingredient in the calculation of attenuation of neutrinos traversing the earth and the event rate in high energy neutrino telescopes is the high energy neutrino - nucleon cross section , which provides a probe of quantum chromodynamics ( qcd ) in the kinematic region of very small values of bjorken-@xmath1 . the typical @xmath1 value probed is @xmath2 , which implies that for @xmath3 gev one have @xmath4 at @xmath5 gev@xmath6 . this kinematical range was not explored by the hera measurements of the structure functions @xcite .. Please generate the next two sentences of the article
the description of qcd dynamics in such very high energy limit is a subject of intense debate @xcite . theoretically , at high energies ( small bjorken-@xmath1 ) one expects the transition of the regime described by the linear dynamics , where only the parton emissions are considered , to a new regime where the physical process of recombination of partons becomes important in the parton cascade and the evolution is given by a non - linear evolution equation .
11,723
Suppose that you have an abstract for a scientific paper: our goal is to find accurate and efficient algorithms , when they exist , for evaluating rational expressions containing floating point numbers , and for computing matrix factorizations ( like lu and the svd ) of matrices with rational expressions as entries . more precisely , _ accuracy _ means the relative error in the output must be less than one ( no matter how tiny the output is ) , and _ efficiency _ means that the algorithm runs in polynomial time . our goal is challenging because our accuracy demand is much stricter than usual . the classes of floating point expressions or matrices that we can accurately and efficiently evaluate or factor depend strongly on our model of arithmetic : 1 . in the `` traditional model '' ( tm ) , the floating point result of an operation like @xmath0 is @xmath1 , where @xmath2 must be tiny . 2 . in the `` long exponent model '' ( lem ) each floating point number @xmath3 is represented by the pair of integers @xmath4 , and there is no bound on the sizes of the exponents @xmath5 in the input data . the lem supports strictly larger classes of expressions or matrices than the tm . 3 . in the `` short exponent model '' ( sem ) each floating point number @xmath3 is also represented by @xmath4 , but the input exponent sizes are bounded in terms of the sizes of the input fractions @xmath6 . we believe the sem supports strictly more expressions or matrices than the lem . these classes will be described by factorizability properties of the rational expressions , or of the minors of the rational matrices . for each such class , we identify new algorithms that attain our goals of accuracy and efficiency . these algorithms are often exponentially faster than prior algorithms , which would simply use a conventional algorithm with sufficiently high precision . for example , we can factorize cauchy matrices , vandermonde matrices , totally positive generalized vandermonde matrices , and suitably discretized differential and integral operators.... And you have already written the first three sentences of the full article: -5 mm we will survey recent progress and describe open problems in the area of accurate floating point computation , in particular for matrix computations . a very short bibliography would include @xcite . we consider the evaluation of multivariate rational functions @xmath8 of floating point numbers , and matrix computations on rational matrices @xmath9 , where each entry @xmath10 is such a rational function .. Please generate the next two sentences of the article
matrix computations will include computing determinants ( and other minors ) , linear equation solving , performing gaussian elimination ( ge ) with various kinds of pivoting , and computing the singular value decomposition ( svd ) , among others . our goals are _ accuracy _ ( computing each solution component with tiny relative error ) and _ efficiency _ ( the algorithm should run in time bounded by a polynomial function of the input size ) .
11,724
Suppose that you have an abstract for a scientific paper: this paper constitutes the investigations regarding possible formation of compact stars in @xmath0 theory of gravity , where @xmath1 is ricci scalar and @xmath2 is trace of energy momentum tensor . in this connection , we use analytic solution of krori and barua metric ( krori and barua 1975 ) to spherically symmetric anisotropic star in context of @xmath0 gravity . the masses and radii of compact stars models namely model @xmath3 , model@xmath4 and model @xmath5 are employed to incorporate with unknown constants in krori and barua metric . the physical features such as regularity at center , anisotropy measure , casuality and well - behaved condition of above mentioned class of compact starts are analyzed . moreover , we have also discussed energy conditions , stability and surface redshift in @xmath0 gravity . * keywords : * compact stars , @xmath6 gravity . + . And you have already written the first three sentences of the full article: the late time evolution of stars influenced by strong gravitational pull has been a largely anticipated field in astrophysics and gravitational theories . it facilitates in investigations of diverse characteristics of gravitating sources via various physical phenomenons . baade and zwicky ( 1934 ) proposed the inception of massive compact stellar objects , establishing the argument that supernova may result in a small and super dense star .. Please generate the next two sentences of the article
this eventually came in to reality in 1967 , when bell and hewish ( longair 1994 , ghosh 2007 ) discovered pulsars that are highly magnetized and rotating neutron stars . so , in reality , we come across a fundamental revolutionary shift from normal stars to compact stars , with a wide range in the form of stars to neutron stars , quarks , dark stars , gravastars and finally black holes
11,725
Suppose that you have an abstract for a scientific paper: agile development processes and component - based software architectures are two software engineering approaches that contribute to enable the rapid building and evolution of applications . nevertheless , few approaches have proposed a framework to combine agile and component - based development , allowing an application to be tested throughout the entire development cycle . to address this problematic , we have built calico , a model - based framework that allows applications to be safely developed in an iterative and incremental manner . the calico approach relies on the synchronization of a model view , which specifies the application properties , and a runtime view , which contains the application in its execution context . tests on the application specifications that require values only known at runtime , are automatically integrated by calico into the running application , and the captured needed values are reified at execution time to resume the tests and inform the architect of potential problems . any modification at the model level that does not introduce new errors is automatically propagated to the running system , allowing the safe evolution of the application . in this paper , we illustrate the calico development process with a concrete example and provide information on the current implementation of our framework . . And you have already written the first three sentences of the full article: in many application domains , software systems need to perpetually and rapidly evolve to cope with new user and technology requirements . being able to modify existing systems or redesign new systems to rapidly take in account new functionalities or preferences has led to the proposition of several software engineering approaches such as the agile software development methodology @xcite . one of the key principles of agile software development is to build software through an incremental and iterative process .. Please generate the next two sentences of the article
each iteration adds a new feature and produces a fully working system by going through the whole software lifecycle , _ i.e. , _ the analyze , develop and test phases .
11,726
Suppose that you have an abstract for a scientific paper: a long - lived decaying dark matter as a resolution to fermi , pamela and atic anomalies is investigated in the framework of split supersymmetry ( susy ) without r - parity , where the neutralino is regarded as the dark matter and the extreme fine - tuned couplings for the long - lived neutralino are naturally evaded in the usual approach . the energy spectra of electron and positron are from not only the direct neutralino decays denoted by @xmath0 , but also the decaying chains such as @xmath1 . we find that with a proper lifetime of the neutralino , slepton - mediated effects could explain the atic and pamela data well , but a inconsistence occurs to it the fermi and pamela data without considering the atic one . however , by a suitable combination of @xmath0 and @xmath2 , the sneutrino - mediated effects could simultaneously account for the fermi and pamela data . it is mysterious that the direct measured matter in the standard model ( sm ) only occupies @xmath3 of our universe while dark matter and dark energy have the occupancies of @xmath4 and @xmath5 , respectively . it becomes a very important issue to understand what the dark stuff is . through high energy colliders such as the large hadron collider ( lhc ) at cern , we may directly observe dark matter . on the other hand , by the study of the high energy cosmic - ray , we may have the chance to probe dark matter indirectly . recently , pamela @xcite and atic @xcite collaborations have published some astonished events in cosmic - ray measurements , in which the former finds the excess in the positron flux ratio with energies in the @xmath6 gev range , while the later observes anomaly in the electron+positron flux in the @xmath7 gev range . intriguingly , these data are consistent with the measurements of the high energy electron and positron fluxes in the cosmic ray spectra by ppb - bets @xcite , heat @xcite , ams @xcite and hess @xcite , respectively . inspired by the pamela / atic anomalies , various interesting.... And you have already written the first three sentences of the full article: we would like to think prof . shih - chang lee , prof . tsz - king wong and prof . anatoly v. borisov for useful discussions .. Please generate the next two sentences of the article
this work is supported in part by the national science council of r.o.c . under grant nos : nsc- 97 - 2112-m-006 - 001-my3 and nsc-95 - 2112-m-007 - 059-my3 . d. hooper , p. blasi and p. d. serpico , jcap * 0901 * , 025 ( 2009 ) [ arxiv:0810.1527 ] ; h. yuksel , m. d. kistler and t. stanev , arxiv:0810.2784 [ astro - ph ] ; s. profumo , arxiv:0812.4457 [ astro - ph ] .
11,727
Suppose that you have an abstract for a scientific paper: the constrained - search principle introduced by levy and lieb , is proposed as a practical , though conceptually rigorous , link between density functional theory ( dft ) and quantum monte carlo ( qmc ) . the resulting numerical protocol realizes in practice the implicit key statement of dft : + `` _ given the three dimensional electron density of the ground state of a system of @xmath0 electrons with external potential @xmath1 it is possible to find the corresponding @xmath2-dimensional wavefunction of ground state . _ '' + from a numerical point of view , the proposed protocol can be employed to speed up the qmc procedure by employing dft densities as a * pre - selection * criterion for the sampling of wavefunctions . . And you have already written the first three sentences of the full article: the hohenberg - kohn ( hk ) formulation of dft and the subsequent development of the kohn - sham ( ks ) scheme made dft the most popular tool for electronic structure calculations ( see , e.g. @xcite ) . the profound meaning of the hk theorems for many - electron systems was further strengthened by the levy - lieb formulation of the problem @xcite . in the next section the relevant aspects of the levy - lieb formulation will be reported .. Please generate the next two sentences of the article
such a formulation removes limitations of the hk theorem , such as the degeneracy of the ground state , and sets the exact correspondence between the 3-dimensional electron density and the 3n - dimensional wavefunction of the ground state of a system of @xmath0 electrons . in practical calculations , this formulation is never explicitly used and it is usually considered only a conceptual proof of validity of dft . in this paper
11,728
Suppose that you have an abstract for a scientific paper: we relate structurally dynamic cellular networks , a class of models we developed in fundamental space - time physics , to sdca , introduced some time ago by ilachinski and halpern . we emphasize the crucial property of a non - linear interaction of network geometry with the matter degrees of freedom in order to emulate the supposedly highly erratic and strongly fluctuating space - time structure on the planck scale . we then embark on a detailed numerical analysis of various large scale characteristics of several classes of models in order to understand what will happen if some sort of macroscopic or continuum limit is performed . of particular relevance in this context is a notion of network dimension and its behavior in this limit . furthermore , the possibility of phase transitions is discussed . And you have already written the first three sentences of the full article: in the beautiful book @xcite the title of chapter 12 reads : is nature , underneath it all , a ca ? . such ideas have in fact been around for quite some time ( cf . e.g.@xcite,@xcite,@xcite or @xcite , to mention a few references ) .. Please generate the next two sentences of the article
a little bit later t hooft analysed the possibility of deterministic ca underlying models of quantum field theory or quantum gravity ( @xcite and @xcite are two examples from a long list of papers ) . for more detailed historical information see @xcite or @xcite . a nice collection of references can also be found in @xcite .
11,729
Suppose that you have an abstract for a scientific paper: we investigate various galaxy occupation statistics of dark matter halos using a large galaxy group catalogue constructed from the sloan digital sky survey data release 4 ( sdss dr4 ) with an adaptive halo - based group finder . the conditional luminosity function ( clf ) , which describes the luminosity distribution of galaxies in halos of a given mass , is measured separately for all , red and blue galaxies , as well as in terms of central and satellite galaxies . the clfs for central and satellite galaxies can be well modelled with a log - normal distribution and a modified schechter form , respectively . about 85% of the central galaxies and about 80% of the satellite galaxies in halos with masses @xmath0 are red galaxies . these numbers decrease to 50% and 40% , respectively , in halos with @xmath1 . for halos of a given mass , the distribution of the luminosities of central galaxies , @xmath2 , has a dispersion of about @xmath3 dex . the mean luminosity ( stellar mass ) of the central galaxies scales with halo mass as @xmath4 ( @xmath5 ) for halos with masses @xmath6 , and both relations are significantly steeper for less massive halos . we also measure the luminosity ( stellar mass ) gap between the first and second brightest ( most massive ) member galaxies , @xmath7 ( @xmath8 ) . these gap statistics , especially in halos with @xmath9 , indicate that the luminosities of central galaxies are clearly distinct from those of their satellites . the fraction of fossil groups , defined as those groups with @xmath10 , ranges from @xmath11 for groups with @xmath12 to 18 - 60% for groups with @xmath13 . the number distribution of satellite galaxies in groups of a given mass follows a poisson distribution , in agreement with the occupation statistics of dark matter sub - halos . this provides strong support for the standard lore that satellite galaxies reside in sub - halos . finally , we measure the fraction of satellites , which changes from @xmath14 for galaxies with @xmath15 to.... And you have already written the first three sentences of the full article: in recent years , the halo occupation distribution and conditional luminosity function have become powerful statistical measures to probe the link between galaxies and their hosting dark matter halos . although these statistical measures themselves do not give physical explanations of how galaxies form and evolve , they provide important constraints on various physical processes that govern the formation and evolution of galaxies , such as gravitational instability , gas cooling , star formation , merging , tidal stripping and heating , and a variety of feedback processes . in particular , they constrain how their efficiencies scale with halo mass .. Please generate the next two sentences of the article
the halo occupation distribution ( hereafter hod ) , @xmath18 , which gives the probability of finding @xmath19 galaxies ( with some specified properties ) in a halo of mass @xmath20 , has been extensively used to study the galaxy distribution in dark matter halos and galaxy clustering on large scales ( e.g. jing , mo & brner 1998 ; peacock & smith 2000 ; seljak 2000 ; scoccimarro et al . 2001 ; jing , brner & suto 2002 ; berlind & weinberg 2002 ; bullock , wechsler & somerville 2002 ; scranton 2002 ; kang et al .
11,730
Suppose that you have an abstract for a scientific paper: we present sub - millimeter and mid - infrared images of the circumstellar disk around the nearby f2v star @xmath0 corvi . the disk is resolved at 850 @xmath1 m with a size of @xmath2 au . at 450 @xmath1 m the emission is found to be extended at all position angles , with significant elongation along a position angle of @xmath3 ; at the highest resolution ( @xmath4 ) this emission is resolved into two peaks which are to within the uncertainties offset symmetrically from the star at 100 au projected separation . modeling the appearance of emission from a narrow ring in the sub - mm images shows the observed structure can not be caused by an edge - on or face - on axisymmetric ring ; the observations are consistent with a ring of radius @xmath5 au seen at @xmath6 inclination . more face - on orientations are possible if the dust distribution includes two clumps similar to vega ; we show how such a clumpy structure could arise from the migration over 25 myr of a neptune mass planet from 80 - 105 au . the inner 100 au of the system appears relatively empty of sub - mm emitting dust , indicating that this region may have been cleared by the formation of planets , but the disk emission spectrum shows that iras detected an additional hot component with a characteristic temperature of @xmath7 k ( implying a distance of 1 - 2 au ) . at 11.9 @xmath1 m we found the emission to be unresolved with no background sources which could be contaminating the fluxes measured by iras . the age of this star is estimated to be @xmath8 gyr . it is very unusual for such an old main sequence star to exhibit significant mid - ir emission . the proximity of this source makes it a perfect candidate for further study from optical to mm wavelengths to determine the distribution of its dust . . And you have already written the first three sentences of the full article: the infrared satellite iras found that some 15% of nearby main sequence stars exhibit infrared emission in excess of that expected from the stellar photosphere alone ( e.g. , aumann et al . 1984 ; backman & paresce 1993 ) . this excess emission is thought to come from dust which is heated by the star and its temperature ( @xmath9 k ) implies that for the majority of stars this dust is in regions analogous to the kuiper belt in the solar system ( @xmath10 au ; wyatt et al .. Please generate the next two sentences of the article
this analogy is reinforced by detailed analysis of these systems which show that the dust must be continually replenished from a reservoir of larger ( km - sized ) planetesimals ( wyatt & dent 2002 ) . very few stars have been seen to exhibit warm emission from dust at @xmath11 au from the star ; those that do tend to be young with age @xmath12 myr ( laureijs et al .
11,731
Suppose that you have an abstract for a scientific paper: we investigate the degree distribution @xmath0 and the clustering coefficient @xmath1 of the line graphs constructed on the erds - rnyi networks , the exponential and the scale - free growing networks . we show that the character of the degree distribution in these graphs remains poissonian , exponential and power law , respectively , i.e. the same as in the original networks . when the mean degree @xmath2 increases , the obtained clustering coefficient @xmath1 tends to 0.50 for the transformed erds - rnyi networks , to 0.53 for the transformed exponential networks and to 0.61 for the transformed scale - free networks . these results are close to theoretical values , obtained with the model assumption that the degree - degree correlations in the initial networks are negligible . * clustering in random line graphs * + anna maka - kraso , advera mwijage@xmath3 and krzysztof kuakowski + _ _ faculty of physics and applied computer science , agh university of science and technology , al . mickiewicza 30 , pl-30059 krakw , poland + @xmath3on leave from mbeya institute of science and technology , p.o.box 131 mbeya , tanzania [email protected] _ pacs numbers : _ 64.60.aq ; 02.10.ox ; 05.10.ln _ keywords : _ line graphs ; random networks ; clustering coefficient ; degree distribution . And you have already written the first three sentences of the full article: the science of networks is indeed a new kind of science @xcite for its interdisciplinary character and its explosive development ; for some recent monographs we refer to @xcite . the list of applications of networks contains examples from physics , informatics , biology and social sciences . basic characteristics of networks are the degree distribution @xmath0 and the clustering coefficient @xmath1 .. Please generate the next two sentences of the article
the degree of a given node is the number of other nodes connected to that node ; the clustering coefficient measures the probability that two neighbours of a given node are connected to each other . as it was indicated only recently by mark newman @xcite , many real networks show a high clustering coefficient , usually some tens of percent . on the contrary to this
11,732
Suppose that you have an abstract for a scientific paper: we explore the possibility to discriminate between certain strongly - coupled technicolor ( tc ) models and warped extra - dimensional models where the standard model fields are propagating in the extra dimension . we consider a generic qcd - like tc model with running coupling as well as two tc models with walking dynamics . we argue that due to the different production mechanisms for the lowest - lying composite tensor state in these tc theories compared to the first kaluza - klein graviton mode of warped extra - dimensional case , it is possible to distinguish between these models based on the angular analysis of the reconstructed longitudinal z bosons in the @xmath0 four charged leptons channel . . And you have already written the first three sentences of the full article: a clear signal of the new physics ( np ) that lies beyond the standard model ( sm ) would be the existence of heavy new particles or `` resonances '' . in tev mass range , the main arena of search for these objects will soon be the cern lhc experiments as these particles can be produced or exchanged in high energy collisions . the direct experimental evidence for a resonance is the peak in the energy dependence of the measured cross sections seen above the sm background .. Please generate the next two sentences of the article
once the resonance signal is observed , further analysis is needed to distinguish between scenarios that potentially may cause this effect . first step in this analysis would be the determination of the spin of the resonance which provides an important selection among different classes of non - standard interactions . in the second step ,
11,733
Suppose that you have an abstract for a scientific paper: a conjecture on the monotonicity of @xmath0-core partitions in an article of stanton [ open positivity conjectures for integer partitions , _ trends math_. , 2:19 - 25 , 1999 ] has been the catalyst for much recent research on @xmath0-core partitions . we conjecture stanton - like monotonicity results comparing self - conjugate @xmath1- and @xmath0-core partitions of @xmath2 . we obtain partial results toward these conjectures for values of @xmath0 that are large with respect to @xmath2 , and an application to the block theory of the symmetric and alternating groups . to this end we prove formulas for the number of self - conjugate @xmath0-core partitions of @xmath2 as a function of the number of self - conjugate partitions of smaller @xmath2 . additionally , we discuss the positivity of self - conjugate @xmath3-core partitions and introduce areas for future research in representation theory , asymptotic analysis , unimodality , and numerical identities and inequalities . . And you have already written the first three sentences of the full article: in this paper we address the structure of self - conjugate core partitions . a _ @xmath0-core partition _ ( more briefly _ @xmath0-core _ ) is a partition where no hook of size @xmath0 appears . we let @xmath4 be the number of @xmath0-core partitions of @xmath2 and let @xmath5 be the number of self - conjugate @xmath0-core partitions of @xmath2 .. Please generate the next two sentences of the article
the study of self - conjugate partitions arises from the representation theory of the symmetric group @xmath6 and the alternating group @xmath7 . at the turn of the century , young discovered that the irreducible characters of @xmath6 are labeled by partitions of @xmath2 , and in particular , the self - conjugate partitions label those that split into two conjugate irreducible representations of @xmath7 upon restriction . about the same time , frobenius discovered that the hook lengths on the diagonal of a self - conjugate partition determine the irrationalities that occur in the character table of @xmath7 .
11,734
Suppose that you have an abstract for a scientific paper: the degree of freedom of spin in quantum systems serves as an unparalleled laboratory where intriguing quantum physical properties can be observed , and the ability to control spin is a powerful tool in physics research . we propose a novel method for controlling spin in a system of rare isotopes which takes advantage of the mechanism of the projectile fragmentation reaction combined with the momentum - dispersion matching technique . the present method was verified in an experiment at the riken ri beam factory , in which a degree of alignment of 8% was achieved for the spin of a rare isotope @xmath0al . the figure of merit for the present method was found to be greater than that of the conventional method by a factor of more than 50 . the immense efforts expended to fully comprehend and control _ quantum systems _ since their discovery are now entering an intriguing stage , namely the controlling of the degree of freedom of spin @xcite . the case of nuclear systems is not an exception . in recent years , nuclear physicists have been focusing their efforts on expanding the domain of known species in the nuclear chart , which is a two - dimensional map spanned by the axes of @xmath1 ( number of neutrons ) in the east direction and @xmath2 ( number of protons ) in the north direction . the key technique used to explore the south eastern ( neutron - rich , or negative in isospin @xmath3 ) and north western ( proton - rich , or positive in @xmath3 ) fronts of the map has been the projectile fragmentation ( pf ) reaction , in which an accelerated stable nucleus is transmuted into an unstable one through abrasion upon collision with a target . several new facilities for providing rare - isotope ( ri ) beams by this technique , such as ribf @xcite in japan , frib @xcite in the united states , and fair @xcite in europe , have been completed or designed for exploration of the frontiers of the nuclear chart . beyond such efforts toward exploring the @xmath1 and @xmath2 axes , nuclear spin may.... And you have already written the first three sentences of the full article: time - differential perturbed angular distribution ( tdpad ) methods : the @xmath0al beam was stopped in a cu crystal stopper mounted on the experimental apparatus for tdpad measurements , which was placed in a focal plane after the achromatic focal plane f7 . the tdpad apparatus consists of a cu crystal stopper , a dipole magnet , ge detectors , a plastic scintillator and a collimator , as shown in the inset of fig . [ fig_exp ] .. Please generate the next two sentences of the article
the cu stopper is 3.0 mm in thickness and 30@xmath7030 mm@xmath71 in area , and the dipole magnet provides a static magnetic field @xmath72 t. @xmath21al are implanted into the cu crystal , and de - excitation @xmath4 rays are detected with four ge detectors located at a distance of 7.0 cm from the stopper and at angles of @xmath73 and @xmath74 with respect to the beam axis . the relative detection efficiency was 35% for one and 15@xmath7520% for the other three .
11,735
Suppose that you have an abstract for a scientific paper: let @xmath0 be a compact surface of negative euler characteristic and let @xmath1 be the deformation space of convex real projective structures on @xmath0 . for every choice of pants decomposition for @xmath0 , there is a well known parameterization of @xmath1 known as the goldman parameterization . in this paper , we study how some geometric properties of the real projective structure on @xmath0 degenerate as we deform it so that the internal parameters of the goldman parameterization leave every compact set while the boundary invariants remain bounded away from zero and infinity . . And you have already written the first three sentences of the full article: let @xmath0 be a closed orientable surface of genus @xmath2 , and consider the deformation space of convex @xmath3 structures @xmath1 on @xmath0 . topologically , we know that @xmath1 is a cell of dimension @xmath4 by goldman @xcite . moreover , in @xcite , goldman proved that , just as in the case of hyperbolic structures on @xmath0 , convex @xmath3 structures on @xmath0 are given by specifying convex @xmath3 structures on the pairs of pants in a pants decomposition of @xmath0 and then assembling them together . this allowed him to parameterize @xmath1 in a way similar to the fenchel - nielsen parameterization of teichmller space @xmath5 . to define the goldman parameterization of @xmath1 , we first need to choose a pants decomposition @xmath6 for @xmath0 .. Please generate the next two sentences of the article
there are then three kinds of parameters in the goldman parameterization ; the twist - bulge parameters , the boundary invariants and the internal parameters . the twist - bulge parameters describe how to glue pairs of pants together , and are the analog of the fenchel - nielsen twist coordinates .
11,736
Suppose that you have an abstract for a scientific paper: we present an analytical theory of thermonuclear x - ray burst atmosphere structure . newtonian gravity and diffusion approximation are assumed . hydrodynamic and thermodynamic profiles are obtained as a numerical solution of the cauchy problem for the first - order ordinary differential equation . we further elaborate a combined approach to the radiative transfer problem which yields the spectrum of the expansion stage of x - ray bursts in analytical form where comptonization and free - free absorption - emission processes are accounted for and @xmath0 opacity dependence is assumed . relaxation method on an energy opacity grid is used to simulate radiative diffusion process in order to match analytical form of spectrum , which contains free parameter , to energy axis . numerical and analytical results show high similarity . all spectra consist of a power - law soft component and diluted black - body hard tail . we derive simple approximation formulae usable for mass - radius determination by observational spectra fitting . . And you have already written the first three sentences of the full article: first discovered by @xcite , strong x - ray bursts are believed to occur due to thermonuclear explosions in the bottom helium - reach layers of the atmosphere accumulated by a neutron star during the accretion process in close binary system . since then dozens of burster - type x - ray sources were found . one of the distinctive feature of type i x - ray bursts is the sudden and abrupt ( @xmath1 s ) luminosity increase ( expansion stage ) followed by exponential decay ( contraction stage ) . energy released in x - ray radiation during the first seconds greatly exceeds the eddington limit for layers above the helium burning zone which are no longer dynamically stable .. Please generate the next two sentences of the article
super - critically irradiated shells of atmosphere start to move outward , producing an expanding wind - like envelope . the average lifetime of an x - ray bursts is sufficient for steady - state regime of mass loss to be established when the local luminosity throughout the most of the atmosphere is equal or slightly greater than the eddington limit . during the last two decades the problem of determining properties of radiatively driven winds during x - ray bursts
11,737
Suppose that you have an abstract for a scientific paper: we consider the prompt hadroproduction of @xmath0 and the @xmath1 states caused by the fusion of a symmetric colour - octet state , @xmath2 , and an additional gluon . the cross sections are calculated in leading - order perturbative qcd . we find a considerable enhancement in comparison with previous perturbative qcd predictions . indeed , the resulting cross sections are found to be consistent with the values measured at the tevatron and rhic , without the need to invoke non - perturbative ` colour - octet ' type of contributions . ippp/04/58 + dcpt/04/116 + 17th september 2004 + * inelastic @xmath3 and @xmath4 hadroproduction * v.a . khoze@xmath5 , a.d . martin@xmath6 , m.g . ryskin@xmath5 and w.j . stirling@xmath7 + @xmath6 department of physics and institute for particle physics phenomenology , + university of durham , dh1 3le , uk + @xmath8 petersburg nuclear physics institute , gatchina , st . petersburg , 188300 , russia + @xmath9 department of mathematical sciences , university of durham , dh1 3le , uk + . And you have already written the first three sentences of the full article: it is not easy to describe the hadroproduction of @xmath3 mesons within a perturbative qcd framework . the problem is that , due to the @xmath10 quantum numbers , it is not possible to directly form the colourless @xmath3 meson by gluon - gluon fusion . the simplest possibility is to produce a colour - octet quark - antiquark pair ( @xmath11 ) and then to emit an additional gluon , which carries away the colour , as shown in fig . [ fig : pqcd]a .. Please generate the next two sentences of the article
this is often referred to as the colour - singlet mechanism ( csm ) . however the corresponding cross section is suppressed by the small qcd coupling @xmath12 , and by the additional phase space factor associated with the extra gluon emission . as a result
11,738
Suppose that you have an abstract for a scientific paper: the subject of this paper is stochastic acceleration by plasma turbulence , a process akin to the original model proposed by fermi . we review the relative merits of different acceleration models , in particular the so called first order fermi acceleration by shocks and second order fermi by stochastic processes , and point out that plasma waves or turbulence play an important role in all mechanisms of acceleration . thus , stochastic acceleration by turbulence is active in most situations . we also show that it is the most efficient mechanism of acceleration of relatively cool non relativistic thermal background plasma particles . in addition , it can preferentially accelerate electrons relative to protons as is needed in many astrophysical radiating sources , where usually there are no indications of presence of shocks . we also point out that a hybrid acceleration mechanism consisting of initial acceleration by turbulence of background particles followed by a second stage acceleration by a shock has many attractive features . it is demonstrated that the above scenarios can account for many signatures of the accelerated electrons , protons and other ions , in particular @xmath0he and @xmath1he , seen directly as solar energetic particles and through the radiation they produce in solar flares . . And you have already written the first three sentences of the full article: the presence of energetic particles in the universe has been know for over a century as cosmic rays ( crs ) and for a comparable time as the agents producing non - thermal electromagnetic radiation from long wave radio to gamma - rays . in spite of accumulation of considerable data on spectral and other characteristics of these particles , the exact mechanisms of their production remain controversial . although the possible scenarios of acceleration have been narrowed down , there are many uncertainties about the details of individual mechanisms . nowadays the agents commonly used for acceleration of particles in astrophysical plasmas can be classified in three categories , namely static electric fields ( parallel to magnetic fields ) , shocks and turbulence . as we will try to show there are several lines of argument indicating that turbulence plays an important role in all these scenarios .. Please generate the next two sentences of the article
in addition , because of the large values of ordinary and magnetic reynolds numbers , most flows in astrophysical plasmas are expected to give rise to turbulence . the generation and evolution ( cascade and damping ) of plasma turbulence in astrophysical sources is an important aspect of particle acceleration that will be dealt with in other papers of this proceedings . similarly , electric field and shock accelerations will be discussed by other authors in this proceedings as well . in this paper
11,739
Suppose that you have an abstract for a scientific paper: it is shown that hyperon beta decay data can be well accommodated within the framework of cabibbo s su(3 ) symmetric description if one allows for a small su(3 ) symmetry breaking proportional to the mass difference between strange and nonstrange quarks . the @xmath0 ratio does not depend sensitively on the exact form of the symmetry - breaking , and the best fits are close to the value previously used in the analysis of deep inelastic scattering of electrons or muons on polarized nucleons . the total quark helicity and strange quark polarization in the nucleon are discussed . And you have already written the first three sentences of the full article: the spin - dependent ( gamow - teller ) matrix - elements , for transitions between members of the baryon octet @xcite acquired renewed interest after measurements were made of the deep inelastic scattering ( dis ) of polarized leptons by polarized protons and neutrons @xcite , which provided valuable information about the spin structure of the nucleon . one of the most important quantities measured in polarized dis is the longitudinal spin structure function @xmath2 . in the quark parton model , the spin structure function @xmath2 is directly related to the quark spin densities : @xmath3 , @xmath4 , @xmath5 etc .. Please generate the next two sentences of the article
where @xmath6 . to deduce the various quark spin densities from the @xmath2 data , one usually assumes that baryons may be assigned to a su(3 ) flavor octet and uses the relation between the quark spin densities and weak matrix elements @xmath7 and @xmath8 from hyperon semileptonic decays . by using the earlier @xmath0 value ,
11,740
Suppose that you have an abstract for a scientific paper: in a josephson phase qubit the coherent manipulations of the computational states are achieved by modulating an applied ac current , typically in the microwave range . in this work we show that it is possible to find optimal modulations of the bias current to achieve high - fidelity gates . we apply quantum optimal control theory to determine the form of the pulses and study in details the case of a not - gate . to test the efficiency of the optimized pulses in an experimental setup , we also address the effect of possible imperfections in the pulses shapes , the role of off - resonance elements in the hamiltonian , and the effect of capacitive interaction with a second qubit . . And you have already written the first three sentences of the full article: over the past decades , together with the development of the theory of quantum information @xcite there has been an increasing effort to find those physical systems where quantum information processing could be implemented . among the many different proposals , devices based on superconducting josephson junctions are promising candidates in the solid state realm ( see the reviews in ref . ) . josephson qubits can be categorized into three main classes : charge , phase and flux qubits , depending on which dynamical variable is most well defined and consequently which basis states are used as computational states @xmath0 and @xmath1 .. Please generate the next two sentences of the article
phase qubits @xcite , subject of the present investigation , in their simplest configuration can be realized with a single current biased josephson junction . for bias lower than the critical current , the two lowest eigenstates of the system form the computational space .
11,741
Suppose that you have an abstract for a scientific paper: direct - photon measurements in p+p and au+au collisions at @xmath0gev from the phenix experiment are presented . the p+p results are found to be in good agreement with next - to - leading - order ( nlo ) perturbative qcd calculations . direct - photon yields in au+au collisions scale with the number of inelastic nucleon - nucleon collisions and do nt exhibit the strong suppression observed for charged hadrons and neutral pions . this observation is consistent with models which attribute the suppression of high-@xmath1 hadrons to energy loss of quarks and gluons in the hot and dense medium produced in au+au collisions at rhic . . And you have already written the first three sentences of the full article: the virtue of direct photons is that they emerge directly from the interaction of point - like partons in processes like quark - gluon compton scattering ( @xmath2 ) and quark - anti - quark annihilation ( @xmath3 ) . measurements of direct photons in p+p collisions therefore allow to test perturbative qcd ( pqcd ) without the complication of phenomenological parton - to - hadron fragmentation functions which are needed in the description of high-@xmath1 hadron production . moreover , direct - photon measurements in p+p collisions provide information about the gluon distribution of the proton due to the large contribution from quark - gluon compton scattering .. Please generate the next two sentences of the article
this is especially interesting for fractional gluon momenta @xmath4 because this range is not well constrained by other processes like deep - inelastic scattering and drell - yan . however , direct - photon data from @xmath5 and @xmath6 collisions are not generally used in global qcd fits for the determination of parton distribution functions .
11,742
Suppose that you have an abstract for a scientific paper: we analyze the singularities of rational inner functions on the unit bidisk and study both when these functions belong to dirichlet - type spaces and when their partial derivatives belong to hardy spaces . we characterize derivative @xmath0 membership purely in terms of contact order , a measure of the rate at which the zero set of a rational inner function approaches the distinguished boundary of the bidisk . we also show that derivatives of rational inner functions with singularities fail to be in @xmath0 for @xmath1 and that higher non - tangential regularity of a rational inner function paradoxically reduces the @xmath0 integrability of its derivative . we derive inclusion results for dirichlet - type spaces from derivative inclusion for @xmath0 . using agler decompositions and local dirichlet integrals , we further prove that a restricted class of rational inner functions fails to belong to the unweighted dirichlet space . . And you have already written the first three sentences of the full article: the study of the boundary behavior of holomorphic functions on domains in @xmath2 is a deep and classical branch of complex analysis . one of the most natural problems in this context is to relate the properties of a function at a boundary point to those of its derivatives close to that point . in this paper , we address such questions for certain rational functions in two complex variables having additional algebraic structure . as we shall see , these investigations involve a rich and fascinating interplay betwen function theory , harmonic analysis of several complex variables , and classical algebraic geometry in the guise of algebraic curves . in several variables ,. Please generate the next two sentences of the article
a complicating phenomenon without counterpart in the one - variable setting manifests itself ; namely , a rational function can possess a `` non - essential singularity of the second kind , '' meaning its numerator and denominator vanish at the same point without sharing a common factor . from an analytic standpoint , such a boundary singularity affects the boundedness , smoothness , and integrability properties of both the rational function and its derivatives in very nontrivial ways . to understand this problem further
11,743
Suppose that you have an abstract for a scientific paper: labelled image datasets have played a critical role in high - level image understanding ; however the process of manual labelling is both time - consuming and labor intensive . to reduce the cost of manual labelling , there has been increased research interest in automatically constructing image datasets by exploiting web images . datasets constructed by existing methods tend to have a weak domain adaptation ability , which is known as the `` dataset bias problem '' . to address this issue , we present a novel image dataset construction framework which can be generalized well to unseen target domains . in specific , the given queries are first expanded by searching in the google books ngrams corpus to obtain a richer semantic description , from which the visually non - salient and less relevant expansions are filtered out . by treating each unfiltered expansion as a `` bag '' and the retrieved images as `` instances '' , image selection can be formulated as a multi - instance learning problem with constrained positive bags . we propose to solve the employed problems by the cutting - plane and concave - convex procedure ( cccp ) algorithm . using this approach , images from different distributions will be retained while noisy images will be filtered out . to verify the effectiveness of our proposed approach , we build a domain - robust image dataset with 20 categories , which we refer to as drid-20 . we compare drid-20 with three publicly available datasets stl-10 , cifar-10 and imagenet . the experimental results confirm the effectiveness of our dataset in terms of image classification ability , cross - dataset generalization ability and dataset diversity . we further run object detection on pascal voc 2007 using our data , and the results demonstrate the superiority of our method to the weakly supervised and web - supervised state - of - the - art detection methods . shell : bare demo of ieeetran.cls for ieee journals domain robust , multiple query expansions , image dataset.... And you have already written the first three sentences of the full article: with the development of internet , we have entered the era of big data . it is consequently a natural idea to leverage the large scale yet noisy data on the web for various vision tasks @xcite . methods of exploiting web images for automatically image dataset construction have recently become a hot topic @xcite in the field of multimedia processing .. Please generate the next two sentences of the article
existing methods @xcite usually use an iterative mechanism in the process of image selection , but these datasets tend to be statistically problematic because of the visual feature distribution of images selected in this way , which is known as the dataset bias problem @xcite . fig . 1 shows the " airplane " images from four different image datasets .
11,744
Suppose that you have an abstract for a scientific paper: pair correlations between large transverse momentum neutral pion triggers ( @xmath0 = 47 gev/@xmath1 ) and charged hadron partners ( @xmath0 = 37 gev/@xmath1 ) in central ( 020% ) and midcentral ( 2060% ) au+au collisions are presented as a function of trigger orientation with respect to the reaction plane . the particles are at larger momentum than where jet shape modifications have been observed , and the correlations are sensitive to the energy loss of partons traveling through hot dense matter . an out - of - plane trigger particle produces only @xmath2 of the away - side pairs that are observed opposite of an in - plane trigger particle . in contrast , near - side jet fragments are consistent with no suppression or dependence on trigger orientation with respect to the reaction plane . these observations are qualitatively consistent with a picture of little near - side parton energy loss either due to surface bias or fluctuations and increased away - side parton energy loss due to a long path through the medium . the away - side suppression as a function of reaction - plane angle is shown to be sensitive to both the energy loss mechanism in and the space - time evolution of heavy - ion collisions . . And you have already written the first three sentences of the full article: collisions of heavy nuclei at the relativistic heavy ion collider have created matter with energy densities exceeding the predicted threshold for deconfinement of color charge into a hot dense plasma @xcite . in this quark gluon plasma(qgp ) , quarks and gluons are not bound within hadronic states and the matter behaves collectively . comparisons with hydrodynamic simulations indicate rapid thermalization of the colliding system into a hot dense nuclear medium . the produced medium affords an opportunity to study the properties of a new phase of quantum chromodynamics ( qcd ) in an extreme environment .. Please generate the next two sentences of the article
hard scattering with large momentum exchange between partons in the incoming nuclei is well - described by perturbative qcd ( pqcd ) . the scattered partons emerge back - to - back in azimuth in the plane transverse to the beam direction , and fragment into a pair of correlated cones of high momentum particles , referred to as jets .
11,745
Suppose that you have an abstract for a scientific paper: a versatile x - ray / neutron reflectivity ( specular ) simulator using labview ( national instruments corp . ) for structural study of a multi - layer thin film having any combination , including the repetitions , of nano - scale layers of different materials is presented here ( available to download from the link provided at the end ) . inclusion of absorption of individual layers , inter - layer roughnesses , background counts , beam width , instrumental resolution and footprint effect due to finite size of the sample makes the simulated reflectivity close to practical one . the effect of multiple reflection is compared with simulated curves following the exact dynamical theory and approximated kinematical theory . the applicability of further approximation ( born theory ) that the incident angle does not change significantly from one layer to another due to refraction is also considered . brief discussion about reflection from liquid surface / interface and reflectivity study using polarized neutron are also included as a part of the review . auto - correlation function in connection with the data inversion technique is discussed with possible artifacts for phase - loss problem . an experimental specular reflectivity data of multi - layer erbium stearate langmuir - blodgett ( lb ) film is considered to estimate the parameters by simulating the reflectivity curve . . And you have already written the first three sentences of the full article: the reflectivity study is a very powerful scattering technique @xcite performed at grazing angle of incidence to study the structure of surface and interface of layered materials , thin films when the length scale of interest is in nm regime . utilizing the intrinsic magnetic dipole moment of neutron , the neutron reflectivity study provides magnetic structure in addition with the structural information . in reflectivity analysis , the electron density ( ed)/scattering length density ( sld ) ( in case of x - ray / neutron ) of different layers along the depth is estimated by model - fitting the experimental data . text - based language like _ fortran _ is commonly used to write a simulation and data analysis code . however , recently a graphical language labview ( ( _ _ lab__oratory _ _ v__irtual _ _ i__nstrument _ _ e__ngineering _ _ w__orkbench , from national instruments corp . ). Please generate the next two sentences of the article
has emerged as a powerful programming tool for instrument control , data acquisition and analysis . it offers an ingenious graphical interface and code flexibility thereby significantly reducing the programming time .
11,746
Suppose that you have an abstract for a scientific paper: the performance of hybrid superconducting electronic coolers is usually limited by the accumulation of hot quasi - particles in their superconducting leads . this issue is all the more stringent in large - scale and high - power devices , as required by applications . introducing a metallic drain connected to the superconducting electrodes via a fine - tuned tunnel barrier , we efficiently remove quasi - particles and obtain electronic cooling from 300 mk down to 130 mk with a 400 pw cooling power . a simple thermal model accounts for the experimental observations . . And you have already written the first three sentences of the full article: on - chip solid - state refrigeration has long been sought for various applications in the sub - kelvin temperature regime , such as cooling astronomical detectors @xcite . in a normal metal - insulator - superconductor ( nis ) junction @xcite , the superconductor density of states gap makes that only high - energy electrons are allowed to tunnel out of the normal metal or , depending on the bias , low - energy ones to tunnel in , so that the electronic bath as a whole is cooled . in sinis devices based on aluminum , the electronic temperature can drop from 300 mk down to below 100 mk at the optimum bias point .. Please generate the next two sentences of the article
while this level of performance has been demonstrated in micron - scale devices @xcite with a cooling power in the picowatt range , a difficulty arises in devices with large - area junctions needed for a sizable cooling power approaching the nanowatt range . for instance , a high - power refrigerator has been shown to cool an external object from 290 mk down to about 250 mk @xcite . one of the main limitation to nis coolers full performance is the presence in the superconducting leads of non - equilibrium quasi - particles arising from the high current running through the device .
11,747
Suppose that you have an abstract for a scientific paper: in this paper , we investigate the influences of two continuum radiation pressures of the central engines on the black hole mass estimates for 40 active galactic nuclei ( agns ) with high accretion rates . the continuum radiation pressure forces , usually believed negligible or not considered , are from two sources : the free electron thomson scattering , and the recombination and re - ionization of hydrogen ions that continue to absorb ionizing photons to compensate for the recombination . the masses counteracted by the two radiation pressures @xmath0 depend sensitively on the percent of ionized hydrogen in the clouds @xmath1 , and are not ignorable compared to the black hole virial masses @xmath2 , estimated from the reverberation mapping method , for these agns . as @xmath1 increases , @xmath0 also does . the black hole masses @xmath3 could be underestimated at least by a factor of 3040 percent for some agns accreting around the eddington limit , regardless of redshifts of sources @xmath4 . some agns at @xmath5 and quasars at @xmath6 have the same behaviors in the plots of @xmath0 versus @xmath2 . the complete radiation pressures will be added as agns match @xmath7 due to the two continuum radiation pressures . compared to @xmath2 , @xmath3 might be extremely underestimated if considering the complete radiation pressures for the agns accreting around the eddington limit . . And you have already written the first three sentences of the full article: active galactic nuclei ( agns ) , such as quasars and seyfert galaxies , are powered by gravitational accretion of matter onto supermassive black holes in the central engines . the energy conversion in agns is more efficient as implied by high flux variability and short variability timescales @xcite . accretion of matter onto black holes can have high energy release efficiency @xcite .. Please generate the next two sentences of the article
broad - line regions ( blrs ) in agns are photoionized by the central radiation of accretion disks . the broad emission line variations will follow the ionizing continuum variations due to the photoionization process ( e.g. @xcite ; @xcite ) .
11,748
Suppose that you have an abstract for a scientific paper: in the present paper , we describe how the band structure and the fermi surface of the iron - based superconductors vary as the fe - as - fe bond angle changes . we discuss how these fermi surface configurations affect the superconductivity mediated by spin fluctuations , and show that in several situations , frustration in the sign of the gap function arises due to the repulsive pairing interactions that requires sign change of the order parameter . such a frustration can result in nodes or very small gaps , and generally works destructively against superconductivity . conversely , we propose that the optimal condition for superconductivity is realized for the fermi surface configuration that gives the least frustration while maximizing the fermi surface multiplicity . this is realized when there are three hole fermi surfaces , where two of them have @xmath0 orbital character and one has @xmath1 _ for all @xmath2 _ in the three dimensional brillouin zone . looking at the band structures of various iron - based superconductors , the occurrence of such a `` sweet spot '' situation is limited to a narrow window . . And you have already written the first three sentences of the full article: the discovery of high temperature superconductivity in the iron - based superconductors@xcite has attracted much attention in many aspects . not only the high @xmath3 itself , but also a number of experiments indicating non - universality of the superconducting gap function , such as sign reversing , anisotropy , or the presence of nodes , suggest an unconventional pairing mechanism . most probable candidate for such an unconventional mechanism is the pairing mediated by spin fluctuations , where the superconducting gap changes sign between the disconnected fermi surfaces , namely , the so - called @xmath4 pairing@xcite .. Please generate the next two sentences of the article
back in 2001 , one of the present authors proposed that spin fluctuation mediated pairing in systems with nested disconnected fermi surfaces may give rise to a very high @xmath3 superconductivity@xcite . the idea is that the repulsive pairing interaction mediated by spin fluctuations can be fully exploited without introducing nodes of the superconducting gap on the fermi surfaces .
11,749
Suppose that you have an abstract for a scientific paper: in late 2006 , ground - based photometry of @xmath0 car plus the homunculus showed an unexpected decrease in its integrated apparent brightness , an apparent reversal of its long - term brightening . subsequent hst / wfpc2 photometry of the central star in the near - uv showed that this was not a simple reversal . this multi - wavelength photometry did not support increased extinction by dust as the explanation for the decrease in brightness . a spectrum obtained with gmos on the gemini - south telescope , revealed subtle changes mid - way in @xmath0 car s 5.5 yr spectroscopic cycle when compared with hst / stis spectra at the same phase in the cycle . at mid - cycle the secondary star is 2030 au from the primary . we suggest that the spectroscopic changes are consistent with fluctuations in the density and velocity of the primary star s wind , unrelated to the 5.5 yr cycle but possibly related to its latitude - dependent morphology . we also discuss subtle effects that must be taken into account when comparing ground - based and hst / stis spectra . . And you have already written the first three sentences of the full article: eta carinae has a peculiar habit of settling into a photometric trend and spectroscopic state long enough for observers to grow accustomed to it , and then abruptly changing . dramatic transitions occurred around 1840 , 1890 , and 1945 , see @xcite and refs . therein ; and there are hints that another may have begun in the late 1990 s . for almost 50 years after the 1940 - 1950 change , the star plus ejecta brightened at an average rate of 0.025 visual magnitude per year , largely due to expansion of the dusty homunculus nebula which had been ejected around 1840 . essentially a bipolar reflection or scattering nebula , the homunculus dominated @xmath0 car s total brightness throughout the 20th century . after 1997 , however , observations with the hubble space telescope ( hst ) revealed that _ the central star _ was now brightening far more rapidly @xcite . despite brief declines during the 1998.0 and 2003.5 spectroscopic events , its average rate from 1998 to 2006 was about 0.15 mag / yr . this was a recent development , since earlier speckle data @xcite show that the star had not brightened that quickly in the fifteen years before 1997 .. Please generate the next two sentences of the article
ground - based photometry of the homunculus after 1997 showed an increase that was less dramatic , but which exceeded any of the fluctuations seen between 1955 and 1995 . most likely the circumstellar extinction began to decrease rapidly in the mid-1990 s ; perhaps the rate of dust formation near the star had decreased , or dust was being destroyed , or both .
11,750
Suppose that you have an abstract for a scientific paper: we present a discovery of a giant stellar halo in ngc 6822 , a dwarf irregular galaxy in the local group . this halo is mostly made of old red giants , showing striking features : 1 ) it is several times larger than the main body of the galaxy seen in the optical images , and 2 ) it is elongated in the direction almost perpendicular to the hi disk of ngc 6822 . the structure of this stellar halo looks similar to the shape of dwarf elliptical galaxies , indicating that the halos of dwarf irregular galaxies share the same origin with those of the dwarf elliptical galaxies . . And you have already written the first three sentences of the full article: it has been known long that old stellar populations exist in the dwarf irregular galaxies and dwarf elliptical galaxies and the @xmath0-band magnitude of the tip of these old populations ( red giant branch ) has been often used to estimate their distances ( lee et al . however , little is known about the size and morphological structure of the halos of these old stellar populations in the dwarf galaxies . a study of the morphological structure of the halos in dwarf galaxies may provide some clues to understanding the origin of the dwarf irregular galaxies and the dwarf elliptical galaxies .. Please generate the next two sentences of the article
we have carried out a project to investigate the halos in dwarf irregular galaxies using the megaprime camera at the cfht . here we present a case study of ngc 6822 which is a famous dwarf irregular galaxy in the local group .
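The tip-of-the-red-giant-branch (TRGB) distance method mentioned above can be summarized in one line; the calibration value quoted here is the commonly adopted one for metal-poor old populations and is an assumption, not a number taken from this text. The tip appears as a sharp cutoff in the @xmath0-band luminosity function of the old red giants, and the true distance modulus follows from

\[ (m - M)_0 = I_{\mathrm{TRGB}} - M_I^{\mathrm{TRGB}} - A_I , \qquad M_I^{\mathrm{TRGB}} \approx -4.0 , \]

where A_I is the foreground extinction in the I band.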
11,751
Suppose that you have an abstract for a scientific paper: the ground states of klein type spin models on the pyrochlore and checkerboard lattice are spanned by the set of singlet dimer coverings , and thus possess an extensive ground state degeneracy . among the many exotic consequences is the presence of deconfined fractional excitations ( spinons ) which propagate through the entire system . while a realistic electronic model on the pyrochlore lattice is close to the klein point , this point is in fact inherently unstable because any perturbation @xmath0 restores spinon confinement at @xmath1 . we demonstrate that deconfinement is recovered in the finite temperature region @xmath2 , where the deconfined phase can be characterized as a dilute coulomb gas of thermally excited spinons . we investigate the zero temperature phase diagram away from the klein point by means of a variational approach based on the singlet dimer coverings of the pyrochlore lattices and taking into account their non orthogonality . we find that in these systems , nearest neighbor exchange interactions do not lead to rokhsar - kivelson type processes . . And you have already written the first three sentences of the full article: discovering new phases of matter is a primary objective of physics . the fractionalized spin liquid in two spatial dimensions ( @xmath3 ) has provided a popular candidate framework for models describing the exotic properties observed in many strongly correlated electronic materials , including frustrated quantum magnets and high@xmath4 superconductors @xcite . the spin liquid state is characterized by spinon excitations carrying unit charge under a compact u(1 ) gauge field . however. Please generate the next two sentences of the article
, polyakov has argued @xcite that a pure compact u(1 ) gauge theory is always confining at zero temperature for @xmath3 , confinement between test particles with opposite charges being produced by the proliferation of instanton tunneling events . by contrast , the case for confinement by instanton proliferation in spin systems is rather more involved @xcite , and the situation at a critical point offers additional possibilities .
11,752
Suppose that you have an abstract for a scientific paper: a new air - shower core - detector array ( yac : yangbajing air - shower core - detector array ) has been developed to measure the primary cosmic - ray composition at the knee " energies in tibet , china , focusing mainly on the light components . the prototype experiment ( yac - i ) consisting of 16 detectors has been constructed and operated at yangbajing ( 4300 m a.s.l . ) in tibet since may 2009 . yac - i is installed in the tibet - iii as array and operates together . in this paper , we performed a monte carlo simulation to check the sensitivity of yac - i+tibet - iii array to the cosmic - ray light component of cosmic rays around the knee energies , taking account of the observation conditions of actual yac - i+tibet - iii array . the selection of light component from others was made by use of an artificial neural network ( ann ) . the simulation shows that the light - component spectrum estimated by our methods can well reproduce the input ones within 10% error , and there will be about 30% systematic errors mostly induced by the primary and interaction models used . it is found that the full - scale yac and the tibet - iii array is powerful to study the cosmic - ray composition , in particular , to obtain the energy spectra of protons and helium nuclei around the knee energies . _ keywords _ : cosmic rays , hadronic interaction , knee , composition , energy spectrum . And you have already written the first three sentences of the full article: it is well known that the all - particle spectrum of primary cosmic rays follows a power law of dj / de @xmath0 e@xmath1 , but steepens at energies around 4@xmath210@xmath3 ev where the power index @xmath4 changes sharply from @xmath52.7 to @xmath53.1 @xcite . such structure of the all - particle energy spectrum is called the knee " , which is considered to be closely related to the origin , acceleration and propagation mechanism of cosmic rays . in order to explain the existence of the knee , many hypotheses and mechanisms @xcite have been proposed .. Please generate the next two sentences of the article
although all these approaches can describe the knee structure well , there are large discrepancies in the predictions of the individual components in the knee region . thus , it is critical to measure the primary chemical composition or mass group at energies of 50 - 10000 tev , and especially to measure the primary energy spectra of the individual components and to determine the break energy of the spectral index for each component .
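For orientation, the all-particle spectrum and the knee described in this row can be written compactly as a broken power law; the indices and break energy are the ones quoted there, and the overall normalization is omitted:

\[ \frac{dJ}{dE} \propto \begin{cases} E^{-2.7} , & E \lesssim E_{\mathrm{knee}} \approx 4\times 10^{15}\ \mathrm{eV} , \\ E^{-3.1} , & E \gtrsim E_{\mathrm{knee}} . \end{cases} \]

Resolving the knee into the spectra of individual mass groups, as discussed above, amounts to asking how this single broken power law decomposes into component spectra with their own break energies.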
11,753
Suppose that you have an abstract for a scientific paper: we will perform a detailed study of the matter - ekpyrotic bouncing scenario in loop quantum cosmology using the methods of the dynamical systems theory . we will show that when the background is driven by a single scalar field , at very late times , in the contracting phase , all orbits depict a matter dominated universe , which evolves to an ekpyrotic phase . after the bounce the universe enters in the expanding phase , where the orbits leave the ekpyrotic regime going to a kination ( also named deflationary ) regime . moreover , this scenario supports the production of heavy massive particles conformally coupled with gravity , which reheats the universe at temperatures compatible with the nucleosynthesis bounds and also the production of massless particles non - conformally coupled with gravity leading to very high reheating temperatures but ensuring the nucleosynthesis success . dealing with cosmological perturbations , these background dynamics produce a nearly scale invariant power spectrum for the modes that leave the hubble radius , in the contracting phase , when the universe is quasi - matter dominated , whose spectral index and corresponding running are compatible with the recent experimental data obtained by planck s team . . And you have already written the first three sentences of the full article: bouncing cosmologies and more specifically the matter bounce scenario ( see @xcite for a review of this kind of cosmologies ) is the most promising alternative to the inflationary paradigm . there are many ways to obtain cosmologies without the big bang singularity : for example violating the null energy condition in general relativity by incorporating new forms of matter such as phantom @xcite or quintom fields @xcite , galileons @xcite or phantom condensates @xcite , or by adding terms to einstein - hilbert action @xcite , but the simplest one is to go beyond general relativity and consider holonomy corrected loop quantum cosmology ( lqc ) , where a big bounce replaces the big bang singularity @xcite . in fact , other future singularities such as type i ( big rip ) and type iii ( big freeze ) are also forbidden in holonomy corrected lqc @xcite .. Please generate the next two sentences of the article
on the other hand , it is well known that a matter domination period in the contracting phase is dual to the de sitter regime in the expanding phase @xcite , which provides a flat power spectrum of cosmological perturbations in the matter bounce scenario . moreover , an abrupt ekpyrotic phase transition is needed , in the contracting phase , in order to solve the famous belinsky - khalatnikov - lifshitz ( bkl ) instability : the effective energy density of primordial anisotropy scales as @xmath1 in the contracting phase @xcite and , more important , to produce enough particles which will be responsible for thermalizing the universe in the expanding phase @xcite .
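The statement that holonomy corrections replace the big bang by a big bounce can be made concrete with the standard effective Friedmann equation of loop quantum cosmology; this is the generic textbook form, quoted for orientation, and the conventions (in particular the value of the critical density \rho_c) may differ from those used in the paper:

\[ H^2 = \frac{8\pi G}{3}\,\rho \left( 1 - \frac{\rho}{\rho_c} \right) . \]

The Hubble rate vanishes when \rho reaches \rho_c, so a contracting solution passes smoothly through a bounce into the expanding phase instead of running into a singularity.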
11,754
Suppose that you have an abstract for a scientific paper: microwave kinetic inductance detectors , or mkids , have proven to be a powerful cryogenic detector technology due to their sensitivity and the ease with which they can be multiplexed into large arrays . a mkid is an energy sensor based on a photon - variable superconducting inductance in a lithographed microresonator , and is capable of functioning as a photon detector across the electromagnetic spectrum as well as a particle detector . here we describe the first successful effort to create a photon - counting , energy - resolving ultraviolet , optical , and near infrared mkid focal plane array . these new optical lumped element ( ole ) mkid arrays have significant advantages over semiconductor detectors like charge coupled devices ( ccds ) . they can count individual photons with essentially no false counts and determine the energy and arrival time of every photon with good quantum efficiency . their physical pixel size and maximum count rate are well matched with large telescopes . these capabilities enable powerful new astrophysical instruments usable from the ground and space . mkids could eventually supplant semiconductor detectors for most astronomical instrumentation , and will be useful for other disciplines such as quantum optics and biological imaging . . And you have already written the first three sentences of the full article: cryogenic detectors are currently the preferred technology for astronomical observations over most of the electromagnetic spectrum , notably in the far infrared through millimeter ( 0.1 - 3 mm ) @xcite , x - ray @xcite , and gamma - ray @xcite wavelength ranges . in the important ultraviolet , optical , and near infrared ( 0.1 - 5 @xmath0 m ) wavelength range a variety of detector technologies based on semiconductors , backed by large investment from both consumer and military customers , has resulted in detectors for astronomy with large formats , high quantum efficiency , and low readout noise . however , these detectors are fundamentally limited by the band gap of the semiconductor ( 1.1 ev for silicon ) and thermal noise sources from their high ( @xmath1100 k ) operating temperatures @xcite . cryogenic detectors , with operating temperatures on the order of 100 mk , allow the use of superconductors with gap parameters over 1000 times lower than typical semiconductors . this difference allows new capabilities . a superconducting detector can count single photons with no false counts while determining the energy ( to several percent or better ) and arrival time ( to a microsecond ) of the photon .. Please generate the next two sentences of the article
it can also have much broader wavelength coverage since the photon energy is always much greater than the gap energy . while a ccd is limited to about 0.3 - 1 @xmath0 m , the new arrays described here are sensitive from 0.1 @xmath0 m in the uv to greater than 5 @xmath0 m in the mid - ir , enabling observations at infrared wavelengths vital to understanding the high redshift universe .
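The connection between a small superconducting gap and single-photon energy resolution can be sketched with the usual quasiparticle-counting estimate; the symbols \eta (conversion efficiency) and F (Fano factor) are generic placeholders rather than this paper's notation. A photon of energy h\nu breaks Cooper pairs into roughly N \approx \eta h\nu / \Delta quasiparticles, so the statistical limit on the resolving power scales as

\[ R = \frac{E}{\Delta E} \approx \frac{1}{2.355} \sqrt{\frac{\eta\, h\nu}{F \Delta}} , \]

which improves as the gap \Delta shrinks; this is why a gap a thousand times smaller than a semiconductor band gap permits energy resolution of a few percent per photon.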
11,755
Suppose that you have an abstract for a scientific paper: quasielastic @xmath0-nucleus scattering data at @xmath1 , @xmath2 and @xmath3 mev / c are analyzed in a finite nucleus continuum random phase approximation framework , using a density - dependent particle - hole interaction . the reaction mechanism is consistently treated according to glauber theory , keeping up to two - step inelastic processes . a good description of the data is achieved , also providing a useful constraint on the strength of the effective particle - hole interaction in the scalar - isoscalar channel at intermediate momentum transfers . we find no evidence for the increase in the effective number of nucleons participating in the reaction which has been reported in the literature . . And you have already written the first three sentences of the full article: quasielastic studies are traditionally a good source of information about nuclear and nucleon structure . the main tool has been usually represented by electron scattering experiments , since in this case the elementary probe - nucleon reaction mechanism is regarded to be under better control . however , not all the possible nuclear response functions can be accessed in this way and , furthermore , the extraction from the data of the response functions that enter electron scattering can be model dependent . hadronic probes are an alternative source of quasielastic data . among them. Please generate the next two sentences of the article
the case of @xmath0-nucleus scattering is particularly interesting , since , for kaon laboratory momenta below 800 mev / c , the elementary @xmath0-nucleon ( @xmath4 ) interaction is much weaker than other hadron - nucleon interactions . in fact , a weaker elementary interaction allows the projectile to penetrate deeper inside the nucleus , thus probing regions of higher density and , consequently , being more sensitive to collective phenomena .
11,756
Suppose that you have an abstract for a scientific paper: discovered in 1909 , the evershed effect represents strong mass outflows in sunspot penumbra , where the magnetic field of sunspots is filamentary and almost horizontal . these flows play an important role in sunspots and have been studied in detail using large ground - based and space telescopes , but the basic understanding of their mechanism is still missing . we present results of realistic numerical simulations of the sun s subsurface dynamics , and argue that the key mechanism of this effect is in non - linear magnetoconvection that has properties of traveling waves in the presence of strong , highly inclined magnetic field . the simulations reproduce many observed features of the evershed effect , including the high - speed `` evershed clouds '' , the filamentary structure of the flows , and the non - stationary quasi - periodic behavior . the results provide a synergy of previous theoretical models and lead to an interesting prediction of a large - scale organization of the outflows . . And you have already written the first three sentences of the full article: in the spring of 1909 evershed published a remarkable discovery of strong horizontal mass flows in sunspots penumbra , the outer part of sunspots characterized by filamentary magnetic field structures @xcite . the flows , with typical speed of 1 - 4 km / s , start at the boundary between the umbra and penumbra and expand radially , accelerating with distance and suddenly stopping at the outer sunspot boundary . the evershed effect may play a significant role in the formation , stability and dynamics of sunspots and is considered as one of the fundamental processes in solar physics .. Please generate the next two sentences of the article
this phenomenon caused significant interest and detailed observational and theoretical investigations , but the understanding of the physical mechanism is still missing ( a recent review is published by @xcite ) . high - resolution observations from large ground - based telescopes and the hinode space mission revealed a complicated filamentary structure of these flows @xcite and their non - stationary dynamics in the form of quasiperiodic evershed clouds " @xcite . in some cases
11,757
Suppose that you have an abstract for a scientific paper: the field of charged impurities in narrow - band gap semiconductors and weyl semimetals can create electron - hole pairs when the total charge @xmath0 of the impurity exceeds a value @xmath1 : the particles of one charge escape to infinity , leaving a screening space charge . the result is that the observable dimensionless impurity charge @xmath2 is less than @xmath3 but greater than @xmath4 . there is a corresponding effect for nuclei with @xmath5 , however in the condensed matter setting we find @xmath6 . thomas - fermi theory indicates that @xmath7 for the weyl semimetal , but we argue that this is a defect of the theory . for the case of a highly - charged recombination center in a narrow band - gap semiconductor ( or of a supercharged nucleus ) , the observable charge takes on a nearly universal value . in weyl semimetals the observable charge takes on the universal value @xmath8 set by the reciprocal of the material s fine structure constant . . And you have already written the first three sentences of the full article: the experimental success of quantum electrodynamics ( qed ) lies in the domain of small fields where observations impressively match the theoretical calculations based on perturbation theory in the fine structure constant @xmath9 . in calculations involving bound states of a nucleus of charge @xmath0 the fine structure constant @xmath10 additionally appears in the combination @xmath11 . even though @xmath10 is small , @xmath11 may not be , so that perturbative analysis can fail when @xmath12 .. Please generate the next two sentences of the article
one of the most profound physical effects predicted to take place in this regime is the instability of the ground state ( the vacuum ) against creation of electron - positron pairs , resulting in a screening space charge of electrons with positrons leaving physical picture @xcite . experimental study of this effect has not been possible , as stable nuclei with @xmath13 have not been created , and attempts to look for positron production in a temporarily created overcritical system of slowly colliding uranium nuclei have not been successful @xcite .
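The critical charge referred to above can be motivated by the textbook Dirac result for a point nucleus; the finite-size estimate quoted at the end is the commonly cited value and is not taken from this text. For a point charge Z e the Dirac ground-state energy is

\[ E_{1s} = m c^2 \sqrt{1 - (Z\alpha)^2} , \]

which ceases to be real for Z\alpha > 1; for a nucleus of realistic size the 1s level instead dives into the negative-energy continuum at Z \approx 170, the threshold for the spontaneous electron-positron pair creation discussed above.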
11,758
Suppose that you have an abstract for a scientific paper: we consider black hole solutions with a dilaton field possessing a nontrivial potential approaching a constant negative value at infinity . the asymptotic behaviour of the dilaton field is assumed to be slower than that of a localized distribution of matter . a nonabelian su(2 ) gauge field is also included in the total action . the mass of the solutions admitting a power series expansion in @xmath0 at infinity and preserving the asymptotic anti - de sitter geometry is computed by using a counterterm subtraction method . numerical arguments are presented for the existence of hairy black hole solutions for a dilaton potential of the form @xmath1 , special attention being paid to the case of @xmath2 gauged supergravity model of gates and zwiebach . . And you have already written the first three sentences of the full article: according to the so called `` no - hair '' conjecture , a stationary black hole is uniquely described in terms of a small set of asymptotically measurable quantities . this hypothesis was disproved more than ten years ago , when several authors presented a counterexample within the framework of su(2 ) einstein - yang - mills ( eym ) theory @xcite . although the new solution was static with vanishing yang - mills ( ym ) charges , it was different from the schwarzschild black hole and , therefore , not characterized by its total mass ( see @xcite for a comprehensive review of this topic and an extensive bibliography ) .. Please generate the next two sentences of the article
however , much of the literature on hairy black hole solutions is restricted to the case of an asymptotically flat spacetime . since asymptotic flatness is not always an appropriate theoretical idealisation , and is never satisfied in reality , it may be important to consider other types of asymptotics , in particular solutions with a cosmological constant @xmath6 .
11,759
Suppose that you have an abstract for a scientific paper: a fermion model with random on - site potential defined on a two - dimensional square lattice with @xmath0-flux is studied . the continuum limit of the model near the zero energy yields dirac fermions with random potentials specified by four independent coupling constants . the basic symmetry of the model is time - reversal invariance . moreover , it turns out that the model has enhanced ( chiral ) symmetry on several surfaces in the four - dimensional space of the coupling constants . it is shown that one of the surfaces with chiral symmetry has sp(@xmath1sp(@xmath2 ) symmety whereas others have u(@xmath3 ) symmetry , both of which are broken to sp(@xmath2 ) , and the fluctuation around a saddle point is described , respectively , by sp(@xmath4 wzw model and u(@xmath3)/sp(@xmath2 ) nonlinear sigma model . based on these results , we propose a phase diagram of the model . . And you have already written the first three sentences of the full article: anderson localization has attracted renewed interest since a plenty of new universality classes were discoverd @xcite . one is a system with particle - hole symmetry studied by gade @xcite . although generic disorderd systems in two dimensions are believed to be insulators @xcite , she showed that such a system has a random critical point at the band center .. Please generate the next two sentences of the article
typical examples studied so far are the random flux model @xcite and the random - hopping model with @xmath0-flux @xcite . this class is often referred to as @xmath5iii and @xmath6i in the cases with broken and unbroken time reversal symmetry , respectively , which correspond to the chiral gaussian unitary and chiral gaussian orthogonal ensembles of the random matrix theory in the zero dimensional limit @xcite .
11,760
Suppose that you have an abstract for a scientific paper: understood in terms of pure states evolving into mixed states , the possibility of information loss in black holes is closely related to the global causal structure of spacetime , as is the existence of event horizons . however , black holes need not be defined by event horizons , and in fact we argue that in order to have a fully unitary evolution for black holes , they should be defined in terms of something else , such as a trapping horizon . the misner - sharp mass in spherical symmetry shows very simply how trapping horizons can give rise to black hole thermodynamics , hawking radiation and singularities . we show how the misner - sharp mass can also be used to give insights into the process of collapse and evaporation of locally defined black holes . . And you have already written the first three sentences of the full article: the possibility of information loss in black hole evaporation has been with us for many years . there have been many attempts over the years to resolve the issue one way or another , but it is probably fair to say that there is still no consensus on what the correct picture of a black hole and its evolution should be . indeed , the possibility that the evolution of a black hole may be a non - unitary process has many disturbing consequences . we will not attempt here to provide a definitive answer to the question of whether black hole evaporation is unitary or not . we believe that this is a question that can only be answered in the context of a theory that we do not yet possess .. Please generate the next two sentences of the article
instead what we want to do here is to highlight some of the semi - classical issues involved in the problem and the implicit assumptions that are often under - appreciated in the analysis . specifically we wish to examine the role that the event horizon plays and its connection to the global causal structure of spacetime .
11,761
Suppose that you have an abstract for a scientific paper: we report results of a systematic study of one - dimensional four - wave moving solitons in a recently proposed model of the bragg cross - grating in planar optical waveguides with the kerr nonlinearity ; the same model applies to a fiber bragg grating ( bg ) carrying two polarizations of light . we concentrate on the case when the system s spectrum contains no true bandgap , but only semi - gaps ( which are gaps only with respect to one branch of the dispersion relation ) , that nevertheless support soliton families . solely zero - velocity solitons were previously studied in this system , while current experiments can not generate solitons with the velocity smaller than half the maximum group velocity . we find the semi - gaps for the moving solitons in an analytical form , and demonstrated that they are completely filled with ( numerically found ) solitons . stability of the moving solitons is identified in direct simulations . the stability region strongly depends on the frustration parameter , which controls the difference of the present system from the usual model for the single bg . a completely new situation is possible , when the velocity interval for stable solitons is limited not only from above , but also from below . collisions between stable solitons may be both elastic and strongly inelastic . close to their instability border , the solitons collide elastically only if their velocities @xmath0 and @xmath1 are small ; however , collisions between more robust solitons are elastic in a strip around @xmath2 . . And you have already written the first three sentences of the full article: bragg gratings ( bgs ) are broadly used as a basis for spectral filters are other elements of optical telecommunication networks @xcite . simultaneously , bgs offer a unique medium for the theoretical @xcite @xcite and experimental @xcite study of _ gap solitons _ ( gss ) , supported by the balance between the bg - induced dispersion and kerr ( cubic ) nonlinearity dispersion . the latter topic was surveyed in an earlier review @xcite and recent one @xcite .. Please generate the next two sentences of the article
matter - wave pulses similar to the optical bg solitons were very recently observed in bose - einstein condensates @xcite . the bg solitons may exist not only as temporal pulses in fiber gratings , but also as spatial solitons in 2d ( two - dimensional ) waveguides @xcite . in the latter case ,
11,762
Suppose that you have an abstract for a scientific paper: the thermal conductivity @xmath0 is measured in a series of cu@xmath1mg@xmath2geo@xmath3 single crystals in magnetic fields up to 16 t. it has turned out that heat transport by spin excitations is coherent for lightly doped samples , in which the spin - peierls ( sp ) transition exists , at temperatures well above the sp transition temperature @xmath4 . depression of this spin heat transport appears below @xmath5 , at which the spin gap _ locally _ opens . @xmath5 is not modified with mg - doping and that @xmath5 for each mg - doped sample remains as high as @xmath4 of pure cugeo@xmath3 , in contrast to @xmath4 which is strongly suppressed with mg - doping . the spin - gap opening enhances phonon part of the heat transport because of reduced scattering by the spin excitations , producing an unusual peak . this peak diminishes when the spin gap is suppressed both in magnetic fields and with the mg - doping . 2 . And you have already written the first three sentences of the full article: the discovery of the first inorganic spin - peierls ( sp ) compound cugeo@xmath3 ( ref . ) has triggered subsequent studies of the impurity - substitution effect on the spin - singlet states , @xcite and the existence of the disorder - induced transition into three - dimensional antiferromagnetic ( 3d - af ) state has been established . @xcite since the relevant exchange energies , i.e. , intra- and interchain coupling @xmath6 and @xmath7 , do not change with doping except at the impurity sites , it is difficult to understand the impurity - induced transition from the sp to 3d - af state in the framework of the conventional " competition between dimensionalities ( where @xmath8 is an essential parameter @xcite ) and therefore such impurity - substitution effect is a matter of current interest . in cugeo@xmath3 , a small amount of impurity leads to an exotic low - temperature phase where the lattice dimerization and antiferromagnetic staggered moments simultaneously appear [ dimerized af ( d - af ) phase ] . @xcite moreover , when the impurity concentration @xmath9 exceeds a critical concentration @xmath10 , the sp transition measured by dc susceptibility disappears @xcite and a uniform af ( u - af ) phase appears below the nel temperature @xmath11 @xmath12 4 k. @xcite the d - af ground state can be understood as a state of spatially modulated staggered moments accompanied with the lattice distortion .. Please generate the next two sentences of the article
@xcite however , the mechanism of the depression of the sp phase and the establishment of the disorder - induced antiferromagnetism are still to be elucidated . a transport measurement could be a desirable tool in dealing with such a problem , because the mobility of spin excitations or spin diffusivity is often sensitive to the impurities .
11,763
Suppose that you have an abstract for a scientific paper: von neumann entropy production rates of the quantised kicked rotor interacting with an environment are calculated . a significant correspondence is found between the entropy contours of the classical and quantised systems . this is a quantitative tool for describing quantum - classical correspondence in the transition to chaos . 2 . And you have already written the first three sentences of the full article: the fundamental split between integrable and non - integrable systems in classical mechanics has not been comprehensively mirrored in quantum mechanics @xcite . the issue seems to hinge on finding a suitable definition for _ quantum _ chaos . the sensitive dependence to initial conditions that characterises classical chaos is wholly understood in terms of trajectories of classical phase space points which have no direct quantum analog @xcite .. Please generate the next two sentences of the article
this has led to two distinct ways of identifying variables to measure quantum chaos . one involves investigating various quantum variables that can act as signatures of chaos by clearly distinguishing between quantum systems whose classical counterparts are integrable and those which are non - integrable .
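The classical limit invoked in this discussion is the kicked rotor, whose stroboscopic dynamics reduce to the Chirikov standard map, and the sensitive dependence on initial conditions can be seen by iterating two nearby phase-space points. The sketch below is illustrative only: the kick strength K and the initial conditions are arbitrary choices, not parameters from the paper.

    import math

    def standard_map(theta, p, K):
        # one kick period of the classical kicked rotor: kick, then free rotation
        p_new = p + K * math.sin(theta)
        theta_new = (theta + p_new) % (2.0 * math.pi)
        return theta_new, p_new

    K = 5.0                       # illustrative kick strength, well inside the chaotic regime
    theta1, p1 = 1.0, 0.5         # reference trajectory
    theta2, p2 = 1.0 + 1e-8, 0.5  # initially nearby trajectory

    for n in range(25):
        theta1, p1 = standard_map(theta1, p1, K)
        theta2, p2 = standard_map(theta2, p2, K)
        # the separation grows roughly exponentially (positive lyapunov exponent)
        # until it saturates at the size of the accessible phase space
        print(n, math.hypot(theta1 - theta2, p1 - p2))

The quantised version of the same map has no such trajectories, which is precisely the difficulty described above.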
11,764
Suppose that you have an abstract for a scientific paper: prompted by results that showed that a simple protein model , the frustrated g model , appears to exhibit a transition reminiscent of the protein dynamical transition , we examine the validity of this model to describe the low - temperature properties of proteins . first , we examine equilibrium fluctuations . we calculate its incoherent neutron - scattering structure factor and show that it can be well described by a theory using the one - phonon approximation . by performing an inherent structure analysis , we assess the transitions among energy states at low temperatures . then , we examine non - equilibrium fluctuations after a sudden cooling of the protein . we investigate the violation of the fluctuation dissipation theorem in order to analyze the protein glass transition . we find that the effective temperature of the quenched protein deviates from the temperature of the thermostat , however it relaxes towards the actual temperature with an arrhenius behavior as the waiting time increases . these results of the equilibrium and non - equilibrium studies converge to the conclusion that the apparent dynamical transition of this coarse - grained model can not be attributed to a glassy behavior . . And you have already written the first three sentences of the full article: proteins are fascinating molecules due to their ability to play many roles in biological systems . their functions often involve complex configurational changes . therefore the familiar aphorism that `` form is function '' should rather be replaced by a view of the `` dynamic personalities of proteins''@xcite .. Please generate the next two sentences of the article
this is why proteins are also intriguing for theoreticians because they provide a variety of yet unsolved questions . besides the dynamics of protein folding , the rise in the time averaged mean square fluctuation @xmath0 occurring at temperatures around @xmath1 , sometimes called the `` protein dynamic transition '' @xcite is arguably the most considerable candidate in the search of unifying principles in protein dynamics .
11,765
Suppose that you have an abstract for a scientific paper: aligning science instruction with authentic scientific practices is a national priority in education . in particular , undergraduate laboratory courses have been criticized as employing recipe - style activities with little emphasis on inquiry and design . this paper offers an alternative laboratory - style course , offered via the compass project at uc berkeley . using a model - based approach , undergraduate physics students engaged in authentic research practices by iteratively refining their experimental and theoretical designs . the course also promoted lifelong learning skills , such as persistence and organization , through a cycle of student self - reflection and personalized instructor feedback . this cycle is a strategy for providing students with sociocultural support , which is particularly important for students from underrepresented groups in the sciences . we document growth in students understanding of scientific measurement and , drawing on student reflections , we suggest areas for future research focused on improving students lifelong learning skills . . And you have already written the first three sentences of the full article: physicists conduct experiments through iterative cycles of design , testing , and refinement , yet undergraduate physics students rarely have opportunities to do the same @xcite.while laboratory ( lab ) courses can provide opportunities for authentic practice , they often employ uninspiring recipe - style activities with little emphasis on designing , building , or troubleshooting experimental approaches @xcite . this has prompted numerous calls to reform physics lab experiences @xcite , with some arguing authentic inquiry should begin as early as students freshman year @xcite to improve student persistence in physical science majors @xcite . moreover , there is growing recognition that physics education must focus on self - learning , critical thinking , collaboration and other such skills to help students succeed @xcite .. Please generate the next two sentences of the article
this paper describes a transformed lab course called _ intro to measurement _ , the third and final course in the berkeley compass project s three - part sequence for first - year undergraduate students @xcite . the course borrows elements from other transformed lab courses known to be effective @xcite and enhances them with attention to the sociocultural aspects of learning physics @xcite .
11,766
Suppose that you have an abstract for a scientific paper: to better understand a preferred magnetic field configuration and its evolution during coronal mass ejection events , we investigated the spatial and temporal evolution of photospheric magnetic fields in the active region noaa 9236 that produced eight flare - associated cmes during the time period of 2000 november 2326 . the time variations of the total magnetic helicity injection rate and the total unsigned magnetic flux are determined and examined not only in the entire active region but also in some local regions such as the main sunspots and the cme - associated flaring regions using @xmath0/mdi magnetogram data . as a result , we found that : ( 1 ) in the sunspots , a large amount of postive ( right - handed ) magnetic helicity was injected during most of the examined time period , ( 2 ) in the flare region , there was a continuous injection of negative ( left - handed ) magnetic helicity during the entire period , accompanied by a large increase of the unsigned magnetic flux , and ( 3 ) the flaring regions were mainly composed of emerging bipoles of magnetic fragments in which magnetic field lines have substantially favorable conditions for making reconnection with large - scale , overlying , and oppositely directed magnetic field lines connecting the main sunspots . these observational findings can also be well explained by some mhd numerical simulations for cme initiation ( e.g. , reconnection - favored emerging flux models ) . we therefore conclude that reconnection - favored magnetic fields in the flaring emerging flux regions play a crucial role in producing the multiple flare - associated cmes in noaa 9236 . . And you have already written the first three sentences of the full article: coronal mass ejections ( cmes ) are large - scale transient eruptions of magnetized plasma from the solar corona that propagate outward into interplanetary space . their statistical , physical , and morphological properties have been well studied using satellite- and ground - based observational data , especially by using white - light coronagraph data from the large angle and spectrometric coronagraph ( lasco , * ? ? ? * ) aboard the @xmath0 spacecraft ( e.g. , * ? ? ?. Please generate the next two sentences of the article
* ; * ? ? ? * ; * ? ? ?
11,767
Suppose that you have an abstract for a scientific paper: the difficulty of explaining non - local correlations in a fixed causal structure sheds new light on the old debate on whether space and time are to be seen as fundamental . refraining from assuming space - time as given _ a priori _ has a number of consequences . first , the usual definitions of _ randomness _ depend on a causal structure and turn meaningless . so motivated , we propose an _ intrinsic _ , physically motivated measure for the randomness of a string of bits : its length minus its normalized work value , a quantity we closely relate to its _ kolmogorov complexity _ ( the length of the shortest program making a universal turing machine output this string ) . we test this alternative concept of randomness for the example of non - local correlations , and we end up with a reasoning that leads to similar conclusions as , but is more direct than , in the probabilistic view since only the outcomes of measurements that can _ actually all be carried out together _ are put into relation to each other . in the same context - free spirit , we connect the logical reversibility of an evolution to the second law of thermodynamics and the arrow of time . refining this , we end up with a speculation on the emergence of a space - time structure on bit strings in terms of data - compressibility relations . finally , we show that logical consistency , by which we replace the abandoned causality , it strictly weaker a constraint than the latter in the multi - party case . . And you have already written the first three sentences of the full article: what is _ causality _ ? | the notion has been defined in different ways and turned out to be highly problematic , both in physics and philosophy . this observation is not new , as is nicely shown by _. Please generate the next two sentences of the article
bertrand russell _ s quote @xcite from more than a century ago : + _ `` the law of causality [ ] is a relic of a bygone age , surviving , like the monarchy , only because it is erroneously supposed to do no harm . '' _ + indeed , a number of attempts have been made to abandon causality and replace global by only local assumptions ( see , _ e.g. _ , @xcite ) .
11,768
Suppose that you have an abstract for a scientific paper: we compute the one - loop corrections to the tachyon potentials in both , bosonic and supersymmetric , cases by worldsheet methods . in the process , we also get some insight on tachyon condensation from the viewpoint of closed string modes and show that there is an instability due to the closed string tachyon in the bosonic case . + pacs : 11.25.-w + keywords : string theory , unstable d - branes . And you have already written the first three sentences of the full article: it is a big problem to better understand the vacuum structure of string / m theory . even in the case of bosonic string theory which contains the tachyon near its perturbative vacuum , it turned out not to be a simple deal to find a stable vacuum . note that hopes for the existence of such a vacuum have their origins already in the seventies @xcite .. Please generate the next two sentences of the article
however , for open strings a real breakthrough has been achieved in recent years when the open string tachyons were related with annihilation or decay of d - branes via the process of tachyon condensation @xcite ( see @xcite for a review and a list of references ) . in studying the phenomenon of tachyon condensation , string field theory methods are the most appropriate ones ( see @xcite for the latest developments ) .
11,769
Suppose that you have an abstract for a scientific paper: a fully regulated definition of feynman s path integral is presented here . the proposed re - formulation of the path integral coincides with the familiar formulation whenever the path integral is well - defined . in particular , it is consistent with respect to lattice formulations and wick rotations , i.e. , it can be used in euclidean and minkowskian space - time . the path integral regularization is introduced through the generalized kontsevich - vishik trace , that is , the extension of the classical trace to fourier integral operators . physically , we are replacing the time - evolution semi - group by a holomorphic family of operator families such that the corresponding path integrals are well - defined in some half space of @xmath0 . the regularized path integral is , thus , defined through analytic continuation . this regularization can be performed by means of stationary phase approximation or computed analytically depending only on the hamiltonian and the observable ( i.e. , known a priori ) . in either case , the computational effort to evaluate path integrals or expectations of observables reduces to the evaluation of integrals over spheres . furthermore , computations can be performed directly in the continuum and applications ( analytic computations and their implementations ) to a number of models including the non - trivial cases of the massive schwinger model and a @xmath1 theory . . And you have already written the first three sentences of the full article: in his original work on path integrals , feynman @xcite noted that recognizing known facts from different perspectives can lead to new and interesting insights . quantum mechanics in particular has been an important example of this observation , having schrdinger s differential equation and heisenberg s matrix algebra . while the two theories mathematical descriptions are seemingly distinct , dirac s transformation theory proved their equivalence . in 1948 , feynman @xcite added a third important mathematical formulation of quantum mechanics based on some of dirac s observations about the role of the classical action in quantum mechanics .. Please generate the next two sentences of the article
this third description is also known as feynman s path integral formalism and , in combination with feynman diagrams , proved to be fundamental for the development and study of quantum field theories ( qfts ) . unfortunately , the path integral is a very elusive object .
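For concreteness, the time-sliced definition that underlies Feynman's formulation is reproduced here in its standard textbook form for a single particle with Hamiltonian H = p^2/2m + V(x); it is quoted for orientation only and is not the regularised object constructed in the paper:

\[ \langle x_f | e^{-i H T/\hbar} | x_i \rangle = \lim_{N\to\infty} \left( \frac{m}{2\pi i \hbar \varepsilon} \right)^{N/2} \int \prod_{j=1}^{N-1} dx_j \; \exp\!\left\{ \frac{i}{\hbar} \sum_{j=0}^{N-1} \varepsilon \left[ \frac{m}{2} \left( \frac{x_{j+1}-x_j}{\varepsilon} \right)^2 - V(x_j) \right] \right\} , \qquad \varepsilon = \frac{T}{N} ,
\]

with x_0 = x_i and x_N = x_f. The elusiveness mentioned above stems from the N \to \infty limit, which does not define a genuine measure on the space of paths.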
11,770
Suppose that you have an abstract for a scientific paper: in this paper we study the gravitational dielectric phenomena of a d2-brane in the background of kaluza - klein monopoles and d6-branes . in both cases the spherical d2-brane with nonzero radius becomes classical solution of the d2-brane action . we also investigate the gravitational myers effect in the background of d6-branes . this phenomenon occurs since the tension of the d2-brane balances with the repulsive force between d0-branes and d6-branes . . And you have already written the first three sentences of the full article: in the electromagnetism , oppositely charged particles under the uniform electric field separate from each other to cancel out the background flux . this is the well - known dielectric effect , and analogous phenomena can be observed in the type ii superstring theories and the m - theory@xcite . let us illustrate the dielectric phenomenon in the type iia superstring theory with the case where a spherical d2-brane is located in the background of coincident ns5-branes@xcite .. Please generate the next two sentences of the article
ns5-branes carry the magnetic charge of the ns - ns 2-form potential , and its 3-form flux vertically penetrates the three dimensional sphere @xmath0 which encloses ns5-branes in the transverse space . the d2-brane is assumed to be spherical in the @xmath0 . in case
11,771
Suppose that you have an abstract for a scientific paper: in a molecular communication network , transmitters and receivers communicate by using signalling molecules . at the receivers , the signalling molecules react , via a chain of chemical reactions , to produce output molecules . the counts of output molecules over time is considered to be the output signal of the receiver . this output signal is used to detect the presence of signalling molecules at the receiver . the output signal is noisy due to the stochastic nature of diffusion and chemical reactions . the aim of this paper is to characterise the properties of the output signals for two types of receivers , which are based on two different types of reaction mechanisms . we derive analytical expressions for the mean , variance and frequency properties of these two types of receivers . these expressions allow us to study the properties of these two types of receivers . in addition , our model allows us to study the effect of the diffusibility of the receiver membrane on the performance of the receivers . * keywords : * molecular communication networks ; molecular receivers ; performance analysis ; stochastic models ; master equations ; receiver membrane ; noise . And you have already written the first three sentences of the full article: a molecular communication network @xcite consists of multiple transmitters and receivers in a fluid medium . the transmitters encode the messages by types , concentration or emission frequency of signalling molecules . the signalling molecules diffuse freely in the medium .. Please generate the next two sentences of the article
when these signalling molecules reach the receiver , they trigger chemical reactions in the receiver to allow their presence to be detected . there are a number of reasons why the study of molecular communication networks , both natural and synthetic , is important .
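The stochastic reactions alluded to above are usually described by a chemical master equation, and sample paths of such a receiver output can be generated with the Gillespie algorithm. The sketch below assumes a deliberately simple reversible binding scheme L + R <-> C with made-up rate constants; it is not the reaction mechanisms analysed in the paper, only an illustration of why the output signal (the count of C over time) is noisy.

    import random

    def gillespie_binding(L, R, C, k_on, k_off, t_end):
        """Sample path of L + R <-> C; returns (time, C) pairs as a noisy output signal."""
        t, trace = 0.0, [(0.0, C)]
        while t < t_end:
            a_bind = k_on * L * R        # propensity of the binding reaction
            a_unbind = k_off * C         # propensity of the unbinding reaction
            a_total = a_bind + a_unbind
            if a_total == 0.0:
                break
            t += random.expovariate(a_total)        # exponential waiting time
            if random.random() < a_bind / a_total:  # pick which reaction fires
                L, R, C = L - 1, R - 1, C + 1
            else:
                L, R, C = L + 1, R + 1, C - 1
            trace.append((t, C))
        return trace

    # hypothetical numbers: 200 signalling molecules, 50 receptors, illustrative rates
    output_signal = gillespie_binding(L=200, R=50, C=0, k_on=1e-3, k_off=0.1, t_end=50.0)

Averaging many such runs gives the mean and variance of the output signal, which is the kind of quantity the paper characterises analytically.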
11,772
Suppose that you have an abstract for a scientific paper: v1387 aql ( the donor in the microquasar grs 1915 + 105 ) is a low - mass giant . such a star consists of a degenerate helium core and a hydrogen - rich envelope . both components are separated by an hydrogen burning shell . the structure of such an object is relatively simple and easy to model . making use of the observational constraints on the luminosity and the radius of v1387 aql , we constrain the mass of this star with evolutionary models . we find a very good agreement between the constraints from those models and from the observed rotational broadening and the nir magnitude . combining the constraints , we find solutions with stripped giants of the mass of @xmath0 and of the spectral class k5 iii , independent of the distance to the system , and a distance - dependent upper limit , @xmath1 . we also calculate the average mass transfer rate and the duty cycle of the system as a function of the donor mass . [ firstpage ] binaries : general stars : evolution stars : individual : v1387 aql x - rays : binaries x - rays : individual : grs 1915 + 105 . . And you have already written the first three sentences of the full article: grs 1915 + 105 is a low mass x - ray binary , which appears to be the most distinct galactic microquasar @xcite . the system contains a black hole and a low mass k m iii giant donor @xcite , and it has a long period of @xmath2 d @xcite .. Please generate the next two sentences of the article
the donor , which was given the variable - star name v1387 aql , fills its roche lobe and supplies the matter accreted by the black hole . the first determination of the mass of the black hole based on a measurement of the radial velocity amplitude , @xmath3 , was by @xcite , who , observing with the vlt , obtained @xmath4 km s@xmath5 .
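For orientation, the standard route from a radial-velocity amplitude to a black-hole mass is the binary mass function; the symbols below (orbital period P, donor velocity semi-amplitude K_2, eccentricity e, inclination i) are generic and are not this article's @xmath notation:

\[ f(M) \equiv \frac{P K_2^3 (1-e^2)^{3/2}}{2\pi G} = \frac{M_{\mathrm{BH}}^3 \sin^3 i}{(M_{\mathrm{BH}} + M_{\mathrm{donor}})^2} , \]

so the black-hole mass follows only once the donor mass and the inclination are pinned down, which is why evolutionary constraints on the donor of the kind derived in this paper matter.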
11,773
Suppose that you have an abstract for a scientific paper: complex networks such as the sexual partnership web or the internet often show a high degree of redundancy and heterogeneity in their connectivity properties . this peculiar connectivity provides an ideal environment for the spreading of infective agents . here we show that the random uniform immunization of individuals does not lead to the eradication of infections in all complex networks . namely , networks with scale - free properties do not acquire global immunity from major epidemic outbreaks even in the presence of unrealistically high densities of randomly immunized individuals . the absence of any critical immunization threshold is due to the unbounded connectivity fluctuations of scale - free networks . successful immunization strategies can be developed only by taking into account the inhomogeneous connectivity properties of scale - free networks . in particular , _ targeted _ immunization schemes , based on the nodes connectivity hierarchy , sharply lower the network s vulnerability to epidemic attacks . . And you have already written the first three sentences of the full article: the relevance of spatial and other kinds of heterogeneity in the design of immunization strategies has been widely addressed in the epidemic modeling of infectious diseases @xcite . in particular , it has been pointed out that population inhomogeneities can substantially enhance the spread of diseases , making them harder to eradicate and calling for specific immunization strategies . this issue assumes the greatest importance in a wide range of natural interconnected systems such as food - webs , communication and social networks , metabolic and neural systems @xcite . the complexity of these networks resides in the small average path lengths among any two nodes ( small - world property ) , along with a large degree of local clustering . in other words , some special nodes of the structure. Please generate the next two sentences of the article
develop a larger probability to establish connections pointing to other nodes . this feature has dramatic consequences in the topology of scale - free ( sf ) networks @xcite which exhibit a power - law distribution @xmath0 for the probability that any node has @xmath1 connections to other nodes . for exponents in the range
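A toy structural calculation illustrates why targeted immunization is so much more effective than random immunization on a scale-free graph. The sketch below uses the size of the largest connected component as a crude proxy for the population reachable by an outbreak; it is an illustration under assumed parameters (a Barabási-Albert graph and the networkx library), not the SIS/SIR analysis carried out in the paper.

    import random
    import networkx as nx

    def giant_fraction(G):
        # fraction of the remaining nodes that sit in the largest connected component
        return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

    def immunize(G, fraction, targeted):
        H = G.copy()
        n_remove = int(fraction * H.number_of_nodes())
        if targeted:
            # remove the most connected nodes (hubs) first
            ranked = sorted(H.degree, key=lambda kv: kv[1], reverse=True)
            victims = [node for node, _ in ranked[:n_remove]]
        else:
            # remove a uniform random sample of nodes
            victims = random.sample(list(H.nodes), n_remove)
        H.remove_nodes_from(victims)
        return H

    G = nx.barabasi_albert_graph(5000, 3)   # scale-free test graph, illustrative size
    for f in (0.05, 0.2, 0.5):
        print(f,
              round(giant_fraction(immunize(G, f, targeted=False)), 3),
              round(giant_fraction(immunize(G, f, targeted=True)), 3))

Random removal leaves the giant component essentially intact even at high immunization densities, while removing the hubs fragments it quickly, in line with the argument about unbounded connectivity fluctuations above.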
11,774
Suppose that you have an abstract for a scientific paper: astrod i is the first planned space mission in a series of astrod missions for testing relativity in space using optical devices . the main aims are : ( i ) to test general relativity with an improvement of three orders of magnitude compared to current results , ( ii ) to measure solar and solar system parameters with improved accuracy , ( iii ) to test the constancy of the gravitational constant and in general to get a deeper understanding of gravity . the first ideas for the astrod missions go back to the last century when new technologies in the area of laser physics and time measurement began to appear on the horizon . astrod is a mission concept that is supported by a broad international community covering the areas of space technology , fundamental physics , high performance laser and clock technology and drag free control . while astrod i is a single - spacecraft concept that performes measurements with pulsed laser ranging between the spacecraft and earthbound laser ranging stations , astrod - gw is planned to be a three spacecraft mission with inter - spacecraft laser ranging . astrod - gw would be able to detect gravitational waves at frequencies below the elisa / ngo bandwidth . as a third step super - astrod with larger orbits could even probe primordial gravitational waves . this article gives an overview on the basic principles especially for astrod i. . And you have already written the first three sentences of the full article: the astrod ( astrodynamical space test of relativity using optical devices ) mission series aims at high precision measurements in the interplanetary space for the determination of several quantities in the areas of general relativity and fundamental physics as well as solar and solar system research @xcite . the main goals are improvements in the determination of the relativistic parameterized - post - newtonian ( ppn ) parameters @xmath0 and @xmath1 ( eddington parameter ) , the solar and solar system parameters and the variation of the gravitational constant . gravitational wave detection is a further goal for the multi - spacecraft missions astrod - gw @xcite and super - astrod @xcite . in the frame of this overview article on the single - spacecraft astrod. Please generate the next two sentences of the article
i mission concept the specific mission goals , the basic mission concept as well as several technical aspects of the mission are presented . more detailed descriptions of the different mission aspects are given in refs . .
11,775
Suppose that you have an abstract for a scientific paper: we describe the double wall tube with cylindrical dislocation in the framework of the geometric theory of defects . the induced metric is found . the dispersion relation is obtained for the propagation of torsional elastic waves in the double wall tube . . And you have already written the first three sentences of the full article: ideal crystals are absent in nature , and most of their physical properties , such as plasticity , melting , growth , etc . , are defined by defects of the crystalline structure . therefore , a study of defects is a topical scientific question of importance for applications in the first place . at present , a fundamental theory of defects is absent in spite of the existence of dozens of monographs and thousands of articles .. Please generate the next two sentences of the article
one of the most promising approaches to the theory of defects is based on riemann cartan geometry , which involves nontrivial metric and torsion . in this approach , a crystal is considered as a continuous elastic medium with a spin structure .
11,776
Suppose that you have an abstract for a scientific paper: we report the discovery of a @xmath0 counterpart to the anomalous x - ray pulsar ( magnetar ) 1e 2259 + 586 with the _ spitzer space telescope_. the mid - infrared flux density is @xmath1 at @xmath0 and @xmath220@xmath3 ( at 95% confidence ) at @xmath4 , or @xmath5% of the 2 - 10 kev x - ray flux ( corrected for extinction ) . combining our _ spitzer _ measurements with previously published near - infrared data , we show that the overall infrared emission from 1e 2259 + 586 is qualitatively similar to that from the magnetar 4u 0142 + 61 . therefore , the passive x - ray - heated dust disk model originally developed for 4u 0142 + 61 might also apply to 1e 2259 + 586 . however , the ir data from this source can also be fitted by a simple power - law spectrum as might be expected from magnetospheric emission . . And you have already written the first three sentences of the full article: the notion of supernova fallback , where some of the ejecta from a core - collapse supernova ends up captured by the newly formed neutron star @xcite and may have sufficient angular momentum to form a disk @xcite , has been a general prediction of supernova models ( e.g. , * ? ? ? fallback can have profound effects on the final state of the neutron star , forming disks that can give rise to planets @xcite , and possibly even causing the neutron star to collapse into a black hole . fallback disks should manifest as a thermal infrared excess @xcite , but previous searches for such excesses around neutron stars were unsuccessful ( * ? ? ? * and references therein ) .. Please generate the next two sentences of the article
several years ago , though , we discovered the mid - infrared ( mid - ir ; here , @xmath6 and @xmath7 ) counterpart to the magnetar 4u 0142 + 61 ( * ? ? ? * hereafter ) . the combined optical / ir spectrum of this magnetar suggests that the optical and ir data arise from two different spectral components . while the optical component is demonstrably of magnetospheric origin
11,777
Suppose that you have an abstract for a scientific paper: micrometer - sized al / alo@xmath0/y tunnel junctions were fabricated by the electron - beam lithography technique . the thin ( @xmath1 1.52 nm thickness ) insulating alo@xmath0 layer was grown on top of the al base electrode by o@xmath2 glow discharge . the zero - bias conductances @xmath3 and the current - voltage characteristics of the junctions were measured in a wide temperature range 1.5300 k. in addition to the direct tunneling conduction mechanism observed in low-@xmath4 junctions , high-@xmath4 junctions reveal a distinct charge transport process which manifests the thermally fluctuation - induced tunneling conduction ( fitc ) through short nanoconstrictions . we ascribe the experimental realization of the fitc mechanism to originating from the formations of hot spots " ( incomplete pinholes ) in the alo@xmath0 layer owing to large junction - barrier interfacial roughness . . And you have already written the first three sentences of the full article: electron tunneling through a thin insulating layer ( a potential barrier ) in a metal - insulator - metal ( mim ) multilayered structure is one of the most fundamental research topics in condensed matter physics . it also lies at the heart of numerous solid - state devices , such as tunnel diodes , @xcite coulomb blockade thermometers , @xcite josephson junctions , @xcite and memory elements based on magnetic tunnel junctions . @xcite the functionality of a tunnel device relies heavily on the material properties of the intermediate thin insulating layer .. Please generate the next two sentences of the article
usually , a weak insulating - like temperature dependence of the zero - bias junction conductance , @xmath5 , evaluated in the limit $ v \rightarrow 0 $ , as described by the simmons model @xcite is used to ascertain the quality and reliability of the insulating layer . @xcite in the absence of any pinholes in the insulating layer [ see the schematic diagram depicted in fig .
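An editorial aside on the transport mechanism named in this entry: the fluctuation-induced tunneling conduction (FITC) picture is usually summarized by Sheng's characteristic conductance law. The form below is the standard textbook expression, quoted for orientation only and not taken from the paper itself:
\[ G(T) \;\propto\; \exp\!\left[-\,\frac{T_1}{T+T_0}\right], \]
where the energy scale T_1 reflects the effective barrier at the constriction ("hot spot") and T_0 marks the crossover temperature below which the conductance saturates to simple, temperature-independent tunneling.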
11,778
Suppose that you have an abstract for a scientific paper: we compute the anomalous dimensions of a set of composite operators which involve derivatives at four loops in @xmath0 in @xmath1 theory as a function of the operator moment @xmath2 . these operators are similar to the twist-@xmath3 operators which arise in qcd in the operator product expansion in deep inelastic scattering . by regarding their inverse mellin transform as being equivalent to the dglap splitting functions we explore to what extent taking a restricted set of operator moments can give a good approximation to the _ exact _ four loop result . spbu ip9707 lth391 . s. é. derkachov@xmath4 , j.a . gracey@xmath5 & a.n . manashov@xmath6 . * department of mathematics , st petersburg technology institute , sankt petersburg , russia . * theoretical physics division , department of mathematical sciences , university of liverpool , liverpool , l69 7zf , united kingdom . * department of theoretical physics , state university of st petersburg , sankt petersburg , 198904 russia . . And you have already written the first three sentences of the full article: the scalar field theory with a @xmath1 interaction has been widely studied for a variety of problems . for instance , it underlies the physics of the anti - ferromagnetic phase transition in statistical physics and also is the starting point for the higgs mechanism of the standard model in particle physics . from another viewpoint it has been used as a toy model in four dimensions to examine fundamental ideas in quantum field theory .. Please generate the next two sentences of the article
one such example is understanding perturbation theory at high orders . ( for a review of these points see , for example , @xcite . ) in particular the fundamental functions of the renormalization group like the @xmath7-function are known to five loops in the @xmath0 scheme , @xcite . in other field theories of interest in particle physics such as gauge theories
11,779
Suppose that you have an abstract for a scientific paper: we establish new dynamical constraints on the mass and abundance of compact objects in the halo of dwarf spheroidal galaxies . in order to preserve kinematically cold the second peak of the ursa minor dwarf spheroidal ( umi dsph ) against gravitational scattering , we place upper limits on the density of compact objects as a function of their assumed mass . the mass of the dark matter constituents can not be larger than @xmath0 m@xmath1 at a halo density in umi s core of @xmath2 m@xmath1 pc@xmath3 . this constraint rules out a scenario in which dark halo cores are formed by two - body relaxation processes . our bounds on the fraction of dark matter in compact objects with masses @xmath4 m@xmath1 improve those based on dynamical arguments in the galactic halo . in particular , objects with masses @xmath5 m@xmath1 can comprise no more than a halo mass fraction @xmath6 . better determinations of the velocity dispersion of old overdense regions in dsphs may result in more stringent constraints on the mass of halo objects . for illustration , if the preliminary value of @xmath7 km / s for the secondary peak of umi is confirmed , compact objects with masses above @xmath8 m@xmath1 could be excluded from comprising all its dark matter halo . . And you have already written the first three sentences of the full article: the composition of dark halos around galaxies is a difficult problem . many of the baryons in the universe are dark and at least some of these dark baryons could be in galactic halos in the form of very massive objects ( vmos ) , with masses above @xmath9 m@xmath1 . astrophysically motivated candidates include massive compact halo objects ( machos ) and black holes either of intermediate mass ( imbhs ; @xmath10 to @xmath11 m@xmath1 ) or massive ( @xmath12 m@xmath1 ) . imbhs are an intriguing possibility as they could contribute , in principle , to all the baryonic dark matter and may be the engines behind ultraluminous x - ray sources recently discovered in nearby galaxies . a successful model in which vmos are the dominant component of dark matter halos could resolve some long - standing problems @xcite . if the halos of dsphs are composed of black holes of masses between @xmath5 and @xmath13 m@xmath1 , they evolve towards a shallower inner profile in less than a hubble time , providing an explanation for the origin of dark matter cores in dwarf galaxies , and the orbits of globular clusters ( gcs ) do not shrink to the center by dynamical friction @xcite .. Please generate the next two sentences of the article
very few observational limits on vmos in dsph halos have been derived so far . this letter is aimed at constraining the mass and abundance of vmos in the halos of dsphs by the disruptive effects they would have on gcs and cold long - lived substructures .
11,780
Suppose that you have an abstract for a scientific paper: the ade classification scheme is encountered in many areas of mathematics , most notably in the study of lie algebras . here such a scheme is shown to describe families of two - dimensional conformal field theories . * a - d - e classification of conformal field theories * . andrea cappelli , _ infn , via g. sansone 1 , 50019 sesto fiorentino ( firenze ) , italy _ ; jean - bernard zuber , _ lpthe , tour 24 - 25 , 5ème étage , université pierre et marie curie - paris 6 , 4 place jussieu , f 75252 paris cedex 5 , france _ . _ review article to appear in scholarpedia , http://www.scholarpedia.org/_ . And you have already written the first three sentences of the full article: conformal field theories ( cft ) in two space - time dimensions have been the object of intensive work since the mid eighties , after the fundamental paper @xcite . these theories have very diverse and important physical applications , from the description of critical behavior in statistical mechanics and solid state physics in low dimension , to the worldsheet description of string theory . furthermore , their remarkable analytical and algebraic structures and their connections with many other domains of mathematics make them outstanding laboratories of new techniques and ideas .. Please generate the next two sentences of the article
a particularly striking feature is the possibility of classifying large classes of cft through the study of representations of the virasoro algebra of conformal transformations @xcite . the classification of cft partition functions , leading to the ade scheme , follows from exploiting the additional symmetry under modular transformations , the discrete coordinate changes that leave invariant the double - periodic finite - size geometry of the torus .
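For orientation, the classification problem referred to in this entry can be stated compactly: one looks for torus partition functions built from Virasoro (or extended-algebra) characters,
\[ Z(\tau,\bar\tau) \;=\; \sum_{h,\bar h} N_{h,\bar h}\;\chi_h(\tau)\,\bar\chi_{\bar h}(\bar\tau), \qquad N_{h,\bar h}\in\mathbb{Z}_{\ge 0},\quad N_{0,0}=1, \]
that are invariant under the modular transformations \tau \to \tau+1 and \tau \to -1/\tau . The notation is the standard one and is assumed here for illustration; it is not drawn from the entry itself.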
11,781
Suppose that you have an abstract for a scientific paper: we present a study on the clustering of a stellar mass selected sample of 18,482 galaxies with stellar masses @xmath0at redshifts @xmath1 , taken from the palomar observatory wide - field infrared survey . we examine the clustering properties of these stellar mass selected samples as a function of redshift and stellar mass , and discuss the implications of measured clustering strengths in terms of their likely halo masses . we find that galaxies with high stellar masses have a progressively higher clustering strength , and amplitude , than galaxies with lower stellar masses . we also find that galaxies within a fixed stellar mass range have a higher clustering strength at higher redshifts . we furthermore use our measured clustering strengths , combined with models from @xcite , to determine the average total masses of the dark matter haloes hosting these galaxies . we conclude that for all galaxies in our sample the stellar - mass - to - total - mass ratio is always lower than the universal baryonic mass fraction . using our results , and a compilation from the literature , we furthermore show that there is a strong correlation between stellar - mass - to - total - mass ratio and derived halo masses for central galaxies , such that more massive haloes contain a lower fraction of their mass in the form of stars over our entire redshift range . for central galaxies in haloes with masses @xmath2we find that this ratio is @xmath3 , much lower than the universal baryonic mass fraction . we show that the remaining baryonic mass is included partially in stars within satellite galaxies in these haloes , and as diffuse hot and warm gas . we also find that , at a fixed stellar mass , the stellar - to - total - mass ratio increases at lower redshifts . this suggests that galaxies at a fixed stellar mass form later in lower mass dark matter haloes , and earlier in massive haloes . we interpret this as a `` halo downsizing '' effect , however some of this evolution could be attributed to halo.... And you have already written the first three sentences of the full article: astronomers in the last decade have made major progress in understanding the properties and evolution of galaxies in the distant universe . galaxies up to redshifts @xmath4 , and perhaps even higher , have been discovered in large numbers allowing statistically significant population characteristics to be derived ( e.g. * ? ? ? * ; * ? ? ?. Please generate the next two sentences of the article
we in fact now have a good understanding of basic galaxy quantities , such as the luminosity function , as well as how scaling relations , such as the tully - fisher relation , evolve up to , at least , @xmath5 . we have also begun to trace the stellar mass evolution of galaxies , as well as the star formation rate , determining when stars and stellar mass were put into place in the modern galaxy population .
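As a reference point for the clustering statistics discussed in this entry, the two-point correlation function and the linear bias are conventionally parametrized as
\[ \xi(r) \;=\; \left(\frac{r}{r_0}\right)^{-\gamma}, \qquad b^2 \;\equiv\; \frac{\xi_{\rm gal}}{\xi_{\rm dm}}, \]
where the correlation length r_0 plays the role of the "clustering strength" and the bias b links a galaxy sample to its host dark matter haloes. This is the generic parametrization, quoted as background rather than the paper's specific fit.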
11,782
Suppose that you have an abstract for a scientific paper: radio - wave scintillation observations reveal a nearly kolmogorov spectrum of density fluctuations in the ionized interstellar medium . although this density spectrum is suggestive of turbulence , no theory relevant to its interpretation exists . we calculate the density spectrum in turbulent magnetized plasmas by extending the theory of incompressible magnetohydrodynamic ( mhd ) turbulence given by @xcite to include the effects of compressibility and particle transport . our most important results are as follows . \(1 ) density fluctuations are due to the slow mode and the entropy mode . both modes are passively mixed by the cascade of shear alfvn waves . since the shear alfvn waves have a kolmogorov spectrum , so do the density fluctuations . \(2 ) observed density fluctuation amplitudes constrain the nature of mhd turbulence in the interstellar medium . slow mode density fluctuations are suppressed when the magnetic pressure is less than the gas pressure . entropy mode density fluctuations are suppressed by cooling when the cascade timescale is longer than the cooling timescale . these constraints imply either that the magnetic and gas pressures are comparable , or that the outer scale of the turbulence is very small . \(3 ) a high degree of ionization is required for the cascade to survive damping by neutrals and thereby to extend to small lengthscales . regions that are insufficiently ionized produce density fluctuations only on lengthscales larger than the neutral damping scale . these regions may account for the excess of power that is found on large scales . \(4 ) provided that the thermal pressure exceeds the magnetic pressure , both the entropy mode and the slow mode are damped on lengthscales below that at which protons can diffuse across an eddy during the eddy s turnover time . consequently , eddies whose extents _ along the magnetic field _ are smaller than the proton collisional mean free path do not contribute to the density spectrum . however , in mhd.... And you have already written the first three sentences of the full article: diffractive scintillations of small angular - diameter radio sources indicate that the interstellar electron density spectrum on lengthscales @xmath0 is nearly kolmogorov ; i.e. r.m.s . density fluctuations across a lengthscale @xmath1 are nearly proportional to @xmath2 . they also establish that there are large variations in the amplitude of the density spectrum along different lines of sight .. Please generate the next two sentences of the article
rickett ( 1977 , 1990 ) and @xcite review the observations of diffractive scintillation and their interpretation . they also discuss refractive scintillations and dispersion measure fluctuations , which probe density fluctuations on scales larger than the diffractive scales .
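The Kolmogorov-like scaling quoted in this entry can be written out explicitly. Using the usual convention for an isotropic three-dimensional power spectrum (an assumption of this note, not notation from the paper),
\[ \delta n_\ell \;\propto\; \ell^{1/3} \quad\Longleftrightarrow\quad P_n(k) \;\propto\; k^{-11/3}, \]
since \delta n_\ell^2 \sim \int_{k \gtrsim 1/\ell} d^3k\, P_n(k) \propto \ell^{2/3} for a -11/3 spectrum.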
11,783
Suppose that you have an abstract for a scientific paper: visser has suggested traversable 3-dimensional wormholes that could plausibly form naturally during big bang inflation . a wormhole mouth embedded in high mass density might accrete mass , giving the other mouth a net _ negative _ mass of unusual gravitational properties . the lensing of such a gravitationally negative anomalous compact halo object ( gnacho ) will enhance background stars with a time profile that is observable and qualitatively different from that recently observed for massive compact halo objects ( machos ) of positive mass . we recommend that macho search data be analyzed for gnachos . . And you have already written the first three sentences of the full article: the work of morris and thorne@xcite has led to a great deal of interest in the formation and properties of three - dimensional wormholes ( topological connections between separated regions of space - time ) that are solutions of the einstein s equations of general relativity . subsequently visser@xcite suggested a wormhole configuration , a flat - space wormhole that is framed by `` struts '' of an exotic material , a variant of the cosmic string solutions of einstein s equations@xcite . to satisfy the einstein field equations the cosmic string framing visser wormholes must have a _ negative _ string tension@xcite of @xmath0 and therefore a negative mass density . however , for the total mass of the wormhole system , the negative mass density of the struts should be combined with the effective positive mass density of the wormhole s gravitational field .. Please generate the next two sentences of the article
the overall object could , depending on the details of the model , have positive , zero , or negative net external mass . note that in hypothesizing the existence of such a wormhole , one has to abandon the averaged null energy condition@xcite .
11,784
Suppose that you have an abstract for a scientific paper: we present a non - perturbative canonical analysis of the @xmath0 quadratic - curvature , yet ghost - free , model to exemplify a novel , `` constraint bifurcation '' , effect . consequences include a jump in excitation count : a linearized level gauge variable is promoted to a dynamical one in the full theory . we illustrate these results with their concrete perturbative counterparts . they are of course mutually consistent , as are perturbative findings in related models . a geometrical interpretation in terms of propagating torsion reveals the model s relation to an ( improved ) version of einstein weyl gravity at the linearized level . finally , we list some necessary conditions for triggering the bifurcation phenomenon in general interacting gauge systems . . And you have already written the first three sentences of the full article: canonical analysis la dirac is a straightforward , if sometimes labyrinthine , approach to counting a system s physical degrees of freedom ( dof ) in the presence of gauge symmetries and non - linear interactions . however , this approach can uncover unexpected subtleties , as already exemplified by some toy models in @xcite . in this paper , we show that more physically motivated theories can also contain similar subtleties , with qualitatively important consequences . our focus will be on a specific self - interacting spin-2 gravity model , but similar effects may well arise in other interacting theories with higher - spin gauge symmetries , at least if some ( listed ) necessary conditions are met .. Please generate the next two sentences of the article
the theory we shall study in detail is the truncation of @xmath0 `` nmg '' @xcite to its pure quadratic curvature , yet ghost - free , part @xcite , @xmath1= \frac{1}{16}\int { \operatorname{d}\!}^3x\sqrt{-g}\ , \left[g^{\mu\nu}g_{\mu\nu}-\frac12\,g^2 \right ] = \frac{1}{16}\int { \operatorname{d}\!}^3x\sqrt{-g}\ , g_{\mu\nu}s^{\mu\nu } , \quad s_{\mu\nu } : = r_{\mu\nu}-\frac14 g_{\mu\nu}r\ , ; \label{eq : action2}\ ] ] here @xmath2 is the einstein tensor , @xmath3 is its trace and @xmath4 is the @xmath0 schouten tensor . we have set an overall dimensional constant to unity and used mostly plus signature .
11,785
Suppose that you have an abstract for a scientific paper: we propose a non - hermitian generalization of stimulated raman adiabatic passage ( stirap ) , which allows one to increase speed and fidelity of the adiabatic passage . this is done by adding balanced imaginary ( gain / loss ) terms in the diagonal ( bare energy ) terms of the hamiltonian and choosing them such that they cancel exactly the nonadiabatic couplings , providing in this way an effective shortcut to adiabaticity . remarkably , for a stirap using delayed gaussian - shaped pulses in the counter - intuitive scheme the imaginary terms of the hamiltonian turn out to be time independent . a possible physical realization of non - hermitian stirap , based on light transfer in three evanescently - coupled optical waveguides , is proposed . . And you have already written the first three sentences of the full article: stimulated raman adiabatic passage ( stirap ) is one of the most popular tools , used for manipulation of quantum structures @xcite . this method transfers population adiabatically between two states @xmath0 and @xmath1 , in a three - level quantum system , without populating the intermediate state @xmath2 . the applications of stirap cover a huge part of contemporary physics : coherent atomic excitation @xcite , control of chemical reactions @xcite , quantum information processing @xcite , coherent quantum state transfer and spatial adiabatic passage @xcite , waveguide optics @xcite , to name a few .. Please generate the next two sentences of the article
the technique of stirap is based on the existence of a dark state , which is an eigenstate of the hamiltonian and is a time - dependent superposition of the initial and target states . because stirap is an adiabatic technique , it achieves high fidelity only in the limit of adiabatic evolution , which requires large temporal pulse areas .
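For context on the dark state invoked in this entry, the standard three-level STIRAP Hamiltonian in the rotating-wave approximation and its dark eigenstate read, up to ordering and sign conventions assumed here for illustration,
\[ H(t) \;=\; \frac{\hbar}{2}\begin{pmatrix} 0 & \Omega_p(t) & 0 \\ \Omega_p(t) & 2\Delta & \Omega_s(t) \\ 0 & \Omega_s(t) & 0 \end{pmatrix}, \qquad |D(t)\rangle \;=\; \cos\theta\,|1\rangle - \sin\theta\,|3\rangle, \quad \tan\theta = \frac{\Omega_p(t)}{\Omega_s(t)} . \]
In the counter-intuitive pulse ordering (Stokes pulse \Omega_s preceding pump \Omega_p) the mixing angle \theta rotates from 0 to \pi/2, carrying the population adiabatically from |1\rangle to |3\rangle without populating |2\rangle.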
11,786
Suppose that you have an abstract for a scientific paper: it has recently been suggested that the formation of horizon - size primordial black hole ( pbh ) from pre - existing density fluctuations is effective during the cosmic qcd phase transition . in this letter we discuss the dependence of pbh formation on effective relativistic degrees of freedom , @xmath0 during the cosmic qcd phase transition . our finding is important in the light of recent cosmological arguments of several new classes of cosmic energy that appear from universal neutrino degeneracy , quintessential inflation , and dark radiation in brane world cosmology . extra - energy component from the standard value in these new cosmological theories is represented as an effective radiation in terms of @xmath0 . we conclude that the pbh formation during qcd phase transition becomes more efficient if negative extra - component of the cosmic energy is allowed because of the increase of the duration of the qcd phase transition , which leads to smaller mass scale of pbhs . this suggests larger probability of finding more pbhs if the dark radiation exists as allowed in the brane world cosmology . cosmology , quark - gluon plasma , black holes 98.80 , 12.38.m , 97.60.l . And you have already written the first three sentences of the full article: first argument of the pbh formation was presented by carr and hawking @xcite , in which the key concept of the pbh formation is that the three different length scales of the particle horizon , the schwarzschild radius , and the jeans length of any overdense region at radiation - dominated epoch are of the same order of magnitude . once an initially superhorizon - size density fluctuation crosses the particle horizon , competition between self gravity and centrifugal pressure force determines whether such a fluctuation collapses into black hole or not@xcite . if the self gravity overcomes the pressure force , then the fluctuation will form a black hole with the order of horizon mass . jedamzik @xcite found that , during cosmic qcd phase transition , there is almost no pressure response on adiabatic compression of fluctuation in the mixed phase of quark - gluon plasma and hadron gas .. Please generate the next two sentences of the article
this could occur because the enhanced energy density due to the compression of the fluctuations will turn into the high energy quark - gluon phase , instead of increasing centrifugal pressure force . as a result the overdense region from initially existing fluctuations tends to collapse efficiently to form the pbh during the qcd epoch when horizon mass scale is around 1@xmath1 . pbh mass spectrum is most likely dominated by the horizon mass at the phase transition epoch @xcite .
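The dependence on the effective degrees of freedom emphasized in this entry follows from the standard radiation-era horizon-mass estimate; the relations below are generic textbook steps rather than the paper's specific numbers:
\[ \rho = \frac{\pi^2}{30}\,g_*\,T^4, \qquad H^2 = \frac{8\pi G}{3}\,\rho, \qquad M_H \sim \frac{4\pi}{3}\,\rho\,H^{-3} \;\propto\; g_*^{-1/2}\,T^{-2} . \]
Hence a larger (smaller) effective number of relativistic degrees of freedom at the QCD epoch lowers (raises) the horizon mass, and with it the characteristic PBH mass, at a given temperature.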
11,787
Suppose that you have an abstract for a scientific paper: we calculate the decay rate of a yukawa fermion in a thermal bath using finite temperature cutting rules and effective green s functions according to the hard thermal loop resummation technique . we apply this result to the decay of a heavy majorana neutrino in leptogenesis . compared to the usual approach where thermal masses are inserted into the kinematics of final states , we find that deviations arise through two different leptonic dispersion relations . the decay rate differs from the usual approach by more than one order of magnitude in the temperature range which is interesting for the weak washout regime . we discuss how to arrive at consistent finite temperature treatments of leptogenesis . mpp-2010 - 31 . * decay of a yukawa fermion at finite temperature and applications to leptogenesis * . clemens p. kießig@xmath0 , michael plümacher@xmath0 , markus h. thoma@xmath1 , @xmath0 _ max - planck - institut für physik ( werner - heisenberg - institut ) , föhringer ring 6 , d-80805 münchen , germany _ , @xmath1 _ max - planck - institut für extraterrestrische physik , giessenbachstraße , d-85748 garching , germany _ . And you have already written the first three sentences of the full article: leptogenesis @xcite is an extremely successful theory in explaining the baryon asymmetry of the universe by adding three heavy right - handed neutrinos @xmath2 to the standard model , @xmath3 with masses @xmath4 at the scale of grand unified theories ( guts ) and yukawa couplings @xmath5 similar to the other fermions . this also solves the problem of the light neutrino masses via the see - saw mechanism without fine - tuning @xcite . the heavy neutrinos decay into lepton and higgs boson after inflation ; the decay is out of equilibrium since there are no gauge couplings to the standard model . if the cp asymmetry in the yukawa couplings is large enough , a lepton asymmetry is created by the decays which is then partially converted into a baryon asymmetry by sphaleron processes . as temperatures are high , interaction rates and the cp asymmetry need to be calculated using thermal field theory @xcite rather than vacuum quantum field theory .. Please generate the next two sentences of the article
however , in the conventional approach @xcite , thermal masses have been put in by hand without investigating the validity of this approach in detail . we have addressed this issue in @xcite and found that corrections arise through the occurrence of two lepton dispersion relations in the thermal bath . in this paper
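For scale, the zero-temperature benchmark against which the thermal rate of this entry is usually compared is the tree-level vacuum decay rate of the lightest heavy neutrino. In generic notation (Yukawa matrix h, mass M_1), assumed here for illustration,
\[ \Gamma_D \;=\; \frac{(h^\dagger h)_{11}}{8\pi}\,M_1 , \]
summed over the decays into lepton plus Higgs and the conjugate channel; the finite-temperature treatment then modifies both this overall rate and the phase space through the thermal lepton and Higgs dispersion relations.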
11,788
Suppose that you have an abstract for a scientific paper: models of sn driven galactic winds for ellipticals are presented . we assume that ellipticals formed at high redshift and suffered an intense burst of star formation . the role of supernovae of type ii and ia in the chemical enrichment and in triggering galactic winds is studied . in particular , several recipes for sn feed - back together with detailed nucleosynthesis prescriptions are considered . it is shown that sne of type ii have a dominant role in enriching the interstellar medium of elliptical galaxies whereas type ia sne dominate the enrichment and the energetics of the intracluster medium . . And you have already written the first three sentences of the full article: several mechanisms have been suggested so far for the formation and evolution of elliptical galaxies . one scenario is based on an early monolithic collapse of a gas cloud or early merging of lumps of gas where dissipation plays a fundamental role ( larson 1974 ; arimoto & yoshii 1987 ; matteucci & tornambè 1987 ) . in this scenario the star formation stops soon after a galactic wind develops and the galaxy evolves passively since then .. Please generate the next two sentences of the article
bursts of star formation in merging subsystems made of gas had been also suggested ( tinsley & larson 1979 ) ; in this picture star formation stops after the last burst and gas is lost via stripping or wind . the alternative and more recent scenario is the so - called hierarchical clustering scenario , where merging of early formed stellar systems in a wide redshift range and preferentially at late epochs , is expected ( kauffmann et al .
11,789
Suppose that you have an abstract for a scientific paper: the symmetry improved two - particle - irreducible ( si2pi ) formalism is a powerful tool to calculate the effective potential beyond perturbation theory , whereby infinite sets of selective loop - graph topologies can be resummed in a systematic and consistent manner . in this paper we study the renormalization - group ( rg ) properties of this formalism , by proving for the first time a number of new field - theoretic results . first , the rg runnings of all proper 2pi couplings are found to be uv finite , in the hartree fock and sunset approximations of the 2pi effective action . second , the si2pi effective potential is _ exactly _ rg invariant , in contrast to what happens in the ordinary one - particle - irreducible ( 1pi ) perturbation theory , where the effective potential is rg invariant only up to higher orders . finally , we show how the effective potential of an @xmath0 theory evaluated in the si2pi framework , appropriately rg improved , can reach a higher level of accuracy , even up to one order of magnitude , with respect to the corresponding one obtained in the 1pi formalism . @xmath1 man / hep/2017/02 , ulb - th/17 - 03 , march 2017 . 2pi effective action ; renormalization group ; effective potential . . And you have already written the first three sentences of the full article: the effective potential constitutes a fundamental tool in quantum field theory , widely used to study a multitude of physical phenomena , such as spontaneous symmetry breaking , tunnelling rates due to vacuum instability , and thermal phase transitions . with the ever increasing experimental precision on the determination of the standard model ( sm ) higgs boson mass at the cern large hadron collider ( lhc ) , more accurate computations of the sm effective potential can now be performed up to next - to - next - to - leading order ( nnlo ) . assuming no new physics and ignoring quantum - gravity effects , such nnlo studies @xcite imply that the sm vacuum may well be metastable .. Please generate the next two sentences of the article
most remarkably , the actual profile of the effective potential at large higgs - field values was found to be extremely sensitive to very small variations of the input parameters at the electroweak scale @xcite , thereby rendering such precision computations rather subtle . interestingly enough , this unusual feature seems to have sparked in the last years renewed interest in studying several field - theoretical aspects of the effective potential , which include the development of new semi - analytical techniques beyond ordinary perturbation theory @xcite , the treatment of infrared ( ir ) divergences due to the masslessness of the sm would - be goldstone bosons @xcite , more precise field - theoretical approaches to estimating tunnelling rates @xcite and assessing their gauge dependence @xcite , as well as evaluations of sm metastability rates in the presence of planck - scale suppressed operators @xcite .
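The exact RG invariance claimed in this entry's abstract is the statement that the effective potential obeys the homogeneous renormalization-group equation. In schematic single-coupling form (sign and normalization conventions vary; the version below is an assumption of this note),
\[ \left[\, \mu\frac{\partial}{\partial\mu} + \beta_\lambda\frac{\partial}{\partial\lambda} + \beta_{m^2}\frac{\partial}{\partial m^2} - \gamma\,\phi\frac{\partial}{\partial\phi} \,\right] V_{\rm eff}(\phi;\lambda,m^2,\mu) \;=\; 0 , \]
which holds exactly for the full potential but is violated by higher-order terms in any fixed-order 1PI truncation.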
11,790
Suppose that you have an abstract for a scientific paper: we report the results of observations of the black hole binaries and h in their quiescent state using the _ chandra x - ray observatory_. both sources are detected at their faintest level of x - ray emission ever observed with a 0.510 kev unabsorbed luminosity of 2 @xmath0 10@xmath1 ( d/5 kpc)@xmath2 erg s@xmath3 for and 9 @xmath0 10@xmath4 ( d/8 kpc)@xmath2 erg s@xmath3 for h. these luminosities are in the upper range compared to the faintest levels observed in other black hole systems , possibly related to residual accretion for these sources with frequent outbursts . for , the _ chandra _ observations also constrain the x - ray spectrum as a fit with an absorbed power - law model yields a photon index of 2.25 @xmath5 0.08 , clearly indicating a softening of the x - ray spectrum at lower luminosities compared to the standard hard state . similar softening at low luminosity is seen for several black hole transients with orbital periods less than 60 hours . most of the current models of accreting black holes are able to reproduce such softening in quiescence . in contrast , we find that systems with orbital periods longer than 60 hours appear to have hard spectra in quiescence and their behaviour may be consistent with hardening in quiescence . . And you have already written the first three sentences of the full article: x - ray novae ( or soft x - ray transients ) are compact binaries in which a neutron star or black hole ( bh ) primary accretes from a donor star via roche - lobe over - flow . most of these systems are usually in a quiescent state with an x - ray luminosity of 10@xmath6 to 10@xmath7 erg s@xmath3 . however , they undergo episodic outbursts that last for months with x - ray luminosities that can sometime reach or exceed the eddington limit ( @xmath8 10@xmath9 erg s@xmath3 for a 10 m@xmath10 bh ) . despite the residual activity of these quiescent objects ,. Please generate the next two sentences of the article
very little is known about their emission properties at very low accretion rates ( mcclintock & remillard 2005 ) . with the sensitivity of current x - ray missions ( in particular _ chandra _ and _ xmm - newton _ ) , it is now possible to study in more detail the physical processes that take place in this accretion regime
11,791
Suppose that you have an abstract for a scientific paper: the thermodynamical stability of dna minicircles is investigated by means of path integral techniques . hydrogen bonds between base pairs on complementary strands can be broken by thermal fluctuations and temporary fluctuational openings along the double helix are essential to biological functions such as transcription and replication of the genetic information . helix unwinding and bubble formation patterns are computed in circular sequences with variable radius in order to analyze the interplay between molecule size and appearance of helical disruptions . the latter are found in minicircles with @xmath0 base pairs and appear as a strategy to soften the stress due to the bending and torsion of the helix . it is known that the helicoidal conformation of dna is essentially determined by the hydrophobicity of purine and pyrimidine bases and by the bond angles in the flexible sugar - phosphate backbone while sequence of the bases and environmental conditions due to the solvent also contribute to the molecule shape @xcite . as each strand bears a negative charge ( @xmath1 ) for each phosphate group and the rise distance between adjacent nucleotides is @xmath2 , the bare double helix has a high linear charge density of @xmath3 . although the effective charge density is reduced by the counterions in the solvent , the electrostatic strands repulsion is key to the stability of the helix and also affects the inter - helical chiral interactions in those condensed phases of dna assemblies ( such as liquid crystals ) which underlie the impressive growth of dna - based structures recently witnessed in materials science . base pairing and base stacking are the fundamental interactions which control the synthesis of dna and determine the thermodynamic stability both of single helices and of helix aggregates . however even stable duplexes at room temperature show local openings , temporary bubbles , which are intrinsic to the biological functioning as they permit the transcription and replication of.... And you have already written the first three sentences of the full article: let s arrange @xmath4 base pairs ( _ bps _ ) on a circle with radius @xmath5 such that , @xmath6 , as depicted in fig . [ fig:1 ] . when all _ bps _ centers of mass lie on the circumference , which represents the molecule backbone , the system is in the ground state . say @xmath7 , the inter - strand fluctuation for the _. Please generate the next two sentences of the article
i_-base pair ( @xmath8 ) with respect to the ground state . we define the vector @xmath9 : @xmath10 the ground state is recovered once all _ bps_-fluctuations vanish hence , @xmath11 .
11,792
Suppose that you have an abstract for a scientific paper: we introduce a new class of countably infinite random geometric graphs , whose vertices @xmath0 are points in a metric space , and vertices are adjacent independently with probability @xmath1 if the metric distance between the vertices is below a given threshold . if @xmath0 is a countable dense set in @xmath2 equipped with the metric derived from the @xmath3-norm , then it is shown that with probability @xmath4 such infinite random geometric graphs have a unique isomorphism type . the isomorphism type , which we call @xmath5 is characterized by a geometric analogue of the existentially closed adjacency property , and we give a deterministic construction of @xmath6 in contrast , we show that infinite random geometric graphs in @xmath7 with the euclidean metric are not necessarily isomorphic . . And you have already written the first three sentences of the full article: the last decade has seen the emergence of the study of large - scale complex networks such as the web graph consisting of web pages and the links between them , the friendship network on facebook , and networks of interacting proteins in a cell . several new random graph models were proposed for such networks , and existing models were modified in order to fit the data gathered from these real - life networks . see the books @xcite for surveys of such models . a recent trend in stochastic graph modelling is the study of _ geometric graph models_. in geometric graph models , vertices are embedded in a metric space , and the formation of edges is influenced by the relative position of the vertices in this space .. Please generate the next two sentences of the article
geometric graph models have found applications in modelling wireless networks ( see @xcite ) , and in modelling the web graph and other complex networks ( @xcite ) . in real - world networks , the underlying metric space is a representation of the hidden reality that leads to the formation of edges .
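The random model described in this entry can be stated in one line; the symbols p, \delta and V below are generic labels chosen for illustration:
\[ \Pr[\,u \sim v\,] \;=\; p\cdot\mathbf{1}\{\,d(u,v) < \delta\,\}, \qquad u \neq v \in V , \]
with all adjacency events independent. The result summarized in the abstract is that, for a countable dense vertex set with the norm-derived metric considered there, this construction produces a unique isomorphism type with probability 1, whereas the Euclidean case does not.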
11,793
Suppose that you have an abstract for a scientific paper: the standard analytical approach for studying gravity free - surface waves generated by a moving body often relies upon a linearization of the physical geometry , where the body is considered asymptotically small in one or several of its dimensions . in this paper , a methodology that avoids any such geometrical simplification is presented for the case of flows at low speeds . the approach is made possible through a reduction of the water - wave equations to a complex - valued integral equation that can be studied using the method of steepest descents . the main result is a theory that establishes a correspondence between a given physical flow geometry , with the topology of the riemann surface formed by the steepest descent paths . then , when a geometrical feature of the body is modified , a corresponding change to the riemann surface is observed , and the resultant effects to the water waves can be derived . this visual procedure is demonstrated for the case of two - dimensional free - surface flow past a surface - piercing ship and over an angled step in a channel . . And you have already written the first three sentences of the full article: let us consider the problem of determining the surface gravity waves generated by a body moving in a two - dimensional potential fluid . at the free surface , @xmath0 , bernoulli s equation requires that @xmath1 where @xmath2 is the fluid speed and @xmath3 is the gravitational parameter . the nonlinearity of forms the primary difficulty of analysis ; in order to make any sort of progress , the equation must usually be linearized . as explained by tuck @xcite , this linearization will typically involve making one of two possible assumptions . in the first. Please generate the next two sentences of the article
, the relevant flow quantities are expressed as a series expansion in powers of a geometric parameter such as @xmath4 . for instance , one might assume that at leading order , the object is asymptotically thin or streamline in one or several of its dimensions .
11,794
Suppose that you have an abstract for a scientific paper: a large - n diagrammatic approach is used to study coupled quantum dots in a parallel geometry . we show that the friedel sum rule ( fsr ) holds at lowest order in a 1/n expansion for this system , thereby suggesting that the ground state is a fermi liquid . using the fsr together with the dot occupancy , we compute the dot system s conductance . our finding that the @xmath0 expansion indicates the system is a fermi liquid is in agreement with both prior results based on the bethe ansatz and slave boson mean field theory . . And you have already written the first three sentences of the full article: strong correlations in impurity problems have been of tremendous theoretical interest.@xcite this interest has been spurred by the ability to engineer semiconducting quantum dots which are highly tunable.@xcite in such realizations of quantum dots , both the gate voltages of the quantum dot as well as the tunneling amplitudes between the dots and the leads can be tuned . this gives one the ability to study these systems throughout their entire parameter space . while the first reported dot systems involved single dots,@xcite more complicated dot structures can now be fabricated.@xcite with this ability to engineer multi - quantum dot systems comes the ability to realize more exotic forms of kondo physics . in double dot systems , for example , one can explore the competition between the kondo effect and the ruderman - kittel - kasuya - yosida ( rkky ) interaction.@xcite because of the non - perturbative nature of the kondo effect at low temperatures , one might expect this competition to be non - trivial .. Please generate the next two sentences of the article
we argue that it is in fact so for double dots arranged in parallel . the geometry that we wish to consider is sketched in fig . 1 . here two closely spaced , single level dots do not directly interact with one another , either via tunneling or via a capacitive coupling .
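As background for the conductance computation mentioned in this entry, the Friedel sum rule in its simplest single-channel, zero-temperature form ties the scattering phase shift to the level occupancy. For a single Anderson-type dot with symmetric lead couplings (the textbook case, not the parallel double-dot formula derived in the paper),
\[ \delta_\sigma \;=\; \pi\,\langle n_\sigma\rangle , \qquad G \;=\; \frac{2e^2}{h}\,\sin^2\!\delta_\sigma , \]
so that knowledge of the occupancy, here supplied by the large-N expansion, fixes the linear conductance of a Fermi-liquid ground state.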
11,795
Suppose that you have an abstract for a scientific paper: the spirex telescope , located at the amundsen scott south pole station , was a prototype system developed to exploit the excellent conditions for ir observing at the south pole . observations over two winter seasons achieved remarkably deep , high - resolution , wide - field images in the 35@xmath0 m wavelength regime . several star forming complexes were observed , including ngc 6334 , chamaeleon i , @xmath1 chamaeleontis , the carina nebula , 30 doradus , rcw 57 , rcw 38 , as well as the galactic centre . images were obtained of lines at 2.42@xmath0mh@xmath2 , 3.29@xmath0 m pah and 4.05@xmath0 m br @xmath3 , as well as 3.5@xmath0 m l band and 4.7@xmath0 m m band continuum emission . these data , combined with near ir , mid ir , and radio continuum maps , reveal the environments of these star forming sites , as well as any protostars lying within them . the spirex project , its observing and reduction methods , and some sample data are summarized here . . And you have already written the first three sentences of the full article: the south pole infrared explorer ( spirex ) telescope was a prototype system , developed to test the feasibility of building , operating , and maintaining an infrared ( ir ) telescope during an antarctic winter . the initial driver for the spirex project was to exploit the conditions at the south pole that make it an excellent site for 35@xmath0 m observations , that is , the high altitude , the low temperatures , low precipitable water vapour content in the atmosphere , and the stable weather conditions ( burton et al . 1994 ; marks et al . 1996 ; hidas et al .. Please generate the next two sentences of the article
2000 ; marks 2002 ) . the spirex 60 cm telescope began operations at the amundsen scott south pole station in 1994 ( hereld et al . spirex was initially used as part of a campaign to measure the south pole s thermal background , sky transparency , and the fraction of time useful for ir observations ( see e.g. ashley et al .
11,796
Suppose that you have an abstract for a scientific paper: we compare the extraction of the ground - state decay constant from the two - point correlator in qcd and in potential models and show that the results obtained at each step of the extraction procedure follow a very similar pattern . we prove that allowing for a borel - parameter - dependent effective continuum threshold yields two essential improvements compared to employing a borel - parameter - independent quantity : ( i ) it reduces considerably the ( unphysical ) dependence of the extracted bound - state mass and the decay constant on the borel parameter . ( ii ) in a potential model , where the actual value of the decay constant is known from the schrdinger equation , a borel - parameter - dependent threshold leads to an improvement of the accuracy of the extraction procedure . our findings suggest that in qcd a borel - parameter dependent threshold leads to a more reliable and accurate determination of bound - state characteristics by the method of sum rules . . And you have already written the first three sentences of the full article: in a series of recent publications we studied the extraction of the ground - state parameters from svz sum rules @xcite ( see also , e.g. , @xcite ) . we made use of a quantum - mechanical potential model since this is essentially the only case where the standard procedures adopted in the method of sum rules may be tested : the estimates for the ground - state parameters obtained by these procedures may be compared with the actual values of the ground - state parameters calculated from the schrdinger equation , thus providing an unambiguous check of the reliability of the method . the main results of our papers may be summarized as follows : ( i ) the standard approximation of a constant effective continuum threshold does not allow one to probe the accuracy of the extracted hadron parameter @xcite .. Please generate the next two sentences of the article
( ii ) allowing for a borel - parameter - dependent effective continuum threshold ( we denote the borel parameter @xmath0 in qcd and @xmath1 in the potential model ) and fixing this quantity by using the information on the ground - state mass leads to a considerable improvement of the accuracy of the method @xcite . the goal of this letter is to demonstrate that the results obtained at each step of the extraction procedure both in qcd and in potential models follow the same pattern .
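For orientation, the extraction problem treated in this entry starts from the Borel-transformed dispersion relation for the two-point correlator, written schematically (generic notation assumed for this note) as
\[ f^2 M^2\, e^{-M^2\tau} \;=\; \int_{0}^{s_{\rm eff}(\tau)} ds\; e^{-s\tau}\,\rho_{\rm pert}(s) \;+\; \Pi_{\rm power}(\tau) , \]
with M and f the ground-state mass and decay constant; whether the effective continuum threshold s_{\rm eff} is allowed to depend on the Borel parameter \tau is precisely the issue the paper addresses.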
11,797
Suppose that you have an abstract for a scientific paper: in this work the charm and bottom quark masses are determined from qcd moment sum rules for the charmonium and upsilon systems . in our analysis we include both the results from non - relativistic qcd and perturbation theory at next - next - to - leading order . for the pole masses we obtain @xmath0 gev and @xmath1 gev . using the potential - subtracted mass in intermediate steps of the calculation the @xmath2-masses are determined to @xmath3 gev and @xmath4 gev . . And you have already written the first three sentences of the full article: an important task within modern particle phenomenology consists in the determination of the quark masses , being fundamental parameters of the standard model . in the past , qcd moment sum rule analyses have been successfully applied for extracting the charm and bottom quark masses from experimental data on the charmonium and bottomium systems respectively @xcite . the basic quantity in these investigations is the vacuum polarisation function @xmath5 : @xmath6 where the relevant vector current is represented either by the charm @xmath7 or the bottom current @xmath8 . via the optical theorem , the experimental cross section @xmath9 is related to the imaginary part of @xmath10 : @xmath11 usually , moments of the vacuum polarisation are defined by taking derivatives of the correlator at @xmath12 . however , in this work we allow for an arbitrary evaluation point @xmath13 to define the dimensionless moments @xcite : @xmath14 where @xmath15 is the velocity of the heavy quark . the parameter @xmath16 encodes much information about the system . by taking @xmath16 larger. Please generate the next two sentences of the article
the evaluation point moves further away from the threshold region . consequently , the theoretical expansions show a better convergence , but at the same time the sensitivity on the mass is reduced .
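As a reminder of the construction that this entry generalizes, the moments defined from derivatives of the vacuum polarization at q^2 = 0 are tied by a dispersion relation to the measurable R-ratio. Schematically (normalization omitted and assumed for illustration),
\[ \mathcal{M}_n \;\propto\; \left.\frac{d^n}{d(q^2)^n}\,\Pi(q^2)\right|_{q^2=0} \;\propto\; \int \frac{ds}{s^{\,n+1}}\,R(s) , \]
and moving the evaluation point to a spacelike q_0^2 < 0 replaces the weight 1/s^{n+1} by 1/(s-q_0^2)^{n+1}, which improves the convergence of the theoretical expansion at the cost of reduced sensitivity to the quark mass, exactly the trade-off described above.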
11,798
Suppose that you have an abstract for a scientific paper: recent observation of pulsar psr j1614 - 2230 with mass about 2 solar masses poses a severe constraint on the equations of state ( eos ) of matter describing stars under extreme conditions . neutron stars ( ns ) can reach the mass limits set by psr j1614 - 2230 . but stars having hyperons or quark stars ( qs ) having boson condensates , with softer eos can barely reach such limits and are ruled out . qs with pure strange matter also can not have such high mass unless the effect of strong coupling constant or color superconductivity are considered . in this work i try to calculate the upper mass limit for a hybrid stars ( hs ) having a quark - hadron mixed phase . the hadronic matter ( having hyperons ) eos is described by relativistic mean field theory and for the quark phase i use the simple mit bag model . i construct the intermediate mixed phase using glendenning construction . hs with a mixed phase can not reach the mass limit set by psr j1614 - 2230 unless i assume a density dependent bag constant . for such case the mixed phase region is small . the maximum mass of a mixed hybrid star obtained with such mixed phase region is @xmath0 . . And you have already written the first three sentences of the full article: neutron stars ( ns ) are gravitationally bound , therefore the precise measurement of mass and radius of a ns should provide a very fine probe for the equation of state ( eos ) of dense matter . the first reasonable ideas about the composition of compact stars argued that matter are under extreme densities and is mainly composed of neutrons with small fractions of protons and electrons . further theoretical developments and modern experimental results opened the window to other possibilities .. Please generate the next two sentences of the article
the densities in the interior of neutron stars is about @xmath1 times that of the nuclear saturation density ( @xmath2@xmath3 ) . at such high densities in their interiors , the matter there is likely to be in a deconfined and chirally restored quark phase @xcite .
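Two standard ingredients of the construction described in this entry, quoted for reference in generic notation (massless, noninteracting quarks assumed for the bag relation):
\[ p_Q \;=\; \tfrac{1}{3}\bigl(\varepsilon_Q - 4B\bigr), \qquad p_H(\mu_B,\mu_e) \;=\; p_Q(\mu_B,\mu_e), \qquad \chi\,\rho_c^{\,Q} + (1-\chi)\,\rho_c^{\,H} \;=\; 0 , \]
the first being the MIT bag equation of state with bag constant B, the latter two the Gibbs conditions of the Glendenning construction: equal pressures at common chemical potentials and global (rather than local) charge neutrality, with \chi the volume fraction occupied by the quark phase in the mixed phase.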
11,799
Suppose that you have an abstract for a scientific paper: in this paper we propose a distributed algorithm for the estimation and control of the connectivity of ad - hoc networks in the presence of a random topology . first , given a generic random graph , we introduce a novel stochastic power iteration method that allows each node to estimate and track the algebraic connectivity of the underlying expected graph . using results from stochastic approximation theory , we prove that the proposed method converges almost surely ( a.s . ) to the desired value of connectivity even in the presence of imperfect communication scenarios . the estimation strategy is then used as a basic tool to adapt the power transmitted by each node of a wireless network , in order to maximize the network connectivity in the presence of realistic medium access control ( mac ) protocols or simply to drive the connectivity toward a desired target value . numerical results corroborate our theoretical findings , thus illustrating the main features of the algorithm and its robustness to fluctuations of the network graph due to the presence of random link failures . spectral graph theory , random graph , stochastic power iteration , algebraic connectivity , fiedler vector , topology control , stochastic approximation , distributed computation . . And you have already written the first three sentences of the full article: ad - hoc wireless networks are composed of a set of nodes that exchange data with each other through wireless links without relying on any pre - existing infrastructure @xcite . typically , the network topology follows a nearest neighbor criterion to allow for low - power communications . the diffusion of information through a network presumes connectivity of the network .. Please generate the next two sentences of the article
furthermore , many distributed algorithms running over a graph , such as consensus , diffusion , or swarming algorithms , have a convergence time strictly dependent on the graph connectivity @xcite-@xcite . for example , highly connected networks generally have significantly faster convergence thanks to a more efficient in - network information diffusion . in many practical examples
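To make the stochastic power iteration of this entry concrete, a centralized, non-stochastic sketch is the deflated power iteration for the algebraic connectivity \lambda_2 of a graph Laplacian L; the shift c and the deflation term below are illustrative choices, not the paper's algorithm:
\[ y_{k+1} \;=\; \Bigl(c\,I - L - \tfrac{c}{n}\,\mathbf{1}\mathbf{1}^{\!\top}\Bigr)\,y_k , \qquad \lambda_2(L) \;=\; c \;-\; \lim_{k\to\infty}\frac{\lVert y_{k+1}\rVert}{\lVert y_k\rVert} , \]
valid for any c at least as large as the largest Laplacian eigenvalue (for example c = 2 d_{\max}) and a generic starting vector. The distributed, stochastic version described in the abstract tracks this dominant eigenvalue for the expected graph while nodes exchange only local information over random, possibly failing, links.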