id (int64, 0-203k) | input (stringlengths 66-4.29k) | output (stringlengths 0-3.83k)
---|---|---|
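The header above describes a simple prompt/completion layout: each row pairs an integer `id` with an `input` string (a paper abstract, the first three sentences of the article, and the instruction to continue) and an `output` string holding the reference continuation. Below is a minimal, hedged sketch of how rows with this schema could be loaded and inspected; the dataset path `your-namespace/arxiv-continuation` and the split name are placeholders, not the actual identifiers of this dataset.

```python
# Minimal sketch (not the actual loader for this dataset): reading a
# prompt/completion table with columns id, input, output via the Hugging Face
# "datasets" library. The dataset path and split below are placeholders.
from datasets import load_dataset

ds = load_dataset("your-namespace/arxiv-continuation", split="train")  # hypothetical path

row = ds[0]
print(row["id"])                   # int64 row identifier (0 .. ~203k)
print(row["input"][:300], "...")   # abstract + first three sentences + instruction
print(row["output"])               # reference "next two sentences"

# Quick sanity check against the length ranges quoted in the header
# (input: 66-4.29k characters, output: 0-3.83k characters).
lengths = [len(r["input"]) for r in ds.select(range(100))]
print(min(lengths), max(lengths))
```

Note that an empty `output` (length 0, as the header allows) does occur in the rows below, e.g. id 9,517, whose reference continuation is missing.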
9,500 | Suppose that you have an abstract for a scientific paper: this paper introduces a novel variational approach for image compression motivated by recent pde - based approaches combining edge detection and laplacian inpainting .
the essential feature is to encode the image via a sparse vector field , ideally concentrating on a set of measure zero .
an equivalent reformulation of the compression approach leads to a variational model resembling the rof - model for image denoising , hence we further study the properties of the effective regularization functional introduced by the novel approach and discuss similarities to tv and tgv functionals .
moreover we computationally investigate the behaviour of the model with sparse vector fields for compression in particular for high resolution images and give an outlook towards denoising .
+ + * keywords : * image compression , denoising , reconstruction , diffusion inpainting , sparsity , total variation .
And you have already written the first three sentences of the full article: image compression is a topic of interest since the beginning of digital imaging , remaining relevant continuously due to the ongoing improvement in image resolution .
while standard approaches are based on orthogonal bases and frames like cosine transforms or wavelets , an alternative route based on ideas from partial differential equations has emerged recently ( cf .
@xcite ) . in the latter case particular attention.
Please generate the next two sentences of the article | is paid to compressions from which cartoons can be reconstructed accurately avoiding the artefacts of the above mentioned standard approaches by a direct treatment of edges .
roughly speaking , their idea is to detect edges and store the image value in pixels on both sides of an edge . |
9,501 | Suppose that you have an abstract for a scientific paper: directed - ratchet transport ( drt ) in a one - dimensional lattice of spherical beads , which serves as a prototype for granular crystals , is investigated .
we consider a system where the trajectory of the central bead is prescribed by a biharmonic forcing function with broken time - reversal symmetry . by comparing the mean integrated force of beads equidistant from the forcing bead , two distinct types of directed transport can be observed _spatial _ and _ temporal _ drt . based on the value of the frequency of the forcing function relative to the cutoff frequency ,
the system can be categorized by the presence and magnitude of each type of drt .
furthermore , we investigate and quantify how varying additional parameters such as the biharmonic weight affects drt velocity and magnitude . finally , friction is introduced into the system and is found to significantly inhibit spatial drt .
in fact , for sufficiently low forcing frequencies , the friction may even induce a switching of the drt direction . .
And you have already written the first three sentences of the full article: granular media are large conglomerations of discrete , solid particles , such as sand , gravel , or powder , with unusual , interesting dynamics @xcite .
a one - dimensional system of spherical beads in a lattice is one of the simplest representation of granular media substrates , wherein each bead represents a grain of material . in this approximation ,
the position of a particular bead is based on forces resulting from its interaction with its two nearest neighbors @xcite ..
Please generate the next two sentences of the article | this context has proved especially fruitful for investigating numerous aspects of the nonlinear dynamic response of such bead chain systems @xcite .
a particular focal point of emphasis has been on the study of one - dimensional granular crystals . |
9,502 | Suppose that you have an abstract for a scientific paper: the most recent observational results on the search for high redshift field ellipticals are reviewed in the context of galaxy formation scenarios .
the perspectives for large binocular telescope ( lbt ) observations are also discussed . .
And you have already written the first three sentences of the full article: the question on the formation of the present - day massive spheroidals is one of the most debated issues of galaxy evolution and it is strongly linked to the general problem of structure formation in the universe ( see @xcite for a recent review ) . in one scenario , massive spheroidals are formed at early cosmological epochs ( e.g. @xmath0 ) through the `` monolithic '' collapse of the whole gas mass@xcite .
such a formation would be characterized by an episode of intense star formation , followed by a passive evolution ( or pure luminosity evolution , ple ) of the stellar population to nowadays . in marked contrast
, the hierarchical scenarios predict that massive spheroidals are the product of rather recent merging of pre - existing disk galaxies taking place mostly at @xmath1@xcite . in hierarchical scenarios ,.
Please generate the next two sentences of the article | fully assembled massive field spheroidals at @xmath2 are rare objects@xcite , and the spheroids of cluster ellipticals were assembled before those of field ellipticals@xcite . from an observational point of view ,
a direct way to test the above scenarios is to search for massive field ellipticals at @xmath2 and to compare their number with the model predictions ( see the introduction of @xcite for a recent review on observational tests ) . |
9,503 | Suppose that you have an abstract for a scientific paper: the statistical mechanics of @xmath0 cold dark matter ( cdm ) particles interacting via a softened gravitational potential is reviewed in the microcanonical ensemble and mean - field limit .
a phase diagram for the system is computed as a function of the total energy @xmath1 and gravitational softening length @xmath2 . for softened systems ,
two stable phases exist : a collapsed phase , whose radial density profile @xmath3 is a central dirac cusp , and an extended phase , for which @xmath3 has a central core and @xmath3 @xmath4 @xmath5 at large @xmath6 . it is shown that many @xmath0-body simulations of cdm haloes in the literature inadvertently sample the collapsed phase only , even though this phase is unstable when there is zero softening . consequently , there is no immediate reason to expect agreement between simulated and observed profiles unless the gravitational potential is appreciably softened in nature . .
And you have already written the first three sentences of the full article: cold dark matter ( cdm ) theory successfully describes many aspects of the formation of large - scale structure in the universe @xcite .
however , mismatches do exist between its predictions and observations , such as the cusp - core controversy , missing satellites @xcite and the angular momentum problem @xcite . in particular
, the cusp - core issue has provoked much debate ..
Please generate the next two sentences of the article | cdm simulations consistently yield density profiles with steeper inner slopes ( power - law exponent between @xmath71 and @xmath71.5 ) than observational studies which have found a range of slopes , including constant density cores in dark matter dominated low surface brightness galaxies @xcite and shallow slopes in clusters with gravitationally lensed arcs @xcite . these results , among others , have initiated discussion about the role baryons play in softening simulated cores @xcite and the observational effects that may mask cusps in low surface brightness galaxies @xcite .
recent @xmath0-body results demonstrate that , at the current resolution of simulations , the central power - law exponent does not converge to a universal value @xcite . in numerical simulations , |
9,504 | Suppose that you have an abstract for a scientific paper: to achieve the extremely high luminosity for colliding electron - positron beams at the future international linear collider @xcite ( ilc ) an undulator - based source with about 230 meters helical undulator and a thin titanium - alloy target rim rotated with tangential velocity of about 100 meters per second are foreseen .
the very high density of heat deposited in the target has to be analyzed carefully .
the energy deposited by the photon beam in the target has been calculated in fluka . the resulting stress in the target material after one bunch train has been simulated in ansys .
desy-12 - 018 .
And you have already written the first three sentences of the full article: the positron - production target for the ilc positron source is driven by a photon beam generated in an helical undulator placed at the end of main electron linac @xcite .
the undulator length is chosen to provide the required positron yield .
the source is designed to deliver 50% overhead of positrons ..
Please generate the next two sentences of the article | therefore , the positron yield has to be 1.5 positrons per electron passing the undulator .
the required active length of the undulator is about 75 meters for the nominal electron energy of 250 gev , the undulator @xmath0-value has been chosen to be 0.92 , the undulator period is 11.5 mm and a quarter - wave transformer is used as optical matching device ( omd ) . |
9,505 | Suppose that you have an abstract for a scientific paper: we study the network of type - i cosmic strings using the field - theoretic numerical simulations in the abelian - higgs model . for type - i strings
, the gauge field plays an important role , and thus we find that the correlation length of the strings is strongly dependent upon the parameter @xmath0 , the ratio between the masses of the scalar field and the gauge field , namely , @xmath1 .
in particular , if we take the cosmic expansion into account , the network becomes densest in the comoving box for a specific value of @xmath0 for @xmath2 . .
And you have already written the first three sentences of the full article: cosmic strings are one - dimensional topological defects formed after phase transitions .
they are considered to make up a weblike structure in the universe , so - called _ the cosmic - string network_. cosmic strings could be a probe for the early phases of the universe long before the cosmic microwave background ( cmb ) epoch .
they have a potential to reveal the physics during the phase transition of fields in the early universe , and also be a potential source of gravitational waves @xcite and an extra source of cmb anisotropy @xcite ..
Please generate the next two sentences of the article | the simplest classical field - theoretic model to describe the string formation is the abelian - higgs ( ah ) model , where there are a complex scalar field with the self - coupling constant @xmath3 and a @xmath4 gauge field with the gauge coupling constant @xmath5 ( see e.g. the textbook @xcite ) .
the basic properties of cosmic strings in the ah model can be classified by a single parameter , @xmath1 , where @xmath6 and @xmath7 are the masses of the scalar field and the gauge field , respectively , acquired after the spontaneous breaking of @xmath4 . |
9,506 | Suppose that you have an abstract for a scientific paper: we describe the evolution of the carbon dust shells around very late thermal pulse ( vltp ) objects as seen at infrared wavelengths .
this includes a 20-year overview of the evolution of the dust around sakurai s object ( to which olivier made a seminal contribution ) and fg sge .
vltps may occur during the endpoint of as many as 25% of solar mass stars , and may therefore provide a glimpse of the possible fate of the sun . .
And you have already written the first three sentences of the full article: it is well - known that the fate of a star after it has evolved away from the main sequence ( ms ) depends on its mass .
the accepted scenario for the post - ms evolution of low to intermediate mass stars is that , following the helium flash , burnout of he occurs in the core on the horizontal branch .
after evolution up the asymptotic giant branch , the star sheds its outer envelope , which is illuminated as a planetary nebula ( pn ) by the still - hot stellar core ..
Please generate the next two sentences of the article | however , in as many as 20% of cases ( blcker @xcite ) the star , as it evolves towards the white dwarf ( wd ) region of the hr diagram , re - ignites a residual helium shell in a vltp and retraces its evolutionary track to the right to become a born again red giant ( bag ; see herwig @xcite and references therein ) .
the final evolution to a wd is predicted to take roughly a few centuries , thus representing a very rapid ( and hence seldom seen ) phase of stellar evolution . |
9,507 | Suppose that you have an abstract for a scientific paper: our current view of galaxies considers them as systems of stars and gas embedded in extended halos of dark matter , much of it formed by the infall of smaller systems at earlier times .
the true extent of a galaxy remains poorly determined , with the virial radius " ( @xmath0 ) providing a characteristic separation between collapsed structures in dynamical equilibrium and external infalling matter .
other physical estimates of the extent of gravitational influence include the gravitational radius , gas accretion radius , and galactopause " arising from outflows that stall at 100 - 200 kpc over a range of outflow parameters and confining gas pressures .
physical criteria are proposed to define bound structures , including a more realistic definition of @xmath1 for stellar mass @xmath2 and halo mass @xmath3 , half of which formed at assembly redshifts " ranging from @xmath4 .
we estimate the extent of bound gas and dark matter around @xmath5 galaxies to be @xmath6 kpc .
the new virial radii , with mean @xmath7 kpc , are 40 - 50% smaller than values estimated in recent hst / cos detections of and absorbers around galaxies . in the new formalism
, the milky way stellar mass , @xmath8 , would correspond to @xmath9 kpc for half - mass halo assembly at @xmath10 .
the frequency per unit redshift of low - redshift absorption lines in qso spectra suggests absorber sizes @xmath11 kpc when related to intervening @xmath12 galaxies .
this formalism is intended to clarify semantic differences arising from observations of extended gas in galactic halos , circumgalactic medium ( cgm ) , and filaments of the intergalactic medium ( igm ) .
astronomers should refer to _ bound gas _ in the galactic halo or cgm , and _ unbound _ gas at the cgm - igm interface , on its way into the igm . .
And you have already written the first three sentences of the full article: over the last several decades , with new evidence , the objects we call galaxies " have become much larger . extended dark - matter halos " were proposed to produce flat rotation curves at large radii in disk galaxies ( rubin et al . 1980 ) , and
a corona " of hot interstellar gas at the galaxy s virial temperature was predicted by spitzer ( 1956 ) to provide pressure confinement of high - latitude clouds .
more recently , astronomers have observed galactic kinematic tracers ( blue horizontal branch stars , globular clusters , satellite galaxies ) to distances of 50 - 250 kpc , and x - ray absorption - line ( ) spectroscopy and stacked soft x - ray emission have provided evidence of large reservoirs of hot ionized gas in milky way halo ( miller & bregman 2013 ) and the outskirts of external galaxies ( soltan 2006 ; anderson et al ..
Please generate the next two sentences of the article | 2013 ) . in ultraviolet spectroscopy , the cosmic origins spectrograph ( cos ) on the _ hubble space telescope _
( hst ) has recently detected extended ( 100 - 150 kpc ) reservoirs of highly ionized oxygen ( ) around star - forming galaxies ( tumlinson et al . |
9,508 | Suppose that you have an abstract for a scientific paper: we proposed a general transformation in probabilistic teleportation , which is based on different entanglement matching coefficients @xmath0 corresponding to different unitary evolution which provides one with more flexible evolution method experimentally . through analysis based on the bell basis and generalized bell basis measurement for two probabilistic teleportation
, we suggested a general probability of successful teleportation , which is not only determined by the entanglement degree of transmission channels and measurement methods , but also related to the unitary transformation in the teleportation process .
pacs : 03.67.hk , 03.65.ta .
And you have already written the first three sentences of the full article: quantum entanglement is one of the most fascinating characteristic of quantum physics , a fantastic application of entanglement @xcite is quantum teleportation , which plays a key role in the field of quantum communication . since the seminal work of bennett et al .
@xcite , teleportation has been the research interest of researchers and a number of work both in theory and experiments has been devoted to it @xcite . up to now the teleportation has been studied in different branches , such as directly and network controlled teleportation @xcite ; discrete - variables and continuous - variables teleportation @xcite ; prefect and probabilistic teleportation @xcite and so on .
in fact , one of the key problem of teleportation is how to construct an usefulness quantum channel , different channels will yield different results , some channels can be used to realize perfect teleportation , while some others can only enable probabilistic teleportation ..
Please generate the next two sentences of the article | because of the inevitable interaction with its surroundings , correlations in quantum states are difficult to maintain @xcite , therefore the probabilistic teleportation @xcite has been widely discussed in recent years . a necessary and sufficient condition for realizing perfect teleportation and
successful teleportation has been given in @xcite . |
9,509 | Suppose that you have an abstract for a scientific paper: we report the results of broadband ( 0.952.46 ) near - infrared spectroscopic observations of the cassiopeia a supernova remnant . using a clump - finding algorithm in two - dimensional dispersed images ,
we identify 63 ` knots ' from eight slit positions and derive their spectroscopic properties .
all of the knots emit [ ] lines together with other ionic forbidden lines of heavy elements , and some of them also emit h and he lines .
we identify 46 emission line features in total from the 63 knots and measure their fluxes and radial velocities .
the results of our analyses of the emission line features based on principal component analysis show that the knots can be classified into three groups : ( 1 ) he - rich , ( 2 ) s - rich , and ( 3 ) fe - rich knots .
the he - rich knots have relatively small , @xmath0 , line - of - sight speeds and radiate strong he i and [ ] lines resembling closely optical quasi - stationary flocculi of circumstellar medium , while the s - rich knots show strong lines from o - burning material with large radial velocities up to @xmath1 indicating that they are supernova ejecta material known as fast - moving knots .
the fe - rich knots also have large radial velocities but show no lines from o - burning material .
we discuss the origin of the fe - rich knots and conclude that _ they are most likely `` pure '' fe ejecta synthesized in the innermost region during the supernova explosion . _ the comparison of [ ] images with other waveband images shows that these dense fe ejecta are mainly distributed along the southwestern shell just outside the unshocked @xmath2ti in the interior , supporting the presence of unshocked fe associated with @xmath2ti . .
And you have already written the first three sentences of the full article: a massive star builds up onion - like layers of different chemical elements synthesized by hydrostatic nuclear burning processes during its lifetime . at the end of its evolution ,
the innermost fe core collapses into a neutron star , which triggers a core - collapse supernova ( sn ) explosion .
the detailed process of the explosion is complicated and poorly understood , but a consensus from theoretical studies suggests that the explosion should be asymmetric and turbulent , especially near the core ( e.g. , * ? ? ?.
Please generate the next two sentences of the article | * ; * ? ? ?
* ; * ? ? ? |
9,510 | Suppose that you have an abstract for a scientific paper: in this article , we reveal how benjamin franklin constructed his second @xmath0 magic square .
we also construct two new @xmath0 franklin squares . .
And you have already written the first three sentences of the full article: [ n1n2n3table ] @xmath1 [ brandnewsquaretable ] @xmath2 the well - known squares in figure [ franklinsquares ] were constructed by benjamin franklin .
the square f2 was introduced separately and hence is generally known as _
the other 8-square_. the entries of the squares are from the set @xmath3 , where @xmath4 or @xmath5 . every integer in this set occurs in the square exactly once . for these squares ,.
Please generate the next two sentences of the article | the entries of every row and column add to a common sum called the _ magic sum_. the @xmath6 squares have magic sum 260 and the @xmath7 square has magic sum 2056 . in every half row and half column the entries add to half the magic sum .
the entries of the main bend diagonals and all the bend diagonals parallel to it add to the magic sum . |
9,511 | Suppose that you have an abstract for a scientific paper: the discovery of a powerful and transient iron line feature in the x ray afterglow spectrum of grb 970508 and grb 970828 , if confirmed , would be a major breakthrough for understanding the nature the progenitor of gamma
ray bursts .
we show that a large mass of iron very close to the burster is necessary to produce the emission line .
this in itself strongly limits the possible progenitor of the gamma ray event , suggesting the former explosion of a supernova , as predicted in the supranova model ( vietri & stella 1998 ) . the line emission process and the line intensity depend strongly on the age , density and temperature of the remnant .
the simultaneous observation of the iron line and of a power law optical afterglow lasted for one year strongly suggest that the burst emission is isotropic .
recent observations of grb 990123 are also discussed . .
And you have already written the first three sentences of the full article: piro et al . ( 1999 ) and yoshida et al . ( 1999 ) report the detection of an iron emission line feature in the x ray afterglow spectra of grb 970508 and grb 970828 , respectively .
both lines are characterized by a large flux and equivalent width ( ew ) compared with the theoretical previsions made in the framework of the hypernova and compact merger grb progenitor models ( ghisellini et al .
1999 , bttcher et al ..
Please generate the next two sentences of the article | the line detected in grb 970508 is consistent with an iron @xmath0 line redshifted to the rest frame of the candidate host galaxy ( @xmath1 , metzger et al .
1997 ) , while grb 970828 has no measured redshift and the identification of the feature with the same line would imply a redshift @xmath2 . the line fluxes ( equivalent widths ) are @xmath3 erg @xmath4 s@xmath5 ( @xmath6 kev ) and @xmath7 erg @xmath4 s@xmath5 ( @xmath8 kev ) for grb 970508 and grb 970828 , respectively . |
9,512 | Suppose that you have an abstract for a scientific paper: the size - ramsey number of a graph @xmath0 is the minimum number of edges in a graph @xmath1 such that every 2-edge - coloring of @xmath1 yields a monochromatic copy of @xmath0 .
size - ramsey numbers of graphs have been studied for almost 40 years with particular focus on the case of trees and bounded degree graphs .
we initiate the study of size - ramsey numbers for @xmath2-uniform hypergraphs .
analogous to the graph case , we consider the size - ramsey number of cliques , paths , trees , and bounded degree hypergraphs .
our results suggest that size - ramsey numbers for hypergraphs are extremely difficult to determine , and many open problems remain . .
And you have already written the first three sentences of the full article: given graphs @xmath0 and @xmath1 , say that @xmath3 if every 2-edge - coloring of @xmath1 results in a monochromatic copy of @xmath0 in @xmath1 .
using this notation , the ramsey number @xmath4 of @xmath0 is the minimum @xmath5 such that @xmath6 . instead of minimizing the number of vertices , one can minimize the number of edges .
define the _ size - ramsey number _ @xmath7 of @xmath0 to be the minimum number of edges in a graph @xmath1 such that @xmath3 ..
Please generate the next two sentences of the article | more formally , @xmath8 the study of size - ramsey numbers was proposed by erds , faudree , rousseau and schelp @xcite in 1978 . by definition of @xmath4
, we have @xmath9 . |
9,513 | Suppose that you have an abstract for a scientific paper: in the context of quantum gravity for spacetimes of dimension @xmath0 , we describe progress in the construction of a quantum goldman bracket for intersecting loops on surfaces . using piecewise linear paths in @xmath1
( representing loops on the spatial manifold , i.e. the torus ) and a quantum connection with noncommuting components , we review how holonomies and wilson loops for two homotopic paths are related by phases in terms of the signed area between them . paths rerouted at intersection points with other paths occur on the r.h.s . of the goldman bracket . to better understand their nature we introduce the concept of integer points inside the parallelogram spanned by two intersecting paths , and
show that the rerouted paths must necessarily pass through these integer points .
+ + pacs numbers : 04.60.kz , 02.20.uw + mathematics subject classification : 83c45 .
And you have already written the first three sentences of the full article: in previous work @xcite we have investigated quantum gravity in @xmath0 dimensions with negative cosmological constant on the torus , using an approach involving quantum holonomy matrices .
this followed on from earlier work by one of us with regge and zertuche @xcite based on the traces of the holonomies . in @xcite we focused on the quantum geometry that arises from introducing a constant quantum connection , from which the holonomy matrices and their traces are obtained ( in the sector where these matrices are diagonal ) .
some interesting features emerged , in particular a quantum version of the well - known goldman bracket for loops on a surface . in the present article.
Please generate the next two sentences of the article | we describe some new developments in the understanding of this quantum geometrical picture .
the classical action of @xmath0 gravity with negative cosmological constant @xmath2 , in the dreibein formulation , was related by witten @xcite to chern - simons theory for the gauge group @xmath3 . |
9,514 | Suppose that you have an abstract for a scientific paper: we consistently analyse for the first time the impact of survey depth and spatial resolution on the most used morphological parameters for classifying galaxies through non - parametric methods : abraham and conselice - bershady concentration indices , gini , m20 moment of light , asymmetry , and smoothness .
three different non - local datasets are used , alhambra and sxds ( examples of deep ground - based surveys ) , and cosmos ( deep space - based survey ) .
we used a sample of 3000 local , visually classified galaxies , measuring their morphological parameters at their real redshifts ( z@xmath00 ) .
then we simulated them to match the redshift and magnitude distributions of galaxies in the non - local surveys .
the comparisons of the two sets allow to put constraints on the use of each parameter for morphological classification and evaluate the effectiveness of the commonly used morphological diagnostic diagrams .
all analysed parameters suffer from biases related to spatial resolution and depth , the impact of the former being much stronger . when including asymmetry and smoothness in classification diagrams , the noise effects must be taken into account carefully , especially for ground - based surveys .
m20 is significantly affected , changing both the shape and range of its distribution at all brightness levels .
we suggest that diagnostic diagrams based on 2 - 3 parameters should be avoided when classifying galaxies in ground - based surveys , independently of their brightness ; for cosmos they should be avoided for galaxies fainter than f814=23.0 .
these results can be applied directly to surveys similar to alhambra , sxds and cosmos , and also can serve as an upper/ lower limit for shallower / deeper ones .
[ firstpage ] surveys ; galaxies : morphology ; galaxies : fundamental parameters ; .
And you have already written the first three sentences of the full article: morphology is one of the main characteristics of galaxies , and the morphological classification has been central to many advances in the picture of galaxy formation and evolution .
different correlations between morphology and other galaxy properties have been studied , including the relation with stellar mass ( e.g. , * ? ? ?
* ) , colour ( e.g. , * ? ? ?.
Please generate the next two sentences of the article | * ; * ? ? ?
* ; * ? ? ? |
9,515 | Suppose that you have an abstract for a scientific paper: a simple scheme for communication over mimo broadcast channels is introduced which adopts the lattice reduction technique to improve the naive channel inversion method . lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points .
simulation results show that the proposed scheme performs well , and as compared to the more complex methods ( such as the perturbation method @xcite ) has a negligible loss .
moreover , the proposed method is extended to the case of different rates for different users . the asymptotic behavior ( snr@xmath0 ) of the symbol error rate of the proposed method and the perturbation technique , and also the outage probability for the case of fixed - rate users is analyzed .
it is shown that the proposed method , based on lll lattice reduction , achieves the optimum asymptotic slope of symbol - error - rate ( called the precoding diversity ) .
also , the outage probability for the case of fixed sum - rate is analyzed .
* communication over mimo broadcast channels using lattice - basis reduction * + mahmoud taherzadeh , amin mobasher , and amir k. khandani + coding & signal transmission laboratory + department of electrical & computer engineering + university of waterloo + waterloo , ontario , canada , n2l 3g1 +
And you have already written the first three sentences of the full article: in the recent years , communications over multiple - antenna fading channels has attracted the attention of many researchers .
initially , the main interest has been on the point - to - point multiple - input multiple - output ( mimo ) communications @xcite . in @xcite and @xcite ,
the authors have shown that the capacity of a mimo point - to - point channel increases linearly with the minimum number of the transmit and the receive antennas ..
Please generate the next two sentences of the article | more recently , new information theoretic results @xcite , @xcite , @xcite , @xcite have shown that in multiuser mimo systems , one can exploit most of the advantages of multiple - antenna systems .
it has been shown that in a mimo broadcast system , the sum - capacity grows linearly with the minimum number of the transmit and receive antennas @xcite , @xcite , @xcite . to achieve the sum capacity , some information theoretic schemes , based on dirty - paper coding , are introduced . |
9,516 | Suppose that you have an abstract for a scientific paper: the ground and first excited @xmath0 states of the @xmath1er isotopes are analyzed in the framework of the generator coordinate method .
the shape parameter @xmath2 is used to generate wave functions with different deformations which together with the two - quasiparticle states built on them provide a set of states . an angular momentum and particle number projection of the latter
spawn the basis states of the generator coordinate method . with this ansatz and using the separable pairing plus quadrupole interaction we obtain a good agreement with the experimental spectra and e2 transition rates up to moderate spin values .
the structure of the wave functions suggests that the first excited @xmath0 states in the soft er isotopes are dominated by shape fluctuations , while in the well deformed er isotopes the two - quasiparticle states are more relevant . in between both degrees of freedom are necessary . .
And you have already written the first three sentences of the full article: the nature of the lowest - lying @xmath0 excited states ( denoted as @xmath3 state in the following ) in deformed nuclei has been a long standing problem in nuclear physics and studied by various approaches @xcite .
traditionally they have been considered to be collective excitations such as the @xmath2-vibration@xcite . in recent years
there has been calculations along this line based based on the algebraic collective model of rowe and co - workers @xcite as well as analytical solutions of the bohr hamiltonian with a certain kind of potential and a deformation - dependent mass term @xcite ..
Please generate the next two sentences of the article | there are also calculations using the interacting boson model ( ibm ) with different truncated hamiltonians @xcite . in these calculations
the @xmath3 state is supposed to be a pure collective excitation , and its excitation energy can be fitted together with the yrast and the @xmath4-bands . |
9,517 | Suppose that you have an abstract for a scientific paper: we report the near - field ablation of material from cellulose acetate coverslips in water and myoblast cell samples in growth media , with a spot size as small as 1.5 @xmath0 m under 3 @xmath0 m wavelength radiation .
the power dependence of the ablation process has been studied and comparisons have been made to models of photomechanical and plasma - induced ablation .
the ablation mechanism is mainly dependent on the acoustic relaxation time and optical properties of the materials .
we find that for all near - field experiments , the ablation thresholds are very high , pointing to plasma - induced ablation as the dominant mechanism .
near - field scanning optical microscopy ( nsom)@xcite is a promising technique that overcomes the diffraction limit of conventional optical microscopy @xcite and by doing so has created a number of potential applications in biological imaging .
the combination of nsom for ablation with mass spectrometry is of particular interest to obtain detailed molecular information with spatial resolution better than that of the conventional optical spectrometry .
a step in this direction , ultraviolet - nsom - based mass spectrometry with a lateral resolution of 170 nm in ambient conditions , has demonstrated soft ablation capabilities @xcite .
an improvement would be ablating in the infrared rather than the uv regime so that the native water in a cell plays the role of the ablation matrix due to its strong absorption at 2940 nm @xcite . by this approach
, cells can be probed _ in - vivo _ or _ in - vitro _ , but in the far - field , the spatial resolution is limited by diffraction effects and by the quality of available optics to a spot size of about 50@xmath1 .
there are in the literature a number of reports of ablation of conventional solids@xcite and organic molecules @xcite .
there are fewer in which the energy delivered to the sample is characterized well enough to measure ablation thresholds @xcite . in these ,
the ablation thresholds are many orders of....
And you have already written the first three sentences of the full article: the authors would like to thank the w.m .
keck foundation for the financial support , akos vertes who provided the laser for our experiments , infrared fiber systems , silver spring , md , who provided the optical fibers for this study , william rutkowsky for helping with the instrumentation , andrew gomella and craig s pelissier for helping with the detector calibration process , jyoti jaiswal , mary ann stepp and gauri tadvalkar for helping with the cell culture , and alexander jeremic for providing us the facility to culture cell samples ..
Please generate the next two sentences of the article | |
9,518 | Suppose that you have an abstract for a scientific paper: planets like the earth can not form unless elements heavier than helium are available .
these heavy elements , or ` metals ' , were not produced in the big bang .
they result from fusion inside stars and have been gradually building up over the lifetime of the universe .
recent observations indicate that the presence of giant extrasolar planets at small distances from their host stars , is strongly correlated with high metallicity of the host stars .
the presence of these close - orbiting giants is incompatible with the existence of earth - like planets .
thus , there may be a goldilocks selection effect : with too little metallicity , earths are unable to form for lack of material , with too much metallicity giant planets destroy earths . here
i quantify these effects and obtain the probability , as a function of metallicity , for a stellar system to harbour an earth - like planet .
i combine this probability with current estimates of the star formation rate and of the gradual build up of metals in the universe to obtain an estimate of the age distribution of earth - like planets in the universe .
the analysis done here indicates that three quarters of the earth - like planets in the universe are older than the earth and that their average age is @xmath0 billion years older than the earth .
if life forms readily on earth - like planets as suggested by the rapid appearance of life on earth this analysis gives us an age distribution for life on such planets and a rare clue about how we compare to other life which may inhabit the universe .
.
And you have already written the first three sentences of the full article: observations of protoplanetary disks around young stars in star - forming regions support the widely accepted idea that planet formation is a common by - product of star formation ( e.g. beckwith 2000 ) .
our solar system may be a typical planetary system in which earth - like planets accrete near the host star from rocky debris depleted of volatile elements , while giant gaseous planets accrete in the ice zones ( @xmath1 au ) around rocky cores ( boss 1995 , lissauer 1996 ) .
when the rocky cores in the ice zones reach a critical mass ( @xmath2 ) runaway gaseous accretion ( formation of jupiters ) begins and continues until a gap in the protoplanetary disk forms or the disk dissipates ( papaloizou and terquem 1999 , habing 1999 ) ..
Please generate the next two sentences of the article | the presence of metals is then a requirement for the formation of both earths and jupiters .
we can not yet verify if our solar system is a typical planetary system or how generic the pattern described above is . |
9,519 | Suppose that you have an abstract for a scientific paper: based on a large number of observations carried out in the last decade it appears that the fraction of stars with protoplanetary disks declines steadily between @xmath01myr and @xmath010myr .
we do , however , know that the multiplicity fraction of star - forming regions can be as high as @xmath150% and that multiples have reduced disk lifetimes on average . as a consequence
, the observed roughly exponential disk decay can be fully attributed neither to single nor binary stars and its functional form may need revision .
observational evidence for a non - exponential decay has been provided by @xcite , who statistically correct previous disk frequency measurements for the presence of binaries and find agreement with models that feature a constantly high disk fraction up to @xmath03myr , followed by a rapid ( @xmath22myr ) decline .
we present results from our high angular resolution observational program to study the fraction of protoplanetary disks of single and binary stars separately .
we find that disk evolution timescales of stars bound in close binaries ( @xmath3100au ) are significantly reduced compared to wider binaries .
the frequencies of accretors among single stars and wide binaries appear indistinguishable , and are found to be lower than predicted from planet forming disk models governed by viscous evolution and photoevaporation . .
And you have already written the first three sentences of the full article: the formation of gas giant planets requires significant amounts of gas and dust to be present in the circumstellar environment of a young ttauri star .
the lifetime of protoplanetary disks is accordingly an important observable to constrain planet formation . to infer disk lifetimes ,
a number of previous studies have targeted young star - forming regions to measure the fraction of stars that exhibit either ongoing accretion or hot circumstellar dust or both ..
Please generate the next two sentences of the article | these fractions appear to be a strong function of the age of a star - forming region , monotonically decreasing from @xmath480% to 0% within @xmath010myr ( e.g. , * ? ? ?
* ; * ? ? ? |
9,520 | Suppose that you have an abstract for a scientific paper: we present an analysis of the secular variability of the longitudinal magnetic field @xmath0 in the roap star @xmath1 equ ( hd 201601 ) .
measurements of the stellar magnetic field @xmath0 were mostly compiled from the literature , and append also our 33 new @xmath0 measurements which were obtained with the 1-m optical telescope of special astrophysical observatory ( russia ) .
all the available data cover the time period of 58 years , and include both phases of the maximum and minimum @xmath0 .
we determined that the period of the long - term magnetic @xmath0 variations equals @xmath2 years , with @xmath3 g and @xmath4 g. [ firstpage ] stars : magnetic fields stars : chemically peculiar
stars : individual : hd 201601 .
And you have already written the first three sentences of the full article: the ap star @xmath1 equ ( hd 201601 , bs 8097 ) is one of the brightest objects of this class , with the apparent luminosity @xmath5 mag .
the exact spectral type of this object is a9p ( srcreu subclass ) .
the magnetic field of @xmath1 equ has been studied for more than 50 years , starting from october 1946 ( see babcock 1958 ) ..
Please generate the next two sentences of the article | the longitudinal magnetic field @xmath0 of this star does not exhibit periodic variations in time scales typical of stellar rotation , @xmath6 days .
such a variability of the @xmath0 field was observed in most ap stars . |
9,521 | Suppose that you have an abstract for a scientific paper: the complete @xmath0 corrections including soft - photon bremsstrahlung to the process @xmath1 in the mssm are calculated for on - shell w bosons .
the relative difference between the mssm and standard model corrections is generally quite small .
the maximum deviation from the standard model within the scanned region of parameter space is @xmath2 for unpolarized and transversally polarized w bosons , and @xmath3 for longitudinal w bosons .
+ pacs numbers : 12.60.jv , 13.10.+q , 12.15.lk . .
And you have already written the first three sentences of the full article: the process @xmath1 is already one of the key processes at lep2 , and will be of similar importance at future linear @xmath4 colliders . hence it is not surprising that considerable theoretical effort has gone into the precise prediction of the cross - section in the standard model ( sm ) , both for on- and off - shell w bosons ( @xcite , see @xcite for a review ) . for a process well accessible both experimentally and theoretically in the sm ,
one of the obvious questions to ask is whether it can tell us anything about physics beyond the sm .
supersymmetric extensions play a special role because they , like the sm , allow to make precise predictions in terms of a set of input parameters ..
Please generate the next two sentences of the article | previous calculations in supersymmetric theories include the complete one - loop corrections in spontaneously broken supersymmetry @xcite , sfermion - loop effects in the mssm @xcite , and also the complete mssm corrections to the closely related triple - gauge - boson vertex @xcite . in this paper
the complete one - loop corrections for @xmath1 in the mssm including real bremsstrahlung in the soft - photon approximation are presented . |
9,522 | Suppose that you have an abstract for a scientific paper: we use the method of maximum ( relative ) entropy to process information in the form of observed data and moment constraints .
the generic canonical form of the posterior distribution for the problem of simultaneous updating with data and moments is obtained .
we discuss the general problem of non - commuting constraints , when they should be processed sequentially and when simultaneously . as an illustration , the multinomial example of die tosses
is solved in detail for two superficially similar but actually very different problems .
address = department of physics , university at albany
suny , albany , ny 12222,usa .
And you have already written the first three sentences of the full article: the original method of maximum entropy , maxent @xcite , was designed to assign probabilities on the basis of information in the form of constraints .
it gradually evolved into a more general method , the method of maximum relative entropy ( abbreviated me ) @xcite - caticha07 , which allows one to update probabilities from arbitrary priors unlike the original maxent which is restricted to updates from a uniform background measure .
the realization @xcite that me includes not just maxent but also bayes rule as special cases is highly significant ..
Please generate the next two sentences of the article | first , it implies that me is _ capable of reproducing every aspect of orthodox bayesian inference _ and proves the complete compatibility of bayesian and entropy methods .
second , it opens the door to tackling problems that could not be addressed by either the maxent or orthodox bayesian methods individually . |
9,523 | Suppose that you have an abstract for a scientific paper: we consider possible detector designs for short - baseline neutrino experiments using neutrino beams produced at the first muon collider complex .
the high fluxes available at the muon collider make possible high statistics deep - inelastic scattering neutrino experiments with a low - mass target . a design of a low - energy neutrino oscillation experiment on the `` tabletop '' scale is also discussed . .
And you have already written the first three sentences of the full article: this contribution considers the problem of constructing detectors appropriate for doing short - baseline neutrino physics at the first muon collider complex .
the physics motivations for these detectors are discussed elsewhere in these proceedings@xcite .
since the proposed experiments are short - baseline , the physics being considered is primarily the high - energy physics of neutrino - nucleon deep - inelastic scattering ; however , the final section of the paper considers an oscillation experiment possible with the lowest energy neutrino beam ..
Please generate the next two sentences of the article | the muon collider is expected to use a series of recirculating linacs to accelerate the muons before injection into a collider ring .
any segment along the muon s trajectory that is straight will necessarily create a collimated neutrino beam with an angular divergence of approximately @xmath0 . |
9,524 | Suppose that you have an abstract for a scientific paper: we give an alternative proof of madsen - weiss _ generalized mumford conjecture_. our proof is based on ideas similar to madsen - weiss original proof , but it is more geometrical and less homotopy theoretical in nature . at the heart of the argument is a geometric version of _ harer stability _ , which we formulate as a theorem about folded maps . .
And you have already written the first three sentences of the full article: our main theorem gives a relation between _ fibrations _ ( or _ surface bundles _ ) and a related notion of _ formal fibrations_. by a fibration we shall mean a smooth map @xmath0 , where @xmath1 and @xmath2 are smooth , oriented , compact manifolds and @xmath3 is a submersion ( i.e. @xmath4 is surjective ) .
a cobordism between two fibrations @xmath5 and @xmath6 is a triple @xmath7 where @xmath8 is a cobordism from @xmath9 to @xmath10 , @xmath11 is a cobordism from @xmath12 to @xmath13 , and @xmath14 is a submersion which extends @xmath15 .
[ defn : formal - fib ] a. an _ unstable formal fibration _ is a pair @xmath16 , where @xmath17 is a smooth proper map , and @xmath18 is a bundle epimorphism ..
Please generate the next two sentences of the article | b. a _ stable formal fibration _ ( henceforth just a formal fibration ) is a pair @xmath16 , where @xmath3 is as before , but @xmath19 is defined only as a _
stable _ bundle map . |
9,525 | Suppose that you have an abstract for a scientific paper: we study the problem of a backscattering impurity coupled to the edge states of a two - dimensional topological insulator . in the regime
where the backscattering potential is larger than the band gap and accounting for electron - electron interactions , it is shown that the system can be described as a resonant level coupled to the one - dimensional ( 1d ) channel of interacting edge electrons .
we discuss the relationship of this system to the model of a ( structureless ) impurity in a 1d interacting electron liquid .
different from the latter model , in the resonant regime transmission is suppressed also for weak to moderately attractive interactions . at zero temperature ,
charge transport in two - dimensional topological insulators ( 2dti ) takes place through 1d edge states , whose metallic character is protected by symmetry @xcite .
for instance , time reversal symmetry ( trs ) prevents the ubiquitous anderson localization from happening in the 1d edge channels of quantum spin hall insulators ( qshi ) in the presence of scalar and spin - orbit ( so ) disorder , and for weak to moderate interactions @xcite .
the qsh effect exhibited by qshi has been observed in hgte / cdte and inas / gasb / alsb quantum wells @xcite .
however , for longer ( @xmath0 m ) edge channels , deviations from the expected conductance quantization of @xmath1 and relatively short mean - free paths have been measured @xcite . despite intense theoretical investigation @xcite on the backscattering ( bs ) mechanisms , a complete understanding of the origin of the finite edge channel resistance has not been achieved .
the proposed mechanisms involve electron - electron scattering , often in combination with scalar , so , or magnetic disorder @xcite . magnetic impurities break trs ( above the kondo temperature ) and lead to bs @xcite . yet ,
whether electron bs yields or not corrections to the dc conductance depends on the microscopic details of the scatterer @xcite . + for the simplest model of....
And you have already written the first three sentences of the full article: this supplementary contains details about our analytical approach to solve for the spectrum and the wavefunctions of both bulk and edge states for a generalized kane - mele ( km ) model @xcite .
we shall consider two kinds of boundary conditions , corresponding to the zigzag and beard edges .
the model hamiltonian reads : @xmath160 we have included a staggered potential where @xmath161 for @xmath162 and @xmath163 for @xmath164 sublattice ..
Please generate the next two sentences of the article | this staggered potential can be used to drive a transition between a trivial and a topological phase . in the above expression , @xmath165 denotes the spin and @xmath166 describes the sublattice pseudo spin components corresponding to the @xmath167 sublattices . as discussed in the main text , there is a gauge degree of freedom for fourier transformation of the fermion creation and destruction operators due to the bi - particle structure of lattice .
the two types of edges considered below correspond to two different gauge choices : 1 ) zigzag edge : @xmath27 and 2 ) beard edge : @xmath168 . |
9,526 | Suppose that you have an abstract for a scientific paper: in supersymmetric theory , the sfermion - fermion - gaugino interactions conserve the chirality of ( s)fermions .
the effect appears as the charge asymmetry in @xmath0 distributions at the cern large hadron collider where jets and leptons arise from the cascade decay @xmath1 .
furthermore , the decay branching ratios and the charge asymmetries in @xmath0 distributions are flavor non - universal due to the @xmath2 and @xmath3 mixing . when @xmath4 is large , the non - universality between @xmath5 and @xmath6 becomes @xmath7 level .
we perform a monte carlo simulation for some minimal supergravity benchmark points to demonstrate the detectability . .
And you have already written the first three sentences of the full article: supersymmetry ( susy ) is one of the promising candidates of the physics beyond the standard model .
no signature of susy has been found yet . however , the discovery of the susy particles is guaranteed for @xmath8 tev at the cern large hadron collider ( lhc ) for the minimal supergravity ( msugra ) model . date taking is expected to start from 2007 .
masses of the sparticles will also be measured at the lhc @xcite with reasonable accuracy , which is important to distinguish various susy breaking models ..
Please generate the next two sentences of the article | the lagrangian of supersymmetric theory is highly constrained .
for example the sfermion - fermion - gaugino interaction is restricted to be of the form @xmath9-@xmath10-@xmath11 because @xmath12 and @xmath13 belong to the same chiral multiplet in supersymmetry . on the other hand , @xmath14 and @xmath15 mixing terms |
9,527 | Suppose that you have an abstract for a scientific paper: we present results from three - dimensional general relativistic simulations of binary neutron star coalescences and mergers using public codes .
we considered equal mass models where the baryon mass of the two neutron stars ( ns ) is @xmath0 , described by four different equations of state ( eos ) for the cold nuclear matter ( apr4 , sly , h4 , and ms1 ; all parametrized as piecewise polytropes ) .
we started the simulations from four different initial interbinary distances ( @xmath1 , and @xmath2 km ) , including up to the last 16 orbits before merger .
that allows to show the effects on the gravitational wave phase evolution , radiated energy and angular momentum due to : the use of different eoss , the orbital eccentricity present in the initial data and the initial separation ( in the simulation ) between the two stars .
our results show that eccentricity has a major role in the discrepancy between numerical and analytical waveforms until the very last few orbits , where `` tidal '' effects and missing high - order post - newtonian coefficients also play a significant role .
we test different methods for extrapolating the gravitational wave signal extracted at finite radii to null infinity .
we show that an effective procedure for integrating the newman - penrose @xmath3 signal to obtain the gravitational wave strain @xmath4 is to apply a simple high - pass digital filter to @xmath4 after a time domain integration , where only the two physical motivated integration constants are introduced .
that should be preferred to the more common procedures of introducing additional integration constants , integrating in the frequency domain or filtering @xmath3 before integration .
_ keywords _ : numerical relativity , gravitational wave , neutron star binaries , einstein toolkit . .
And you have already written the first three sentences of the full article: the recent , first , direct detection @xcite of gravitational waves ( gw ) from a binary black hole merger by advanced ligo @xcite has opened a new window for the investigation of astrophysical compact objects .
the new generation gw detectors advanced ligo and advanced virgo @xcite are also expected to reveal an incoming gravitational transient signal from binary neutron stars ( bns ) coalescence and merger , once their sensitivity at higher frequencies will increase . at design sensitivity , the rate of bns signals detected is predicted to be in the interval ( 0.2 - 200 ) per year @xcite , making them the next target for gw detection .
these expected detections present a unique way to learn about the physics of matter at the extreme conditions present in neutron stars and the eos of nuclear matter above the nuclear density ..
Please generate the next two sentences of the article | fully general relativistic simulations of bns started in 1999 @xcite but it is since the crucial breakthroughs of 2005 @xcite that numerical relativity is the main instrument to study the dynamics of the merger of compact objects .
this has led to the development of community driven public software like the einstein toolkit @xcite and the lorene library @xcite that allow one to openly simulate such systems and in particular bns coalescence and merger @xcite . |
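As an aside on the strain-reconstruction recipe mentioned in the abstract of the row above (a time-domain integration of the Newman-Penrose signal followed by a simple high-pass digital filter on the strain), the following is a minimal Python/SciPy sketch of that idea. It is not taken from the paper or from the Einstein Toolkit: the function name, the fourth-order Butterworth filter, the uniform time grid and the choice of fixing the two integration constants by mean removal are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import butter, filtfilt

def strain_from_psi4(t, psi4, f_cut):
    """Double time-domain integration of a psi4 mode followed by a
    high-pass filter applied to the strain h (hypothetical helper).

    t     : uniformly spaced time samples
    psi4  : complex Newman-Penrose scalar extracted at finite radius
    f_cut : high-pass cutoff, chosen below the initial gravitational-wave frequency
    """
    dt = t[1] - t[0]
    # first integration: hdot; one integration constant fixed by removing the mean
    hdot = cumulative_trapezoid(psi4, dx=dt, initial=0.0)
    hdot -= hdot.mean()
    # second integration: h; the second constant fixed the same way
    h = cumulative_trapezoid(hdot, dx=dt, initial=0.0)
    h -= h.mean()
    # high-pass digital filter applied to h after the time-domain integration
    b, a = butter(4, 2.0 * dt * f_cut, btype="highpass")
    return filtfilt(b, a, h.real) + 1j * filtfilt(b, a, h.imag)
```

Filtering h after integrating, rather than filtering psi4 before integration, is the point emphasised in the abstract; everything else above is just one simple way to realise it.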
9,528 | Suppose that you have an abstract for a scientific paper: the author thinks that the main ideas or relativity theory can be explained to children ( around the age of 15 or 16 ) without complicated calculations , by using very simple arguments of affine geometry .
the proposed approach is presented as a conversation between the author and one of his grand - children . limited here to the special theory ,
it will be extended to the general theory elsewhere , as sketched in conclusion . for agathe , florent , basile , mathis , gabrielle , morgane , quitterie and my future other grand - children .
And you have already written the first three sentences of the full article: maybe one day , one of my grand - children , at the age of 15 or 16 , will ask me : grand - father , could you explain what is relativity theory ?
my physics teacher lectured about it , talking of rolling trains and of lightnings hitting the railroad , and i understood almost nothing !
this is the discussion i would like to have with her ( or him ) ..
Please generate the next two sentences of the article | do you know the theorem : the diagonals of a parallelogram meet at their middle point ?
yes , i do ! |
9,529 | Suppose that you have an abstract for a scientific paper: the possibility of existence of hyperons in the recently measured @xmath0 pulsar psrj1614 - 2230 is explored using a diverse set of nuclear equations of state calculated within the relativistic mean - field models .
our results indicate that the nuclear equations of state compatible with heavy - ion data allow the hyperons to exist in the psrj1614 - 2230 only for significantly larger values for the meson - hyperon coupling strengths .
the maximum mass configurations for these cases contain sizable hyperon fractions ( @xmath1 ) and yet masquerade their counterpart composed of only nucleonic matter . .
And you have already written the first three sentences of the full article: the latest measurement of the shapiro delay for the millisecond pulsar psrj1614 - 2230 provides reliable lower bound on the maximum mass to be @xmath2 @xcite . this measurement rules out all the equations of state ( eoss ) yielding the maximum mass less than that of the psrj1614 - 2230
of course , the eoss for the nucleonic matter can readily yield the compact stars with masses @xmath3 .
the eoss with hadron - quark phase transition are also compatible with the mass measurement of the psrj1614 - 2230 , provided , the quarks are assumed to be strongly interacting and are in colour superconducting phase @xcite ..
Please generate the next two sentences of the article | however , at large , the maximum mass of the compact stars is found to be well below @xmath0 when the non - nucleonic degrees of freedom like hyperons and kaon condensates are considered @xcite . one might thus infer in the backdrop of previous calculations that the existence of hyperons and kaon condensates is unlikely in the psrj1614 - 2230 .
recently , studies involving role of hyperons on the maximum mass of the compact stars are revisited @xcite . |
9,530 | Suppose that you have an abstract for a scientific paper: a geometrical analysis of the bulk and anti - de sitter boundary unitarity conditions of 3d `` minimal massive gravity '' ( mmg ) ( which evades the `` bulk / boundary clash '' of topologically massive gravity ) is used to extend and simplify previous results , showing that unitarity selects , up to equivalence , a connected region in parameter space .
we also initiate the study of flat - space holography for mmg .
its relevant flat space limit is a deformation of 3d conformal gravity ; the deformation is both non - linear and non - conformal , implying a linearisation instability .
damtp-2014 - 80 .
And you have already written the first three sentences of the full article: a recently proposed model of 3d massive gravity @xcite , dubbed `` minimal massive gravity '' ( mmg ) , has bulk properties that are identical to those of `` topologically massive gravity '' ( tmg ) ( which propagates a single massive spin-@xmath0 mode @xcite ) but its boundary properties ( for ads asymptotics ) are different .
specifically , mmg evades the `` bulk / boundary clash '' of tmg ; this is the impossibility ( for tmg ) of arranging for both central charges of the asymptotic conformal symmetry algebra to be positive while also arranging for the bulk mode to have positive energy . in this paper
we present a greatly simplified , and geometrical , analysis of the unitarity conditions of mmg ..
Please generate the next two sentences of the article | our results confirm those of @xcite but we also consider a slightly larger class of models by leaving free the normalisation of the parameters of the mmg action , and we cut in half the relevant parameter space by establishing equivalence under a `` duality '' transformation in the full parameter space .
the final result is that unitarity restricts the parameters to a connected region of parameter space , up to equivalence . |
9,531 | Suppose that you have an abstract for a scientific paper: we consider the problem of transmitting data at rate @xmath0 over a state dependent channel @xmath1 with state information available at the sender and at the same time conveying the information about the channel state itself to the receiver .
the amount of state information that can be learned at the receiver is captured by the mutual information @xmath2 between the state sequence @xmath3 and the channel output @xmath4 .
the optimal tradeoff is characterized between the information transmission rate @xmath0 and the state uncertainty reduction rate @xmath5 , when the state information is either causally or noncausally available at the sender . in particular ,
when state transmission is the only goal , the maximum uncertainty reduction rate is given by @xmath6 .
this result is closely related and in a sense dual to a recent study by merhav and shamai , which solves the problem of _ masking _ the state information from the receiver rather than conveying it . .
And you have already written the first three sentences of the full article: a channel @xmath1 with noncausal state information at the sender has capacity @xmath7 as shown by gelfand and pinsker @xcite .
transmitting at capacity , however , obscures the state information @xmath3 as received by the receiver @xmath4 . in some instances we wish to convey the state information @xmath3 itself , which could be time - varying fading parameters or an original image that we wish to enhance .
for example , a stage actor with face @xmath8 uses makeup @xmath9 to communicate to the back row audience @xmath10 . here.
Please generate the next two sentences of the article | @xmath9 is used to enhance and exaggerate @xmath8 rather than to communicate new information .
another motivation comes from cognitive radio systems @xcite with the additional assumption that the secondary user @xmath11 communicates its own message and at the same time facilitates the transmission of the primary user s signal @xmath3 . |
9,532 | Suppose that you have an abstract for a scientific paper: s3 t ( stochastic structural stability theory ) employs a closure at second order to obtain the dynamics of the statistical mean turbulent state . when s3 t is implemented as a coupled set of equations for the streamwise mean and perturbation states , nonlinearity in the dynamics is restricted to interaction between the mean and perturbations .
the s3 t statistical mean state dynamics can be approximately implemented by similarly restricting the dynamics used in a direct numerical simulation ( dns ) of the full navier - stokes equations ( referred to as the ns system ) .
although this restricted nonlinear system ( referred to as the rnl system ) is greatly simplified in its dynamics in comparison to the associated ns , it nevertheless self - sustains a turbulent state in wall - bounded shear flow with structures and dynamics comparable to that in observed turbulence .
moreover , rnl turbulence can be analyzed effectively using theoretical methods developed to study the closely related s3 t system . in order to better understand rnl turbulence and its relation to ns turbulence
, an extensive comparison is made of diagnostics of structure and dynamics in these systems .
although quantitative differences are found , the results show that turbulence in the rnl system closely parallels that in ns and suggest that the s3t / rnl system provides a promising reduced complexity model for studying turbulence in wall - bounded shear flows . .
And you have already written the first three sentences of the full article: the navier - stokes equations ( ns ) , while comprising the complete dynamics of turbulence , have at least two disadvantages for theoretical investigation of the physics of turbulence : ns lacks analytical solution for the case of the fully turbulent state and the nonlinear advection term results in turbulent states of high complexity which tends to obscure the fundamental mechanisms underlying the turbulence .
one approach to overcoming these impediments has been the search for simplifications of ns that retain essential features of the turbulence dynamics .
the linearized navier - stokes equations ( lns ) provide one example of the successful application of this approach in which the power of linear systems theory is made available to the study of turbulence @xcite ..
Please generate the next two sentences of the article | the lns system captures the non - normal mechanism responsible for perturbation growth in ns @xcite .
this linear mechanism retained in lns underlies both the process of subcritical transition to turbulence and the maintenance of the turbulent state @xcite |
9,533 | Suppose that you have an abstract for a scientific paper: in this report , mathematical model for generalized nonlinear three dimensional wave breaking equations was developed analytically using fully nonlinear extended boussinesq equations to encompass rotational dynamics in wave breaking zone .
the three dimensional equations for vorticity distributions are developed from reynold based stress equations .
vorticity transport equations are also developed for wave breaking zone .
these equations are basic model tools for numerical simulation of the surf zone to explain wave breaking phenomena .
the model reproduces most of the dynamics in the surf zone .
non linearity for wave height predictions is also shown close to the breaking both in shoaling as well as surf zone .
* keyword * wave breaking , boussinesq equation , shallow water , surf zone .
pacs : 47.32-y . .
And you have already written the first three sentences of the full article: wave breaking is one of the most complex phenomena that occurs in the near shore region . during propagation of wave from deep to shallow water
, the wave field is transformed due to shoaling .
close to the shoreline , they become unstable and break . in the process of breaking.
Please generate the next two sentences of the article | , energy is redistributed from fairly organized wave motion to small scale turbulence , large scale currents and waves .
+ classical boussinesq theory provides a set of evolution equations for surface water waves in the combined limit of weak nonlinearity ( characterized by @xmath0 ) and weak dispersion ( @xmath1 ) with the ratio @xmath2 . |
9,534 | Suppose that you have an abstract for a scientific paper: we present a new implementation of the monte carlo method to simulate the evolution of star clusters .
the major improvement with respect to the previously developed codes is the treatment of the external tidal field , taking into account both the loss of stars from the cluster boundary and the disk / bulge shocks .
we provide recipes to handle eccentric orbits in complex galactic potentials .
the first calculations for stellar systems containing 21000 and 42000 equal - mass particles show good agreement with direct n - body simulations in terms of the evolution of both the enclosed mass and the lagrangian radii provided that the mass - loss rate does not exceed a critical value .
[ firstpage ] methods : numerical methods : statistical stars : kinematics and dynamics globular clusters : general .
And you have already written the first three sentences of the full article: the dynamical evolution of dense star clusters is a problem of fundamental importance in theoretical astrophysics .
star clusters like open and globular clusters are among the simplest stellar systems : they are spherical , they contain no dust to confuse the observations and they appear to have no dark matter .
moreover , they are dynamically old : a typical star in a globular cluster has completed some @xmath0 orbits since the cluster was formed and processes like gravothermal collapse and two - body relaxation occur on timescales comparable with their ages ..
Please generate the next two sentences of the article | thus , they provide the best physical realization of the gravitational n - body problem i.e. to understand the evolution of a system of n point masses interacting only by gravitational forces . in spite of the many advances made in the recent past , many aspects of the problem have remained unresolved like the production of exotic objects ( ferraro et al .
2012 ) , the importance of tidal - shocks in the long term evolution and survival of star clusters in the galaxy ( gnedin , lee & ostriker 1999 ) and the ability to retain dark remnants ( morscher et al . 2013 ; sippel & hurley 2013 ) . |
9,535 | Suppose that you have an abstract for a scientific paper: we report on the serendipitous discovery of a 442-hz pulsar during a _ rossi x - ray timing explorer _ ( _ rxte _ )
observation of the globular cluster ngc 6440 .
the oscillation is detected following a burst - like event which was decaying at the beginning of the observation .
the time scale of the decay suggests we may have seen the tail - end of a long - duration burst .
low - mass x - ray binaries ( lmxbs ) are known to emit thermonuclear x - ray bursts that are sometimes modulated by the spin frequency of the star , the so called burst oscillations .
the pulsations reported here are peculiar if interpreted as canonical burst oscillations . in particular
, the pulse train lasted for @xmath0500 s , much longer than in standard burst oscillations .
the signal was highly coherent and drifted down by @xmath02@xmath1hz , much smaller than the @xmath0hz drifts typically observed during normal bursts .
the pulsations are reminiscent of those observed during the much more energetic `` superbursts '' , however , the temporal profile and the energetics of the burst suggest that it was not the tail end nor the precursor feature of a superburst .
it is possible that we caught the tail end of an outburst from a new ` intermittent' accreting x - ray millisecond pulsar , a phenomenon which until now has only been seen in hete j1900.1@xmath22455 @xcite . we note that @xcite reported the discovery of a 409.7 hz burst oscillation from sax j1748.9@xmath22021 , also located in ngc 6440 .
however , _ chandra x - ray observatory _ imaging indicates it contains several point - like x - ray sources , thus the 442 hz object is likely a different source . .
And you have already written the first three sentences of the full article: the discovery of millisecond spin periods of neutron stars in low mass x - ray binaries ( lmxbs ) with the _ rossi x - ray timing explorer _ ( _ rxte _ ) has helped elucidate the nature of these sources .
neutron star lmxbs consist of a neutron star accreting from a low mass companion .
as material ( mostly h and he ) is accreted onto the star and gets compressed , it eventually ignites and burns unstably ( see * ? ? ?.
Please generate the next two sentences of the article | this phenomenon is observed as a type i x - ray burst .
type i x - ray bursts have been observed from over @xmath070 lmxbs ( see @xcite and references therein ) . |
9,536 | Suppose that you have an abstract for a scientific paper: the problem of publishing personal data without giving up privacy is becoming increasingly important . a clean formalization that has been recently proposed
is the @xmath0-anonymity , where the rows of a table are partitioned in clusters of size at least @xmath0 and all rows in a cluster become the same tuple , after the suppression of some entries .
the natural optimization problem , where the goal is to minimize the number of suppressed entries , is hard even when the stored values are over a binary alphabet , as well as when the table consists of a bounded number of columns . in this paper
we study how the complexity of the problem is influenced by different parameters .
first we show that the problem is w[1]-hard when parameterized by the value of the solution ( and @xmath0 ) .
then we exhibit a fixed - parameter algorithm when the problem is parameterized by the number of columns and the maximum number of different values in any column . finally , we prove that @xmath0-anonymity is still apx - hard even when restricting to instances with @xmath1 columns and @xmath2 . .
And you have already written the first three sentences of the full article: in epidemic studies the analysis of large amounts of personal data is essential . at the same time
the dissemination of the results of those studies , even in a compact and summarized form , can provide some information that can be exploited to identify the row pertaining to a certain individual .
for instance , zip code , gender and date of birth can uniquely identify 87% of individuals in the u.s ..
Please generate the next two sentences of the article | therefore when managing personal data it is of the utmost importance to effectively protect individuals privacy .
one approach to deal with such problem is the @xmath0-anonymity model @xcite . |
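To make the cost model of the k-anonymity row above concrete, here is a small Python sketch of how a given partition into clusters of size at least k is turned into an anonymized table and scored by the number of suppressed entries. The '*' marker, the function names and the toy instance are illustrative assumptions; finding the partition that minimizes this cost is the hard optimization problem the abstract is about and is not attempted here.

```python
from typing import List, Sequence, Tuple

def suppress(cluster: Sequence[Sequence[str]]) -> Tuple[List[str], int]:
    """Make all rows of a cluster identical by suppressing (writing '*' in)
    every column on which they disagree; return the common tuple and the
    number of suppressed entries."""
    ncols = len(cluster[0])
    common, cost = [], 0
    for j in range(ncols):
        values = {row[j] for row in cluster}
        if len(values) == 1:
            common.append(cluster[0][j])
        else:
            common.append("*")
            cost += len(cluster)   # one suppressed entry per row in this column
    return common, cost

def anonymization_cost(table, partition, k):
    """Total number of suppressed entries for a partition into clusters of size >= k."""
    assert all(len(c) >= k for c in partition)
    return sum(suppress([table[i] for i in c])[1] for c in partition)

# toy instance: 4 rows over a binary alphabet, k = 2
table = [("0", "1", "1"), ("0", "1", "0"), ("1", "0", "0"), ("1", "0", "1")]
print(anonymization_cost(table, [(0, 1), (2, 3)], k=2))   # 4 suppressed entries
```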
9,537 | Suppose that you have an abstract for a scientific paper: we analyze the quantum interference effects appearing in the charge current through the double quantum dots coupled in @xmath0-shape configuration to an isotropic superconductor and metallic lead . owing to proximity effect
the quantum dots inherit a pairing which has the profound influence on nonequilibrium charge transport , especially in the subgap regime @xmath1 .
we discuss under what conditions the fano - type lineshapes might appear in such andreev conductance and consider a possible interplay with the strong correlation effects . .
And you have already written the first three sentences of the full article: heterostructures with nanoobjects ( such as quantum dots , nanowires , molecules , etc ) hybridized to one conducting and another superconducting electrode seem to be promising testing fields where the strong electron correlations ( responsible e.g. for coulomb blockade and kondo physics @xcite ) can be confronted with the superconducting order @xcite .
coulomb repulsion between electrons in the solid state physics is known to suppress the local ( @xmath2-wave ) pairing and , through the spin exchange mechanism , eventually promotes the intersite ( @xmath3-wave ) superconductivity @xcite .
mutual relation between such repulsion and the local pairing is however rather difficult for studying , both on theoretical grounds and experimentally . in nanoscopic heterostructures.
Please generate the next two sentences of the article | some of these limitations can be overcome by a suitable adjustment of the hybridization and the gate - voltage positioning of energy levels involved in the charge transfer @xcite .
they enable a controllable changeover between the kondo regime and opposite case dominated by the induced on - dot pairing . |
9,538 | Suppose that you have an abstract for a scientific paper: we detect four isolated , x - ray over - luminous ( @xmath0 erg s@xmath1 ) elliptical galaxies ( olegs ) in our 160 square degree _ rosat _ pspc survey .
the extent of their x - ray emission , total x - ray luminosity , total mass , and mass of the hot gas in these systems correspond to poor clusters , and the optical luminosity of the central galaxies ( @xmath2 ) is comparable to that of cluster cds . however , there are no detectable fainter galaxy concentrations around the central elliptical .
the mass - to - light ratio within the radius of detectable x - ray emission is in the range @xmath3 , which is 2 - 3 times higher than typically found in clusters or groups .
these objects can be the result of galaxy merging within a group . however , their high @xmath4 values are difficult to explain in this scenario .
olegs must have been undisturbed for a very long time , which makes them the ultimate examples of systems in hydrostatic equilibrium .
the number density of olegs is @xmath5 mpc@xmath6 at the 90% confidence .
they comprise 20% of all clusters and groups of comparable x - ray luminosity , and nearly all galaxies brighter than @xmath7 .
the estimated contribution of olegs to the total mass density in the universe is close to that of @xmath8kev clusters . .
And you have already written the first three sentences of the full article: large concentrations of matter in the universe are found using optical galaxies as tracers of mass .
systems with a wide range of mass and size were discovered by this technique , from pairs and triplets of galaxies to filaments extending for hundreds of mpc .
do optical galaxy surveys detect all large - scale mass concentrations , or do there exist populations of `` dark '' massive objects ?.
Please generate the next two sentences of the article | one approach for detection of dark systems is through gravitational lensing .
weak lensing observations can detect very large scale mass structures ( e.g. , schneider et al . |
9,539 | Suppose that you have an abstract for a scientific paper: we investigate the use of compressive sampling for networked feedback control systems .
the method proposed serves to compress the control vectors which are transmitted through rate - limited channels without much deterioration of control performance .
the control vectors are obtained by an @xmath0 optimization , which can be solved very efficiently by fista ( fast iterative shrinkage - thresholding algorithm ) .
simulation results show that the proposed sparsity - promoting control scheme gives a better control performance than a conventional energy - limiting @xmath1-optimal control . .
And you have already written the first three sentences of the full article: the objective of this article is to design a controller in a _ networked control system _ @xcite that produces sparse control vectors for effective compression before transmissions .
unfortunately , the calculation of optimal sparse vectors will , in general , require significant computational cost and may thereby introduce delays , which are unacceptable for closed - loop operation . to overcome this issue , we subsample the problem to reduce its size and adopt a fast algorithm called _ fista _ ( fast iterative shrinkage - thresholding algorithm ) @xcite .
networked control systems are those in which the controlled plants are located away from the controllers , and the communication should be made through rate - limited communication channels such as wireless networks or the internet @xcite . in networked control systems ,.
Please generate the next two sentences of the article | efficient signal compression or representation is essential to send control data through rate - limited communication channels . for this purpose
, we propose an approach of sparse control signal representation using the _ compressive sampling _ technique @xcite . |
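Since the compressive-sampling row above leans on FISTA, a generic FISTA iteration for an l1-regularized least-squares problem is sketched below (Beck and Teboulle's algorithm). The paper's actual formulation couples the sparsity term to the control objective and plant dynamics, which is not reproduced here; the matrix A, the vector b and the regularization weight lam are placeholders.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for  min_x  0.5 * ||A x - b||_2^2 + lam * ||x||_1 ."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    soft = lambda v, thr: np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)            # proximal (soft-thresholding) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```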
9,540 | Suppose that you have an abstract for a scientific paper: this is a survey of recent studies of singularity formation in solutions of spherically symmetric yang - mills equations in higher dimensions .
the main attention is focused on five space dimensions because this case exhibits interesting similarities with einstein s equations in the physical dimension , in particular the dynamics at the threshold of singularity formation shares many features ( such as universality , self - similarity , and scaling ) with critical phenomena in gravitational collapse . the borderline case of four space dimensions
is also analyzed and the formation of singularities is shown to be intimately tied to the existence of the instanton solution . .
And you have already written the first three sentences of the full article: one of the most interesting features of many nonlinear evolution equations is the spontaneous onset of singularities in solutions starting from perfectly smooth initial data .
such a phenomenon , usually called `` blowup '' , has been a subject of intensive studies in many fields ranging from fluid dynamics to general relativity .
whether or not the blowup can occur for a given nonlinear evolution equation is the central mathematical question which , from the physical point of view , has a direct bearing on our understanding of the limits of validity of the corresponding model ..
Please generate the next two sentences of the article | unfortunately , this is often a difficult question .
two famous examples for which the answer is not known are the navier - stokes equation and the einstein equations . once the existence of blowup is established for a particular equation , many further questions come up , such as : when and where does the blowup occur ? what is the character of blowup and is it universal ? can a solution be continued past the singularity ? |
9,541 | Suppose that you have an abstract for a scientific paper: for quantum systems of zero - range interaction we discuss the mathematical scheme within which modelling the two - body interaction by means of the physically relevant ultra - violet asymptotics known as the `` ter - martirosyan skornyakov condition '' gives rise to a self - adjoint realisation of the corresponding hamiltonian .
this is done within the self - adjoint extension scheme of krein , vishik , and birman .
we show that the ter - martirosyan skornyakov asymptotics is a condition of self - adjointness only when it is imposed in suitable functional spaces , and not just as a point - wise asymptotics , and we discuss the consequences of this fact on a model of two identical fermions and a third particle of different nature .
* keywords : * point interactions , self - adjoint extensions , krein - vishik - birman theory , ter - martirosyan skornyakov operators . .
And you have already written the first three sentences of the full article: according to a nomenclature that has emerged in various physical and mathematical contexts , one refers to the so - called ter - martirosyan skornyakov ( henceforth tms ) operators as a distinguished class of quantum hamiltonians for systems of non - relativistic particles with two - body `` _ _ zero - range _ _ '' ( or `` _ _ contact _ _ '' , or `` _ _ point _ _ '' ) interaction .
this terminology stems from early works in nuclear physics , where it was the nucleon - nucleon coupling to be initially modelled as a `` contact '' interaction .
nowadays the typical experimental realisation is that of ultra - cold atom systems where , by feshbach resonance methods , the two - body scattering length is tuned to a magnitude that exceeds by many orders its nominal value , and the effective range of the interaction shrinks correspondingly to a very small scale , so that to an extremely good approximation the interaction can be considered to be of infinite scattering length and/or zero range . in section [ sec : history_tms ].
Please generate the next two sentences of the article | we will provide a more diffuse context and references .
informally speaking , tms hamiltonians are qualified by the two characteristics of acting as the @xmath0-body @xmath1-dimensional _ free _ hamiltonian on functions that are supported _ away _ from the `` coincidence hyperplanes '' @xmath2 , and of having a domain that consists of square - integrable functions @xmath3 , possibly with fermionic or bosonic exchange symmetry , which satisfy specific asymptotics when @xmath4 for some or for all particle couples @xmath5 . |
9,542 | Suppose that you have an abstract for a scientific paper: we construct a warm inflation model with inflaton field non - minimally coupled to induced gravity on a warped dgp brane .
we incorporate possible modification of the induced gravity on the brane in the spirit of @xmath0-gravity .
we study cosmological perturbations in this setup . in the case of two field inflation such as warm inflation ,
usually entropy perturbations are generated . while it is expected that in the case of one - field inflation these perturbations are removed , we show that even in the absence of the radiation field , entropy perturbations are generated in our setup due to non - minimal coupling and modification of the induced gravity .
we study the effect of dissipation on the inflation parameters of this extended braneworld scenario . + * pacs * : 04.50.-h , 98.80.-k , 98.80.cq , 98.80.es + * key words * : braneworld gravity , scalar - tensor theories , induced gravity , warm inflation , perturbations .
And you have already written the first three sentences of the full article: the idea of inflation is a very successful paradigm to solve the problems of the standard cosmology and it provides a basis for production and evolution of seeds for large scale structure of the universe [ 1,2 ] . from a thermodynamical viewpoint ,
there are two possible alternatives to inflationary dynamics : the standard picture is isentropic inflation , referred to as supercooled inflation . in this picture
, the universe expands in the inflation phase and its temperature decreases rapidly ..
Please generate the next two sentences of the article | when inflation ends , a reheating period introduces radiation into the universe .
the fluctuations in this type of inflation model are zero - point ground state fluctuations and evolution of the inflaton field is governed by ground state evolution equations . in this model |
9,543 | Suppose that you have an abstract for a scientific paper: for
dimension reduction in @xmath0 , the method of _ cauchy random projections _ multiplies the original data matrix @xmath1 with a random matrix @xmath2 ( @xmath3 ) whose entries are i.i.d
. samples of the standard cauchy @xmath4 .
because of the impossibility results , one can not hope to recover the pairwise @xmath0 distances in @xmath5 from @xmath6 , using linear estimators without incurring large errors . however , nonlinear estimators are still useful for certain applications in data stream computation , information retrieval , learning , and data mining .
we propose three types of nonlinear estimators : the bias - corrected sample median estimator , the bias - corrected geometric mean estimator , and the bias - corrected maximum likelihood estimator .
the sample median estimator and the geometric mean estimator are asymptotically ( as @xmath7 ) equivalent but the latter is more accurate at small @xmath8 .
we derive explicit tail bounds for the geometric mean estimator and establish an analog of the johnson - lindenstrauss ( jl ) lemma for dimension reduction in @xmath0 , which is weaker than the classical jl lemma for dimension reduction in @xmath9 .
asymptotically , both the sample median estimator and the geometric mean estimators are about @xmath10 efficient compared to the maximum likelihood estimator ( mle ) .
we analyze the moments of the mle and propose approximating the distribution of the mle by an inverse gaussian .
* keywords : * dimension reduction , @xmath0 norm , cauchy random projections , jl bound .
And you have already written the first three sentences of the full article: this paper focuses on dimension reduction in @xmath0 , in particular , on the method based on _ cauchy random projections _
@xcite , which is special case of _ linear random projections_. the idea of _ linear random projections _ is to multiply the original data matrix @xmath11 with a random projection matrix @xmath12 , resulting in a projected matrix @xmath13 . if @xmath14 , then it should be much more efficient to compute certain summary statistics ( e.g. , pairwise distances ) from @xmath15 as opposed to @xmath5 .
moreover , @xmath15 may be small enough to reside in physical memory while @xmath5 is often too large to fit in the main memory ..
Please generate the next two sentences of the article | the choice of the random projection matrix @xmath16 depends on which norm we would like to work with .
@xcite proposed constructing @xmath16 from i.i.d . |
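The sample-median estimator named in the Cauchy-random-projections row above can be illustrated in a few lines of NumPy: after projecting with a matrix of i.i.d. standard Cauchy entries, the entrywise differences of two projected rows are Cauchy distributed with scale equal to the original l1 distance, so the median of their absolute values estimates that distance. The finite-sample bias correction derived in the paper is omitted, and the variable names and sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_project(A, k):
    """Multiply the n x D data matrix A by a D x k matrix of i.i.d. standard Cauchy entries."""
    R = rng.standard_cauchy((A.shape[1], k))
    return A @ R

def l1_distance_median(b_u, b_v):
    """Uncorrected sample-median estimate of ||u - v||_1 from two projected rows."""
    return np.median(np.abs(b_u - b_v))

# quick sanity check on random data
A = rng.random((2, 10_000))
B = cauchy_project(A, k=500)
print(np.sum(np.abs(A[0] - A[1])), l1_distance_median(B[0], B[1]))
```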
9,544 | Suppose that you have an abstract for a scientific paper: with the observations of the solar dynamics observatory , we present the slipping magnetic reconnections with multiple flare ribbons ( frs ) during an x1.2 eruptive flare on 2014 january 7 .
a center negative polarity was surrounded by several positive ones , and there appeared three frs .
the three frs showed apparent slipping motions , and hook structures formed at their ends . due to the moving footpoints of the erupting structures , one tight semi - circular hook disappeared after the slippage along its inner and outer edge , and coronal dimmings formed within the hook .
the east hook also faded as a result of the magnetic reconnection between the arcades of a remote filament and a hot loop that was impulsively heated by the under flare loops .
our results are accordant with the slipping magnetic reconnection regime in 3d standard model for eruptive flares .
we suggest that complex structures of the flare is likely a consequence of the more complex flux distribution in the photosphere , and the eruption involves at least two magnetic reconnections . .
And you have already written the first three sentences of the full article: solar flares are most energetic magnetic explosions in the solar activities .
they can increase the emission in a broad range of the electromagnetic spectrum , from radio wavelengths to x- and @xmath0-rays ( fletcher et al . 2011 ) . in the standard solar flare model ,
i.e. the cshkp model ( named after carmichael 1964 ; sturrock 1966 ; hirayama 1974 ; kopp & pneuman 1976 ) , the erupting flux rope stretched magnetic filed lines to induce the magnetic reconnection ; due to the successive reconnections , the flare loops ( fls ; originally named as post - flare loops ) formed and straddled the magnetic polarity inversion line , and their footpoints are heated by the energy transport from the reconnection site to appear as flare ribbons ( frs ) ..
Please generate the next two sentences of the article | however , the standard flare model is basically two - dimensional , and it remains deficient to explain many inherent three - dimensional ( 3d ) observational features , such as the formation of coronal sigmoids ( aulanier et al .
2010 ; green et al . 2011 ; savcheva et al . 2015 ) , the erupting flux rope ( zhang et al . 2012 ) , the moving bright emissions along the frs ( fletcher & hudson 2002 ; del zanna et al . |
9,545 | Suppose that you have an abstract for a scientific paper: impurity diffusion coefficients are entirely obtained from a low cost classical molecular statics technique ( cmst ) . in particular , we show how cmst is appropriate in order to describe the impurity diffusion behavior mediated by a vacancy mechanism . in the context of the five - frequency model , cmst allows to calculate all the microscopic parameters , namely : the free energy of vacancy formation , the vacancy - solute binding energy and the involved jump frequencies , from them , we obtain the macroscopic transport magnitudes such as : correlation factor , solvent - enhancement factor , onsager and diffusion coefficients .
specifically , we perform our calculations in f.c.c . diluted @xmath0 and @xmath1 alloys .
results for the tracer diffusion coefficients of solvent and solute species are in agreement with available experimental data for both systems .
we conclude that in @xmath0 and @xmath1 systems solute atoms migrate by direct interchange with vacancies in all the temperature range where there are available experimental data . in the @xmath1 case
, a vacancy drag mechanism could occur at temperatures below @xmath2k .
diffusion , modelling , numerical calculations , vacancy mechanism , diluted alloys , @xmath0 and @xmath1 systems . .
And you have already written the first three sentences of the full article: the low enrichment of @xmath3mo alloy dispersed in an @xmath4 matrix is a prototype for new experimental nuclear fuels @xcite .
when these metals are brought into contact , diffusion in the @xmath5 interface gives rise to interaction phases . also , when subjected to temperature and neutron radiation , phase transformation from @xmath6 to @xmath7 occurs and intermetallic phases develop in the u@xmath8mo@xmath9al interaction zone .
fission gas pores nucleate in these new phases during service producing swelling and deteriorating the alloy properties @xcite ..
Please generate the next two sentences of the article | an important technological goal is to delay or directly avoid undesirable phase formation by inhibiting interdiffusion of @xmath4 and @xmath10 components .
some of these compounds are believed to be responsible for degradation of properties @xcite . |
9,546 | Suppose that you have an abstract for a scientific paper: we investigate transport in several translationally invariant spin-@xmath0 chains in the limit of high temperatures .
we concretely consider spin transport in the anisotropic heisenberg chain , the pure heisenberg chain within an alternating field , and energy transport in an ising chain which is exposed to a tilted field .
our approach is essentially based on a connection between the evolution of the variance of an inhomogeneous non - equilibrium density and the current auto - correlation function at finite times .
although this relationship is not restricted to the case of diffusive transport , it allows to extract a quantitative value for the diffusion constant in that case . by means of
numerically exact diagonalization we indeed observe diffusive behavior in the considered spin chains for a range of model parameters and confirm the diffusion coefficients which were obtained for these systems from non - equilibrium bath scenarios . .
And you have already written the first three sentences of the full article: although transport in low - dimensional quantum systems has intensively been investigated theoretically in the past years , there still is an ongoing interest in understanding the transport phenomena in such systems , including their temperature and length scale dependence @xcite .
those works have often addressed a qualitative classification of the occurring transport types into ballistic or normal diffusive behavior and , in particular cases , the crucial mechanisms which are responsible for the emergence of diffusion have been studied . in this context
the role of non - integrability and quantum chaos is frequently discussed as an at least necessary condition @xcite ..
Please generate the next two sentences of the article | significant theoretical attention has been devoted to spin-@xmath0 chains @xcite , e.g. to the prominent anisotropic heisenberg chain ( xxz model ) @xcite .
most controversial appears the question whether or not the ( finite temperature ) transport in the pure heisenberg chain is ballistic @xcite . |
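For the spin-transport row above, the link between the spreading density profile and the diffusion constant can be made explicit. The sketch below states only the generic relations (linear growth of the spatial variance in the diffusive regime, and the Green-Kubo-type connection to the integrated current autocorrelation); the precise normalisation by the static susceptibility depends on the conventions of the model and is deliberately left open.

```latex
% Diffusive spreading of an initially inhomogeneous density profile:
% the spatial variance grows linearly and its growth rate fixes D, which is in
% turn proportional to the integrated current autocorrelation (Green-Kubo form,
% normalisation left open).
\begin{equation}
  D \;=\; \lim_{t \to \infty} \frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\,\sigma^{2}(t)
  \;\;\propto\;\; \lim_{t \to \infty} \int_{0}^{t} \langle j(t')\, j(0) \rangle \,\mathrm{d}t' .
\end{equation}
```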
9,547 | Suppose that you have an abstract for a scientific paper: magnetic fluctuations in a non - magnetized gaseous plasma are revisited and calculated without approximations , based on the fluctuation - dissipation theorem . it is argued that the present results are qualitatively and quantitatively different from previous ones based on the same theorem . in particular , it is shown that it is not correct that the spectral intensity does not vary sensitively with @xmath0 .
also , the simultaneous dependence of this intensity on the plasma and on the collisional frequencies is discussed . .
And you have already written the first three sentences of the full article: fluctuations of physical quantities near zero frequency have been investigated by several authors since the papers of @xcite and @xcite .
a general theory on the fluctuation - dissipation theorem , which will be the starting point of this paper , was developed by @xcite . to the best of our knowledge ,
a concrete expression for the low - frequency spectrum of fluctuations of magnetic fields in a thermal plasma was obtained for the first time by @xcite ..
Please generate the next two sentences of the article | they found a peak around @xmath1 magnetic fluctuation which was interpreted as the evanescent energy component of electromagnetic fluctuations `` screened '' in a plasma below the plasma frequency .
the impact of such a result on the cosmic microwave background was then investigated by @xcite . |
9,548 | Suppose that you have an abstract for a scientific paper: we present a comparison between the published optical , ir and co spectroscopic redshifts of 15 ( sub-)mm galaxies and their photometric redshifts as derived from long - wavelength ( radio mm fir ) photometric data .
the redshift accuracy measured for 12 sub - mm galaxies with at least one robustly - determined colour in the radio mm fir regime is @xmath0 ( r.m.s . ) . despite the wide range of spectral energy distributions in the local galaxies that are used in an un - biased manner as templates , this analysis demonstrates that photometric redshifts can be efficiently derived for sub - mm galaxies with a precision of @xmath1 using only the rest - frame fir to radio wavelength data .
[ firstpage ] surveys galaxies : evolution cosmology : miscellaneous infrared : galaxies submillimetre .
And you have already written the first three sentences of the full article: the next generation of wide - area extragalactic submillimetre and millimetre ( hereafter sub - mm ) surveys , for example from the balloon - borne large aperture submillimetre telescope ( blast , devlin et al 2001 ) , laboca on the atacama pathfinder experiment ( apex ) , the scuba2 camera on the james clerk maxwell telescope ( jcmt ) and bolocam - ii on the large millimetre telescope ( lmt ) , will produce large samples ( @xmath2 ) of distant , luminous starburst galaxies .
the dramatic increase in the number of submillimetre detected galaxies requiring follow - up observations makes it unreasonable to expect that a large fraction of their obscured or faint optical and ir counterparts will have unambiguous , spectroscopically - determined redshifts .
an alternative method to efficiently and robustly measure the redshift distribution for large samples of submillimetre galaxies is clearly necessary . given the underlying assumption that we are witnessing high rates of star formation in these submillimetre galaxies , then we expect them to have the characteristic fir peak and steep submillimetre ( rayleigh - jeans ) spectrum which is dominated by thermal emission from dust heated to temperatures in the range @xmath3k by obscured young , massive stars ..
Please generate the next two sentences of the article | the observed radio fir luminosity correlation in local starburst galaxies ( e.g. helou et al .
1985 ) , that links the radio synchrotron emission from supernova remnants with the later stages of massive star formation , is also expected to apply to the submillimetre galaxies . |
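A schematic of the long-wavelength photometric-redshift estimate discussed in the row above is given below: a rest-frame template SED is redshifted, its normalisation is fitted analytically (which also absorbs the luminosity-distance and bandwidth factors), and the redshift with the lowest chi-square is kept. The template handling and all names here are hypothetical; the paper uses a set of local-galaxy SEDs rather than the single interpolated template assumed in this sketch.

```python
import numpy as np

def photo_z(nu_obs, flux_obs, flux_err, template_nu, template_lnu, z_grid):
    """Chi-square photometric redshift from radio--mm--FIR photometry.

    nu_obs, flux_obs, flux_err : observed band frequencies, fluxes and errors
    template_nu, template_lnu  : rest-frame template SED (template_nu increasing)
    z_grid                     : trial redshifts
    """
    best = None
    for z in z_grid:
        # template evaluated at the rest-frame frequencies probed at this redshift
        model = np.interp(nu_obs * (1.0 + z), template_nu, template_lnu)
        # analytic best-fit normalisation (absorbs distance and (1+z) factors)
        norm = np.sum(model * flux_obs / flux_err**2) / np.sum(model**2 / flux_err**2)
        chi2 = np.sum(((flux_obs - norm * model) / flux_err) ** 2)
        if best is None or chi2 < best[1]:
            best = (z, chi2)
    return best   # (best-fit redshift, chi-square)
```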
9,549 | Suppose that you have an abstract for a scientific paper: we have investigated the vortex state in a superconducting dice network using the bitter decoration technique at several magnetic frustrations @xmath0=1/2 and 1/3 .
in contrast to other regular network geometries where the existence of a commensurate state was previously demonstrated , no ordered state was observed in the dice network at @xmath1 and the observed vortex - vortex correlation length is close to one lattice cell . .
And you have already written the first three sentences of the full article: in the past decades , the vortex state of superconducting networks has been investigated by several groups using different imaging techniques ( scanning hall microscopy @xcite , scanning squid microscopy@xcite or bitter decoration @xcite ) .
the vortex configuration was studied in square or triangular lattices as a function of the magnetic field . in superconducting arrays
the relevant variable is the magnetic frustration , @xmath2 , which represents the vortex filling factor ..
Please generate the next two sentences of the article | @xmath2 is defined as @xmath0 with @xmath3 , the flux quantum , and @xmath4 , the magnetic flux per elementary plaquette .
the vortex pattern reflects the spatial phase configuration of the superconducting order parameter resulting from the competition between the magnetic field and the underlying lattice @xcite . for rational frustration @xmath5 , with @xmath6 and @xmath7 integer numbers , |
9,550 | Suppose that you have an abstract for a scientific paper: we look at the relationship between the preparation method of si and ge nanostructures ( nss ) and the structural , electronic , and optical properties in terms of quantum confinement ( qc ) .
qc in nss causes a blue shift of the gap energy with decreasing ns dimension . directly measuring the effect of qc
is complicated by additional parameters , such as stress , interface and defect states .
in addition , differences in ns preparation lead to differences in the relevant parameter set . a relatively simple model of qc , using a ` particle - in - a - box'-type perturbation to the effective mass theory , was applied to si and ge quantum wells , wires and dots across a variety of preparation methods .
the choice of the model was made in order to distinguish contributions that are solely due to the effects of qc , where the only varied experimental parameter was the crystallinity .
it was found that the hole becomes de - localized in the case of amorphous materials , which leads to stronger confinement effects .
the origin of this result was partly attributed to differences in the effective mass between the amorphous and crystalline ns as well as between the electron and hole .
corrections to our qc model take into account a position dependent effective mass .
this term includes an inverse length scale dependent on the displacement from the origin .
thus , when the debroglie wavelength or the bohr radius of the carriers is on the order of the dimension of the ns the carriers ` feel ' the confinement potential altering their effective mass .
furthermore , it was found that certain interface states ( si - o - si ) act to pin the hole state , thus reducing the oscillator strength . .
And you have already written the first three sentences of the full article: semiconductor nanostructures ( nss ) exhibit increased oscillator strength due to electron hole wave function overlap , and band gap engineering due to the effect of quantum confinement ( qc ) .
thus , materials like si are a viable option for opto - electronics , photonics , and quantum computing.@xcite qc is defined as the modification in the free particle dispersion relation as a function of a system s spatial dimension.@xcite if a free electron is confined within a potential barrier , a shift in the band gap energy is observed , which is inversely proportional to the system size squared , in the effective mass approximation . as a result ,
the emitted photon energy is directly proportional to the gap energy ( @xmath0 ) ..
Please generate the next two sentences of the article | qc often manifests itself in optical experiments when the dimension of the system is systematically reduced and an increase in the absorbed / emitted photon energy is measured corresponding to electron transitional states . for practical applications , utilizing qc effects in nss
requires an understanding of the band structure of a low - dimensional material , how the method of preparation affects the final properties of the ns , and the kinetics/ dynamics of the absorption / emission process . |
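The 'particle-in-a-box'-type estimate invoked in the nanostructure row above has a compact closed form; a minimal version for a quantum well of thickness d with infinite barriers is written out below. The symbols (bulk gap, electron and hole effective masses) follow standard usage and are not taken from the paper; wires and dots pick up additional numerical factors from the extra confined directions.

```latex
% Minimal effective-mass, infinite-barrier ('particle-in-a-box') estimate of the
% confinement-induced gap opening for a quantum well of thickness d:
\begin{equation}
  E_{g}(d) \;\simeq\; E_{g}^{\mathrm{bulk}}
    \;+\; \frac{\hbar^{2}\pi^{2}}{2\,d^{2}}
          \left( \frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}} \right),
\end{equation}
% which reproduces the 1/d^2 blue shift of the gap quoted in the text.
```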
9,551 | Suppose that you have an abstract for a scientific paper: we study the distribution of resonances for geometrically finite hyperbolic surfaces of infinite area by counting resonances numerically .
the resonances are computed as zeros of the selberg zeta function , using an algorithm for computation of the zeta function for schottky groups .
our particular focus is on three aspects of the resonance distribution that have attracted attention recently : the fractal weyl law , the spectral gap , and the concentration of decay rates . .
And you have already written the first three sentences of the full article: a smooth , geometrically finite hyperbolic surface @xmath0 has finite genus , with ends consisting of a finite number of hyperbolic funnels and/or cusps .
we ll assume that @xmath1 has infinite area , so there is at least one funnel .
under this assumption , the ( positive ) laplacian @xmath2 has absolutely continuous spectrum @xmath3 and finitely many discrete eigenvalues in @xmath4 , with no embedded eigenvalues ..
Please generate the next two sentences of the article | the resolvent @xmath5 is well - defined for @xmath6 , as long as @xmath7 is not an @xmath8 eigenvalue of @xmath2 . by mazzeo - melrose @xcite and guillopé - zworski @xcite
, @xmath9 admits a meromorphic continuation to @xmath10 , with poles of finite rank . |
9,552 | Suppose that you have an abstract for a scientific paper: we report results of our new spatially - resolved , optical spectroscopy of the giant ly@xmath0 nebula around a powerful radio galaxy 1243 + 036 ( 4c+03.24 ) at @xmath1 . the nebula is extended over @xmath2 kpc from the nucleus , and forms a pair of cones or elongated bubbles .
the high - velocity ( @xmath3 km s@xmath4 ; blueshifted with respect to the systemic velocity ) ly@xmath0-emitting components are detected at both sides of the nucleus along its major axis .
the northwestern nebula is more spectacular in its velocity shift ( blueshifted by @xmath5 km s@xmath4 to @xmath6 km s@xmath4 ) and in its width ( @xmath7 km s@xmath4 fwhm ) over @xmath8 kpc scale .
we discuss possible origin of the nebula ; 1 ) the shock - heated expanding bubble or outflowing cone associated with the superwind activity of the host galaxy , 2 ) halo gas photoionized by the anisotropic radiation from the active galactic nuclei ( agn ) , and 3 ) the jet - induced star - formation or shock . the last possibility may not be likely because ly@xmath0 emission is distributed out of the narrow channel of the radio jet .
we show that the superwind model is most plausible since it can explain both the characteristics of the morphology ( size and shape ) and the kinematical structures ( velocity shift and line width ) of the nebula although the photoionization by agn may contribute to the excitation to some extent . .
And you have already written the first three sentences of the full article: it is well known that images of the rest - frame uv and optical continua are elongated preferentially along the radio axis in powerful radio galaxies ( prgs ) at redshift ( @xmath9 ) @xmath10 ( e.g. , chambers et al .
1987 ; mccarthy et al .
1987 ) ; the so - called alignment effect ..
Please generate the next two sentences of the article | indeed , many high-@xmath9 ( @xmath11 ) prgs ( hzprgs ) show the alignment effect , and its origin has been in debate in this decade ( e.g. , mccarthy 1993 ) .
various models have been proposed to explain the alignment effect ; e.g. , ( 1 ) scattering of the anisotropic radiation from a central engine of active galactic nuclei ( agn : e.g. , di serego alighieri et al . |
9,553 | Suppose that you have an abstract for a scientific paper: since stellar populations enhance particular element abundances according to the yields and lifetimes of the stellar progenitors , the chemical evolution of galaxies serves as one of the key tools that allows the tracing of galaxy evolution . in order to deduce the evolution of separate galactic regions one has to account for the dynamics of the interstellar medium , because distant regions can interact by means of large - scale dynamics . to be able to interpret the distributions and ratios of the characteristic elements and their relation to e.g.the galactic gas content ,
an understanding of the dynamical effects combined with small - scale transitions between the gas phases by evaporation and condensation is essential . in this paper , we address various complex signatures of chemical evolution and present in particular two problems of abundance distributions in different types of galaxies : the discrepancies of metallicity distributions and effective yields in the different regions of our milky way and the n / o abundance ratio in dwarf galaxies .
these can be solved properly , if the chemodynamical prescription is applied to simulations of galaxy evolution . .
And you have already written the first three sentences of the full article: for the stellar populations of our milky way galaxy ( mwg ) - the halo , the bulge , and the disk ( thick plus thin disk ) - the fundamental questions that have to be addressed are : when , how and on what timescales did the galactic components form , and was there any connection between them ?
if yes , simultaneously or sequentially ? one possible approach to disentangle the evolutionary scenario is to look for evolutionary signatures in age , dynamics , and chemistry of long - lived stars , in the stellar populations within our mwg . at present , two major and basically different strategies for modelling galaxy evolution can be followed : dynamical investigations which include hydrodynamical simulations of isolated galaxy evolution and of protogalactic interactions reaching from cosmological perturbation scales to direct mergers , and , on the other hand , studies which neglect any dynamical effects but consider either the whole galaxy or particular regions and describe the temporal evolution of mass fractions and element abundances in detail . for the case of a closed box a linear relation between the time - dependent metallicity @xmath0 and the initial - to - temporal gas ratio @xmath1
$ ] follows analytically , where the slope is determined by the yield @xmath2 , i.e. the metallicity release per stellar population ..
Please generate the next two sentences of the article | deviations from this simple relation are explained by lower `` effective '' yields @xmath3 due to outflow of metal - rich gas from the ( now open ) volume or infall of low - metallicity ( presumably primordial ) gas .
such dynamical effects can only be properly treated if simulations can account for the energetics , the composition , and the dynamical state of the galactic gas , as well as the relevant interchange processes , in a self - consistent manner . |
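The closed-box relation alluded to in the chemodynamics row above can be made explicit. The formula below is the standard 'simple model' result, with the effective yield defined in the usual way so that outflow of enriched gas or infall of metal-poor gas shows up as y_eff < y; the notation is generic rather than taken from the paper.

```latex
% Closed-box ('simple model') chemical evolution: the gas metallicity grows
% linearly with the logarithm of the initial-to-current gas mass ratio, with
% the stellar yield y as the slope; departures are absorbed into an effective yield.
\begin{align}
  Z(t) &= y \,\ln\!\frac{M_{\mathrm{g}}(0)}{M_{\mathrm{g}}(t)} ,
  &
  y_{\mathrm{eff}} &\equiv \frac{Z(t)}{\ln\!\left[\,M_{\mathrm{g}}(0)/M_{\mathrm{g}}(t)\,\right]} .
\end{align}
```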
9,554 | Suppose that you have an abstract for a scientific paper: this is a short review of the theoretical work on the two - dimensional hubbard model performed in sherbrooke in the last few years .
it is written on the occasion of the twentieth anniversary of the discovery of high - temperature superconductivity .
we discuss several approaches , how they were benchmarked and how they agree sufficiently with each other that we can trust that the results are accurate solutions of the hubbard model .
then comparisons are made with experiment .
we show that the hubbard model does exhibit d - wave superconductivity and antiferromagnetism essentially where they are observed for both hole and electron - doped cuprates .
we also show that the pseudogap phenomenon comes out of these calculations . in the case of electron - doped high temperature superconductors , comparisons with angle - resolved photoemission experiments are nearly quantitative .
the value of the pseudogap temperature observed for these compounds in recent photoemission experiments has been predicted by theory before it was observed experimentally .
additional experimental confirmation would be useful .
the theoretical methods that are surveyed include mostly the two - particle self - consistent approach , variational cluster perturbation theory ( or variational cluster approximation ) , and cellular dynamical mean - field theory . .
And you have already written the first three sentences of the full article: in the first days of the discovery of high - temperature superconductivity , anderson@xcite suggested that the two - dimensional hubbard model held the key to the phenomenon . despite its apparent simplicity ,
the two - dimensional hubbard model is a formidable challenge for theorists .
the dimension is not low enough that an exact solution is available , as in one dimension ..
Please generate the next two sentences of the article | the dimension is not high enough that some mean - field theory , like dynamical mean field theory@xcite ( dmft ) , valid in infinite dimension , can come to the rescue . in two dimensions , both quantum and thermal fluctuations are important .
in addition , as we shall see , it turns out that the real materials are in a situation where both potential and kinetic energy are comparable |
9,555 | Suppose that you have an abstract for a scientific paper: we compare electroweak baryogenesis in the mssm , nmssm and nmssm .
we comment on the different sources of cp violation , the phase transition and constraints from edm measurements . .
And you have already written the first three sentences of the full article: a viable baryogenesis mechanism aims to explain the observed asymmetry in the baryon density , @xmath0 , and the celebrated sakharov conditions state the necessary ingredients for baryogenesis : ( i ) c and cp violation , ( ii ) non - equilibrium , ( iii ) b number violation .
b number violation is present in the hot universe due to sphaleron processes while c is violated in the electroweak sector of the standard model ( sm ) .
the two important aspects of electroweak baryogenesis ( ewbg)@xcite are transport and cp violation ..
Please generate the next two sentences of the article | ewbg requires a strong first - order electroweak phase transition to drive the plasma out of equilibrium .
the cp violation is induced by the moving phase boundary . |
9,556 | Suppose that you have an abstract for a scientific paper: we calculate the one - photon loop radiative corrections to charged pion compton scattering , @xmath0 .
ultraviolet and infrared divergencies are both treated in dimensional regularization .
analytical expressions for the @xmath1 corrections to the invariant compton scattering amplitudes , @xmath2 and @xmath3 , are presented for 11 classes of contributing one - loop diagrams .
infrared finiteness of the virtual radiative corrections is achieved ( in the standard way ) by including soft photon radiation below an energy cut - off @xmath4 , and its relation to the experimental detection threshold is discussed .
we find that the radiative corrections are maximal in backward directions , reaching e.g. @xmath5 for a center - of - mass energy of @xmath6 and @xmath7mev .
furthermore , we extend our calculation of the radiative corrections by including the leading pion structure effect ( at low energies ) in form of its electric and magnetic polarizability difference , @xmath8@xmath9 .
we find that this structure effect does not change the relative size and angular dependence of the radiative corrections to pion compton scattering .
our results are particularly relevant for analyzing the compass experiment at cern which aims at measuring the pion electric and magnetic polarizabilities with high statistics using the primakoff effect .
n. kaiser and j.m . friedrich + pacs : 12.20.-m , 12.20.ds , 13.40.ks , 14.70.bh .
And you have already written the first three sentences of the full article: pion compton scattering , @xmath10 , allows one to extract the electric and magnetic polarizabilities of the ( charged ) pion . in a classical picture
these polarizabilities characterize the deformation response ( i.e. induced dipole moments ) of a composite system in external electric and magnetic fields . in the proper quantum field
theoretical formulation the electric and magnetic polarizabilities , @xmath11 and @xmath12 , are defined as expansion coefficients of the compton scattering amplitudes at threshold . however , since pion targets are not directly available , real pion compton scattering has been approached using different artifices , such as high - energy pion - nucleus bremsstrahlung @xmath13 , radiative pion photoproduction off the proton @xmath14 , and the crossed channel two - photon reaction @xmath15 . from the theoretical side.
Please generate the next two sentences of the article | there is an extraordinary interest in a precise ( experimental ) determination of the pion polarizabilities . within the framework of current algebra
@xcite it has been shown ( long ago ) that the polarizability difference @xmath16 of the charged pion is directly related to the axial - vector - to - vector form factor ratio @xmath17 measured in the radiative pion decay @xmath18 @xcite . at leading ( nontrivial ) order the result of chiral perturbation theory @xcite , @xmath19 , |
9,557 | Suppose that you have an abstract for a scientific paper: we theoretically investigate the effects of charge order and spin frustration on the spin ordering in tmttf salts . using first - principles band calculations ,
we find that a diagonal inter - chain transfer integral @xmath0 , which causes spin frustration between the inter - chain dimers in the dimer - mott insulating state , strongly depends on the choice of anion . within the numerical lanczos exact diagonalization method ,
we show that the ferroelectric charge order changes the role of @xmath0 from the spin frustration to the enhancement of the two - dimensionality in spin sector .
the results indicate that @xmath0 assists the cooperative behavior between charge order and antiferromagnetic state observed in tmttf@xmath1sbf@xmath2 . * keywords : * charge ordering , spin frustration , exact diagonalization method , tmttf salts . pacs : 71.10.fd , 71.20.rv , 71.30 , 75.30.kz .
And you have already written the first three sentences of the full article: low - dimensional molecular conductors provide a fruitful stage to study strong electron correlation effects leading to a wide variety of phase transitions @xcite . in this context ,
among the most - studied families are the quasi - one dimensional ( q1d ) molecular conductors tmttf@xmath1@xmath3 ( tmttf : tetramethyl - tetrathiofulvalene , @xmath3 : monovalent anion ) @xcite .
these salts form a q1d @xmath4-band at quarter - filling in terms of holes with intrinsic dimerization along the conduction axis ..
Please generate the next two sentences of the article | they exhibit various types of phase transitions such as ferroelectric charge ordering ( fco ) , spin - peierls ( sp ) , antiferromagnetic ( af ) , and superconducting ( sc ) transitions by applying pressure or replacement of @xmath3 @xcite . among them , ( tmttf)@xmath1sbf@xmath2 shows a peculiar behavior under pressure ; a cooperative reduction of fco and af phase transition temperatures by the application of pressure has been reported by nmr measurements @xcite .
this result naively does not coincide with the case for typical co transitions , where co suppresses the tendency toward magnetic ordering due to decrease of the effective spin exchange couplings @xcite . |
9,558 | Suppose that you have an abstract for a scientific paper: we present blue optical spectra of 92 members of @xmath0 and @xmath1 per obtained with the wiyn telescope at kitt peak national observatory . from these spectra ,
several stellar parameters were measured for the b type stars , including @xmath2 sin @xmath3 , @xmath4 , log @xmath5 , @xmath6 , and @xmath7 .
strmgren photometry was used to measure @xmath4 and log @xmath5 for the be stars .
we also analyze photometric data of cluster members and discuss the near - to - mid ir excesses of be stars . .
And you have already written the first three sentences of the full article: ngc 869 and ngc 884 ( @xmath0 and @xmath1 persei , respectively ) are a well known double cluster rich in massive b - type stars , and have been the focus of many studies over the years .
recent studies show that ngc 869 and ngc 884 have nearly identical ages of @xmath8 13 - 14 myr , common distance moduli of dm @xmath8 11.85 , and common reddenings of e(b - v ) @xmath8 0.50 - 0.55 ( ( * ? ? ?
* currie et al . 2009 ) , ( * ? ? ?.
Please generate the next two sentences of the article | * slesnick et al . 2002 ) , ( * ? ? ?
* bragg & kenyon 2005 ) ) . |
9,559 | Suppose that you have an abstract for a scientific paper: the problem of the description of absorption and scattering losses in high-@xmath0 cavities is studied .
the considerations are based on quantum noise theories , hence the unwanted noise associated with scattering and absorption is taken into account by introduction of additional damping and noise terms in the quantum langevin equations and input output relations .
completeness conditions for the description of the cavity models obtained in this way are studied and corresponding replacement schemes are discussed .
pacs : 42.50.lc , 42.50.nn , 42.50.pq .
And you have already written the first three sentences of the full article: unwanted noise associated with absorption and scattering in high-@xmath0 cavities usually plays a crucial role in experiments in cavity quantum electrodynamics ( cavity qed ) @xcite .
even small values of the corresponding absorption / scattering coefficients may lead to dramatic changes of the quantum properties of the radiation . for typical high-@xmath0 cavities
the unwanted losses can be of the same order of magnitude as the wanted , radiative losses due to the input output coupling @xcite ..
Please generate the next two sentences of the article | in such a case the process of quantum - state extraction from a high-@xmath0 cavity is characterized by efficiency of about 50% , @xcite .
this feature gives a serious restriction for the implementation of many proposals in cavity qed . |
9,560 | Suppose that you have an abstract for a scientific paper: majorization - minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function .
these upper bounds are tight at the current estimate , and each iteration monotonically drives the objective function downhill .
such a simple principle is widely applicable and has been very popular in various scientific fields , especially in signal processing and statistics .
we propose an incremental majorization - minimization scheme for minimizing a large sum of continuous functions , a problem of utmost importance in machine learning .
we present convergence guarantees for non - convex and convex optimization when the upper bounds approximate the objective up to a smooth error ; we call such upper bounds `` first - order surrogate functions '' .
more precisely , we study asymptotic stationary point guarantees for non - convex problems , and for convex ones , we provide convergence rates for the expected objective function value .
we apply our scheme to composite optimization and obtain a new incremental proximal gradient algorithm with linear convergence rate for strongly convex functions .
our experiments show that our method is competitive with the state of the art for solving machine learning problems such as logistic regression when the number of training samples is large enough , and we demonstrate its usefulness for sparse estimation with non - convex penalties .
non - convex optimization , convex optimization , majorization - minimization . 90c06 , 90c26 , 90c25 .
And you have already written the first three sentences of the full article: the principle of successively minimizing upper bounds of the objective function is often called _ majorization - minimization _ @xcite or _ successive upper - bound minimization _ @xcite .
each upper bound is locally tight at the current estimate , and each minimization step decreases the value of the objective function . even though this principle does not provide any theoretical guarantee about the quality of the returned solution
, it has been very popular and widely used because of its simplicity ..
Please generate the next two sentences of the article | various existing approaches can indeed be interpreted from the majorization - minimization point of view .
this is the case of many gradient - based or proximal methods @xcite , expectation - maximization ( em ) algorithms in statistics @xcite , difference - of - convex ( dc ) programming @xcite , boosting @xcite , some variational bayes techniques used in machine learning @xcite , and the mean - shift algorithm for finding modes of a distribution @xcite . |
9,561 | Suppose that you have an abstract for a scientific paper: we extend the fuglede - putnam theorem from the algebra @xmath0 of all bounded operators on the hilbert space @xmath1 to the algebra of all locally measurable operators affiliated with a von neumann algebra . .
And you have already written the first three sentences of the full article: the ( first part of the ) following problem was suggested by von neumann ( see pp .
60 - 61 , appendix 3 in @xcite ) .
[ von neumann problem ] let @xmath2 if @xmath3 is normal and if @xmath4 does it follow that @xmath5 more generally , if @xmath3 and @xmath6 are normal and if @xmath7 does it follow that @xmath8 if the operators @xmath3 and @xmath9 belong to a finite factor @xmath10 then the first part of the problem was resolved ( in the affirmative ) by von neumann himself . in full generality ,.
Please generate the next two sentences of the article | a problem was resolved by fuglede @xcite .
furthermore , von neumann mentioned that a formal analogue of problem [ von neumann problem ] for unbounded operators can be _ non - rigorously _ answered in the negative due to the fact that a product of @xmath11 unbounded operators does not always exists . |
9,562 | Suppose that you have an abstract for a scientific paper: in this paper we prove the existence of extreme value laws for dynamical systems perturbed by instrument - like - error , also called observational noise .
an orbit perturbed with observational noise mimics the behavior of an instrumentally recorded time series .
instrument characteristics - defined as precision and accuracy - act both by truncating and randomly displacing the real value of a measured observable . here
we analyze both these effects from a theoretical and numerical point of view .
first we show that classical extreme value laws can be found for orbits of dynamical systems perturbed with observational noise
. then we present numerical experiments to support the theoretical findings and give an indication of the order of magnitude of the instrumental perturbations which cause relevant deviations from the extreme value laws observed in deterministic dynamical systems .
finally , we show that the observational noise preserves the structure of the deterministic attractor .
this goes against the common assumption that random transformations cause the orbits asymptotically fill the ambient space with a loss of information about any fractal structures present on the attractor . .
And you have already written the first three sentences of the full article: in two previous works @xcite , we investigated the persistence of extreme value laws ( evls ) whenever a dynamical system is perturbed throughout random transformations .
we considered an i.i.d .
stochastic process @xmath0 with values in the measurable space @xmath1 and with probability distribution @xmath2 . after associating to each @xmath3 a map @xmath4 acting on the measurable space @xmath5 into itself , we considered the random orbit starting from the point @xmath6 and generated by the realization @xmath7 : @xmath8 here , the transformations @xmath4 should be considered close to each other and the suitably rescaled scalar parameter @xmath9 is the strength of such a distance.
Please generate the next two sentences of the article | . we could therefore define a markov process @xmath10 on @xmath5 with transition function @xmath11 where @xmath12 is a measurable set , @xmath13 and @xmath14 is the indicator function of a set @xmath15 .
a probability measures @xmath16 is called a _ stationary measure _ if for any measurable @xmath15 we have : @xmath17 we call it an absolutely continuous stationary measure ( _ acsm _ ) , if it has a density with respect to the lebesgue measure whenever @xmath5 is a metric space . |
9,563 | Suppose that you have an abstract for a scientific paper: it is well known that the successful operation of cognitive radio ( cr ) between cr transmitter and cr receiver ( cr link ) relies on reliable spectrum sensing . to network crs requires more information from spectrum sensing beyond traditional techniques , executing at cr transmitter and further information regarding the spectrum availability at cr receiver . redefining the spectrum sensing along with statistical inference suitable for cognitive radio networks ( crn ) , we mathematically derive conditions to allow cr transmitter forwarding packets to cr receiver under guaranteed outage probability , and prove that the correlation of localized spectrum availability between a cooperative node and cr receiver determines effectiveness of the cooperative scheme . applying our novel mathematical model to potential hidden terminals in crn ,
we illustrate that the allowable transmission region of a cr , defined as neighborhood , is no longer circular shape even in a pure path loss channel model .
this results in asymmetric cr links to make bidirectional links generally inappropriate in crn , though this challenge can be alleviated with the aid of cooperative sensing . therefore , spectrum sensing capability determines crn topology . for multiple cooperative nodes , to fully utilize spectrum availability , the selection methodology of cooperative nodes is developed due to limited overhead of information exchange . defining reliability as information of spectrum availability at cr receiver provided by a cooperative node and by applying neighborhood area , we can compare sensing capability of cooperative nodes from both link and network perspectives .
in addition , due to dynamic network topology lack of centralized coordination in crn , crs can only acquire local and partial information in limited sensing duration , robust spectrum sensing is therefore proposed to ensure successful crn operation .
limits of cooperative schemes and their impacts on network operation are also derived .
spectrum sensing ,....
And you have already written the first three sentences of the full article: cognitive radios ( cr ) @xcite@xcite , being capable of sensing spectrum availability , are considered a promising technique to alleviate spectrum scarcity due to the current static spectrum allotment policy @xcite .
traditional cr link availability is solely determined by the spectrum sensing conducted at the transmitter ( i.e. cr - tx ) .
if the cr - tx with packets to relay senses the selected channel to be available , it precedes this opportunistic transmission . to facilitate the spectrum sensing , at time instant @xmath0.
Please generate the next two sentences of the article | , we usually use a hypothesis testing as follows .
@xmath1 where @xmath2 means the observation at cr - tx ; @xmath3 represents signal from primary system ( ps ) ; @xmath4 is the interference from co - existing multi - radio wireless networks ; @xmath5 is additive white gaussian noise ( awgn ) . |
9,564 | Suppose that you have an abstract for a scientific paper: in this paper , we study the weighted composition operators on weighted bergman spaces of bounded symmetric domains .
the necessary and sufficient conditions for a weighted composition operator @xmath0 to be bounded and compact are studied by using the carleson measure techniques . in the last section
, we study the schatten @xmath1-class weighted composition operators .
And you have already written the first three sentences of the full article: let @xmath2 be a bounded symmetric domain in @xmath3 with bergman kernel @xmath4 we assume that @xmath2 is in its standard representation and the volume measure @xmath5 of @xmath2 is normalised so that @xmath6 for all @xmath7 and @xmath8 in @xmath9 by theorem 5.7 of @xcite and using the polar coordinates representation , there exists a positive number @xmath10 such that for @xmath11 we have @xmath12 for each @xmath13 define @xmath14 then @xmath15 defines a weighted family of probability measures on @xmath9 also , throughout the paper @xmath16 is fixed .
we define the weighted bergman spaces @xmath17 on @xmath2 , as the set of all holomorphic functions @xmath18 on @xmath2 so that @xmath19 note that @xmath20 is a closed subspace of @xmath21 for @xmath22 is just the usual bergman space . for @xmath23
there is an orthogonal projection @xmath24 from @xmath25 onto @xmath26 given by @xmath27 where @xmath28 is the reproducing kernel for @xmath29 suppose , @xmath30 are holomorphic mappings defined on @xmath31 such that @xmath32 then the weighted composition operator @xmath0 is defined as @xmath33 for the study of weighted composition operators one can refer to @xcite and references therein ..
Please generate the next two sentences of the article | recently , smith @xcite has made a nice connection between the brennan s conjecture and weighted composition operators .
he has shown that brennan s conjecture is equivalent to the existence of self - maps of unit disk that make certain weighted composition operators compact . |
9,565 | Suppose that you have an abstract for a scientific paper: we hereby propose a model of opinion dynamics where individuals update their beliefs because of interactions in acquaintances group .
the model exhibit a non trivial behavior that we discuss as a function of the main involved parameters .
results are reported on the average number of opinion clusters and the time needed to form such clusters . .
And you have already written the first three sentences of the full article: complex systems science ( css ) studies the behavior of a wide range of phenomena , from physics to social sciences passing through biology just to mention few of them .
the classical approach followed in the css consists first in a decomposition of the system into elementary blocksthat will be successively individually analyzed in details , then the properties determined at micro level are transported to the macro level .
this approach results very fruitful and shaped the css as an highly multidisciplinary field ..
Please generate the next two sentences of the article | recently models of opinion dynamics gathered a considerable amount of interest testified by the production of specialized reviews such as , reinforcing in this way the emergence of the _ sociophysics _
a basic distinction can be done in model of _ continuous opinion with threshold _ |
9,566 | Suppose that you have an abstract for a scientific paper: performing a shell model calculation for heavy nuclei has been a long - standing problem in nuclear physics . here
we propose one possible solution .
the central idea of this proposal is to take the advantages of two existing models , the projected shell model ( psm ) and the fermion dynamical symmetry model ( fdsm ) , to construct a multi - shell shell model .
the psm is an efficient method of coupling quasi - particle excitations to the high - spin rotational motion , whereas the fdsm contains a successful truncation scheme for the low - spin collective modes from the spherical to the well - deformed region .
the new shell model is expected to describe simultaneously the single - particle and the low - lying collective excitations of all known types , yet keeping the model space tractable even for the heaviest nuclear systems . .
And you have already written the first three sentences of the full article: except for a few nuclei lying in the vicinity of shell closures , most of the heavy nuclei are difficult to describe in a spherical shell model framework because of the unavoidable problem of dimension explosion .
therefore , the study of nuclear structure in heavy nuclei has relied mainly on the mean - field approximations , in which the concept of spontaneous symmetry breaking is applied @xcite .
however , there has been an increasing number of compelling evidences indicating that the nuclear many - body correlations are important ..
Please generate the next two sentences of the article | thus , the necessity of a proper quantum mechanical treatment for nuclear states has been growing , and we are facing the challenge of understanding the nuclear structure by going beyond the mean - field approximations .
demand for a proper shell model treatment arises also from the nuclear astrophysics . since |
9,567 | Suppose that you have an abstract for a scientific paper: the aim of this paper is to estimate the @xmath0-norms of vector - valued riesz transforms @xmath1 and the norms of riesz operators on cantor sets in @xmath2 , as well as to study the distribution of values of @xmath1 .
namely , we show that this distribution is `` uniform '' in the following sense .
the values of @xmath3 which are comparable with its average value are attended on a `` big '' portion of a cantor set .
we apply these results to give examples demonstrating the sharpness of our previous estimates for the set of points where riesz transform is large , and for the corresponding riesz capacities .
the cantor sets under consideration are different from the usual corner cantor sets .
they are constructed by means of a certain process of regularization introduced in the paper . . .
And you have already written the first three sentences of the full article: let @xmath4 be a finite sequence of positive numbers such that @xmath5 this sequence determines the corner cantor set @xmath6 of generation @xmath7 in @xmath8 , such that the @xmath9-th generation consists of @xmath10 cubes of edge length @xmath11 , each of these cubes contains @xmath12 corner cubes of the @xmath13-th generation , and so on . for brevity , we will call @xmath6 `` a cantor set '' instead of `` a cantor set of generation @xmath7 '' .
there is a number of papers on estimates of various capacities , norms of integral transforms and operators , etc .
, on such cantor sets ..
Please generate the next two sentences of the article | these estimates demonstrate the sharpness of various inequalities where the bounds are attained on cantor sets ; they are also of independent interest . but besides the necessary condition , there are certain additional conditions on @xmath11 in many cases . in the present paper we associate with given numbers @xmath11 satisfying _ only _ the condition , the `` regularized '' sequence @xmath14 such that @xmath15 , @xmath16 , and construct the ( non - corner ) cantor set @xmath6 formed by @xmath17 cubes of edge length @xmath18 .
since the corner and non - corner cantor sets have similar structure , it is unimportant for applications which set to use . for a nonnegative finite borel measure @xmath19 in @xmath8 , @xmath20 , and @xmath21 , @xmath22 , define the @xmath23-truncated @xmath24-riesz transform of @xmath19 by @xmath25 where @xmath26 if the limit @xmath27 exists , we shall call it the @xmath24-riesz transform of @xmath19 at @xmath28 . to consider all finite borel measures and all points @xmath29 , one introduces the quantity that always makes sense , namely the so called maximal @xmath24-riesz transform @xmath30 ( note that @xmath31 and @xmath32 are vectors and @xmath33 is a number ) . besides @xmath34 and @xmath35 , we need the @xmath23-truncated @xmath24-riesz operator defined by @xmath36 for every @xmath22 , the operator @xmath37 is bounded on @xmath38 . |
9,568 | Suppose that you have an abstract for a scientific paper: an eulerian tvd code and a lagrangian sph code are used to simulate the off - axis collision of equal - mass main sequence stars in order to address the question of whether stellar mergers can produce a remnant star where the interior has been replenished with hydrogen due to significant mixing . each parent main sequence star is chosen to be found near the turnoff , with hydrogen depleted in the core , and is modelled with a @xmath0 realistic stellar model and as a @xmath1 polytrope . an ideal fluid description with adiabatic index @xmath2
is used for all hydrodynamic calculations .
we found good agreement between the simulations for the polytropic case , with the remnant showing strong , non - local mixing throughout . in the interior quarter of the mass ,
@xmath3 is mixed in from larger radii and on average the remnant is @xmath4 fully mixed . for the realistic model , we found less mixing , particularly in the interior and in the sph simulation . in the inner quarter ,
@xmath5 of the contained mass in the tvd case , but only @xmath6 in the sph one is mixed in from outside .
the simulations give consistent results for the overall profile of the merger remnant and the amount of mass loss , but the differences in mixing suggests that the intrinsic difference between grid and particle based schemes remains a possible artifact .
we conclude that both the tvd and sph schemes can be used equally well for problems that are best suited to their strengths and that care should be taken in interpreting results about fluid mixing .
blue stragglers globular clusters : general hydrodynamics methods : numerical stars : evolution stellar dynamics .
And you have already written the first three sentences of the full article: in dense stellar systems , such as globular clusters , galactic nuclei and star forming regions , direct collisions between stars occur quite frequently .
these collisions can modify the stellar populations of the system by creating objects ( e.g. blue stragglers , cataclysmic variables and millisecond pulsars ; * ? ? ?
* ) , or by destroying objects ( e.g. bright giants ; * ? ? ?.
Please generate the next two sentences of the article | these kinds of strong interactions between stars can also modify the overall evolution of the system by changing the dynamics of the system and its energy budget .
the study of stellar collisions has become critical to the study of dense stellar systems @xcite . in order to understand how collisions can modify stellar populations and dynamics , |
9,569 | Suppose that you have an abstract for a scientific paper: the optical afterglow of gamma - ray burst ( grb ) 000301c exhibited a significant , short - timescale deviation from the power - law flux decline expected in the standard synchrotron shock model .
garnavich , loeb & stanek found that this deviation was well - fit by an _
ad hoc _ model in which a thin ring of emission is microlensed by an intervening star .
we revisit the microlensing interpretation of this variability , first by testing whether microlensing of afterglow images with realistic surface brightness profiles ( sbps ) can fit the data , and second by directly inverting the observed light curve to obtain a non - parametric measurement of the sbp .
we find that microlensing of realistic sbps can reproduce the observed deviation , provided that the optical emission arises from frequencies above the cooling break .
conversely , if the variability is indeed caused by microlensing , the sbp must be significantly limb - brightened .
specifically , @xmath0 of the flux must originate from the outer @xmath1 of the area of the afterglow image .
the latter requirement is satisfied by the best fit theoretical sbp .
the underlying optical / infrared afterglow lightcurve is consistent with a model in which a jet is propagating into a uniform medium with the cooling break frequency below the optical band .
And you have already written the first three sentences of the full article: the afterglows of gamma - ray bursts ( grbs ) are observed in the x - ray , optical , near - infrared , and radio , and appear to be well - described by the synchrotron blast - wave model in which the source ejects material with a relativistic bulk lorentz factor , driving a relativistic shock into the external medium ( see @xcite and references therein ) .
there is mounting evidence from the observed steepening of afterglow light curves that these ejecta are in many cases mildly to highly collimated , with opening angles @xmath2@xmath3 @xcite .
global fitting of the afterglow light curves over many decades in time and frequency , in the context of this model , can be used to derive constraints on the physical parameters of the model , i.e. , the energy and opening angle of the jet , the external density , the magnetic field strength and the energy distribution of the electrons behind the shock @xcite . in this model , the image of the afterglow is expected to appear highly limb brightened at frequencies above the peak synchrotron frequency @xmath4 , but more uniform at frequencies @xmath5 , especially below the self absorption frequency @xmath6 ( waxman 1997 ; sari 1998 ; panaitescu & mszros 1998 ; granot , piran , & sari 1999a , b ; granot & loeb 2001 ).
Please generate the next two sentences of the article | . a measurement of the surface brightness profile ( sbp ) at several frequencies would thus provide an important test of the model . for typical parameters ,
the afterglow image expands superluminally , and has an angular radius @xmath7 a few days after the grb . |
9,570 | Suppose that you have an abstract for a scientific paper: we explore the mass - assembly and chemical enrichment histories of star forming galaxies by applying a population synthesis method to a sample of 84828 galaxies from the sloan digital sky survey data release 5 .
our method decomposes the entire observed spectrum in terms of a sum of simple stellar populations spanning a wide range of ages and metallicities , thus allowing the reconstruction of galaxy histories .
a comparative study of galaxy evolution is presented , where galaxies are grouped onto bins of nebular abundances or mass .
we find that galaxies whose warm interstellar medium is poor in heavy elements are slow in forming stars .
their stellar metallicities also rise slowly with time , reaching their current values ( @xmath0 ) in the last @xmath1 myr of evolution .
systems with metal rich nebulae , on the other hand , assembled most of their mass and completed their chemical evolution long ago , reaching @xmath2 already at lookback times of several gyr .
these same trends , which are ultimately a consequence of galaxy downsizing , appear when galaxies are grouped according to their stellar mass .
the reconstruction of galaxy histories to this level of detail out of integrated spectra offers promising prospects in the field of galaxy evolution theories .
galaxies : evolution - galaxies : stellar content - galaxies : statistics .
And you have already written the first three sentences of the full article: one of the major challenges of modern astrophysics is to understand the physical processes involved in galaxy formation and evolution .
significant steps in this direction could be made by tracing the build up of stellar mass and metallicity as a function of cosmic time .
one way to address this issue is through cosmologically deep surveys which map how galaxy properties change for samples at different redshifts ( @xmath3 ) . among these properties ,.
Please generate the next two sentences of the article | the relation first observed by lequeux ( 1979 ) between heavy - element nebular abundance and galaxy mass , or its extension , the luminosity - metallicity relation ( e.g. , skillman 1989 ; zaritsky 1994 ) , are being extensively used to probe the metal enrichment along cosmic history .
clear signs of evolution are being revealed by studies of these relations at both intermediate ( savaglio 2005 ; lamareille 2006 ; mouhcine 2006 ) and high @xmath3 ( shapley 2005 ; maier 2006 ; erb 2006 ) , which generally find significant offsets in these relations when compared to their versions in the local universe . |
9,571 | Suppose that you have an abstract for a scientific paper: we define a hidden markov model ( hmm ) in which each hidden state has time - dependent _ activity levels _ that drive transitions and emissions , and show how to estimate its parameters .
our construction is motivated by the problem of inferring human mobility on sub - daily time scales from , for example , mobile phone records . .
And you have already written the first three sentences of the full article: hidden markov models ( hmms ) are stochastic models for systems with a set of unobserved states between which the system hops stochastically , sometimes emitting a signal from some alphabet , with probabilities that depend upon the current state .
the situation in which we are specifically interested is human mobility , partially observed , _
i.e. _ , occasional signals about a person s location . for example , consider the cells of a mobile phone network , from which a user can make calls . in this case.
Please generate the next two sentences of the article | the states of a hmm are the cells , and the emitted signals are the cell itself , if a call is made by a particular user during each of a sequence of time intervals , or nothing ( 0 ) , if that user does not make a call . in the latter case , the state ( location ) of the user is ` hidden ' , and must be inferred , while in the former case , assuming no errors in the data , the ` hidden ' state is revealed by the call record . since these are data from a _ mobile _ phone network , a user can move from cell to cell .
although many analyses of human mobility have estimated no more than rather crude statistics like the radius of gyration , the fraction of time spent at each location , or the entropy of the timeseries of locations @xcite , others have used hmms to describe partially observed human mobility and have estimated their parameters @xcite . with short time steps , however , a standard hmm ( with time - independent parameters ) is not a plausible model , since human mobility behavior changes according to , for example , the time of day @xcite . |
9,572 | Suppose that you have an abstract for a scientific paper: in this paper , we investigate the non - equilibrium quantum phases of the two - atom dicke model , which can be realized in a two species bose - einstein condensate interacting with a single light mode in an optical cavity . apart from the usual non - equilibrium normal and inverted phases , a non - equilibrium mixed phase is possible which is a combination of normal and inverted phase . a new kind of quantum phase transition is predicted from non - superradiant mixed phase to the superradiant phase which can be achieved by tuning the two different atom - photon couplings .
we also show that a quantum phase transition from the non - superradiant mixed phase to the superradiant phase is forbidden for certain values of the two atom - photon coupling strengths .
* keywords : * non - equilibrium dicke model , quantum phase transition . .
And you have already written the first three sentences of the full article: the interaction of a collection of atoms with a radiation field has always been an important topic in quantum optics . the dicke model ( dm ) which describes interaction of @xmath0 identical two level atoms with a single radiation field mode ,
established the importance of collective effects of atom - field interaction , where the intensity of the spontaneously emitted light is proportional to @xmath1 rather than @xmath0 @xcite .
the spatial dimensions of the ensemble of atoms are smaller than the wavelength of the radiation field . as a result , all the atoms experience the same field and this gives rise to the collective and cooperative interaction between light and matter ..
Please generate the next two sentences of the article | the dm exhibits a second - order quantum phase transition ( qpt ) from a non - superradiant normal phase to a superradiant phase when the atom - field coupling constant exceeds a certain critical value @xcite .
the experimental observation of the qpt predicted in the dm required that the collective atom - photon coupling strength to be of the same order of magnitude as the energy separation between the two atomic levels . in conventional atom - cavity setup |
9,573 | Suppose that you have an abstract for a scientific paper: we study the quantum mechanical generalization of force or pressure , and then we extend the classical thermodynamic isobaric process to quantum mechanical systems . based on these efforts , we are able to study the quantum version of thermodynamic cycles that consist of quantum isobaric process , such as quantum brayton cycle and quantum diesel cycle .
we also consider the implementation of quantum brayton cycle and quantum diesel cycle with some model systems , such as single particle in 1d box and single - mode radiation field in a cavity .
these studies lay the microscopic ( quantum mechanical ) foundation for szilard - zurek single molecule engine . .
And you have already written the first three sentences of the full article: quantum thermodynamics is the study of heat and work dynamics in quantum mechanical systems @xcite . in the extreme limit of small systems with only a few degrees of freedom , both the finite - size effect and quantum effects
influence the thermodynamic properties of the system dramatically @xcite . the traditional thermodynamic theory based on classical systems of macroscopic size
does not apply any more , and the quantum mechanical generalization of thermodynamics becomes necessary ..
Please generate the next two sentences of the article | the interplay between thermodynamics and quantum physics has been an interesting research topic since 1950s @xcite . in recent years , with the developments of nanotechnology and quantum information processing , the study of the interface between quantum physics and thermodynamics begins to attract more and more attention @xcite .
studies of quantum thermodynamics not only promise important potential applications in nanotechnology and quantum information processing , but also bring new insights to some fundamental problems of thermodynamics , such as maxwell s demon and the universality of the second law @xcite . among all the studies about quantum thermodynamics , |
9,574 | Suppose that you have an abstract for a scientific paper: we present a novel differential - difference system in ( 2 + 1)-dimensional space - time ( one discrete , two continuum ) , arisen from the bogoyavlensky s ( 2 + 1)-dimensional kdv hierarchy .
our method is based on the bilinear identity of the hierarchy , which is related to the vertex operator representation of the toroidal lie algebra @xmath0 . .
And you have already written the first three sentences of the full article: multi - dimensional generalization of classical soliton equations has been one of the most exciting topic in the field of integrable systems . among other things ,
calogero @xcite proposed an interesting example that is a ( 2 + 1)-dimensional extension of the korteweg - de vries equation , @xmath1 yu et al .
@xcite obtained multi - soliton solutions of the ( 2 + 1)-dimensional kdv equation by using the hirota s bilinear method ..
Please generate the next two sentences of the article | let us consider the following hirota - type equations , @xmath2 where we have used the @xmath3-operators of hirota defined as @xmath4 we remark that we have introduced auxiliary variables @xmath5 that is a hidden parameter in .
if we set @xmath6 and use to eliminate @xmath7 , then one can show that @xmath8 solves . |
9,575 | Suppose that you have an abstract for a scientific paper: many early - type galaxies are detected at 24 to 160 but the emission is usually dominated by an agn or heating from the evolved stellar population . here
we present mips observations of a sample of elliptical and lenticular galaxies which are rich in cold molecular gas , and we investigate how much of the mir to fir emission could be due to star formation activity .
the 24 images show a rich variety of structures , including nuclear point sources , rings , disks , and smooth extended emission , and comparisons to matched - resolution co and radio continuum images suggest that the bulk of the 24 emission could be traced to star formation .
the star formation efficiencies are comparable to those found in normal spirals .
some future directions for progress are also mentioned . .
And you have already written the first three sentences of the full article: in recent years , uv and optical photometry and spectroscopy of nearby elliptical galaxies has suggested that these galaxies , which have a reputation for being old , red , and dead , may not be quite as dead as previously assumed .
some 15% to 30% of local ellipticals seem to be experiencing small amounts of present day star formation activity ( schawinski et al . 2007a , 2007b ; kaviraj et al .
the star formation is not intense enough to cause serious problems for the galaxies morphological classification , as it only amounts to a few percent of the total stellar mass ..
Please generate the next two sentences of the article | however , this current day disk growth inside spheroidal galaxies may be a faint remnant of a process which was more vigorous in the past and may have played a role in establishing the spectrum of galaxy morphologies we observe today .
star formation of course requires cold gas , so interpreting the uv and optical data in terms of star formation activity has important implications both for the early - type galaxies and for a general understanding of the star formation process . |
9,576 | Suppose that you have an abstract for a scientific paper: the system of neutrino - antineutrino @xmath0 - plasma is considered taking into account their weak fermi interaction .
new fluid instabilities driven by strong neutrino flux in a plasma are found .
the nonlinear stationary as well as nonstationary waves in the neutrino gas are discussed .
it is shown that a bunch of neutrinos , drifting with a constant velocity across a homogeneous plasma , can also induce emission of lower energy neutrinos due to scattering , i.e. the decay of a heavy neutrino @xmath1 into a heavy and a light neutrino @xmath2 ( @xmath3 ) in a plasma .
furthermore we find that the neutrino production in stars does not lead in general to energy losses from the neutron stars . .
And you have already written the first three sentences of the full article: our understanding of the properties of neutrinos in a plasma has recently undergone some appreciable theoretical progress @xcite . the interaction of neutrinos with a plasma particles , the creation of @xmath4 pairs , the emission of neutrinos due to the collapse of a star are of primary interests @xcite in the description of some astrophysical events such as supernova explosion .
one of the key processes upon the explosions are the large - scale hydrodynamic instabilities as well as @xmath5 driven plasma instabilities .
these processes are also believed to have occurred during the lepton stage of the early universe . during the formation of a neutron star.
Please generate the next two sentences of the article | the collapsed core of the supernova is so dense and hot that @xmath5 and @xmath6 are trapped and are thus unable to leave the core region of the neutron star .
the rates of escape of @xmath5 and @xmath6 are very small , inside the star an equilibrium state is reached , which includes the @xmath4 concentration . |
9,577 | Suppose that you have an abstract for a scientific paper: as inductive inference and machine learning methods in computer science see continued success , researchers are aiming to describe even more complex probabilistic models and inference algorithms .
what are the limits of mechanizing probabilistic inference ?
we investigate the computability of conditional probability , a fundamental notion in probability theory and a cornerstone of bayesian statistics , and show that there are computable joint distributions with noncomputable conditional distributions , ruling out the prospect of general inference algorithms , even inefficient ones .
specifically , we construct a pair of computable random variables in the unit interval such that the conditional distribution of the first variable given the second encodes the halting problem . nevertheless , probabilistic inference is possible in many common modeling settings , and we prove several results giving broadly applicable conditions under which conditional distributions are computable .
in particular , conditional distributions become computable when measurements are corrupted by independent computable noise with a sufficiently smooth density . [ multiblock footnote omitted ] [ multiblock footnote omitted ] [ multiblock footnote omitted ] .
And you have already written the first three sentences of the full article: the use of probability to reason about uncertainty is key to modern science and engineering , and the operation of _ conditioning _ , used to perform bayesian inductive reasoning in probabilistic models , directly raises many of its most important computational problems .
faced with probabilistic models of increasingly complex phenomena that stretch or exceed the limitations of existing representations and algorithms , researchers have proposed new representations and formal languages for describing joint distributions on large collections of random variables , and have developed new algorithms for performing automated probabilistic inference .
what are the limits of this endeavor ?.
Please generate the next two sentences of the article | can we hope to automate probabilistic reasoning via a general inference algorithm that can compute conditional probabilities for an _ arbitrary _ computable joint distribution ?
we demonstrate that there are computable joint distributions with noncomputable conditional distributions . of course |
9,578 | Suppose that you have an abstract for a scientific paper: we report for the first time exact ground - states deduced for the @xmath0 dimensional generic periodic anderson model at finite @xmath1 , the hamiltonian of the model not containing direct hopping terms for @xmath2-electrons @xmath3 .
the deduced itinerant phase presents non - fermi liquid properties in the normal phase , emerges for real hybridization matrix elements , and not requires anisotropic unit cell . in order to deduce these results ,
the plaquette operator procedure has been generalised to a block operator technique which uses blocks higher than an unit cell and contains @xmath2-operator contributions acting only on a single central site of the block . .
And you have already written the first three sentences of the full article: the periodic anderson model ( pam ) is one of the basic models largely used in the study of strongly correlated systems whose properties can be described at the level of two effective bands , like heavy - fermion systems @xcite , intermediate - valence compounds @xcite , or even high critical temperature superconductors @xcite .
the model contains a free @xmath4 band hybridized with a correlated system of @xmath2 electrons for which the one - site coulomb repulsion in the form of the hubbard interaction is locally present .
seen from the theoretical side , pam has the peculiarity that even its one dimensional hamiltonian is sufficiently complicated to not allow the knowledge of its exact solutions even in 1d . as a consequence ,.
Please generate the next two sentences of the article | taking into account that the exact description possibilities increase in difficulty with the increase of the dimensionality of the system in the physical region @xmath5 , the physics provided by pam is almost exclusively interpreted based on approximations .
this situation enhance the difficulty of a good quality theoretical analysis , since exact bench - marks in testing the approximations or numerical simulations are almost completely missing . |
9,579 | Suppose that you have an abstract for a scientific paper: we investigate equilibrium statistical properties of urn models with disorder .
two urn models are proposed ; one belongs to the ehrenfest class , and the other corresponds to the monkey class .
these models are introduced from the view point of the power - law behavior and randomness ; it is clarified that quenched random parameters play an important role in generating power - law behavior .
we evaluate the occupation probability @xmath0 with which an urn has @xmath1 balls by using the concept of statistical physics of disordered systems . in the disordered urn model belonging to the monkey class
, we find that above critical density @xmath2 for a given temperature , condensation phenomenon occurs and the occupation probability changes its scaling behavior from an exponential - law to a heavy tailed power - law in large @xmath1 regime .
we also discuss an interpretation of our results for explaining of macro - economy , in particular , emergence of wealth differentials . .
And you have already written the first three sentences of the full article: a lot of techniques and concepts of statistical mechanics of disordered spin systems , in particular , the replica method originally used to analyze the thermodynamics of spin glass model by sherrington and kirkpatrick @xcite , have been applied to various research fields beyond conventional physics , i.e. information processing @xcite , game theory @xcite and so on . the exactly solvable mathematical model , which describes these problems , is categorized as mean - field class @xcite . on the other hand , as another exactly tractable model , in 1907 , paul and tatiana ehrenfest published a paper corroborating boltzmann s view of thermodynamics @xcite .
their urn model has been defined by kac @xcite as an exactly solvable example in statistical physics . while it has also been criticized as a marvelous exercise too far removed from reality ,
their urn model has been applied to modern problems such as complex networks @xcite or econophysics @xcite , etc ..
Please generate the next two sentences of the article | for instance , based on extensive simulations of the lennard - jones fluid requiring in part a parallel computer in juelich , an italian - german team has shown that the prediction of the ehrenfest urn effectively describes the behavior of the gas phase @xcite .
moreover , it has been revealed that the mathematical structure of equilibrium state of the urn model @xcite is similar to the zero - range process , which has been widely investigated in research fields of non - equilibrium statistical physics @xcite . |
9,580 | Suppose that you have an abstract for a scientific paper: estimating vaccination uptake is an integral part of ensuring public health . it was recently shown that vaccination uptake can be estimated automatically from web data , instead of slowly collected clinical records or population surveys @xcite .
all prior work in this area assumes that features of vaccination uptake collected from the web are temporally regular .
we present the first ever method to remove this assumption from vaccination uptake estimation : our method dynamically adapts to temporal fluctuations in time series web data used to estimate vaccination uptake .
we show our method to outperform the state of the art compared to competitive baselines that use not only web data but also curated clinical data .
this performance improvement is more pronounced for vaccines whose uptake has been irregular due to negative media attention ( hpv-1 and hpv-2 ) , problems in vaccine supply ( ditekipol ) , and targeted at children of 12 years old ( whose vaccination is more irregular compared to younger children ) . .
And you have already written the first three sentences of the full article: vaccination programs are an efficient and cost effective method to improve public health . with sufficiently many people vaccinated the population gains herd immunity , meaning the disease can not spread .
timely actions to avoid drops in vaccination coverage are therefore of great importance .
many countries have no registries of timely vaccination uptake information , but rely for example on yearly surveys . in such countries estimations of near real - time vaccination uptake based solely on web data are valuable ..
Please generate the next two sentences of the article | we extend prior work in this area @xcite , which showed that vaccination uptake can be estimated sufficiently accurately from web search data .
our extension consists of a new estimation method that adapts dynamically to temporal fluctuations in the signal ( web search queries in our case ) instead of assuming temporal stationarity as in @xcite . |
9,581 | Suppose that you have an abstract for a scientific paper: we report a study of the intensity and time dependence of scintillation produced by weak @xmath0-particle sources in superfluid helium in the presence of an electric field ( @xmath1 kv / cm ) in the temperature range of 0.2 k to 1.1 k at the saturated vapor pressure .
both the prompt and the delayed components of the scintillation exhibit a reduction in intensity with the application of an electric field .
the reduction in the intensity of the prompt component is well approximated by a linear dependence on the electric field strength with a reduction of 15% at 45 kv / cm .
when analyzed using the kramers theory of columnar recombination , this electric field dependence leads to the conclusion that roughly 40% of the scintillation results from species formed from atoms originally promoted to excited states and 60% from excimers created by ionization and subsequent recombination with the charges initially having a cylindrical gaussian distribution about the @xmath0 track of 60 nm radius .
the intensity of the delayed component of the scintillation has a stronger dependence on the electric field strength and on temperature .
the implications of these data on the mechanisms affecting scintillation in liquid helium are discussed . .
And you have already written the first three sentences of the full article: the phenomenon of liquid helium ( lhe ) scintillation due to passage of charged particles was discovered in the late 1950 s @xcite .
since then rather extensive studies have been conducted , motivated both by its intrinsic interest ( including an interest in illuminating the behavior of ions and neutrals in superfluid helium ) and by the application of liquid helium as a particle detector @xcite .
( for a brief review of early work , see ref ..
Please generate the next two sentences of the article | @xcite . a more recent review
can be found in the introduction of ref . |
9,582 | Suppose that you have an abstract for a scientific paper: the mass distribution of galaxy clusters can be determined from the study of the projected phase - space distribution of cluster galaxies .
the main advantage of this method as compared to others , is that it allows determination of cluster mass profiles out to very large radii . here
i review recent analyses and results on this topic .
in particular , i briefly describe the jeans and caustic methods , and the problems one has to face in applying these methods to galaxy systems .
then , i summarize the most recent and important results on the mass distributions of galaxy groups , clusters , and superclusters .
additional covered topics are the relative distributions of the dark and baryonic components , and the orbits of galaxies in clusters . .
And you have already written the first three sentences of the full article: knowledge of the mass distribution within clusters , ( also in relation to the distributions of the different cluster components ) , gives important clues about the formation process of the clusters and of the galaxies in them , as well as on the nature of dark matter .
there have been many studies of the mass distribution in galaxy systems over the last decade , stimulated by the discovery of the universal nfw mass density profile of dark matter haloes by navarro et al .
( 1996 , 1997 ) ..
Please generate the next two sentences of the article | a cluster mass distribution can be derived in three ways : 1 ) through the gravitational lensing of distant objects , 2 ) using the spatial distribution and temperature profile of the x - ray emitting , intra - cluster ( ic hereafter ) gas , and 3 ) through the kinematics and spatial distribution of tracer particles moving in the cluster potential . lensing mostly works for clusters at intermediate and large redshifts , and only few nearby cluster lenses are known ( see e.g. cypriano et al .
x - ray observations only sample the inner ( see , e.g. , pratt & arnaud 2002 ) or , at best , the virialized ( neumann 2005 ) cluster regions . the third method is particularly suited for studying the mass profiles of relatively nearby galaxy clusters , out to large radii , well beyond the virialized region ( see , e.g. , reisenegger et al . |
9,583 | Suppose that you have an abstract for a scientific paper: the observation of strongly interacting many - body phenomena in atomic gases typically requires ultracold samples . here
we show that the strong interaction potentials between rydberg atoms enable the observation of many - body effects in an atomic vapor , even at room temperature .
we excite rydberg atoms in cesium vapor and observe in real - time an out - of - equilibrium excitation dynamics that is consistent with an aggregation mechanism .
the experimental observations show qualitative and quantitative agreement with a microscopic theoretical model .
numerical simulations reveal that the strongly correlated growth of the emerging aggregates is reminiscent of soft - matter type systems .
due to their exaggerated properties , rydberg atoms find applications in various research fields ranging from cavity qed @xcite , quantum information @xcite , quantum optics @xcite , microwave sensing @xcite to molecular physics @xcite .
one particular area of research employs the strong interactions between rydberg atoms to create strongly interacting many - body quantum systems for quantum simulation @xcite , quantum phase transitions @xcite and the realization of correlated or spatially ordered states @xcite . in any system ,
gaseous , liquid , glass or solid , spatial correlations can only arise if interactions are present . in rydberg gases these correlations
have recently been revealed by the direct imaging of the resonant excitation blockade effect @xcite . in our experiment an initially nearly ideal gas of thermal atoms at room temperature
is excited into a strongly interacting rydberg state .
the rydberg excitations show correlated many - body dynamics that shares similarities with that of soft - matter systems .
, the atoms are exactly shifted in resonance with the excitation laser , red - detuned by @xmath0 .
the grey shaded area symbolizes the excitation bandwidth , a gaussian whose width is given by the dephasing rate @xmath1 .
( b ) left : typical time evolution of the....
And you have already written the first three sentences of the full article: the 455 nm laser is frequency - stabilized with a blue detuning of @xmath59 ghz to the @xmath60 transition ( see fig . [
fig : fig2 ] ) .
the 1070 nm laser is scanned over the two - photon resonance to the rydberg state @xmath61 ..
Please generate the next two sentences of the article | both lasers have an estimated linewidth below 5 mhz .
the frequency of the 1070 nm laser is calibrated for each measurement using a fabry - prot interferometer and additionally by an eit - signal @xcite to fix the origin of the frequency axis . |
9,584 | Suppose that you have an abstract for a scientific paper: we discuss the quantification of the local galaxy population and the impact of the `` new era of wide - field astronomy '' on this field , and , in particular , systematic errors in the measurement of the luminosity function .
new results from the 2dfgrs are shown in which some of these selection effects have been removed .
we introduce an int - wfs project which will further reduce the selection biases .
we show that there is a correlation between the surface brightness and the luminosity of galaxies and that new technologies are having a big impact on this field .
finally selection criteria from different surveys are modelled and it is shown that some of the major selection effects are surface brightness selection effects . .
And you have already written the first three sentences of the full article: galaxy populations were first studied by hubble ( 1926 ) , who developed the familiar tuning fork diagram of ellipticals , spirals and barred spirals .
most bright galaxies can be morphologically classified by their hubble type .
however , many types of galaxy have been found that do not fit the tuning fork ..
Please generate the next two sentences of the article | these occur both at low redshift and at high redshift where the galaxies can be intrinsically different due to evolution .
some of these galaxies are shown in fig . |
9,585 | Suppose that you have an abstract for a scientific paper: the @xmath0-skeleton of the canonical cubulation @xmath1 of @xmath2 into unit cubes is called the _ canonical scaffolding _
@xmath3 . in this paper , we prove that any smooth , compact , closed , @xmath0-dimensional submanifold of @xmath2 with trivial normal bundle can be continuously isotoped by an ambient isotopy to a cubic submanifold contained in @xmath3 .
in particular , any smooth knot @xmath4 can be continuously isotoped to a knot contained in @xmath3 . .
And you have already written the first three sentences of the full article: in this paper we consider smooth higher dimensional knots , that is , spheres @xmath5 smoothly embedded in @xmath6 . in @xmath6 we have the canonical cubulation @xmath1 by translates of the unit @xmath7-dimensional cube .
we will call the @xmath0-skeleton @xmath3 of this cubulation the _ canonical scaffolding _ of @xmath6 ( see section 2 for precise definitions ) .
we consider the question of whether it is possible to continuously deform the smooth knot by an ambient isotopy so that the deformed knot is contained in the scaffolding ..
Please generate the next two sentences of the article | in particular , a positive answer to this question implies that knots can be embedded as cubic sub - complexes of @xmath6 , which in turn implies the well - known fact that smooth knots can be triangulated by a pl triangulation ( @xcite ) .
the problem of embedding an abstract cubic complex into some skeleton of the canonical cubulation can be traced back to s.p . |
9,586 | Suppose that you have an abstract for a scientific paper: we provide a preliminary estimate of the performance of reflex astrometry on earth - like planets in the habitable zones of nearby stars . in monte carlo experiments ,
we analyze large samples of astrometric data sets with low to moderate signal - to - noise ratios .
we treat the idealized case of a single planet orbiting a single star , and assume there are no non - keplerian complications or uncertainties
. the real case can only be more difficult .
we use periodograms for discovery and least - squares fits for estimating the keplerian parameters .
we find a completeness for detection compatible with estimates in the literature .
we find mass estimation by least squares to be biased , as has been found for noisy radial - velocity data sets ; this bias degrades the completeness of accurate mass estimation .
when we compare the true planetary position with the position predicted from the fitted orbital parameters , at future times , we find low completeness for an accuracy goal of 0.3 times the semimajor axis of the planet , even with no delay following the end of astrometric observations .
our findings suggest that the recommendation of the exoplanet task force ( lunine et al .
2008 ) for `` the capability to measure convincingly wobble semi - amplitudes down to 0.2 @xmath0as integrated over the mission lifetime , '' may not be satisfied by an instrument characterized by the noise floor of the _ space interferometry mission _
, @xmath1as .
an important , unsolved , strategic challenge for the exoplanetary science program is figuring out how to predict the future position of an earth - like planet with accuracy sufficient to ensure the efficiency and success of the science operations for follow - on spectroscopy , which would search for biologically significant molecules in the atmosphere . .
And you have already written the first three sentences of the full article: today , the question of life in the universe is a compelling goal of scientific research .
finding such life would have vast implications for the human mind , consequences that seem real to everyone , even if difficult to express .
down through the ages , the uniqueness of earth has been a stimulating issue for philosophers and ordinary people alike ..
Please generate the next two sentences of the article | today , inspired by the discovery of many large planets around nearby stars , and propelled by progress in utilizing broad wavefronts of astronomical light , we can at last prepare to test the earth s exceptionalism by means of large optical systems in space .
an important , specific goal is to search for evidence of biologically significant molecules , particularly free oxygen , in spectra of the atmospheres of earth - size planets located in the habitable zones of nearby stars |
9,587 | Suppose that you have an abstract for a scientific paper: finite temperature local dynamical spin correlations @xmath0 are studied numerically within the random spin@xmath1 antiferromagnetic heisenberg chain .
the aim is to explain measured nmr spin - lattice relaxation times in , which is the realization of a random spin chain . in agreement with experiments
we find that the distribution of relaxation times within the model shows a very large span similar to the stretched exponential form .
the distribution is strongly reduced with increasing @xmath2 , but stays finite also in the high@xmath2 limit .
anomalous dynamical correlations can be associated to the random singlet concept but not directly to static quantities .
our results also reveal the crucial role of the spin anisotropy ( interaction ) , since the behavior is in contrast with the ones for xx model , where we do not find any significant @xmath2 dependence of the distribution .
one dimensional ( 1d ) quantum spin systems with random exchange couplings reveal interesting phenomena fundamentally different from the behavior of ordered chains .
since the seminal studies of antiferromagnetic ( afm ) random heisenberg chains ( rhc ) by dasgupta and ma @xcite using the renormalization group approach and further development by fisher @xcite , it has been recognized that the quenched disorder of exchange couplings @xmath3 leads at lowest energies to the formation of random singlets with vanishing effective @xmath4 at large distances .
the consequence for the uniform static susceptibility @xmath5 is the singular curie type temperature ( @xmath2 ) dependence , dominated by nearly uncoupled spins at low@xmath2 and confirmed by numerical studies of model systems @xcite , as well by measurements of @xmath6 on the class of materials being the realizations of rhc physics , in particular the mixed system @xcite .
recent measurements of nmr spin lattice relaxation times @xmath7 in @xcite reveal a broad distribution of different @xmath7 resulting in a nonexponential magnetization decay being....
And you have already written the first three sentences of the full article: looking at the diverging uniform ( @xmath210 ) susceptibility @xmath6 as @xmath14 ( fig . [ chi ] ) intuitively suggests large low@xmath30 response and in turn increasing contribution of @xmath211 to the spin relaxation rate @xmath29 as @xmath14 .
this is not what is observed , since we see no increase of @xmath94 ( fig .
4b in the main text ) as @xmath14 , but instead @xmath94 decreases with decreasing @xmath2 , which is in agreement also with experimental data ( ref ..
Please generate the next two sentences of the article | @xcite , fig .
3a ) . this dichotomy can be partly understood by exploring the connection between static uniform spin susceptibility @xmath6 with the static spin structure factor ( equal - time correlation ) @xmath115 , representing also the frequency integral of dynamical spin structure factor @xmath212 . |
9,588 | Suppose that you have an abstract for a scientific paper: for a given rotation number we compute the hausdorff dimension of the set of well approximable numbers .
we use this result and an inhomogeneous version of jarnik s theorem to show strong recurrence properties of the billiard flow in certain polygons . .
And you have already written the first three sentences of the full article: in the past decade in four independent articles it was observed that the billiard orbit of any point which begins perpendicular to a side of a polygon and at a later instance hits some side perpendicularly retraces its path infinitely often in both senses between the two perpendicular collisions and thus is periodic .
the earliest of these articles is a numerical work of ruijgrok which conjectures that every triangle has perpendicular periodic orbits @xcite . in 1992 boshernitzan
@xcite and independently galperin , stepin and vorobets @xcite proved that for any rational polygon , for every side of the polygon , the billiard orbit which begins perpendicular to that side is periodic for all but finitely many starting points on the side ..
Please generate the next two sentences of the article | finally for an irrational right triangle cipra , hansen and kolan have considered points which are perpendicular to one of the legs of the triangle .
they showed that for almost every such point the billiard orbit is periodic @xcite . here |
9,589 | Suppose that you have an abstract for a scientific paper: window profiles of amino acids in protein sequences are taken as a description of the amino acid environment .
the relative entropy or kullback - leibler distance derived from profiles is used as a measure of dissimilarity for comparison of amino acids and secondary structure conformations .
distance matrices of amino acid pairs at different conformations are obtained , which display a non - negligible dependence of amino acid similarity on conformations .
based on the conformation specific distances clustering analysis for amino acids is conducted .
= -2 cm = -1 cm = 16.5 cm .
And you have already written the first three sentences of the full article: the similarity of amino acids(@xmath0 ) is the basis of protein sequence alignment , protein design and protein structure prediction .
several scoring schemes have been proposed based on amino acid similarity .
the mutation data matrices of dayhoff @xcite and the substitution matrices of henikoff @xcite are standard choices of scores for sequence alignment and amino acid similarity evaluation.
Please generate the next two sentences of the article | . however , these matrices , focusing on the whole protein database , pay little attention on protein secondary structures(@xmath1 ) .
how the amino acid similarity is influenced by different secondary structures is an interesting question . |
9,590 | Suppose that you have an abstract for a scientific paper: we present exact solutions of the massless klein - gordon equation in a spacetime in which an infinite straight cosmic string resides .
the first solution represents a plane wave entering perpendicular to the string direction .
we also present and analyze a solution with a static point - like source . in the short wavelength limit
these solutions approach the results obtained by using the geometrical optics approximation : magnification occurs if the observer lies in front of the string within a strip of angular width @xmath0 , where @xmath1 is the string tension . we find that when the distance from the observer to the string is less than @xmath2 , where @xmath3 is the wave length , the magnification is significantly reduced compared with the estimate based on the geometrical optics due to the diffraction effect . for gravitational waves from neutron star(ns)-ns mergers the several lensing events per year may be detected by decigo / bbo . .
And you have already written the first three sentences of the full article: typical wavelength of gravitational waves from astrophysical compact objects such as bh(black hole)-bh binaries is in some cases very long so that wave optics must be used instead of geometrical optics when we discuss gravitational lensing .
more precisely , if the wavelength becomes comparable or longer than the schwarzschild radius of the lens object , the diffraction effect becomes important and as a result the magnification factor approaches unity @xcite .
mainly due to the possibility that the wave effects could be observed by future gravitational wave observations , several authors @xcite have studied wave effects in gravitational lensing in recent years . in most of the works which studied gravitational lensing phenomenon in the framework of wave optics , isolated and.
Please generate the next two sentences of the article | normal astronomical objects such as galaxies are concerned as lens objects . recently
yamamoto and tsunoda@xcite studied wave effects in gravitational lensing by an infinite straight cosmic string . |
9,591 | Suppose that you have an abstract for a scientific paper: i present results from an approach that extends the eliashberg theory by systematic expansion in the vertex function ; an essential extension at large phonon frequencies , even for weak coupling . in order to deal with computationally expensive double sums over momenta , a dynamical cluster approximation ( dca ) approach
is used to incorporate momentum dependence into the eliashberg equations .
first , i consider the effects of introducing partial momentum dependence on the standard eliashberg theory using a quasi - local approximation ; which i use to demonstrate that it is essential to include corrections beyond the standard theory when investigating @xmath0-wave states .
using the extended theory with vertex corrections , i compute electron and phonon spectral functions .
a kink in the electronic dispersion is found in the normal state along the major symmetry directions , similar to that found in photo - emission from cuprates .
the phonon spectral function shows that for weak coupling @xmath1 , the dispersion for phonons has weak momentum dependence , with consequences for the theory of optical phonon mediated d - wave superconductivity , which is shown to be 2nd order in @xmath2 .
in particular , examination of the order parameter vs. filling shows that vertex corrections lead to @xmath0-wave superconductivity mediated via simple optical phonons .
i map out the order parameters in detail , showing that there is significant induced anisotropy in the superconducting pairing in quasi-2d systems . *
[ published as : journal of physics and chemistry of solids , vol . 69 , 2982 - 2985 ( 2008 ) ] * extended eliashberg theory , superconductivity , spectroscopy , unconventional pairing .
And you have already written the first three sentences of the full article: angle - resolved photo - emission spectroscopy ( arpes ) directly probes the dispersion of electrons ; identifying a kink associated with the active optical phonon in the cuprates @xcite .
neutron scattering has shown an anomalous change in the phonon spectrum at the superconducting transition , indicating an interesting role for phonons in the cuprates @xcite .
estimates of the magnitude of the electron - phonon coupling , and the energy of the phonon mode put the problem outside the limited region of applicability for bcs theory , so schemes to cope with a wider range of parameters need to be developed ..
Please generate the next two sentences of the article | moreover , since there a node in the superconducting gap consistent with d - wave symmetry @xcite , any theory implicating phonons as the mechanism must be able to explain the unconventional order .
electron - phonon ( e - ph ) interactions can be described by the following generic model , @xmath3 here , @xmath4 and @xmath5 with @xmath6 and @xmath7 ( representing a quasi-2d system and taming the van - hove singularities ) . |
9,592 | Suppose that you have an abstract for a scientific paper: based on their relatively isolated environments , we argue that luminous blue variables ( lbvs ) must be primarily the product of binary evolution , challenging the traditional single - star view wherein lbvs mark a brief transition between massive o - type stars and wolf - rayet ( wr ) stars .
if the latter were true , then lbvs should be concentrated in young massive clusters like early o - type stars .
this is decidedly not the case .
examining locations of lbvs in our galaxy and the magellanic clouds reveals that , with only a few exceptions , lbvs systematically avoid clusters of o - type stars . in the large magellanic cloud ,
lbvs are statistically much more isolated than o - type stars , and ( perhaps most surprisingly ) even more isolated than wr stars .
this makes it impossible for lbvs to be single `` massive stars in transition '' to wr stars .
instead , we propose that massive stars and supernova ( sn ) subtypes are dominated by bifurcated evolutionary paths in interacting binaries , wherein most wr stars and sne ibc correspond to the mass donors , while lbvs ( and their lower - mass analogs like b[e ] supergiants , which are even more isolated ) are the mass gainers . in this view , lbvs are evolved massive blue stragglers . through binary mass transfer ,
rejuvinated mass gainers get enriched , spun up , and sometimes kicked far from their clustered birthsites by their companion s sn .
this scenario agrees better with lbvs exploding as type iin sne in isolation , and it predicts that many massive runaway stars may be rapid rotators .
mergers or blue thorne - zykow - like objects might also give rise to lbvs , but these scenarios may have a harder time explaining why lbvs avoid clusters .
[ firstpage ] binaries : general stars : evolution stars : winds , outflows .
And you have already written the first three sentences of the full article: mass loss is inexorably linked to evolution for high - mass stars .
in fact , it has a _
deterministic _ influence , which in turn has tremendous impact on other areas of astronomy influenced by stellar feedback ( regulating star formation , galaxy evolution , chemical evolution , reionization , etc . ) . for most of their lives ,.
Please generate the next two sentences of the article | massive stars above @xmath020 @xmath1 shed mass in fast winds that affect their subsequent evolution , but in post - main sequence ( post - ms ) phases the mass loss becomes critical in determining the type of resulting supernova ( sn ) .
the most dramatic mass loss in post - ms evolution is during the luminous blue variable ( lbv ) phase . |
9,593 | Suppose that you have an abstract for a scientific paper: we present one - dimensional simulation results for the cold atom tunneling experiments by the heidelberg group [ g. zürn _ et al . _ , phys . rev . lett . * 108 * , 075303 ( 2012 ) and g. zürn _ et al . _ , phys . rev . lett . * 111 * , 175302 ( 2013 ) ] on one or two @xmath0li atoms confined by a potential that consists of an approximately harmonic optical trap plus a linear magnetic field gradient . at the non - interacting particle level , we find that the wkb ( wentzel - kramers - brillouin ) approximation may not be used as a reliable tool to extract the trapping potential parameters from the experimentally measured tunneling data .
we use our numerical calculations along with the experimental tunneling rates for the non - interacting system to reparameterize the trapping potential . the reparameterized trapping potentials serve as input for our simulations of two interacting particles . for two interacting ( distinguishable ) atoms on the upper branch ,
we reproduce the experimentally measured tunneling rates , which vary over several orders of magnitude , fairly well . for infinitely strong interaction strength
, we compare the time dynamics with that of two identical fermions and discuss the implications of fermionization on the dynamics . for two attractively - interacting atoms on the molecular branch
, we find that single - particle tunneling dominates for weakly - attractive interactions while pair tunneling dominates for strongly - attractive interactions .
our first set of calculations yields qualitative but not quantitative agreement with the experimentally measured tunneling rates .
we obtain quantitative agreement with the experimentally measured tunneling rates if we allow for a weakened radial confinement . .
And you have already written the first three sentences of the full article: open quantum systems are at the heart of many physical phenomena from nuclear physics to quantum information theory @xcite . in fact , all `` real '' quantum systems are , to some extent , open systems .
interactions with the environment cause decoherence , resulting in non - equilibrium dynamics .
it is often simpler to design experiments that probe non - equilibrium physics than it is to design experiments that probe equilibrium physics ..
Please generate the next two sentences of the article | conversely , the theoretical toolkit for describing systems in equilibrium is generally much farther developed than that for describing systems in non - equilibrium .
ultracold atom systems provide a platform for realizing clean and tunable quantum systems @xcite . over the past few years |
9,594 | Suppose that you have an abstract for a scientific paper: spectrophotometry in the @xmath0 3400 - 7400 range is presented for 13 areas of the brightest h region in the smc : ngc 346 .
the observations were obtained at ctio with the 4-m telescope . based on these observations
its chemical composition is derived .
the helium and oxygen abundances by mass are given by : @xmath1(smc)@xmath2 and @xmath3(smc)@xmath4 . from models and observations of irregular and blue compact galaxies
it is found that @xmath5 and consequently that the primordial helium abundance by mass is given by : @xmath6 .
this result is compared with values derived from big bang nucleosynthesis , and with other determinations of @xmath7 . .
And you have already written the first three sentences of the full article: the determination of @xmath7 based on the small magellanic cloud can have at least four significant advantages and one disadvantage with respect to those based on distant h region complexes : a ) no underlying absorption correction for the helium lines is needed because the ionizing stars can be excluded from the observing slit , b ) the determination of the helium ionization correction factor can be estimated by observing different lines of sight of a given h region , c ) the accuracy of the determination can be estimated by comparing the results derived from different points in a given h region , d ) the electron temperature is generally smaller than those of metal poorer h regions reducing the effect of collisional excitation from the metastable 2 @xmath8s level of he , and e ) the disadvantage is that the correction due to the chemical evolution of the smc is in general larger than for the other systems .
the determination of the pregalactic , or primordial , helium abundance by mass @xmath7 is paramount for the study of cosmology , the physics of elementary particles , and the chemical evolution of galaxies ( e. g. * ? ? ?
* ; * ? ? ?.
Please generate the next two sentences of the article | * ; * ? ? ?
* and references therein ) . in this paper |
9,595 | Suppose that you have an abstract for a scientific paper: we propose a two - photon beating experiment based upon biphotons generated from a resonant pumping two - level system operating in a backward geometry . on the one hand
, the linear optical - response leads biphotons produced from two sidebands in the mollow triplet to propagate with tunable refractive indices , while the central - component propagates with unity refractive index .
the relative phase difference due to different refractive indices is analogous to the pathway - length difference between long - long and short - short in the original franson interferometer . by subtracting the linear rayleigh scattering of the pump
, the visibility in the center part of the two - photon beating interference can be ideally manipulated among [ 0 , 100@xmath0 by varying the pump power , the material length , and the atomic density , which indicates a bell - type inequality violation . on the other hand , the proposed experiment may be an interesting way of probing the quantum nature of the detection process .
the interference will disappear when the separation of the mollow peaks approaches the fundamental timescales for photon absorption in the detector . .
And you have already written the first three sentences of the full article: entangled paired photons or biphotons provide an unprecedented tools for research in fundamental physics such as quantum information processing @xcite and tests of fundamentals of quantum mechanics @xcite .
biphotons produced from either spontaneous parametric down conversion ( spdc ) @xcite or atomic cascade transitions have already been used in optical experimental tests of fundamentals of quantum theory , and have demonstrated violations of bell s inequalities @xcite .
most of experimental tests of the inequalities have involved the polarization entanglement ..
Please generate the next two sentences of the article | rarity and tapster @xcite have reported experiments with momentum entanglement of the beams .
the experiment proposed by franson @xcite concerning a bell inequality for nonpolarization variables , relies on the entanglement of a continuous variable , energy . |
9,596 | Suppose that you have an abstract for a scientific paper: we test the plausibility that a majorana fermion dark matter candidate with a scalar mediator explains the gamma ray excess from the galactic center . assuming that the mediator couples to all third generation fermions we calculate observables for dark matter abundance and scattering on nuclei , gamma , positron , and anti - proton cosmic ray fluxes , radio emission from dark matter annihilation , and the effect of dark matter annihilations on the cmb . after discarding the controversial radio observation
the rest of the data prefers a dark matter ( mediator ) mass in the 10100 ( 31000 ) gev region and weakly correlated couplings to bottom quarks and tau leptons with values of @xmath01 at the @xmath1 credibility level . .
And you have already written the first three sentences of the full article: since 2009 an increasingly significant deviation from background expectations has been identified in the data of the large area telescope ( lat ) on board the fermi gamma ray space telescope satellite @xcite .
the deviation appears around 2 gev in the energy spectrum of gamma ray flux originating from an extended region centered in the galactic center .
the source of the excess photons is unknown ..
Please generate the next two sentences of the article | their origin can be dark matter ( dm ) annihilation , a population of millisecond pulsars or supernova remnants @xcite , or cosmic rays injected in a burst - like or continuous event at the galactic center @xcite .
it is , however , challenging to explain the excess with millisecond pulsars @xcite based on their luminosity function . |
9,597 | Suppose that you have an abstract for a scientific paper: this study investigates the potential of a photon collider for measuring the two photon partial width times the branching ratio of a light higgs boson .
the analysis is based on the reconstruction of the higgs events produced in the @xmath0h process , followed by higgs decay into a b@xmath1 pair . a statistical error of the measurement of the two - photon width times the b@xmath1 branching ratio of the higgs boson
is found to be 1.7 @xmath2 with an integrated luminosity of 80 fb@xmath3 in the high energy part of the spectrum . .
And you have already written the first three sentences of the full article: the central challenge for particle physics nowadays is the origin of mass . in the standard model
both fermions and gauge boson masses are generated through interactions with the same scalar particle , the higgs boson , h. if it exists , the higgs boson will certainly be discovered by the time a photon collider is constructed .
the aim of this machine will be then a precise measurement of the higgs properties . the photon scattering can be used to produce the higgs particles singly in the s - channel of the colliding photons ..
Please generate the next two sentences of the article | this facility permits a high precision measurement of the h @xmath4 partial width , which is sensitive to new charged particles .
the measurement is significantly important . |
9,598 | Suppose that you have an abstract for a scientific paper: information management is one of the most significant issues in nowadays data centers .
selection of appropriate software , security mechanisms and effective energy consumption management together with caring for the environment enforces a profound analysis of the considered system . besides these factors ,
financial analysis of data center maintenance is another important aspect that needs to be considered .
data centers are mission - critical components of all large enterprises and frequently cost hundreds of millions of dollars to build , yet few high - level executives understand the true cost of operating such facilities .
costs are typically spread across the it , networking , and facilities , which makes management of these costs and assessment of alternatives difficult .
this paper deals with a research on multilevel analysis of data center management and presents an approach to estimate the true total costs of operating data center physical facilities , taking into account the proper management of the information flow . .
And you have already written the first three sentences of the full article: the challenges faced by companies working in nowadays complex it environments pose the need for comprehensive and dynamic systems to cope with the information flow requirements @xcite , @xcite , @xcite .
planning can not answer all questions : we must take a step further and discuss a model for application management .
one of the possible approaches to deal with this problem , is to use the decision support system that is capable of supporting decision - making activities . in @xcite , we proposed the foundations of our decision support system for complex it environments . developing our framework.
Please generate the next two sentences of the article | , we examined the time , energy usage , qop , finance and carbon dioxide emissions .
regarding financial and economic analyzes , we considered only _ |
9,599 | Suppose that you have an abstract for a scientific paper: we study the maximum weight matching problem in the semi - streaming model , and improve on the currently best one - pass algorithm due to zelke ( proc .
stacs 08 , pages 669680 ) by devising a deterministic approach whose performance guarantee is @xmath0 . in addition , we study _ preemptive _ online algorithms , a sub - class of one - pass algorithms where we are only allowed to maintain a feasible matching in memory at any point in time .
all known results prior to zelke s belong to this sub - class .
we provide a lower bound of @xmath1 on the competitive ratio of any such deterministic algorithm , and hence show that future improvements will have to store in memory a set of edges which is not necessarily a feasible matching . .
And you have already written the first three sentences of the full article: the computational task of detecting maximum weight matchings is one of the most fundamental problems in discrete optimization , attracting plenty of attention from the operations research , computer science , and mathematics communities .
( for a wealth of references on matching problems see @xcite . ) in such settings , we are given an undirected graph @xmath2 whose edges are associated with non - negative weights specified by @xmath3 .
a set of edges @xmath4 is a _ matching _ if no two of the edges share a common vertex , that is , the degree of any vertex in @xmath5 is at most @xmath6 ..
Please generate the next two sentences of the article | the weight @xmath7 of a matching @xmath8 is defined as the combined weight of its edges , i.e. , @xmath9 .
the objective is to compute a matching of maximum weight . |